DAGs and the problem with decentralization.

Quan Nguyen
4 min read · Mar 26, 2022

Many people have asked me to address the concerns some have about DAGs and decentralization. So here we go…

1) DAG 1.0

1.1) Some examples are Obyte, IOTA, and Nano, to name a few.

https://www.radixdlt.com/post/dags-dont-scale-without-centralization

https://cointelegraph.com/explained/what-is-a-directed-acyclic-graph-in-cryptocurrency-how-does-dag-work

1.2) So first, what is a DAG?

DAG stands for Directed Acyclic Graph.

And….it’s not a blockchain, but a graph. The best way to describe it is by comparing the two…

1.3) Most of the issues people are talking about are actually about this DAG 1.0 model.

1.4) Avalanche, and its predecessor protocol Snowball, are based on this DAG 1.0 model.

In particular, they claim to be more decentralized, but they sacrifice security for speed (by using subsampling).

2) DAG 2.0

Fantom introduces a new DAG model — DAG 2.0

White Paper v2.0 https://arxiv.org/pdf/2108.01900.pdf

https://fantom.foundation/blog/upcoming-fantom-protocol-upgrades/

More info about the project:
Homepage: https://fantom.foundation

Github: https://github.com/Fantom-foundation
Technical doc: https://docs.fantom.foundation/

2.1) Fantom’s Lachesis

Nodes create event blocks, which form a DAG. When an Atropos event block is found, a new block is created.

2.2) This new block will include all event blocks in the Atropos's subgraph that haven't been confirmed previously. All the txns of the newly confirmed event blocks are added to this new block.

2.3) Fantom uses both DAG (of event blocks) and a blockchain (a chain of confirmed blocks).

This is DAG 2.0.
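To make the Atropos confirmation step concrete, here is a minimal sketch in Python. The names (`EventBlock`, `confirm_atropos`) are illustrative, not Fantom's actual Lachesis code; the point is that one Atropos confirms its whole unconfirmed subgraph in a single batch.

```python
# Toy model of the DAG 2.0 confirmation step.
# Illustrative names only -- not Fantom's real implementation.

class EventBlock:
    def __init__(self, block_id, parents, txns):
        self.id = block_id
        self.parents = parents      # parent event blocks in the DAG
        self.txns = txns            # transactions carried by this event block
        self.confirmed = False

def confirm_atropos(atropos):
    """When an Atropos is found, confirm every not-yet-confirmed event
    block in its subgraph and collect all their transactions into one
    new block -- a batch, not one block at a time."""
    new_block_txns = []
    stack = [atropos]
    while stack:
        ev = stack.pop()
        if ev.confirmed:
            continue                # already in an earlier block: skip
        ev.confirmed = True
        new_block_txns.extend(ev.txns)
        stack.extend(ev.parents)    # walk the whole subgraph
    return new_block_txns

# Usage: three event blocks under one Atropos, confirmed at once.
a = EventBlock("a", [], ["tx1"])
b = EventBlock("b", [a], ["tx2"])
atropos = EventBlock("atropos", [a, b], ["tx3"])
print(sorted(confirm_atropos(atropos)))  # all three txns in one new block
```

Note how the traversal skips already-confirmed event blocks, which is what keeps each transaction from being included in more than one confirmed block.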

2.4) Fantom does more than that.

After each epoch, a new DAG is created in the new epoch. This consumes a small amount of resources, so there is a little overhead, but it brings many advantages.

2.5) Fantom's model has several advantages over the traditional approach (in which blockchain platforms create and confirm each block one by one).

First, it is much faster, as transactions are approved simultaneously. They are essentially batched.

2.6) When an Atropos is found, it will confirm transactions in the event blocks under Atropos’s subgraph. All is confirmed at once.

2.7) Blockchains have to wait as each event block is added one at a time.

But DAG 2.0 utilizes past event blocks (that are not yet confirmed), so as they fan out, more and more event blocks can be added and approved at once (when a new Atropos event is found).

2.8) This makes Fantom significantly faster (than others).

2.9) Also, because the DAG confirms transactions without mining, there are no miners. So there are no miner fees.

This makes DAGs cheaper.

2.10) There is no single miner that can mine a whole block, so it cannot be controlled by a particular validator. Hence, it is more decentralized and more secure.

2.11) So there are understandable reasons why the DAG model was chosen for Fantom.

Before we get into the challenges, let’s go over the blockchain trilemma…

3) The blockchain trilemma!

Every blockchain, or rather #DeFi as a whole, strives to maximize three crucial points.

1. Decentralization

2. Scalability

3. Security

Fantom's DAG 2.0 is super scalable, for the reasons just discussed.

3.1) There are security concerns at lower transaction rates (since with the aBFT consensus model you only need 2/3 consensus)…but Fantom has scaled beyond those concerns.

So the last challenge is decentralization…

3.2) This was the reason people brought up the conversation in the first place.

And this is where the tech gets confusing for many people. But they're talking about DAG 1.0 problems.

3.3) Data pruning

Basically, DAG data is additional storage. Old DAG data is not needed once new blocks are formed. So old DAG info and the old blocks can be "pruned."

3.4) In order to prune, a snapshot of the current state of the network has to be taken. Each node can work it out from its local DAG + blockchain data.

It’s not related to decentralization.
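The snapshot-then-prune idea can be sketched as follows. This is a simplified illustration under my own assumptions (flat `balances` state, `height`-indexed blocks), not Fantom's actual pruning code: once a snapshot captures the current state, blocks at or below the snapshot height can be dropped.

```python
# Minimal sketch of snapshot-based pruning (illustrative, not Fantom's code).

def take_snapshot(balances, height):
    """Each node derives the state locally from its DAG + chain data,
    then records it at a given block height."""
    return {"height": height, "state": dict(balances)}

def prune(blocks, snapshot):
    """Keep only blocks above the snapshot height; older history is
    recoverable from the snapshot, so it can be discarded."""
    return [b for b in blocks if b["height"] > snapshot["height"]]

blocks = [{"height": h, "txns": []} for h in range(1, 6)]
snap = take_snapshot({"alice": 10, "bob": 5}, height=3)
kept = prune(blocks, snap)
print([b["height"] for b in kept])  # only blocks above height 3 remain
```

The key point from the text survives in the sketch: pruning is a local storage optimization derived from each node's own data, not a change to who controls consensus.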

3.5) But we have come up with a more decentralized approach to creating snapshots in DAG 2.0. Snapsync is coming soon.

4) Delegated proof of stake (dPoS).

Consensus on this state is reached via "witness nodes," which are elected (delegated) by validator nodes.

There are many instances where nodes might get a disproportionate number of delegations, localizing witness node power to a small number of validators.

And this is where the alleged decentralization challenges come from.

4.1) Some solutions for improving decentralization in PoS and dPoS systems in general:

4.2) For starters, lowering the stake cost of validator nodes will allow for more nodes on the network, and therefore a more diverse array of validators and, in turn, delegates.

4.3) People have many thoughts on how to achieve true decentralization.

Some future measures will make things a little better…

The ideas actually vary between people.
They're out of scope for this thread.

4.4) PoS and dPoS may have an issue where nodes are incentivized to accumulate more power. But PoS and dPoS are a better alternative to PoW.

4.5) Fantom is DAG 2.0 + dPoS, and hence its model is significantly better (security + scalability) than all existing dPoS and PoW models.

Check our White Paper v2.0 https://arxiv.org/pdf/2108.01900.pdf

4.6) In terms of decentralization, justifying it by the raw number of nodes is "hilarious." A more correct measurement, a "decentralization index" (deIndex), is the actual number of nodes that matter.

4.7) In reality, the deIndex value in “claimed-to-be-better-decentralized” platforms is either equal or actually much less than Fantom’s deIndex.
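One concrete way to compute a "nodes that matter" metric, which is my reading of the deIndex idea rather than an official Fantom formula, is to count the fewest large validators whose combined stake exceeds 1/3 of the total stake: enough to stall a 2/3-quorum aBFT system.

```python
# Hypothetical deIndex sketch: minimum number of (largest) validators
# whose combined stake passes the threshold. Assumption of this post's
# reading, not an official definition.

def de_index(stakes, threshold=1/3):
    """Count the fewest largest validators needed to exceed `threshold`
    of total stake. A higher count means power is more spread out."""
    total = sum(stakes)
    acc, count = 0, 0
    for s in sorted(stakes, reverse=True):
        acc += s
        count += 1
        if acc > total * threshold:
            return count
    return count

# 100 nodes, but one whale holds most stake: only 1 node "matters".
print(de_index([70] + [1] * 99))   # -> 1
# Same node count with evenly spread stake: many nodes matter.
print(de_index([1] * 100))         # -> 34
```

This shows why raw node count is a misleading measure: both networks above have 100 nodes, but their deIndex values differ by more than an order of magnitude.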

So, the battle for the perfect trilemma ratios rages on. Verbally. It’s more about honesty.

5) Conclusion

Hopefully this helps explain the DAG model and the “untrue” decentralization challenges for Fantom in layman’s terms.

Keep building mates.

Thanks for reading …


Quan Nguyen

Quan Nguyen is interested in R&D on Blockchains and DLT. He is currently the CTO at Fantom Foundation. His background is Cloud, Web Apps, InfoVis, PL, VM.