Cryptoeconomic networks that implement Nakamoto consensus¹, like Bitcoin and Ethereum 1.0, exhibit $O(1)$ message complexity²: in theory, these networks can scale indefinitely in terms of communication (though not in terms of physical resources). On the other hand, numerous new projects such as CasperLabs, Ethereum 2.0 and Cosmos/Tendermint are implementing a new type of consensus protocol, called Proof of Stake, which relies on Byzantine Agreement³.

Typical BFT protocols such as DLS⁴ and PBFT⁵ have at least $O(n^2)$ message complexity, because they require every node to send messages to every other node. Even in the best case, BFT protocols cannot have a message complexity lower than $O(n)$⁶. Unfortunately, this guarantees that communication overhead will increase with each new node⁷. Combined with network latency, this might put a cap on the total number of nodes (or validators, in PoS terminology) such a system can accommodate.
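
To make the growth concrete, here is a toy Python sketch (not modelled on any particular protocol) that counts the messages in a single all-to-all round:

```python
def messages_per_round(n: int) -> int:
    """All-to-all broadcast: each of n nodes sends to the other n - 1."""
    return n * (n - 1)

for n in (4, 16, 64, 256):
    print(n, messages_per_round(n))
# 4 12
# 16 240
# 64 4032
# 256 65280  -- quadrupling the nodes multiplies the messages by roughly 16
```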

Moreover, the effects of decreased network throughput might be more adverse due to the economic layer. A decrease in transaction throughput would result in a decrease in collected transaction fees. Thus, in a network where validator revenue consists mainly of transaction fees, a self-interested validator joining the network can decrease the revenue for everyone else. Such an externality would lead to a typical tragedy of the commons situation.

We can model such market behavior by making supplied throughput $T_s$ a function of network size $N$ (the number of nodes). Although the exact relationship would depend on the protocol, network topology, latency and so on, any monotonically decreasing function is sufficient to demonstrate the situation described above.

Let us choose a linear function to make the subsequent analytic solution simpler:

$$T_s(N) = \alpha_0 N + \alpha_1,$$

with $\alpha_0 < 0$, $\alpha_1 > 0$, and $T_s, N \geq 0$. Note that this assumes maximum throughput when $N=0$, i.e. when there are no validators. We could also choose $T_s(N) = \alpha_0 (N-1) + \alpha_1$ and shift the domain, but that wouldn't change the results much and would only clutter the expressions below, so we'll keep the simpler form.

Similarly, let us assume a linear law of demand for demanded throughput $T_d$ as a function of $P$, the price (or fee) of submitting a single transaction:

$$T_d(P) = \beta_0 P + \beta_1,$$

with $\beta_0 < 0$, $\beta_1 > 0$, and $T_d, P \geq 0$. The two functions are plotted in the figure below:

Supplied throughput as a function of network size (left), and demanded throughput as a function of price (right). In terms of the parameters, the maximum values are $T_{s,\max} = \alpha_1$, $N_\max = -\alpha_1/\alpha_0$, $T_{d,\max} = \beta_1$ and $P_\max = -\beta_1/\beta_0$.
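
To make the model concrete, the rest of this post is accompanied by a minimal Python sketch. The parameter values below are purely illustrative assumptions, not calibrated to any real network; the only constraints the model imposes are $\alpha_0, \beta_0 < 0$ and $\alpha_1, \beta_1 > 0$ (the values also satisfy $\beta_1 > \alpha_1$, which will matter later):

```python
# Illustrative parameters (assumptions, not measurements of any real network).
ALPHA_0, ALPHA_1 = -10.0, 1000.0  # supply: alpha_0 < 0, alpha_1 > 0
BETA_0, BETA_1 = -50.0, 2000.0    # demand: beta_0 < 0, beta_1 > 0

def supplied_throughput(n: float) -> float:
    """T_s(N) = alpha_0 * N + alpha_1: falls as validators join."""
    return ALPHA_0 * n + ALPHA_1

def demanded_throughput(p: float) -> float:
    """T_d(P) = beta_0 * P + beta_1: falls as the transaction fee rises."""
    return BETA_0 * p + BETA_1

# The maxima from the figure above.
T_S_MAX = ALPHA_1            # = 1000 transactions per unit time
N_MAX = -ALPHA_1 / ALPHA_0   # = 100 validators
T_D_MAX = BETA_1             # = 2000 transactions per unit time
P_MAX = -BETA_1 / BETA_0     # = 40 fee units
```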

It is also helpful to present supply and demand in the more familiar economics convention:

Throughput versus transaction price. Supplied throughput is perfectly inelastic with respect to price, and shifts with changing network size. The equilibrium throughput $T^\ast$ and price $P^\ast$ are found where the supply curve intersects the demand curve.

At equilibrium, throughput demanded will be equal to throughput supplied, i.e. $T_d(P^\ast) = T_s(N^\ast) = T^\ast$, which allows us to express the equilibrium network size in terms of the price:

$$N^\ast = \frac{\beta_0 P^\ast + \beta_1 - \alpha_1}{\alpha_0}. \tag{*}$$
Let us assume a symmetric equilibrium where all nodes exert the same effort and share transaction fees equally. Then the expected value of a node’s revenue $R$ is calculated as

$$R = \frac{P^\ast T^\ast}{N^\ast},$$

i.e. the total fees collected per unit time, divided evenly among the $N^\ast$ nodes.
Assuming that each node incurs a fixed cost $C$, the expected value of a node’s profit $\Pi$ is equal to

$$\Pi = R - C = \frac{P^\ast T^\ast}{N^\ast} - C.$$
The fee market has no barriers to entry and the product is identical, so we assume perfect competition and solve for the case where profits are reduced to zero:

$$\frac{P^\ast T^\ast}{N^\ast} - C = 0. \tag{**}$$
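
Continuing the sketch, we can locate this zero-profit equilibrium numerically before solving it in closed form. The cost value is another illustrative assumption; `scipy.optimize.brentq` simply finds the fee at which profit crosses zero:

```python
from scipy.optimize import brentq

COST = 5.0  # assumed fixed cost C of running a validator (illustrative)

def equilibrium_network_size(p: float) -> float:
    """Equation (*): N* = (beta_0 * P + beta_1 - alpha_1) / alpha_0."""
    return (BETA_0 * p + BETA_1 - ALPHA_1) / ALPHA_0

def profit(p: float) -> float:
    """Left-hand side of (**): P T / N - C at the price p."""
    return p * demanded_throughput(p) / equilibrium_network_size(p) - COST

# N* is only non-negative above the price at which (*) gives N* = 0,
# so we bracket the zero-profit root between that floor and P_MAX.
p_floor = (ALPHA_1 - BETA_1) / BETA_0
p_star = brentq(profit, p_floor + 1e-9, P_MAX)
print(p_star, demanded_throughput(p_star), equilibrium_network_size(p_star))
# ~39.75, ~12.4, ~98.8 with the parameters above
```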
Solving (*) and (**) together, we obtain the following equilibrium values in terms of the cost:

$$T^\ast = \frac{\alpha_0\beta_1 + C\beta_0 + \sqrt{\Delta}}{2\alpha_0}, \qquad P^\ast = \frac{C\beta_0 - \alpha_0\beta_1 + \sqrt{\Delta}}{2\alpha_0\beta_0}, \qquad N^\ast = \frac{\alpha_0\beta_1 + C\beta_0 - 2\alpha_0\alpha_1 + \sqrt{\Delta}}{2\alpha_0^2},$$

where $\Delta = (\alpha_0\beta_1 + C\beta_0)^2-4C\alpha_0\alpha_1\beta_0$. These values become complex for some range of $C$ if and only if $T_{d,\max} < T_{s,\max}$, so it is better to choose the maximum demanded throughput higher than the maximum supplied throughput.
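
The same equilibrium in closed form. Note the choice of root: of the two solutions of the quadratic in $T^\ast$, only the one below reproduces the limiting behaviour discussed next (same assumed parameters as before):

```python
from math import sqrt

def equilibrium(cost: float) -> tuple[float, float, float]:
    """Closed-form (T*, P*, N*) for a given validator cost C."""
    b = ALPHA_0 * BETA_1 + cost * BETA_0
    delta = b * b - 4 * cost * ALPHA_0 * ALPHA_1 * BETA_0
    if delta < 0:  # only possible when T_d,max < T_s,max (beta_1 < alpha_1)
        raise ValueError("equilibrium is complex for this cost")
    t_star = (b + sqrt(delta)) / (2 * ALPHA_0)
    p_star = (t_star - BETA_1) / BETA_0    # invert the demand curve
    n_star = (t_star - ALPHA_1) / ALPHA_0  # invert the supply curve
    return t_star, p_star, n_star

print(equilibrium(COST))  # matches the numeric root above: (~12.4, ~39.75, ~98.8)
```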

One case of interest is when validator costs are set to zero; in other words, when it is practically free to be a validator:

$$\lim_{C \to 0} N^\ast = -\frac{\alpha_1}{\alpha_0} = N_\max, \qquad \lim_{C \to 0} P^\ast = -\frac{\beta_1}{\beta_0} = P_\max, \qquad \lim_{C \to 0} T^\ast = 0.$$
In this case, maximum network size and price are reached, and throughput drops to zero. Low costs allow more and more validators to join the network, effectively choking it through protocol-based externalities.⁸
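
Evaluating the closed form at $C = 0$ confirms this choked limit under the assumed parameters:

```python
t0, p0, n0 = equilibrium(0.0)
print(t0, p0, n0)    # 0.0, 40.0, 100.0: throughput is zero...
print(P_MAX, N_MAX)  # 40.0, 100.0: ...while price and network size are maximal
```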

The other case of interest is when validator costs approach infinity:

$$\lim_{C \to \infty} N^\ast = 0, \qquad \lim_{C \to \infty} T^\ast = \alpha_1 = T_{s,\max}.$$
This represents the scenario where costs are too high. Network size goes to zero (not 1, due to our simplifying choice of domain), supplied throughput is at its maximum, and the price settles at the smallest possible value, which depends on the ratio of maximum supplied to maximum demanded throughput:

$$\lim_{C \to \infty} P^\ast = \frac{\alpha_1 - \beta_1}{\beta_0} = P_\max \left( 1 - \frac{T_{s,\max}}{T_{d,\max}} \right).$$
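
Sweeping the cost upward in the sketch shows this limit; with the assumed parameters the price floor $P_\max(1 - T_{s,\max}/T_{d,\max})$ evaluates to $40 \cdot (1 - 1000/2000) = 20$:

```python
for cost in (1e2, 1e4, 1e6):
    t, p, n = equilibrium(cost)
    print(f"C = {cost:>9.0f}: T* = {t:8.2f}, P* = {p:6.3f}, N* = {n:7.3f}")
# As C grows, N* -> 0, T* -> T_S_MAX = 1000, and P* -> 20.
```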
We can then show the asymptotic relationships between $N^\ast$, $P^\ast$, $T^\ast$ and $C$.

Validator costs versus network size (left), transaction price (middle), and throughput (right).
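
The three panels can be reproduced by sweeping the closed form over several orders of magnitude of cost (a matplotlib sketch under the same assumed parameters):

```python
import numpy as np
import matplotlib.pyplot as plt

costs = np.logspace(-2, 6, 200)  # sweep C across several orders of magnitude
t, p, n = np.array([equilibrium(c) for c in costs]).T

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, y, label in zip(axes, (n, p, t),
                        ("network size N*", "price P*", "throughput T*")):
    ax.plot(costs, y)
    ax.set_xscale("log")
    ax.set_xlabel("validator cost C")
    ax.set_ylabel(label)
fig.tight_layout()
plt.show()
```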

Or to summarize the results in a single figure,

Network size versus transaction price.

One would expect that transaction fees decrease as more nodes join the network; however, this is not the case due to the externalities.

Conclusion

BFT protocols don’t scale like regular cloud services (e.g. Amazon EC2), due to the message complexity required for consensus. Beyond a certain threshold, new nodes joining the network may actually decrease overall performance and cause externalities. Even worse, this could result in increased transaction fees for users. Where the equilibrium ends up is determined by demand and by the cost of being a validator.

Protocol design and the selection of parameters determine the total number of validators a network can accommodate. If that number is not high enough, it could introduce the risk of centralization and cartel formation.

On the other hand, such hard caps could be alleviated by scalability solutions such as sharding and sidechains. These could enable the type of horizontal scalability that regular cloud services benefit from. In that case, new shards/sidechains would be introduced as existing ones become optimally saturated, trading off monolithic data storage/access for uninterrupted growth potential.

This research is being conducted on behalf of CasperLabs, where we are building the truly decentralized, scalable, next generation Proof-of-Stake network.

  1. Stifter N., Judmayer A., Schindler P., Zamyatin A., & Weippl E., 2018. Agreement with Satoshi – on the formalization of Nakamoto consensus. Cryptology ePrint Archive, Report 2018/400. 

  2. To understand this in simple terms, imagine a miner that has just mined a block. If the set of miners forms a queue with said miner at the front, the miner needs to pass the block only to the next miner in the queue. Thus Nakamoto consensus has $O(1)$ message complexity. 

  3. Lamport L., Shostak R. & Pease M., 1982. The Byzantine Generals Problem. ACM Trans. Program. Lang. Syst. 4, 3 (July 1982), 382-401. 

  4. Dwork C., Lynch N. & Stockmeyer L., 1988. Consensus in the presence of partial synchrony. J. ACM 35, 2 (April 1988), 288-323. 

  5. Castro M. & Liskov B., 1999. Practical Byzantine fault tolerance. In Proceedings of the third symposium on Operating systems design and implementation (OSDI ‘99). USENIX Association, Berkeley, CA, USA, 173-186. 

  6. Amdur E.S., Weber S.M. & Hadzilacos V., 1992. On the message complexity of binary byzantine agreement under crash failures, Distributed Computing (1992) 5: 175. 

  7. At least with deterministic BA. There are also probabilistic BFT protocols. 

  8. The model we used was linear, but the $T_s$–$N$ curve could optimistically be asymptotic in real life. In other words, externalities might not halt the network, but just cause it to run very suboptimally.