In its next iteration, dubbed Ethereum 2.0 or Serenity, Ethereum will move from Proof-of-Work to Proof-of-Stake, a major change in how the network's nodes are incentivized to participate. This is an analysis of the proposal that is going to be implemented as part of Phase 0. It is based on the following published work:

1. The Beacon Chain specification, which started on HackMD and later migrated to GitHub.
2. The Beacon Chain PoC implemented in Python.
3. An initial version of validator economics by Eric Conner.
4. A discussion at ETH Research.

This is a work in progress and will most probably change in the coming months. I won't concentrate on the profit and loss of validators, but rather capture the definition of the underlying model. I hope this will clarify how validator deposits, total deposits, total supply, issuance rate and validator interest relate to each other.

As the first globally successful cryptocurrency, Bitcoin provided the starting point for the economic design of the PoS systems to come. Its model was really simple: rewards are issued at a constant rate and distributed randomly to miners. Bitcoin's creators aimed to minimize non-technical governance from the very beginning, so they capped the supply by having the block rewards halve every 4 years.

Ethereum has a different approach to monetary policy. Issuance of ETH is not decided by a priori consensus, but by BDFL-style governance by the core devs, like other aspects of the project. When you buy ETH, you implicitly agree that the core devs, whose livelihoods depend on ETH, are going to act in their best interest and make the decisions that maximize ETH's value, at least until a more efficient and democratic method of governance is developed.

In Ethereum 1.0, the issuance rate started as a constant 5 ETH per block, and was reduced first to 3 ETH and then to 2 ETH in subsequent hard forks. In Ethereum 2.0, the protocol will be able to keep track of the total stake, which makes different mechanisms possible: rewards minted per epoch will change with the total staked ETH. I will refrain from making absolute statements about the numbers, because they changed even while I was writing this. The general idea is to target a reference validator return rate of A% when B ETH is staked. If less than B ETH is staked, the return rate will be greater than A%, and vice versa. This gives validators an extra incentive to stake following the launch, and also ensures that the total stake never becomes too low.

The rule of thumb the core devs seem to be using is to target a roughly 2.5~3% annual return rate at the optimal amount of ETH staked. This optimal amount was previously thought of as about 10% of the supply, but seems to have increased to 30% with recent feedback from the community. Notice how the projected return rates have more than doubled since the first proposal (the current ETH supply is 105M):

| ETH validating | Max annual issuance % (first) | Max annual issuance % (current) | Max annual return rate (first) | Max annual return rate (current) |
|---------------:|------------------------------:|--------------------------------:|-------------------------------:|---------------------------------:|
| 1,000,000      | 0.08%                         | 0.17%                           | 8.02%                          | 18.10%                           |
| 3,000,000      | 0.13%                         | 0.30%                           | 4.63%                          | 10.45%                           |
| 10,000,000     | 0.24%                         | 0.54%                           | 2.54%                          | 5.72%                            |
| 30,000,000     | 0.42%                         | 0.94%                           | 1.46%                          | 3.30%                            |
| 100,000,000    | 0.77%                         | 1.71%                           | 0.80%                          | 1.81%                            |
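To see how the columns relate, note that the return rate is just the annual issuance spread over the staked ETH. A quick sanity check against the table (the 105M supply figure is taken from above; small discrepancies come from rounding in the table):

```python
def annual_return_rate(issuance_fraction: float, total_supply: float, staked: float) -> float:
    """Return rate earned by stakers when `issuance_fraction` of the
    total supply is minted per year and paid out to `staked` ETH."""
    return issuance_fraction * total_supply / staked

# Current proposal, 100M ETH staked: 1.71% issuance of a 105M supply
# spread over 100M staked ETH gives roughly the 1.81% in the table.
rate = annual_return_rate(0.0171, 105e6, 100e6)
```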

What is more interesting is the “sliding scale” model of reward issuance seen above. A look at the Beacon chain spec reveals how this is achieved:

```python
def get_base_reward_from_total_balance(state: BeaconState, total_balance: Gwei, index: ValidatorIndex) -> Gwei:
    if total_balance == 0:
        return 0
    adjusted_quotient = integer_sqrt(total_balance) // BASE_REWARD_QUOTIENT
    return get_effective_balance(state, index) // adjusted_quotient // 5
```
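To see the integer arithmetic in action, here is a self-contained sketch. The harness is my own: the `BASE_REWARD_QUOTIENT` value is illustrative, since the constant has changed between spec drafts, and `integer_sqrt` follows the spec's Newton's-method helper.

```python
def integer_sqrt(n: int) -> int:
    """Largest x with x*x <= n, computed via Newton's method (as in the spec)."""
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

BASE_REWARD_QUOTIENT = 2 ** 5  # illustrative value, not the final constant

def base_reward(effective_balance: int, total_balance: int) -> int:
    """Per-epoch base reward in Gwei, scaling with 1/sqrt(total_balance)."""
    if total_balance == 0:
        return 0
    adjusted_quotient = integer_sqrt(total_balance) // BASE_REWARD_QUOTIENT
    return effective_balance // adjusted_quotient // 5
```

Quadrupling `total_balance` roughly halves each validator's reward, which is exactly the square-root scaling discussed next.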


Rewards are scaled with the square root of the total ETH staked. Specifically, per validator reward has the form

$r = \frac{c}{\sqrt{S}}$

for a single epoch, where $c$ is a constant and $S$ is the total ETH staked. This results in a system where the validator return rate decreases as more ETH is staked, while the total issuance rate increases.

In fact, this behavior holds for every reward function $r = c S^{-\alpha}$ where $0<\alpha<1$. See my previous blog post for more details on such models, and how different parameters affect the economic equilibrium.
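A minimal numerical sketch of this family of reward curves (the constants here are arbitrary, chosen only for illustration):

```python
def per_validator_return(S: float, c: float = 181.0, alpha: float = 0.5) -> float:
    """Per-validator return rate r = c * S^(-alpha)."""
    return c * S ** -alpha

def total_issuance(S: float, c: float = 181.0, alpha: float = 0.5) -> float:
    """Total rewards paid per period: S * r = c * S^(1 - alpha)."""
    return S * per_validator_return(S, c, alpha)

# For any 0 < alpha < 1, staking more ETH lowers each validator's
# return while raising the network's total issuance.
```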

One of the reasons for choosing such a model is to mitigate the chicken-and-egg problem every nascent network faces. Another is to minimize the profit that can be obtained from censoring minorities, as demonstrated by Buterin [1].

## Economics of Continuous Griefing

Continuous griefing by a majority coalition is one of the worst types of attacks, where messages from a minority are continuously ignored by the coalition, effectively censoring them. The fault in this case is non-attributable, since the protocol can’t observe whether the minority is being censored or simply offline. Since it’s not possible for the protocol to determine the attacker, the only remaining option is to penalize everyone collectively. Therefore in Ethereum 2.0, rewards will be scaled down linearly with decreased participation as observed by the protocol.
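This collective penalty can be sketched as a simple linear scaling of the base reward by the participation the protocol actually observes (the function name and signature here are mine, not the spec's):

```python
def participation_scaled_reward(base_reward: int, participating_balance: int, total_balance: int) -> int:
    """Scale rewards linearly with observed participation, so everyone
    is penalized when part of the validator set appears offline."""
    return base_reward * participating_balance // total_balance

# If only half the stake is observed participating, everyone's reward halves.
half = participation_scaled_reward(1000, 50, 100)
```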

The metric to take into account here is the griefing factor, defined by Buterin [1] as follows: a factor of $n$ means that the attacker can sacrifice \$1 of their own funds to cause \$n of losses to the victim. For a linear scaling of the rewards, he calculates the griefing factors as 3 for the smallest minority and 2 for the biggest minority.

Although griefing is penalized by the protocol, the attack has a side effect: since there are now fewer active validators, per-validator rewards actually increase a bit; call this increase the recovery. If the recovery is higher than the cost of the attack, then griefing can still be profitable.

Let us now formulate the economics in terms of the number of validators $N$ instead of staked ETH; the two are roughly interchangeable from the protocol's point of view, since each validator stakes a fixed-size deposit. Let the per-validator reward have the form

$r = c_1 N^{-\alpha}$

which is analogous to a demand curve. Then let the number of validators that would economically exist for a given per validator reward have the form

$N = (r/c_2)^{1/k}$

which yields the inverse relation $r = c_2 N^k$. This is analogous to a supply curve. Here, $c_1$ and $c_2$ are constants and $k$ is an external parameter governing the economics of the network.
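Setting the demand and supply curves equal, $c_1 N^{-\alpha} = c_2 N^k$, gives the equilibrium validator count $N^\ast = (c_1/c_2)^{1/(\alpha+k)}$. A sketch of this computation (the constants are arbitrary illustrations):

```python
def equilibrium_validators(c1: float, c2: float, alpha: float, k: float) -> float:
    """Solve c1 * N^(-alpha) = c2 * N^k for N."""
    return (c1 / c2) ** (1.0 / (alpha + k))

# Example: alpha = 0.5, k = 1 gives N* = (c1/c2)^(2/3).
n_star = equilibrium_validators(1000.0, 1.0, 0.5, 1.0)
```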

Griefing will cause rewards to decrease for everyone, pushing down the demand curve. Given that the demand curve is pushed down by $\epsilon$, the figure below demonstrates how to calculate the recovery as a fraction of $\epsilon$:

Some simplifications made by Buterin [1] do not actually hold exactly, but for the given relations the recovery indeed simplifies to

$\frac{\alpha}{\alpha + k} \epsilon$

locally at the intersection point $N^\ast = (c_1/c_2)^{1/(\alpha+k)}$.
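For completeness, here is how the recovery fraction falls out of a first-order expansion around the equilibrium (my own sketch of the derivation). Shifting the demand curve down by $\epsilon$ gives the new equilibrium condition

$c_1 N^{-\alpha} - \epsilon = c_2 N^k.$

Writing $N = N^\ast(1+\delta)$ and using $c_1 (N^\ast)^{-\alpha} = c_2 (N^\ast)^k = r^\ast$ yields, to first order in $\delta$,

$r^\ast(1 - \alpha\delta) - \epsilon = r^\ast(1 + k\delta) \quad\Rightarrow\quad \delta = -\frac{\epsilon}{r^\ast(\alpha+k)}.$

The new equilibrium reward is therefore

$r' = c_2 N^k \approx r^\ast(1 + k\delta) = r^\ast - \frac{k}{\alpha+k}\epsilon,$

i.e. rewards fall by only $\frac{k}{\alpha+k}\epsilon$ instead of the full $\epsilon$, leaving a recovery of $\frac{\alpha}{\alpha+k}\epsilon$.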

Assuming the worst-case scenario allows us to simplify further:

Griefing factors are highest when the attacker controls exactly half of the validators. This is convenient, because it means the attacker and victim sets are the same size, so the griefing factor is also the ratio of the losses in average rewards on each side.

Then we assume that the victims lose $\epsilon$ and the attacker loses $\epsilon/F$, where $F$ is the griefing factor. To ensure that griefing is unprofitable, the recovery should be smaller than the attacker's losses, $\epsilon\frac{\alpha}{\alpha + k} \leq \epsilon\frac{1}{F}$, yielding the condition

$\alpha \leq \frac{k}{F-1}.$

The griefing factor for 51% censoring the rest was 2, but Buterin [1] assumes a factor of 3, presumably for additional safety. The parameter $k$ is determined partly by external factors and partly by design choices; Buterin [1] assumes a linear relationship $k=1$. This is interpreted as "an increase of $c_2$ in per-validator reward will always result in one more validator joining the validator set". Substituting these values, we obtain the condition

$\alpha \leq \frac{1}{2}.$
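The condition is easy to explore for other parameter choices with a tiny helper (hypothetical, for illustration only):

```python
def max_safe_alpha(k: float, F: float) -> float:
    """Largest alpha for which the griefing recovery cannot exceed
    the attacker's own losses: alpha <= k / (F - 1)."""
    return k / (F - 1)

# Buterin's assumptions (k = 1, F = 3) give the 1/2 bound above;
# the less conservative F = 2 would allow alpha up to 1.
bound = max_safe_alpha(1.0, 3.0)
```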

This provides additional justification for the choice of $\alpha = 1/2$ for Ethereum 2.0. Furthermore, it shows that PoS networks with $\alpha = 1$ are more susceptible to censorship.

This research is being conducted on behalf of CasperLabs, where we are building the truly decentralized, scalable, next generation Proof-of-Stake network.

### Reference

1. Buterin V., Discouragement Attacks, 16.12.2018.