Response to Buterin’s Criticism of my Proof-of-Stake Piece

I was pleased that a few days ago, Vitalik Buterin responded to my “Critique of Buterin’s ‘A Proof of Stake Design Philosophy’”.

I counter his points one by one below, and remain unconvinced that proof-of-stake has a solid philosophical foundation as a standalone security mechanism for public blockchains.

TD: This statement is misleading, because he is really only talking about what a 51% attacker could do to the very last blocks in the blockchain.

VB: No, I’m talking about what a 51% attacker can do to *any* block in the blockchain. The great majority of costs of mining are capital costs, not operating costs; last time I did the math the ratio was something like 3:1. So if an attacker has the capital to do one attack on six blocks, they are 75% of the way to being able to do attacks on years of history.

Thanks, I think I better understand your general argument now. Let’s look at the quote from your article that I suggested was misleading:

VB: “Hence, the size of the mining network has to be so large that attacks are inconceivable. Attackers of size less than X are discouraged from appearing by having the network constantly spend X every single day. I reject this logic because (…) it fails to realize the cypherpunk spirit — cost of attack and cost of defense are at a 1:1 ratio, so there is no defender’s advantage.”

You see, to me you seemed to be using the fact that the _daily_ defense cost has to exceed the attacker’s _daily_ hashing expense to argue that the _general_ cost/defense ratio is 1:1. I think this is a minor detail though; my main argument is that it is meaninglessless to talk about an abstract, general cost/defense ratio at all.
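To make the capital-versus-operating-cost argument concrete, here is a quick arithmetic sketch of the 3:1 ratio Vitalik cites. All figures are illustrative placeholders, not measured data:

```python
# Illustrative arithmetic for the claimed 3:1 capital-to-operating cost ratio.
# The 3.0 and 1.0 shares are hypothetical, chosen only to match the quoted ratio.

capex = 3.0   # capital (hardware) share of total mining cost
opex = 1.0    # operating (electricity) share

total = capex + opex
capex_fraction = capex / total
print(f"Capital share of a short attack's cost: {capex_fraction:.0%}")

# The implication: once the hardware is bought (the large capital share),
# extending a six-block attack into a rewrite of long stretches of history
# only adds operating cost, the remaining smaller share.
```

Under these assumed numbers the capital share comes out to 75%, which is where the “75% of the way” claim comes from.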

TD: After the multi-billion mining equipment acquisition costs, the cost of running the Bitcoin network for 200 days would be over $700 million (7.5 TWh at 10 cents/KWh).

VB: Ok, seems like we agree on the above ratio.

I don’t necessarily agree that three quarters of the cost of a 51% attack will consist of purchasing the ASIC equipment, and only one quarter of operating it. There’s a long history of nationalization showing that after governments (IMO the most likely attacker) take over, operational efficiency tends to drop sharply. One study estimates that oil mining companies see their profitability drop by over 50% post-nationalization. So that $700 million may be a very low estimate, as the network may operate at much lower efficiency post-takeover.
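The $700-million figure quoted above can be checked with a back-of-the-envelope calculation from the numbers given in the quote itself (7.5 TWh over 200 days at 10 cents/kWh):

```python
# Back-of-the-envelope check of the quoted 200-day operating cost.
twh = 7.5                # energy consumed over 200 days, in terawatt-hours
kwh = twh * 1e9          # 1 TWh = 1e9 kWh
price_per_kwh = 0.10     # USD per kWh, as quoted
cost = kwh * price_per_kwh
print(f"${cost/1e6:.0f} million")
```

This yields $750 million, consistent with the “over $700 million” in the quote.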

TD: there will always be a tug of war between attackers and defenders — no matter which security mechanism one uses. To speak of a cost/defense ratio of 1:1 is quite meaningless in my opinion.

VB: How so? In order to have the $2b cost of attack, it was necessary for bitcoin miners to have burned substantially more than $2b worth of resources. That’s a 1:1 ratio (in fact, worse than 1:1), which is very meaningful. With PoS we can have a tug of war where the defenders have a 10:1 advantage, or better.

I’m glad you ask about this, though I do think I explained the economic phenomenon of cost quite clearly. It is true that under proof-of-work, the minimum economic cost to execute a 51% attack can be reasonably approximated. However, that does not mean that the attack/defense cost ratio is 1:1 or less; it merely means that the vulnerability of the system is to some extent known.

Under proof-of-stake one can point at the architecture and argue that the cost of attack was _designed_ to be X, or X relative to the Y investment of honest operators, but that doesn’t mean this ratio will hold. One entire avenue of PoS attack vectors consists, for example, of scenarios where attackers borrow the ETH with which they attack the chain, profiting from a plummeting price and thus offsetting most or all of the designed “punishment”.

The argument I provided in my article is that PoS is effectively obfuscated PoW, so its defense cost is not different or lower (a 10:1 improvement over Bitcoin seems baselessly optimistic to me); it is simply unknown.

TD: Or the attacker can strategically target a huge amount of users, making sure to only inflict a small amount of financial damage per user — so that the cost per individual to rally against the attacker is higher than the loss incurred by the attack.

VB: This is not feasible. You’re talking about a 4-month-long chain reversion here; there is no way to make that inflict only a “small amount of financial damage”. This is a consequence of the inherent “all-or-nothing” property of a blockchain.

I’m not sure I follow. Yes, a resource-based attack that rewrites a big chunk of blockchain history will likely be very disruptive and affect the (dollar-denominated) value of the tokens, but in terms of the tokens themselves, the attacker could restrain himself and confiscate only 1–5% of each balance, for example, so that for each individual saver it would likely still be worthwhile to keep using the system. A confiscatory strategy of this sort was suggested recently by a Siacoin creator, where the idea was to “hardfork 1 or 2 billion new coins directly into our wallets and then selling like a million a day to fund our operations”. With a total supply of 28 billion coins, this attack would dilute the holdings of Siacoin savers by roughly 3.5% to 7%.
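The dilution range in the Siacoin example follows from simple arithmetic. A slightly stricter accounting, which measures existing holders’ loss against the enlarged post-mint supply, lands in the same ballpark:

```python
# Rough dilution arithmetic for the Siacoin example (illustrative only).
supply = 28e9                 # approximate existing coin supply
for minted in (1e9, 2e9):
    # fraction of the network's value existing holders lose after the mint
    share_lost = minted / (supply + minted)
    print(f"{minted/1e9:.0f}B coins minted -> existing holders diluted ~{share_lost:.1%}")
```

This prints roughly 3.4% and 6.7%; the simpler ratio of new coins over existing supply gives 3.6% and 7.1%, matching the ~3.5% to 7% range above.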

TD: they often entirely disagree on how it should be dealt with

VB: Sure, but in the case of a long range attack the way to deal with it is incredibly clear: follow the chain that showed up earlier, and not the chain that showed up later.

It could be argued that Bitcoin’s largest mining company Bitmain, with its support of Segwit2X and its BitcoinABC fork blueprint, is executing a long range attack on Bitcoin in an attempt to change its economic model. Yet I’ve never seen you argue that the solution is incredibly clear, and that the community should simply follow the legacy chain. Again, my point is that community consensus is an impractical and undesirable mechanism to rely on for public blockchain robustness. In this thread you seem to agree with that: “I’m not convinced ‘rough consensus’ is a sustainable governance model in genuinely adversarial environments. No country runs on it.”

TD: but what we can not count on is the idealistic concept of social consensus.

VB: Then how do you know what software to run when running a full node?

I don’t need social consensus to choose which Bitcoin client to run; there’s no imperative. I decide whether I trust Wladimir van der Laan, and I can choose to download software signed by his release signing key. If many others do that as well, my client will connect to theirs in a network. This is what reddit user deviatefish calls ‘opt-in governance’. He contrasts it with Ethereum’s ‘opt-out governance’, where users are confronted with a stream of non-backward-compatible changes in the form of hard forks: “Making changes opt-out is a huge strategic advantage if your goal is centralized control over a network. It flips the stakes from ‘making changes important enough to garner support of the entire network’ to ‘making changes that aren’t controversial enough to alienate a significant portion of the network.’ By negating the incentive structure like that, so long as any one change doesn’t alienate a significant minority of the network, it will be accepted with little to no risk to the health of the network.”

As I argued in my article, public blockchains shouldn’t rely on collective decision-making for their security. Of course aggregated individual decisions can result in positive network effects, but that’s not my point.

If you want to argue that Bitcoin somehow runs on social consensus, then you should at least make the distinction that it’s the ‘opt-in’ variant of social consensus, and perhaps also acknowledge that Ethereum/PoS runs on the much more centralization-friendly ‘opt-out’ variant.

TD: To my knowledge, proof-of-stake has no equivalent applications in either human history or biology.

VB: Government agencies use security deposits in various contexts all the time. Money transmitter and banking license surety bonds are one example. Bail bonds are another. The use of hostages in various forms of negotiation throughout history is a third.

Thanks, very interesting examples of “putting up economic value-at-loss”. Still, I would argue that all these applications imply an external source of trust, making them invalid for our purposes.

1/ Surety bonds

As per Wikipedia, a surety bond is defined as a contract among at least three parties: 1) the obligee, the party who is the recipient of an obligation; 2) the principal, the primary party who will perform the contractual obligation; 3) the surety, who assures the obligee that the principal can perform the task.

So the surety is the outside source of trust required here. You can argue that PoS in blockchains tries to blend the obligee and the surety into one, but that doesn’t make this example a valid application of trustless PoS in other contexts.

2/ Bail bonds

In a bail bond, money is deposited in favor of a court to ensure the accused returns to trial. Again, a trusted third party (the court) is involved to hold the funds.

3/ Use of hostages in various forms of negotiations

This is another form of bail bond, whereby the asset entrusted to the counterparty is non-fungible and assumed to be very valuable to the hostage-giver: an actual person who’s part of your clan, often a family member. Hostageship was used throughout history to validate peace treaties and guarantee political alliances.

‘Les Otages’ (Hostages), Jean-Paul Laurens, 1896. The painting is speculated to represent either the ‘Princes in the Tower’, the only two sons of Edward IV, who were held hostage in London and later likely murdered, or to be inspired by the imprisoned protagonist held by the Spanish Inquisition in Edgar Allan Poe’s The Pit and the Pendulum.

In the hostage contract, the external source of trust is the hostage-giver: the assumption is that he values the life and health of the hostages more than the potential benefits of breaking the political treaty. Of the three examples, this one requires the least trust in a third party because, as I argued, the asset entrusted is unique (non-fungible) and valuable. Under a bonded proof-of-stake system, neither needs to be true: the bonded tokens are highly fungible, and a potential attacker will likely care very little about the individual Ether tokens he stands to lose if the protocol punishes him. There are many ways for him to hedge the risk of getting caught, for example by taking on an equally large short position against ETH.
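A toy model illustrates how a short position can offset the designed punishment. Every number here is hypothetical, chosen only to show the mechanism:

```python
# Toy model of a hedged PoS attacker. All figures are hypothetical.
stake = 1_000_000           # USD value of the bonded stake at the current price
slash_fraction = 1.0        # assume the protocol burns the entire deposit
price_drop = 0.40           # assumed ETH price fall caused by the attack
short_notional = 2_500_000  # USD notional of the attacker's short position

# The slashed deposit is valued at the post-crash price.
slash_loss = stake * slash_fraction * (1 - price_drop)
short_gain = short_notional * price_drop

net = short_gain - slash_loss
print(f"Net attacker P&L: ${net:,.0f}")
```

With these assumed numbers the attacker nets a profit despite being fully slashed; the “economic punishment” only binds if the attacker cannot hedge.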

Let’s compare these examples to the handicap principle I brought up as a PoW analogue from Darwinian evolution. Organisms operating under the handicap principle don’t require an outside source of trust. It’s enough for the validator (in nature, e.g., the peahen; in cryptocurrency, the software client) to run the data through its recognition algorithm and verify that the minimum work threshold has been exceeded. Of course, in nature the recognition algorithms can be fooled, for example by humans, but with cryptography, proof-of-work verification can be made nearly watertight.
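A minimal sketch of what “verifying the work threshold” means in practice. The function name and the toy header are my own; the structure (double SHA-256 compared against a difficulty target) follows Bitcoin’s scheme:

```python
import hashlib

def meets_work_threshold(header: bytes, difficulty_bits: int) -> bool:
    """Check that a header's double-SHA256 hash falls below a target.
    This is the whole 'recognition algorithm': one hash and one comparison."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    value = int.from_bytes(digest, "big")
    target = 2 ** (256 - difficulty_bits)
    return value < target

# A verifier needs no trusted third party: anyone can re-run the check.
# Grinding a nonce for a small 16-bit threshold takes ~65,000 hashes on average.
nonce = 0
while not meets_work_threshold(b"example-block" + nonce.to_bytes(8, "big"), 16):
    nonce += 1
print(f"found nonce {nonce} satisfying a 16-bit work threshold")
```

The asymmetry is the point: producing the proof takes many hash attempts, while checking it takes exactly one.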

TD: A PoW 51% attacker can significantly slow down the network, but even a single attempt to revert historical transactions requires a huge and long-running expense.

VB: A cost at best equal to, and in most real-world cases substantially less than, the cost paid by legitimate miners to create the blockchain. PoS can achieve a much more favorable ratio.

See my earlier comments above. Your claim that “PoS can achieve a much more favorable ratio” is an argument from authority, unless you can point me to a working public blockchain that operates on pure bonded proof-of-stake, and to pentesting evidence of its security.

TD: SolidX’s Bob McElrath makes the point that the strategy of ‘economic punishment’ of attackers is moot if the punishment itself can be forked away.

VB: Sure, though if a chain censors punishments, then *that chain* can once again be forked away, much like Bitcoin Core developers advocate changing the proof of work in response to 51% miner coalitions censoring everyone else’s blocks.

A.k.a. Proof of Vitalik. Someone has to decide which of the 10,000 forks I create is valid. I guess it’s Vitalik. (These are McElrath’s own words; I reached out to him for a response.)

TD: Another criticism of bonded PoS, as recently voiced by BitTorrent creator Bram Cohen, is the question of how one prevents honest stakers from being tricked into interacting with the network in a way that triggers the punishment that is supposed to protect them. (Think of it as the crypto equivalent of large-scale swatting.)

VB: This is not possible; a validator cannot lose their security deposit unless they violate one of a set of slashing conditions, and compliance with these conditions can be verified client-side.

Validators can also be penalized for appearing to be offline, but the algorithm is designed so that causing others to lose a large amount of money also requires the attacker to lose a similarly large amount of money. (…)

I reached out to Bram Cohen for a response. He said that “If they take that track then in the event of a fork the system can get frozen with nobody willing to take the hit for being ‘wrong’.” When someone pointed to Vitalik’s “Minimal Slashing Conditions” article, he added that “all these layers of crap add complexity and make the attacks harder to explain but don’t seem to fix the underlying problems.” I think Cohen is right, in that a maze of rules to prevent abuse will likely result in a much fuzzier definition of what abuse really is, the interpretation of which can then be delegated to bureaucratic power brokers, an Ethereum politburo if you will.

TD: An alternative attack scenario, suggested by Galois Capital’s Kevin Zhou, is one where the attacker tricks enough honest people onto his network that it becomes in these honest people’s interest to support the attacking chain as the true chain.

VB: Except that if this attack succeeds at reverting finality, it still costs the attacker a really huge amount of money.

Again, as McElrath argues, the attack is only expensive if the attacker gets slashed; the attacker can choose to fork away the punishment as well. And then again, some developers can launch a counter-attack. And the attacker can strike again. Eventually a source of external trust is needed to decide which chain is the truthful chain: proof-of-Vitalik / proof-of-EthereumFoundation.

Economist & investor. Mainly Bitcoin.
