Some Crypto Quibbles with Threadneedle Street

Last week the Bank of England published its Quarterly Bulletin, which contained two detailed papers on digital currencies. The Bank deserves credit for writing such a thoughtful review of this space, which was clearly the product of thorough and open-minded research.

One of the two papers, Innovations in payment technologies and the emergence of digital currencies, is noteworthy for pointing out the potential applications of decentralised crypto ledger systems for financial services. Given that I’m co-founder of a new company devoted to such applications, I’m delighted to see that a G10 central bank has the foresight to recognise this. There will be many points of intersection between private sector innovation here and the regulatory mandate of a central bank.

But you, my readers, are interested in the cutting edge of cryptonomics thinking, so I want to instead discuss the Bank’s second paper, The economics of digital currencies, because I take issue with parts of its analysis. In brief, I believe that the authors have incorrectly analysed the cost structure of digital currency systems and, as a result, incorrectly generalise some problems faced by digital currencies like Bitcoin to digital currencies in general.

The Costs of Mining and Transaction Fees

OK, first a quick review of mining. We’ll assume the Bitcoin protocol as our template. The microeconomics of mining are actually quite simple. To win the mining award (currently, a 25 bitcoin “coinbase” award plus the block’s TX fees), you have to solve a hash-based proof-of-work problem, which involves using a machine to compute the double SHA-256 hash of a block of TXs over and over until you hit a value below a certain target, which is defined by the protocol’s current difficulty. Difficulty resets every 2,016 blocks, increasing if the average duration between solved blocks is below 10 minutes and decreasing if it is above 10 minutes. The scheme ensures that the average time between blocks approximates 10 minutes.
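The lottery is easy to sketch in code. Here is a toy version in Python; the header bytes and target are made up for the demo (real Bitcoin hashes a serialised 80-byte block header), but the structure of the search is the same:

```python
import hashlib

def double_sha256(data: bytes) -> int:
    # Bitcoin's proof-of-work hash: SHA-256 applied twice, read as a big integer.
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big"
    )

def mine(header: bytes, target: int, max_tries: int = 2_000_000):
    # Grind nonces until the hash of (header + nonce) falls below the target.
    for nonce in range(max_tries):
        if double_sha256(header + nonce.to_bytes(4, "little")) < target:
            return nonce
    return None

# An artificially easy target so the demo finishes in a moment. Lowering the
# target (i.e. raising difficulty) makes each try less likely to win.
easy_target = 2 ** 244
nonce = mine(b"toy block of TXs", easy_target)
assert nonce is not None  # with P(win) = 2**-12 per try, success is near-certain
```

Difficulty is just a convenient way of expressing how far the target sits below the maximum hash value: halve the target and, on average, you need twice as many tries.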

Now, the probability that a given hash “solves” the problem is precisely defined by difficulty. So, mining profitability is a function of these four variables:

  • Current difficulty
  • The efficiency of mining (converting electricity into hashes)
  • The price of bitcoin (market value of mining award)
  • The price of electricity

The efficiency of mining really boils down to a simple ratio, GHs/kWh: how many gigahashes your hardware can compute per kilowatt-hour of electricity consumed (including the electricity spent cooling the machines, etc.).
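Those four variables pin down a miner’s expected profit. A minimal sketch, with function and parameter names of my own choosing, using the fact that on average one block is won per difficulty × 2³² hashes:

```python
def expected_daily_profit(hashrate_ghs: float, gh_per_kwh: float,
                          difficulty: float, btc_price: float,
                          kwh_price: float, reward_btc: float = 25.0) -> float:
    # Gigahashes of work done per day, and the expected blocks it wins:
    # on average, one block per difficulty * 2**32 hashes.
    gh_per_day = hashrate_ghs * 86_400
    expected_blocks = gh_per_day * 1e9 / (difficulty * 2 ** 32)
    revenue = expected_blocks * reward_btc * btc_price
    electricity = gh_per_day / gh_per_kwh * kwh_price  # the variable cost
    return revenue - electricity
```

The sign of the result flips as difficulty rises or the bitcoin price falls, which is exactly the entry/exit margin discussed below.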

And the dynamic reset of difficulty basically ensures that only those running machines with the highest GHs/kWh ratio and paying the lowest cost per kWh will mine in the long run, as everyone else will be mining unprofitably and drop out of the network. It’s almost a textbook model of perfect competition.

Now, what I think often gets missed here is that the costs of mining bitcoin are entirely a function of the price of bitcoin. If the price of bitcoin goes up, mining becomes profitable and more nodes join the network, which drives up difficulty, making mining unprofitable again. If the price goes down, mining becomes unprofitable at the current difficulty and nodes drop off the network, which drives difficulty down, making mining profitable again.

This dynamic is obscured by the fact that investments in mining hardware have driven GHs/kWh up relentlessly over the last few years, so difficulty rarely declines with bitcoin price, but the dynamic still applies: the price of bitcoin determines the cost of mining.

Failure to appreciate this fact leads to arguments like this one in the BoE paper:

Moreover, to the extent that miners’ expected marginal revenue exceeds their expected marginal costs, miners’ costs are likely to increase over time. This should occur even if no additional people start to mine and independently from any increase in the number of transactions per block. This is because distributed systems involve a negative externality that causes overinvestment in computer hardware. The negative externality emerges because the expected marginal revenue of individual miners is increasing in the amount of computing power they personally deploy, but the difficulty of the problem they must each solve (and hence their marginal cost) is increasing in the total amount of computing power across the entire network. Individual miners do not take into account the negative effect on other miners of their investment in computing resources. Economic theory would therefore suggest that in equilibrium, all miners inefficiently overinvest in hardware but receive the same revenue as they would have without the extra investment.

Can you spot the error? It’s right there in the first sentence: “to the extent that miners’ expected marginal revenue exceeds their expected marginal costs, miners’ costs are likely to increase over time.” This is a fallacy of composition.

What we really have here is the familiar pattern of Knightian uncertainty faced by entrepreneurs. The miner must make a capital outlay in advance for his mining equipment (to get an edge over the competition with new kit delivering a higher GHs/kWh ratio), but he doesn’t know what the price of bitcoin will be once he starts winning blocks, nor does he know what the difficulty will be, which will be a function of Bitcoin price and the capital investment of his competitors.

Boom, bust… it’s all the same: the fixed costs of mining hardware are internalised by the miner; those costs are not an externality, as the authors argue. The cost of mining will be dictated by the price of bitcoin, that is, the market value of the mining award, and the difficulty reset enforces the long-run equilibrium condition whereby hashing costs = market value of the mining award. Even changes in the variable costs of mining (the price of a kWh, basically) don’t change the long-run costs of mining, as an increase in electricity prices should, ceteris paribus, cause difficulty to decline and a decrease should cause it to increase. The cost of mining a block will converge to the market value of the mining award.
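That long-run condition can be solved for difficulty directly. A sketch under my own naming, using the best-in-class efficiency and electricity price, since those are the marginal miners who survive:

```python
def equilibrium_difficulty(btc_price: float, reward_btc: float,
                           best_gh_per_kwh: float, kwh_price: float) -> float:
    # Electricity cost of one block = (difficulty * 2**32 hashes)
    #   / (1e9 * gh_per_kwh) * kwh_price.
    # Set that equal to the award's market value and solve for difficulty.
    return reward_btc * btc_price * best_gh_per_kwh * 1e9 / (2 ** 32 * kwh_price)
```

Note that difficulty scales one-for-one with price: double the bitcoin price and the equilibrium difficulty, and hence the network’s hashing cost, doubles too, which is the sense in which the price of bitcoin determines the cost of mining.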

I’m not an economist, but back in January I suspected that the economics profession would have trouble with this implication of the Bitcoin protocol, when I wrote:

So the exchange value of the mining award determines the marginal costs rather than the other way round. An economist might find that pretty weird, but that is how it works.

And that is how it works. Economists are used to thinking in terms of prices (ephemeral market stuff) being a function of costs (stuff that is “material” and “real”, like a production function). But the way the Bitcoin protocol works, the hashing costs of the network are a function of the mining award’s market value. I’m not saying it’s a nice feature of the protocol. But it is what it is.

And that’s why all that investment in mining equipment is not a negative externality, at least from the perspective of mining costs.

But there is a different way in which capital investment in mining equipment creates an externality, a way that the authors did not address.

Mining Centralisation

If there is a negative externality to the relentless quest to make mining a positive expected value lottery by investing in new gear that increases GHs/kWh, it’s that the number of mining nodes decreases as a result of this process. Mining becomes concentrated in fewer and fewer hands. To run a profitable mining operation, one must run machines with an above-average GHs/kWh and a below-average electricity cost. This is specialised hardware, produced in limited runs and requiring a non-trivial capital outlay.

A network made up of a few large mining nodes is basically a centralised system with none of the benefits centralisation might bring. What Satoshi envisioned was a one-cpu-one-vote distributed system, people mining on commodity CPU or GPU hardware. That’s not what has evolved, and this is a serious problem for Bitcoin and other digital currencies with protocols designed along a similar pattern.

It might be tempting to think that the amortised cost of that hardware somehow gets baked into the cost of mining on the network, as the authors of the BoE paper do, but those costs are only faced by the miner. It may turn out that the ROI of the latest 28nm mining rigs is negative at current prices. Too bad for the miner who purchased one, but he’ll still mine if the variable costs (the electric bill) are less than the expected mining award. The market value of the hardware itself will decline to the point where the ROI is no longer negative.

And this Knightian boom/bust dynamic raises some questions about the future of mining investment. R&D in specialised mining gear can really go one of two ways. The first scenario is that there will continue to be an edge in capital investment in improving GHs/kWh, in which case capital investment in mining will continue to concentrate mining in few hands, a “bad” outcome.

The other scenario is that the gains from further optimisation reach a stage where they are too costly to be worth it, and R&D switches to commoditising the currently most efficient designs, in which case the centralising effects of mining investment go into reverse, a “good” outcome. Whichever way it goes, it is still the case that the mining costs of the network are determined by the market value of the mining award. That’s equilibrium.

So, if there is a negative externality inherent in Bitcoin mining, it is the negative externality of centralisation not of costs.

On the sustainability of low transaction fees

I want to focus on another dimension to this mining cost story. So far we have focused on the role that miners play in hashing blocks. But miners actually do two things: in addition to hashing blocks, they also perform transaction verification. The authors of the BoE paper seem to conflate the two processes:

Low transaction fees for digital currency payments are largely driven by a subsidy that is paid to transaction verifiers (miners) in the form of new currency. The size of this subsidy depends not only on the current price of the digital currency, but also on miners’ beliefs about the future price of the digital currency. Together with the greater competition between miners than exists within centralised payment systems, this extra revenue allows miners to accept transaction fees that are considerably below the expected marginal cost of successfully verifying a block of transactions.

It’s that last sentence I take issue with. The “marginal cost of successfully verifying a block of transactions” is the cost of running the scripts on each TX in the block and verifying the digital signatures. The computational costs here are tiny compared to the cost of hashing the block, which plays no role in TX verification whatsoever. Hashing is there to raise the cost of a Sybil attack, nothing more.

What’s confusing about cryptocurrency is that there are these two different costs, hashing and verification, and two different sources of paying for them: seigniorage (the coinbase award) and TX fees. How the pair of costs and revenues match up is a protocol design consideration.

  Costs                                 | Revenue
  Proof-of-work (SHA-256 hash problem)  | Coinbase (25 bitcoins)
  Transaction verification              | Transaction fees

In the case of Bitcoin, a miner has no control over the size of the coinbase award, but he does control which TXs go into the block he’s currently hashing. So basic economic theory dictates that a miner will include a transaction if and only if the expected value of the TX’s fee is greater than his marginal cost of verifying that transaction. The costs due to proof-of-work do not come into the decision at all.
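The inclusion rule in miniature (the mempool layout and names here are hypothetical; the point is that proof-of-work costs are sunk with respect to this choice and never appear):

```python
def select_txs(mempool: list, marginal_verify_cost: float) -> list:
    # Include a TX iff its fee exceeds the marginal cost of verifying it.
    # Hashing costs play no role in the decision.
    return [tx for tx in mempool if tx["fee"] > marginal_verify_cost]

# A toy mempool: one paying TX, one free rider.
mempool = [{"txid": "a", "fee": 0.0005}, {"txid": "b", "fee": 0.0}]
block_txs = select_txs(mempool, marginal_verify_cost=0.0001)
```

Since verification is cheap, almost any non-trivial fee clears this bar, which is why fees can sit far below the network’s hashing cost without any miner behaving irrationally.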

It makes sense to think of proof-of-work and TX verification as two separate subsystems with their own respective sources of financing: proof-of-work financed by coinbase and transaction verification financed by transaction fees. A digital currency protocol could follow this pattern and some do (Ethereum, for example). Bitcoin, however, is different. Its protocol dictates that the coinbase award halves roughly every four years and that the cumulative supply never exceeds 21m coins, which means that at some point both hashing costs and verification costs must be paid out of TX fees alone.

I’ve pointed out before that this aspect of Bitcoin’s protocol design is self-defeating in the long run. The market for media of exchange will gravitate towards those systems with the lowest transaction costs, and in the case of proof-of-work digital currencies, that means those protocols that forever subsidise hashing costs with the coin’s seigniorage (no supply cap). And even if that were not the case and Bitcoin remained the dominant digital currency, the protocol will need to change to incorporate a mandatory minimum fee that is sufficiently large to incentivise enough hashing to secure the network. I say “mandatory” because there is a collective action problem here in that an individual miner has no incentive to exclude a transaction whose fee exceeds his marginal verification costs, even if the aggregate effect of this rational behaviour is that the total TX fees are insufficient to support a hashing difficulty that secures the network.

Which brings me to the authors’ bleak conclusion:

The eventual supply of digital currencies is typically fixed, however, so that in the long run it will not be possible to sustain a subsidy to miners. Digital currencies with an ultimately fixed supply will then be forced to compete with other payment systems on the basis of costs. With their higher marginal costs, digital currencies will struggle to compete with centralised systems unless the number of miners falls, allowing the remaining miners to realise economies of scale. A significant risk to digital currencies’ sustained use as payment systems is therefore that they will not be able to compete on cost without degenerating — in the limiting case — to a monopoly miner, thereby defeating their original design goals and exposing them to risk of system-wide fraud.

This is only partly right, and partly right for the wrong reasons. First of all, it’s not digital currencies that face this problem, but a subset of them that, like Bitcoin, eventually require TX fees to shoulder the entire burden of incentivising proof-of-work. But that is an accidental rather than essential feature of digital currencies. So, the conclusion is only partly right because it does not apply to protocols that finance hashing costs with a perpetual coinbase award.

And right for the wrong reasons… this sentence “With their higher marginal costs, digital currencies will struggle to compete with centralised systems unless the number of miners falls, allowing the remaining miners to realise economies of scale” is wrong because the authors have conflated hashing and verification costs.

I’m not sure I get the “economies of scale” point in transaction processing systems, but perhaps the authors are thinking of the extreme redundancy that distributed systems require. Transaction verification in a distributed system is redundantly performed by every node, so if there are 5,000 verifying nodes on the system, every TX is verified 5,000 times. Compared to a centralised system that only needs to verify a TX once, it would seem that there is a simple economy of scale linear in the number of nodes in the system.

But a centralised system must do much more than verify TXs; it must do lots of things that nodes on a distributed system do not have to worry about. The centralised system must protect the server(s) against error and attack, as a centralised system is by definition a system with a single point of failure. You don’t have to be a network security expert to appreciate that this is hostile and difficult technical territory. I can’t offer estimates of what these additional costs are, but what I do know is that they are a large multiple of transaction verification costs and involve far more complicated processes. TX verification (parsing the blockchain and doing a bunch of ECDSA signature verifications) is easy and cheap by comparison.

So it is by no means obvious that the total costs of TX verification are lower in a centralised system than in a decentralised or distributed one, and it may in fact be the other way round. But either way, we can say two things with confidence:

  • The costs of distributed TX verification are a small fraction of the fees charged by legacy payment systems. Unlike hashing, this is not a costly computation even when multiplied by a large number of verifying nodes.
  • The costs of distributed TX verification will decline over time with improvements in computational efficiency, bandwidth, etc.

But the one thing that distributed systems must do that centralised systems do not have to worry about is a mechanism for achieving consensus on the authoritative state of the ledger. For Bitcoin and many other digital currencies, this mechanism is hash-based proof-of-work, and it is crucial to appreciate the fact that verification and proof-of-work hashing are separate processes with independent cost functions.

And it may turn out that the proof-of-work blockchain isn’t the best mechanism for achieving consensus anyway. There are other decentralised consensus algorithms used in projects like Ripple, Stellar, and Hyperledger that do not rely on energy intensive hashing problems to achieve consensus.

Proof-of-Work as “manufactured scarcity”

Now that proof-of-work is liberated from the misconception that it is somehow behind TX verification, we can bring some really interesting economic properties of proof-of-work into relief.

As long as there is long-term growth in demand for the coin and the coinbase award is perpetual, seigniorage should be more than sufficient to cover the costs of proof-of-work. That idea alone is, I think, really interesting. Here’s what I mean.

There is a long-standing objection to private fiat money schemes advocated by Hayek and others that goes something like this. Media of exchange are near-substitutes. (This may be a false assumption, but let’s go with it and set aside the economics of network effects, etc.) And the marginal costs of producing the media are almost zero, so if a privately produced fiat money is a success, the seigniorage that accrues to the issuer will be substantial. This will invite more and more competition producing more and more media of exchange. Invoke that near-substitutes assumption and, bingo, privately produced money gets driven down to the marginal cost of its production, which is basically zero. Privately produced money is impossible because of free market competition and the near-zero marginal cost of producing it.

In my opinion, the most important innovation of hash-based proof-of-work isn’t its solution to the problem of distributed consensus, for which there are arguably better solutions. Rather, the real innovation is the way in which this energy intensive defence against the Sybil attack makes the marginal cost of proof-of-work fiat money meaningfully non-zero, refuting the argument above. The scheme’s seigniorage doesn’t really accrue to anyone. Instead, it gets burned up in hashing blocks, where the marginal cost of producing a new set of coins equals the cost of solving the hash problem on the block that brings the new coins into existence. There is no coin “issuer”, scarcity comes into existence ex nihilo.

And this “seigniorage burning” isn’t the complete waste my metaphor might suggest, and an economist might wrongly dismiss as “inefficient”, for it has the side-effect of bootstrapping a solution to the distributed consensus problem and thereby creating a distributed payment system on which the value can be transferred (a coin can’t be scarce if it can be double-spent). After all, shouldn’t seigniorage be spent on a public good? I think that this is conceptually beautiful, and it deserves to be a chapter in the microfoundations of money economics, whatever its ultimate fate ends up being. It is the first credible scheme for rationing the supply of privately produced fiat currency.

The dominant narrative to-date has been that digital currencies like Bitcoin have value because of the utility of the distributed payments system combined with an (eventually) fixed coin supply. I think that the latter belief is unfounded. It’s not the fixed supply of a coin that makes it scarce, but rather the marginal cost of producing the coin that makes it so.

There can be demand for coin because of the expectation that it will be demanded more in future and therefore increase in price (speculative demand), and there can be demand for coin because you want to hold a coin balance to facilitate transactions (transactional demand). A coin may embody demand from both sources, but the former implies the latter, or else the coin’s value rests on some sort of “greater fool” phenomenon.

The entire history of monetary thinking can probably be told from the perspective of the tension between these two sources of demand, the tension created by a single object embodying the properties of both store-of-value and medium-of-exchange. The pursuit of purchasing power stability in this object isn’t some hubristic policy ideal like the taming of the business cycle or full employment. It is intrinsic to the very idea of money.

A volatile medium-of-exchange is a poor medium-of-exchange, and it is almost inconceivable that a free market would ever converge on a unit of account where the numeraire of all exchange was among the most volatile of assets. Just consider the cost of extracting relative prices in that scenario! We’d have to develop an alternative unit-of-account, which is another way of saying the market would never select such a coin as the unit-of-account in the first place. Trade requires a reliable measuring stick.

This is my favourite paragraph from the BoE paper:

In order to address a need to respond to variation in demand, a more flexible rule would be required. For example, the growth rate of the currency supply could be adjusted to respond to transaction volumes in (close to) real time. Alternatively, a decentralised voting system could be developed. Finally, variant schemes could embrace existing monetary systems by seeking to match official broad money data or to target a fixed exchange rate, although this would require the abandonment of part of the schemes’ original ideology.

A more flexible money supply rule behind digital currencies is required. After all, even commodity money has a somewhat elastic supply function. If the price of gold hovers above the marginal cost of pulling gold out of the ground, more gold supply will hit the market. A digital currency with a deterministic money supply function is not a feature but a limitation of early designs. And a capped supply function like Bitcoin’s is a bug on microeconomic grounds alone, as we discussed above.

But there’s no reason to jump to the conclusion that the “original ideology” must be abandoned in order to implement certain stability schemes. For example, the coinbase award could be made a function of difficulty deflated by the change in GHs/kWh (improvements in hardware efficiency). Such a scheme would keep the coin price of a kWh roughly constant: a fixed exchange rate without abandoning the “original ideology”. OK, it’s not a complete proposal, but you get the point. We’ve barely scratched the surface of this technology.
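A sketch of that award rule, with baseline constants and names of my own invention. Because the kWh burned hashing a block scales with difficulty divided by efficiency, an award proportional to the same ratio keeps coins-per-kWh, and hence the coin price of a kWh, constant:

```python
def stabilised_coinbase(base_award: float,
                        difficulty: float, base_difficulty: float,
                        gh_per_kwh: float, base_gh_per_kwh: float) -> float:
    # Scale the award with difficulty, deflated by hardware-efficiency gains.
    return (base_award * (difficulty / base_difficulty)
            / (gh_per_kwh / base_gh_per_kwh))

def kwh_per_block(difficulty: float, gh_per_kwh: float) -> float:
    # Energy burned per block: difficulty * 2**32 hashes at the given efficiency.
    return difficulty * 2 ** 32 / (1e9 * gh_per_kwh)
```

Under this rule the ratio of award to energy spent is invariant: quadruple difficulty while efficiency doubles, and coins paid per kWh consumed come out the same.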

One of the (many) ways in which fiat money is weird and counter-intuitive is how it has value in the first place. The stock and bond markets have value because of the NPV of expected future income flows. But the aggregate value of the money stock is like value created out of nothing. It’s the value of pure liquidity.

So I want to offer a variation on the trust-less theme here: nobody can be trusted with the seigniorage generated from this value.