The Marginal Cost of Cryptocurrency

I would count myself among those who believe that cryptocurrencies could do to finance what TCP/IP did to communications. Yet I also believe that Bitcoin and most of its current variations suffer from a fatal economic design flaw that will not survive the evolution of cryptocurrencies. That flaw is logarithmic money supply growth, and in this post I will explain why. My argument is a microeconomic analysis of cryptocurrency and has nothing to do with the much debated “deflationary bias”. As far as I am aware, the argument in this post has not been made before.

In a recent post Tyler Cowen adapts an old chestnut of the money literature to cryptocurrencies. Cowen’s argument is based on some false assumptions, but it has the virtue of starting from the right microeconomic principles, so it’s an excellent point of departure.

Once the market becomes contestable, it seems the price of the
dominant cryptocurrency is set at about $50, or the marketing costs
faced by its potential competitors. And so are the available rents on
the supply-side exhausted.

There is thus a new theorem: the value of WitCoin should, in
equilibrium, be equal to the marketing costs of its potential
competitors.

In defence of the dominant cryptocurrency, Bitcoin, one might accept this argument yet object to its pessimistic conclusions by pointing out that Cowen is ignoring powerful network externalities. After all, coins residing on different blockchains are not fungible with one another. Cowen seems to treat these things as if they’re near substitutes, but maybe it’s a Visa/Mastercard sort-of-thing.

I want to dismiss this objection right away. We should be sceptical of ambitious claims about the network externalities of any one cryptocurrency. The network externalities of established fiat currencies are, of course, enormous, but this is largely due to their having a medium-of-account (MOA) function (to say nothing of legal tender and taxation), as well as a medium-of-exchange (MOE) function. Transactions settled in a cryptocurrency consult the exchange rate at the time of settlement and therefore piggy-back off the numeraire of an established fiat currency. Cryptocurrencies are not MOA; they are MOE only.

And given the near-frictionless fx between cryptocurrencies themselves, it’s not difficult to imagine a payment front-end for routine payees like on-line retailers that accepts a wide range of currencies as MOE. And multi-coin wallet software for payers is a no-brainer.

So, for the sake of argument, I’m going to assume that the network externalities of any given cryptocurrency are close to zero. On Cowen’s analysis, this would imply that the marginal cost of cryptocurrency is near-zero. And this means:

Marginal cost of supply for the market as a whole is perhaps the
(mostly) fixed cost of setting up a new cryptocurrency-generating
firm, which issues blocks of cryptocurrency, and that we can think of
as roughly constant as the total supply of cryptocurrency expands
through further entry. In any case this issue deserves further
consideration.

This is a long-standing objection to the workability of competitive, privately issued fiat currencies: the cost structure of their production cannot be rationalised with their value. A market of competing fiat currencies with “stable” purchasing power will generate too much seigniorage for their issuers, inviting more competition until the purchasing power of these media falls in line with their cost of production.

If we can’t lean on the economics of network externalities, what’s wrong with this argument?

The marginal cost of new coins is the cost of hashing a block

First of all, Cowen speaks of a “cryptocurrency-generating firm” that issues “blocks of cryptocurrency”. The idea here seems to be that the marginal costs of creating a crypto coin are close to zero (it’s just data after all), most costs being the fixed costs of setting up the cryptocurrency system.

But this has things the wrong way round. Creating a new cryptocurrency is as easy as forking the Bitcoin source code, hacking it, and throwing the fork up on a code repo. Fixed costs are practically zero. Marginal costs, however, equal the electricity costs (and amortised hardware costs) of solving a new block of transactions, as each new block contains a mining award for the peer whose hashing finds a solution to the system’s hash problem. This is how new coins are created.

Mining in equilibrium

To compensate a peer for the costs of doing this costly hashing work, he is allowed to pay himself a certain number of new coins in a special coinbase tx each time he solves the hash problem on a block. But the protocol ensures that the expected value of this mining award is offset by the cost of the requisite kilowatt hours needed to do the hashing. There are no issuers here “collecting rents”; it’s as if the seigniorage is sacrificed to the entropy gods.

Miners (the peers who choose to do the hashing) will work on new blocks only when the expected value of the mining award exceeds the cost of electricity required to run the hashing hardware. There are no restrictions on entry to mining, and the equilibrating mechanism is the protocol’s hashing difficulty. If the coin’s exchange value increases, making mining profitable at current difficulty, more miners will join the hashing effort, and after 2016 blocks the protocol will adjust the difficulty upward, making the expected value of mining equal to the cost of mining once again. The same process works in reverse when the exchange value decreases. In the creation of crypto coins, MC = MP.

(It should be noted that this is a stochastic rather than deterministic equilibrium, as the difficulty resets approximately every two weeks. Furthermore, the miner is paying for electricity today for an award he will get at some point in the future, so it’s really more a case of MC = E[MP]. But these details are not relevant to the conclusions we want to draw in this post, so I’ll continue to speak as if the marginal cost of making new coins equals the exchange value of the coin at any given point in time.)
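To make the equilibrating mechanism concrete, here is a minimal sketch of a Bitcoin-style difficulty retarget. The 2016-block window, the ten-minute block target and the factor-of-four clamp mirror Bitcoin’s actual rule; the function name and numbers in the example are my own.

```python
# Sketch of a Bitcoin-style difficulty retarget (illustrative, not consensus code).
TARGET_SECONDS_PER_BLOCK = 600                      # ten minutes per block
RETARGET_WINDOW = 2016                              # blocks between adjustments
EXPECTED_WINDOW_SECONDS = TARGET_SECONDS_PER_BLOCK * RETARGET_WINDOW  # ~two weeks

def retarget_difficulty(old_difficulty: float, actual_window_seconds: float) -> float:
    """Scale difficulty so the next 2016 blocks take ~two weeks at the current hash rate."""
    ratio = EXPECTED_WINDOW_SECONDS / actual_window_seconds
    ratio = max(0.25, min(4.0, ratio))  # Bitcoin clamps each adjustment to 4x either way
    return old_difficulty * ratio

# If new hash power found the last 2016 blocks in 10 days instead of 14, difficulty
# rises by 14/10 = 1.4x, pushing miners' expected revenue back towards their costs.
print(retarget_difficulty(1.0e9, 10 * 24 * 3600))   # -> 1.4e9
```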

Why isn’t it obvious that MC = MP?

There are two properties of hash-based proof-of-work that obscure these microeconomics. The first is the multi-factored economics of mining difficulty. Improvements in specialised hashing hardware increase mining difficulty but do not increase the cost of mining. (These improvements should eventually converge to a Moore’s Law function of time when the mining rig manufacturers exhaust all of the low-hanging fruit and run into the same photolithography constraints faced by Intel, etc.) The efficiencies merely result in a higher hashing difficulty, a sort of digital Red Queen Effect.

Similarly, increases (decreases) in the price of electricity will decrease (increase) the difficulty without changing the costs of mining. (It should also be noted that mining will gravitate towards regions like Iceland where it is cold and electricity is relatively cheap.) The only variable that does change the cost of mining is the exchange value of the currency itself.
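A toy free-entry calculation makes the same point. Miners enter or exit until expected revenue per second equals electricity cost per second, so the electricity cost of producing the marginal coin ends up pinned to the coin’s exchange value, whatever the electricity price or hardware efficiency happen to be. All numbers and names below are illustrative, and amortised hardware costs are ignored for simplicity.

```python
# Toy free-entry mining equilibrium (illustrative numbers; hardware costs ignored).
JOULES_PER_KWH = 3.6e6

def equilibrium_hashrate(coin_price, coins_per_block, seconds_per_block,
                         joules_per_hash, electricity_price_per_kwh):
    """Network hash rate at which expected revenue per second equals electricity cost."""
    revenue_per_second = coin_price * coins_per_block / seconds_per_block
    cost_per_hash = joules_per_hash / JOULES_PER_KWH * electricity_price_per_kwh
    return revenue_per_second / cost_per_hash

def cost_per_coin(coin_price, coins_per_block, seconds_per_block,
                  joules_per_hash, electricity_price_per_kwh):
    h = equilibrium_hashrate(coin_price, coins_per_block, seconds_per_block,
                             joules_per_hash, electricity_price_per_kwh)
    kwh_per_block = h * seconds_per_block * joules_per_hash / JOULES_PER_KWH
    return kwh_per_block * electricity_price_per_kwh / coins_per_block

# Doubling hardware efficiency (halving joules per hash) doubles the equilibrium
# hash rate but leaves the cost of producing a coin equal to its exchange value.
print(cost_per_coin(500, 25, 600, 1.0e-9, 0.10))    # -> 500.0
print(cost_per_coin(500, 25, 600, 0.5e-9, 0.10))    # -> 500.0
```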

And this is the other barrier to realising that MC = MP. In Bitcoin and most of the alt-coins, money supply is a logarithmic function of time. As money supply growth is deterministic, changes in money demand are reflected in the exchange value of the coin, raising or lowering the cost of producing the next coinbase as the protocol adjusts the difficulty up or down in response to the entry or exit of hashing power. So the exchange value of the mining award determines the marginal costs rather than the other way round. An economist might find that pretty weird, but that is how it works.

Network security and the crypto money demand function

It costs nothing to fork Bitcoin, hack the source, and create your very own alt-coin. But by itself, such a system is broken and has no bearing whatsoever on the economics of working cryptocurrencies. To make your alt-coin a working system, a sufficiently diverse group of miners must burn some costly kilowatt hours mining each block of transactions. Someone has gotta spend some capital to keep the lights on.

And the more kilowatt hours burned, the better, as the demand for a given cryptocurrency is a function of that system’s hashing costs (among other things, of course). The reason this is so has to do with the integrity of the most recent blocks on the distributed tx ledger, the blockchain. The amount of capital collectively burned hashing fixes the capital outlay required of an attacker to obtain enough hashing power to have a meaningful chance of orchestrating a successful double-spend attack on the system.

A double-spend is an event whereby the payee sees his payment N blocks deep and decides to deliver the goods or services to the payer, only to have this transaction subsequently not acknowledged by the network. Payments in cryptocurrency are irreversible, but double-spends are possible, and in economic terms they have the same effect that fraudulent chargebacks have in conventional payment systems like Visa or Paypal. The mitigation of this risk is valuable, and the more capital burned up hashing a crypto currency’s network, the lower the expected frequency of successful double-spend attacks.
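The quantitative link between burned capital and double-spend risk is the catch-up probability from the Bitcoin whitepaper: an attacker who controls a share q of the total hash rate overtakes a chain that is z confirmations ahead with a probability that decays roughly geometrically in z. A sketch of that calculation:

```python
import math

def double_spend_probability(q: float, z: int) -> float:
    """Probability that an attacker with share q of total hash power ever overtakes
    an honest chain that is z confirmations ahead (Nakamoto 2008, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * q / p  # expected attacker progress while the honest chain finds z blocks
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With 10% of the hash power, six confirmations already make a successful
# double-spend a well-under-0.1% event; acquiring a large hash share is
# exactly the capital outlay that the network's burned kilowatt hours price.
print(round(double_spend_probability(0.10, 6), 6))
print(round(double_spend_probability(0.30, 6), 6))
```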

Given that such events undermine confidence in the currency and drive its exchange value down (harming all holders, not just the victims of a double-spend), it should be axiomatic that a cryptocurrency’s hash rate is an argument of its money demand function.

This is also why it doesn’t make sense to speak of new cryptocurrencies expanding the aggregate crypto money supply without limit (or limited only by the fixed costs of creating one). What matters is how the aggregate hashing power, which is scarce, gets distributed over the set of extant cryptocurrencies. The above reasoning predicts that hashing power will not spread itself arbitrarily thinly, keeping MC well above zero. (The distribution currently looks more like a power law.)

Who pays to keep the lights on?

From the perspective of mitigating double-spend risk, the more capital that is burned hashing the better because the frequency of double-spend attacks is inversely related to the amount of capital burned. But the marginal benefits of hashing are at some point diminishing and the cost of hashing is linear, so for the end-user of a cryptocurrency, there is some level of hashing that is optimal.

In our argument above for why MC = MP, we made a simplification in saying that the mining award consisted entirely of coinbase. In fact, it consists of coinbase plus tx fees. In a protocol like Bitcoin’s where money growth is logarithmic, most of the early hashing costs are paid for out of new money supply, but as time goes on, tx fees become a greater and greater proportion of the mining award (currently, tx fees are about 1/3rd of Bitcoin’s mining award).

Now here we do see a genuine network externality. Imagine that all hashing costs are paid out of tx fees (as will eventually be the case with Bitcoin). There will be a natural tendency for demand for crypto MOE to gravitate towards the system with a higher tx volume, as it will have lower fees per transaction for a given level of hashing.
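The scale effect is simple arithmetic: for a given security budget (the value of hashing you want each block to buy), the fee each transaction has to carry falls in proportion to the number of transactions sharing the block. All numbers below are invented for illustration.

```python
# Back-of-the-envelope fee per transaction once hashing is paid for entirely
# out of fees (all numbers invented for illustration).
def required_fee_per_tx(hashing_cost_per_block: float, txs_per_block: int) -> float:
    return hashing_cost_per_block / txs_per_block

security_budget = 3000.0  # currency units of hashing we want each block to buy
for txs in (500, 5_000, 50_000):
    print(txs, required_fee_per_tx(security_budget, txs))
# 500 txs -> 6.0 per tx; 50,000 txs -> 0.06 per tx. The higher-volume chain
# buys the same level of hashing at a hundredth of the per-transaction fee.
```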

Now imagine that we have a range of cryptocurrencies along a spectrum. On one end of the spectrum is the logarithmic money supply protocol–we’ll call these “log coins”. On the other end of the spectrum is a protocol with perfectly elastic money supply–we’ll call these “growth coins”. Growth coins have a non-deterministic money growth rule, an algorithm that enlarges the coinbase just enough to offset any increase in money demand, so the exchange value is roughly stable as long as money demand is not in decline. (In a future post, we will outline a protocol that can actually implement something approximating this.)
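By way of illustration only (and not the protocol promised for a future post), here is one deliberately naive way a growth coin’s issuance could be parameterised: expand the supply at the measured growth rate of money demand, so that to a first approximation the exchange value neither appreciates nor collapses. The demand proxy and function names are hypothetical placeholders; making such a proxy manipulation-resistant is the hard, unsolved part.

```python
# A deliberately naive "growth coin" issuance rule (hypothetical sketch only).
# demand_growth stands in for whatever measured, manipulation-resistant proxy
# for money-demand growth a real protocol would have to define.
def coinbase_per_block(current_supply: float, demand_growth: float,
                       blocks_per_period: int) -> float:
    """Issue just enough new coin over the coming period to match demand growth,
    keeping the expected exchange value roughly flat."""
    new_supply_needed = current_supply * max(0.0, demand_growth)
    return new_supply_needed / blocks_per_period

# 5% demand growth over a 10,000-block period on an existing supply of 10m coins:
print(coinbase_per_block(10_000_000, 0.05, 10_000))  # -> 50 coins per block
```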

Where can we expect demand for MOE to gravitate along this spectrum of cryptocurrencies? This is where the logarithmic money growth rule hits the skids. At the margin, seigniorage for the log coins is eaten up by hashing costs, but as money demand outpaces the (rapidly declining) growth rate of money supply, the exchange value of the currency increases and existing coin holders are the recipients of the seigniorage on all of the existing, revalued coin.

Growth coins, by contrast, generate most of the seigniorage in the form of a larger coinbase rather than revalued coin, meaning that most of the seigniorage is spent on hashing. The result is lower tx fees for those who use growth coins as an MOE.
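A stylised split of one period’s seigniorage shows the difference in where the value goes. The figures are invented and the quantity-theory approximation is crude, but the allocation is the point:

```python
# Stylised split of one period's seigniorage for a log coin vs a growth coin.
# Figures invented; appreciation approximated as demand growth minus supply growth.
def seigniorage_split(demand_growth: float, supply_growth: float, market_cap: float):
    total = demand_growth * market_cap                            # value of new money demand
    to_miners = min(supply_growth, demand_growth) * market_cap    # paid out as new coinbase
    to_holders = total - to_miners                                # captured as appreciation
    return {"to_miners_for_hashing": to_miners, "to_existing_holders": to_holders}

market_cap = 1_000_000_000
print("log coin:   ", seigniorage_split(0.10, 0.01, market_cap))
print("growth coin:", seigniorage_split(0.10, 0.10, market_cap))
# Log coin: 10m of seigniorage funds hashing, 90m accrues to holders as appreciation.
# Growth coin: the full 100m is issued as coinbase and spent keeping the lights on.
```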

Given that tx fees will be shared between payer and payee, it’s hard to see how magic network economics will maintain the dominance of the log coins in the long run. Money demand coming from the transaction motive will gravitate towards the MOE with the lowest tx costs.

Free-riding not gold buying

The scenario gets worse when we relax the monetarist assumptions (latent in the above analysis) of stable money velocity and demand proportional to tx growth. You don’t have to be a Keynesian to see that a large share of Bitcoin balances is held for speculative reasons. The high level of coin dormancy in the Bitcoin blockchain is as conclusive empirical evidence of this as there can be.

Bitcoin, therefore, has a free-rider problem, whereby speculative coin balances, which benefit from the system’s costly hashing, are effectively subsidised by those who use bitcoins primarily as an MOE. These speculative balances repay the favour by adding a toxic amount of exchange rate volatility, providing yet another reason for the transaction motive to run away from log coin MOE. As time goes on and the coinbase declines, this inequitable arrangement only gets worse.

Optimal cryptocurrency

As long as the growth rate of a growth coin’s money demand is sufficient to generate enough seigniorage in coinbase to cover the hashing rate demanded of its MOE users, transactions in growth coin are basically free. Some negligible fee will likely be required to deter DoS attacks (which has the interesting consequence of putting the goals of Adam Back’s Hashcash back into the cryptomoney that appropriated its designs), and it’s hard to see how anyone who wishes to hold crypto coin balances for the purpose of actually making transactions would prefer a log coin over a growth coin.

So maybe here is a new theorem: the value of a cryptocurrency will converge to its optimal level of hashing costs?

Fiat money via hash-based proof-of-work breaks new ground and we need to give the concept the attention and analysis it deserves. After all, we can dispense with the barbarous relic of logarithmic money supply and keep the good bits.

Is It Nuts to Give to the Poor Without Strings Attached?

That’s the title of this New York Times Magazine article. As a long-time advocate of replacing Western welfare states with a negative income tax, I obviously don’t think it’s nuts at all, and it’s encouraging to see that this idea is getting some traction in international aid circles (if not in domestic policy making).

There’s evidence that this works.

After Mexico’s economic crisis in the mid-1990s, Santiago Levy, a
government economist, proposed getting rid of subsidies for milk, tortillas and other staples, and replacing them with a program that just gave money to the very poor, as long as they sent their children to school and took them for regular health checkups.

Cabinet ministers worried that parents might use the money to buy alcohol and cigarettes rather than milk and tortillas, and that sending cash might lead to a rise in domestic violence as families fought over what to do with the money. So Levy commissioned studies that compared spending habits between the towns that received money and similar villages that didn’t. The results were promising; researchers found that children in the cash program were more likely
to stay in school, families were less likely to get sick and people ate a more healthful diet. Recipients also didn’t tend to blow the money on booze or cigarettes, and many even invested a chunk of what they received. Today, more than six million Mexican families get cash transfers.

A new charity called GiveDirectly is pushing the idea further. They’re giving away money to villagers in Kenya with no conditions attached at all. The initial results are encouraging, and http://www.givewell.org, which ranks charities by their effectiveness, rates it #2, just under the Against Malaria Foundation.

But most aid is still of the traditional teach-a-man-to-fish variety, with bloated expense ratios to pay the salaries of all those upper middle class graduates too righteous to work in the private sector. After all, someone has gotta teach the wretched how to fish.

I don’t know why the paternalistic assumptions regarding the poor still dominate. It just seems natural to the non-poor that the poor are where they are because they were brought up with the wrong habits or beliefs or something, so helping them out requires elaborate schemes (e.g. food stamps, training programmes) to save these people from themselves. Perhaps paternalism regarding the poor comes from the fact that it flatters the rest of us. After all, the corollary of that view is that we’re well-off because we have the right habits and beliefs.

The Fed’s September Surprise

The Fed surprised the markets yesterday by keeping the rate of QE asset purchases on hold, contrary to the widely telegraphed intention to taper them this month. The Fed’s forward guidance says that the fed funds target will not move from where it is now until unemployment goes below 6.5%. Given that ending QE must precede any change to the fed funds target (technically, they could raise the IOR rate and keep QE going, but that would be pointless as the extra cash would just wind up in excess reserve balances), the current unemployment rate near 7% gives the Fed ample reason to at least slow down the pace of asset purchases. Fed speak over the summer signaled such an intention pretty clearly, hence all the taper talk and the focus on this month’s meeting.

So why didn’t the FOMC taper yesterday? One aspect of their reasoning surely has to do with the employment numbers themselves. U/E is a poor proxy for underlying growth when most of the recent reductions have been due to a decline in the labour market participation rate rather than the creation of new jobs. Recent downward revisions to the payroll numbers also point to a less robust labour market than previously assumed.

But there is a more interesting dimension to this decision. In the first paragraph of the statement the committee says (my emphasis):

Some indicators of labor market conditions have shown further
improvement in recent months, but the unemployment rate remains elevated. Household spending and business fixed investment advanced, and the housing sector has been strengthening, but mortgage rates have risen further and fiscal policy is restraining economic growth.

Fiscal policy is mentioned again in the third paragraph. My bet is that when the minutes to this week’s meeting are published at the end of October they will reveal an FOMC very concerned that recent political events have increased the odds of a drawn out budget showdown later this year.

There is a theory that the Fed alters policy to offset what Congress does to the budget, rendering the fiscal multiplier impotent (regardless of your neoclassical or Keynesian theoretical preconceptions). I think that this is true, at least for the last five years, and what this means is that Fed policy today is influenced by what happens on Capitol Hill far more than is commonly supposed.

If you adopt this perspective, yesterday’s decision makes a lot of sense. What has changed over the last month? A foreign policy circus by the Obama Administration, and a huge realignment of priorities with respect to how the administration intends to spend its dwindling political capital.

Larry Summers’ withdrawal from the Fed chair job was the first signal. Well, actually, the first signal was the leak on Friday, 13 September to an Asian paper that the Administration would announce Larry’s appointment the following week, a piece of information that should have actually lowered one’s probability of his appointment. Why would the White House feel the need to telegraph it if they weren’t concerned that Larry might not get through Congress? Days later, Larry withdraws. These are signs of a weakened Obama administration.

No, the shift in the Fed’s stance had nothing to do with anticipating a Yellen chairmanship instead of a Summers one. It had everything to do with the failure of POTUS to get his man through Congress and what that signals for the budget fight with House Republicans later this year (according to the CBO the debt ceiling will need to be raised by October or November of this year). That, I gather, is the reason why fiscal policy has such a prominent place in this month’s FOMC statement.

So there you have it folks: a gambit by Obama on Syria caused the Fed to taper the taper talk. This brings new meaning to the term “macro economics”, but I think that it is true, and interpreting Fed policy is going to have a lot more to do with the budget than with the economic stats for the next quarter or two at least.

Hospital death rates in the UK

The headline in the Guardian reads “Hospital death rates in England 45% higher than in US, report finds”, and the story reports on Channel 4 coverage on Wednesday of a new study by Brian Jarman, a professor of health statistics at Imperial College London.

Jarman devised an index called the Hospital Standardised Mortality Ratio (HSMR), which compares a hospital’s mortality rates to expected mortality (given diagnosis). According to a paper by Dr Foster (an independent group devoted to providing health care data to the public):

The HSMR is a method of comparing mortality levels in different years,
or for different sub-populations in the same year, while taking
account of differences in casemix. The ratio is of observed to
expected deaths (multiplied conventionally by 100). Thus if mortality
levels are higher in the population being studied than would be
expected, the HSMR will be greater than 100. For all of the 56
diagnosis groups, the observed deaths are the number that have
occurred following admission in each NHS Trust during the specified
time period. The expected number of deaths in each analysis is the sum
of the estimated risks of death for every patient.
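In code, the quoted definition is just a ratio of observed deaths to the sum of modelled per-patient risks, scaled by 100. A minimal sketch with made-up risk figures:

```python
# Minimal HSMR calculation: observed deaths over the sum of per-patient
# estimated risks of death, multiplied by 100 (risk figures are made up).
def hsmr(observed_deaths: int, estimated_risks: list) -> float:
    expected_deaths = sum(estimated_risks)
    return 100.0 * observed_deaths / expected_deaths

# 1,000 admissions, each carrying a casemix-adjusted 5% modelled risk of death,
# give 50 expected deaths; 60 observed deaths yield an HSMR of 120.
risks = [0.05] * 1000
print(hsmr(60, risks))  # -> 120.0
```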

The HSMR has become a controversial index. It was credited with bringing to light the Stafford Hospital scandal, which continues to grab the headlines of UK papers with grim stories of how patients were left in their own urine and forced to drink water from flower pots for lack of nursing care. It’s controversial because many people (and not just NHS staff) refuse to believe things can be so bad. Sophisticated apologists for the Trusts poke holes in the methodology of the HSMR index. For example, it’s obviously very sensitive to the way patients’ diagnoses etc are coded, e.g., someone with cancer may be coded as a death from pneumonia.

The latest controversy concerns a cross-sectional HSMR study of the UK and six other countries, including Canada, Holland, Japan and the US. The UK’s hospital mortality rates are 22% higher than the average of the seven countries and 45% higher than in the US. The comparison with the US is enough for many to dismiss the results right away, as America has a lower life expectancy and its healthcare system is widely distrusted by Brits.

No statistical model is without flaws and data must be interpreted. But what rankles me are those who criticise a quantitative metric that produces uncomfortable results without offering up an alternative. Hospitals must be held accountable to some objective, quantifiable proxy for “quality care” or else they are accountable to nothing at all. The coding-error thesis is particularly pathetic, as that is itself a hospital failure. Imagine a company defending its poor performance by saying that the financial statements are misleading because there were errors in the data provided to the auditors!

And “coding errors” might reveal a different aspect of the problem altogether. Maybe it’s not just fat fingers at the keyboard and other flaws in reporting procedures; maybe patients aren’t being diagnosed properly.

Goldman Sachs v Russian Programmer

If you haven’t already read it, you should read Michael Lewis’ story in Vanity Fair about Goldman’s prosecution of Sergey Aleynikov in 2009. I recall the case when it first hit the wires, thinking that Goldman’s complaint that the code Sergey took when he left the firm could destabilize the financial system was incredibly silly. But like most people, I assumed that Sergey must have nicked Goldman’s HFT strategies and was guilty of theft.

That’s clearly what we were supposed to think. But in fact, Sergey wasn’t involved in the trading side of things at all. He was the brilliant back-end programmer who was brought in to fix the uncompetitive latency of GS’s HFT systems.

There were many problems with Goldman’s system, in Serge’s view. It
wasn’t so much a system as an amalgamation. “The code-development practices in IDT were much more organized and up-to-date than at Goldman,” he says. Goldman had bought the core of its system nine years earlier in the acquisition of one of the early electronic-trading firms, called Hull Trading. The massive amounts of old software (Serge guessed that the entire platform had as many as 60 million lines of code in it) and nine years of fixes to it had created the computer equivalent of a giant rubber-band ball. When one of the rubber bands popped, Serge was expected to find it and fix it.

After studying the system, Sergey concluded it was such a mess that they should just scrap it and build something again from scratch, but his bosses vetoed that plan. So Sergey set out to fix the latency of GS’s systems by decentralising them.

But most of his time was spent simply patching the old code. To do this he and the other Goldman programmers resorted, every day, to open-source software, available free to anyone for any purpose. The tools and components they used were not specifically designed for financial markets, but they could be adapted to repair Goldman’s plumbing.

So lots of little patches were put into GS’s code base, most of them hacked from code under an open-source license, and GS’s managers would go in and strip the OS licenses off and replace them with Goldman legalese.

Anyway, Sergey’s name starts to circulate around the street as one of the best HFT programmers in NYC, and he gets poached by a new hedge fund startup to build an HFT system from scratch. Double the salary and the opportunity to do something cooler than constantly fixing Goldman’s dogfood system.

Before leaving, he uploads a bunch of code to a subversion repository. Not the strats–he didn’t work with those–just a collection of patches and fixes that he used, most of it OS code taken from the internet. It was like taking a notebook, a collection of reminders for solving IT plumbing problems. It was of no use to anybody else and certainly wasn’t Goldman’s secret sauce. Arguably, it wasn’t even Goldman’s property.

What transpires is a total miscarriage of justice. One can’t help but speculate that GS’s motives were to use Sergey’s prosecution to create the impression that its systems were better than they actually were. Who knows. Read the whole story.

Healthcare Costs and Technology

There is an excellent article in the MIT Technology Review this month (The Costly Paradox of Health-Care Technology) pointing out how healthcare is the only industry where technological progress appears to raise rather than lower costs.

The reasons are not a mystery:

Unlike many countries, the U.S. pays for nearly any technology (and
at nearly any price) without regard to economic value. This is why, since 1980, health-care spending as a percentage of gross domestic product has grown nearly three times as rapidly in the United States as it has in other developed countries, while the nation has lagged behind in life-expectancy gains.

Other researchers have found that just 0.5 percent of studies on new medical technologies evaluated those that work just as well as existing ones but cost less. The nearly complete isolation of both physicians and patients from the actual prices paid for treatments ensures a barren ground for these types of ideas. Why should a patient, fully covered by health insurance, worry about whether that expensive hip implant is really any better than the alternative
costing half as much? And for that matter, physicians rarely if ever know the cost of what they prescribe—and are often shocked when they do find out.

Yet the article concludes with some policy recommendations that range from vague (organisational change, innovations in health care delivery) to downright dumb (“drug container caps with motion detectors that let a nurse know when the patient hasn’t taken the daily dose.”).

The solution, as I see it, is straightforward: health insurance that pays out a lump sum of cash per diagnosis, to be spent however the patient sees fit (some sort of trust/trustee mechanism needs to exist for those too ill to make the decision themselves). The current framework, whereby insurance pays for whatever treatment the doctor thinks best, provides absolutely no incentive to make the inevitable tradeoffs between cost and expected benefit.

Perhaps when healthcare inflation eventually leads to rationing, patients in America will reconsider the wisdom of this paternalistic model and demand the right to make those decisions themselves.

Big Data and Price Discrimination

There is an article in Forbes on how big data is bringing about more first-degree price discrimination. It summarises a recent paper on the subject by Reed Shiller at Brandeis, who studied Netflix’s pricing.

Simulations show using demographics alone to tailor prices raises
profits by 0.14%. Including web browsing data increases profits by much more, 1.4%, increasing the appeal of tailored pricing, and resulting in some consumers paying twice as much as others do for the exact same product.

Even the price is tailor-made for you, Sir.

Crowdworking

I’m a big fan of this concept, so this bit of anecdotal evidence is discouraging. A BBC reporter spent a week trying to earn a living from crowdwork. In total: 37 hours worked, 19.16 GBP earned. I’m sure others can do better (and this was only an experiment), but still, does anyone know of some proper stats on the hourly pay distribution of crowdsourced work?

I can’t help but think that these skills would be better remunerated if the workers were hired as employees or contractors. If that’s correct, I have some guesses as to why:

  • Buyers of the skills get less value from the workers because of the quasi-anonymity and one-off nature of the relationship.
  • The middle-man doesn’t add much value.

In light of the news of Ronald Coase’s recent passing, we might want to consider whether this crowdworking phenomenon can be understood in terms of his transaction cost analysis of firms.

I’ve had some limited experience as a consumer of crowdwork. My impression so far is that you have to wade through a sea of rubbish before you find someone worth paying. And when you’re there, you’d really rather deal with the person one-on-one on an ongoing, no-commitments basis. What makes all of this work is search and reputation, and a fragmented hodgepodge of different crowdwork platforms doesn’t really perform either of those functions very well.

Indeed, the very term “crowdwork” suggests the wrong framework, in my opinion. Most work takes place over time and needs to be integrated by the entrepreneur or manager with other work to make the whole. Both of those factors require lots of tacit knowledge on the part of both worker and employer. Tacit knowledge is the sort of knowledge that only really exists in the minds of individuals or small groups. You can’t share it with a paid “crowd”.

To make this empowering vision work, I think we need a protocol rather than crowdworking platforms.

Oh, and the highest-paying gig the journo did was... getting paid to click “likes” on websites, something that requires no skill at all!

What’s News in The Classics

Ancient Egypt

The BBC has this story covering a Royal Society paper: New timeline for origin of ancient Egypt.

Radiocarbon dating suggests that Egyptian civilisation came into
being much later than previously thought.

Previous records suggested the pre-Dynastic period, a time when early
groups began to settle along the Nile and farm the land, began in
4000BC. But the new analysis revealed this process started later,
between 3700 and 3600BC.

The team found that just a few hundred years later, by about 3100BC,
society had transformed to one ruled by a king.

So the pre-Dynastic period, in which people began to settle along the Nile and take up farming, transformed into a society with a state and a single king in the short space of a few hundred years, much more rapidly than historians had thought. Very interesting stuff.

Caligula

Mary Beard reminds us that smear campaigns can last well into posterity. On Caligula:

But even the more extravagant later accounts – for example the
gossipy biography of Caligula by Suetonius, written about 80 years
after his death – are not quite as extravagant as they seem.

If you read them carefully, time and again, you discover that they
aren’t reporting what Caligula actually did, but what people said he
did, or said he planned to do.

It was only hearsay that the emperor’s granny had once found him in
bed with his favourite sister. And no Roman writer, so far as we
know, ever said that he made his horse a consul. All they said was
that people said that he planned to make his horse a consul.

The most likely explanation is that the whole horse/consul story goes
back to one of those bantering jokes. My own best guess would be that
the exasperated emperor one day taunted the aristocracy by saying
something along the lines of: “You guys are all so hopeless that I
might as well make my horse a consul!”

And from some such quip, that particular story of the emperor’s
madness was born.

It’s a short piece and worth reading (sorry, it’s a month old... but hey, this is ancient history).

Classics in East London

This week’s Economist has a piece “Latin, innit” (it’s behind a paywall) about BSIX, a sixth form college in Hackney, a poor part of East London. Unusually, they have a classics programme with 17 enthusiastic students.

Several students say they plan to apply to Oxford. And on August
23rd, the East End Classics Centre was given some money from London’s
Schools Excellence Fund to expand and link with other similar
projects. In time, it may seem odd that the sex and violence of the
ancient world were ever absent from the classrooms of London’s East
End.

I really hope there is more of this. It is a scandal that the subject is viewed as an irrelevant pastime of the upper classes, a stupid misconception popularised by Labour politicians that only holds back the very people they claim to represent.

Outside Perspectives

It’s always enlightening to get an outside perspective on one’s country and times.

A Pole wakes from a 19-year coma to find the Communists ousted from power and no more petrol queues:

http://news.bbc.co.uk/go/em/fr/-/1/hi/world/europe/6715313.stm

An Indian international student’s observations on America:

http://www.businessinsider.com/the-weirdest-things-about-america-2013-8

After reading that, you will enjoy watching this:

http://www.dailymotion.com/video/x8m5d0_everything-is-amazing-and-nobody-i_fun