The Economics of On-line Learning

Virtual Growth Projects has an interesting essay on the economics of on-line learning. The essence of the idea is that the teacher/student relationship in the on-line world has a lot in common with the economics of file sharing:

The seed/leech terminology is borrowed from file-sharing where a
‘seeder’ is someone who possesses 100% of a file and is in a position
to share it with others. A ‘leech’ is someone who does not yet have
the file, and hence cannot share it.

  • a knowledge-leech has not yet achieved mastery of the subject
  • a knowledge-seeder has achieved mastery (and hence is in a position to share it with others)
  • knowledge transfer is converting leeches to seeders

The knowledge-transfer process creates ‘seeders’ who are capable of
productively contributing to it.

Given that the best way to demonstrate understanding of a subject is the ability to teach it to someone else, it seems plausible that on-line education will gravitate towards a business model in which students evolve into roles involving teaching, essay grading, exam-question writing, etc., as they gain mastery of the subject they are learning. In this model, success in your studies would act as a sort of currency, allowing you to recoup some or all of the up-front costs of taking a course.

It might also take us away from the traditional production model for pedagogical content, in which an expert writes a textbook and course materials, towards edited but crowd-sourced content, much like Wikipedia.

Noteworthy Links

Bradley Manning and “hacker madness” scare tactic

Prosecutors in the Manning case portrayed the use of the Unix utility wget as if it were a dark art of criminal hackers.

The relevance of the latest Snowden files

Once again, the issue concerns what sort of oversight we can possibly have when the legality of surveillance rests on blanket search warrants rather than warrants targeting specific individuals and groups. The statutory authority for these warrants clearly rests on interpretations that go against the spirit of the law. In the UK the relevant statutes are

  • Intelligence Services Act 1994
  • Regulation of Investigatory Powers Act 2000 (RIPA).

It seems that the warrant authority relies on Section 8(4) of RIPA. Read it and decide for yourself whether you think it gives a legislative mandate to GCHQ’s programmes.

The statutory analog in the US is of course the Patriot Act. Section 215 of the Act is what is being used by the administration to justify its surveillance programmes, an interpretation of the law that one of the Act’s authors, Congressman Jim Sensenbrenner, claims was not Congress’s intent.

Back to the UK, the other aspect of its “oversight” is the Intelligence and Security Committee (ISC), made up of nine MPs. Unlike the Senate and House intelligence committees in the US, which at least in theory represent a check on executive authority, the nine members of the ISC are appointed by the PM.

It’s worth remembering that the British government’s blanket search warrants were one of the grievances of the American colonists and inspired the Fourth Amendment of the US Constitution.

Woman in Chelsea strip-searched by police

Charts on decline in registered gold inventories

Noteworthy Links

U.S. issues global travel alert, cites al Qaeda threat

I guess it’s just a coincidence that this popped up at a time of growing hostility towards America’s security apparatus.

Court Rulings Blur the Line Between a Spy and a Leaker

“A majority of the Supreme Court not only left open the possibility of prior restraints in other cases but of criminal sanctions being imposed on the press following publication of the Pentagon Papers themselves,” Floyd Abrams, who also represented The Times in the case, wrote in a new book, “Friend of the Court.”

“To the extent that you have aided and abetted Snowden, even in his current movements, why shouldn’t you, Mr. Greenwald, be charged with a crime?” Mr. Gregory asked.

Mr. Greenwald responded, “If you want to embrace that theory, it means that every investigative journalist in the United States who works with their sources, who receives classified information, is a criminal.”

Latvia resists US call to extradite ‘virus maker’

Bradley Manning protests – in pictures

Microsoft Research looking to hire expert in “Algorithmic Economics”

It is interesting to see the sort of boffin that tech giants want these days. MSR is looking to hire a postdoc in “Algorithmic Economics”.

Here is the text of the ad:

Market design, the engineering arm of economics, benefits from an
understanding of computation: complexity, algorithms, engineering
practice, and data. Conversely, computer science in a networked world
benefits from a solid foundation in economics: incentives and game

Increasingly, online service design teams require dual expertise in
social science and computer science, adding competence in economics,
sociology, and psychology to more traditionally recognized
requirements like algorithms, interfaces, systems, machine learning,
and optimization. Our researchers combine expertise in computer
science and economics to bridge the gap between modeling human
behavior and engineering web-scale systems.

Scientists with hybrid expertise are crucial as social systems of all
types move to electronic platforms, as people increasingly rely on
programmatic trading aids, as market designers rely more on
equilibrium simulations, and as optimization and machine learning
algorithms become part of the inner loop of social and economic

Application areas include auctions, crowdsourcing, gaming, information
aggregation, machine learning in markets, market interfaces, market
makers, monetization, online advertising, optimization, polling,
prediction engines, preference elicitation, scoring rules, and social

Measuring Digital Influence

Michael Wu on TechCrunch touches upon what I have long thought is the fundamental problem with attempts to measure someone’s influence in social networks.

One of the reasons that brands don’t understand digital influence is
because they don’t seem to realize that no one actually has any
measured “data” on influence (i.e. explicit data that says precisely
who actually influenced who, when, where, how, etc.). All influence
scores are computed from users’ social activity data based on some
models and algorithms of how influence works. However, anyone can
create these models and algorithms. So who is right? How can we be
sure your influence score is correct?

Influence is not measured directly; it is measured through other variables (likes, retweets, etc.) that stand proxy for influence. So instead of influence \sim f(\dots) you have g(\dots) \sim f(\dots), which is not by itself a problem, provided that the variables in your influence model f(\dots) are different from the variables in the proxy model g(\dots).
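At a minimum, a modeler could check that the two variable sets are disjoint. A trivial sketch; all of these feature names are hypothetical, not drawn from any real scoring model:

```python
# Hypothetical feature sets: f(...) is the influence model, g(...) the proxy.
influence_model_vars = {"posts_per_day", "followers", "account_age"}
proxy_model_vars = {"retweets_received", "mentions_by_others"}

def shared_variables(f_vars, g_vars):
    """Variables appearing in both the influence model and its proxy.

    Any overlap means the model is partly 'predicting' influence from
    the very signals used to define it, inflating apparent accuracy.
    """
    return f_vars & g_vars

# No overlap here, so validating f(...) against g(...) is at least not circular.
assert shared_variables(influence_model_vars, proxy_model_vars) == set()
```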

But this is not easy to do when the thing that you are trying to measure and predict is something that people care about and can influence by altering their behavior.

As we learn from the behavior economics of humans, when we put a score
on something, we create an incentive for some people to get a better
score. This is human nature. Because people care about themselves,
they care about any comparisons that concern them, whether it is their
websites, cars, homes, their work, or just themselves. Some would go
so far as to cheat the algorithm just to get a better score. In fact,
Google’s PageRank algorithm has created an entire industry (i.e. SEO)
around gaming their score.

It’s not so hard to “cheat the algorithm”. Influence scores that include variables that you control directly (how many posts you make, for example) can be gamed by simply changing your own behavior. But even influence models that carefully avoid such variables by only including variables that represent what other people think of your behavior can be cheated.

Reciprocity is one way that this is done. Bob tweeted something you couldn’t care less about, but by retweeting Bob you’re helping to nudge up his influence score (on Klout, say), and you do that because Bob is the kind of guy who rewards retweets with retweets (or follows with follows, likes with likes, etc).

Suppose that an influence model f(...) has as one of its variables how many tweets and retweets you make. The model is validated against g(...), the proxy for influence, which has as one of its variables the number of times your tweets are retweeted by someone else. Now, a set of people who reciprocate retweets among themselves will not only see their f(...) scores go up; those scores will also correlate with g(...), leaving the model’s designers/promoters with the impression that they are predicting influence. But all you have here is a bunch of people promoting one another’s influence scores.

One way that an influence modeler could hedge this reciprocity attack on his model would be to measure, for every retweet, the length of the shortest retweet cycle. If you retweet Bob and he retweets you, the cycle is length 1. If you retweet Bob, Bob retweets Alice, and Alice retweets you, the cycle is 2. The idea is that the longer a cycle is the more likely it is evidence of genuine influence; very short cycles smell like reciprocity.
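This cycle check can be sketched as a breadth-first search over the existing retweet graph, using the post’s counting convention (one hop back from the person you retweeted counts as a cycle of length 1); the graph representation and names are my own:

```python
from collections import deque

def shortest_retweet_cycle(edges, retweeter, retweetee):
    """Length of the shortest retweet cycle created when `retweeter`
    retweets `retweetee`, counted as the number of retweet hops leading
    from `retweetee` back to `retweeter`. Returns None if no cycle exists.

    `edges` maps each user to the set of users they have retweeted.
    """
    queue = deque([(retweetee, 0)])
    seen = {retweetee}
    while queue:
        node, dist = queue.popleft()
        if node == retweeter:
            return dist
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# The two examples from the text:
assert shortest_retweet_cycle({"bob": {"you"}}, "you", "bob") == 1
assert shortest_retweet_cycle({"bob": {"alice"}, "alice": {"you"}}, "you", "bob") == 2
```

Cycles of length 1 would then be flagged as likely reciprocity, longer ones treated as weak evidence of genuine influence.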

The problem with this approach is that it makes the concept of influence look like an asymmetric relationship: if you influence Bob, Bob cannot influence you; otherwise it’s classified as reciprocity. But a key stylized fact of social networks is that they are mostly made up of groups of people who identify with one another, and sometimes the group is self-contained: there is no path to a node outside the group. Nobody would be happy with construing the concept of influence in such a way that it can’t apply locally, with people in the group influencing each other.

Did you retweet/like/comment on Bob because he influenced you, or did you do it in anticipation of his reciprocity? The intention seems inscrutable from the social network graph alone.

Measuring influence is hard, and it is especially hard when people have an incentive to manipulate the measurement of it. Personally, I think this is just an instance of a larger problem all over the internet. It’s just so damn hard to find signal in data that is costless to generate. There is no cap on the number of tweets you can make, the number of likes you click, the number of posts you make, and so on. If there were some external friction placed on these activities so that people had to ration them, the tide of noise would recede and bring more signal into view.

So here’s an idea: make every tweet cost three minutes of CPU time, during which your machine runs some computations for a socially useful distributed computing project (climate prediction, genome sequencing, alien searching… you choose). The friction caused by this would not only cause people to ration their social networking, greatly improving the signal-to-noise ratio, but the friction itself would benefit society. Who could complain (except the spoofers)?

The velocity and dormancy of bitcoin

Dorit Ron and Adi Shamir (R&S) of the Weizmann Institute of Science wrote a paper, Quantitative Analysis of the Full Bitcoin Transaction Graph, that has received a lot of attention in the Bitcoin community and some press coverage. One of the paper’s main claims is that the vast majority of bitcoins are not “in circulation”.

Here is our first surprising discovery, which is related to the
question of whether most bitcoins are stored or spent. The total
number of BTC’s in the system is linear in the number of blocks. Each
block is associated with the generation of 50 new BTC’s and thus there
are 9,000,050 BTC’s in our address graph (generated from the 180,001
blocks between block number zero and block number 180,000). If we sum
up the amounts accumulated at the 609,270 addresses which only receive
and never send any BTC’s, we see that they contain 7,019,100 BTC’s,
which are almost 78% of all existing BTC’s.

By itself, this is uninteresting. It is part of the Bitcoin protocol that 100% of the input to a tx must be assigned to its outputs, so when the input exceeds the amount being paid, the spender generates a new address to which the remainder is paid (i.e., he pays himself the “change”). Also, it is recommended practice that you generate a new address for every tx in which you are the payee. For both of these reasons, at any given point in time most bitcoin will be in an address that has never spent anything. In fact, if the recommended practice were universally followed, 100% of coin would be in such an address.
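The change mechanism can be sketched in a few lines (a sketch only: it ignores transaction fees, and the function and address names are illustrative):

```python
def build_outputs(input_amount, payment, payee_addr, new_change_addr):
    """Sketch of Bitcoin's all-or-nothing input rule: the whole input
    must be assigned to outputs, so any remainder is paid back to the
    spender at a freshly generated change address (fees ignored)."""
    if payment > input_amount:
        raise ValueError("insufficient input")
    outputs = [(payee_addr, payment)]
    change = input_amount - payment
    if change > 0:
        # The spender pays himself the "change" at a brand-new address.
        outputs.append((new_change_addr, change))
    return outputs

# Spending a 10 BTC input to pay 3.5 BTC: 6.5 BTC lands at a fresh
# address that has received but never spent -- exactly the kind of
# address counted in the 78% figure.
assert build_outputs(10.0, 3.5, "payee", "change1") == [("payee", 3.5), ("change1", 6.5)]
```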

However, 76.5% of these 78% (i.e., 59.7% of all the coins in the
system) are “old coins”, defined as bitcoins received at some address
more than three months before the cut off date (May 13th 2012), which
were not followed by any outgoing transactions from that address after
they were received… This is strong evidence that the majority of
bitcoins are not circulating in the system… Note that the total number
of bitcoins participating in all the transactions since the
establishment of the system (except for the actual minting operations)
is 423,287,950 BTC’s, and thus each coin which is in circulation had
to be moved a large number of times to account for this total flow.

Now this is more interesting. That about 60% of bitcoins have been dormant for at least the three months prior to the study’s cut-off date is consistent with the thesis that the majority of coin is not held for the purpose of conducting transactions but rather as a store of value. But let’s put aside the theoretical preconceptions for the moment. In this post I want to help tighten up some concepts so that we can actually start testing some monetary theories on Bitcoin.

What is bitcoin velocity?

The velocity of a currency is basically the number of times a currency unit changes hands over a given interval of time. Conventionally, this interval is taken to be a calendar quarter, because economists estimate money velocity as quarterly GDP divided by the average money supply over the quarter. Their calculation is indirect, because there is no centralized record of all fiat transactions in a given currency, but that transaction history is implied in the GDP stats. Bitcoin has the opposite problem: there is no GDP calculation for Bitcoin (yet!), but we do have a complete transaction log in the form of the blockchain. So calculating Bitcoin velocity should be straightforward.

We can make a back-of-envelope calculation right now. We’ll estimate the average (quarterly) Bitcoin velocity over the same time window studied in the paper, Jan 2009 to mid-May 2012 (13.5 quarters).

According to R&S the sum of all transactions (excluding minted coins) for the period is 423,287,950 BTC. As money growth over this period is linear, starting at 50 with the first block and ending at about 9 million, the average money supply is 4,500,000. Divide the former by the latter, multiply by \frac{1}{13.5}, and you get a quarterly money velocity for bitcoin of just under 7.
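The back-of-envelope calculation, using only figures quoted above:

```python
# Quarterly bitcoin velocity, Jan 2009 to mid-May 2012.
total_flow = 423_287_950      # BTC moved in all transactions (R&S, excluding minting)
avg_supply = 9_000_000 / 2    # supply grows linearly from ~0 to 9M, so average ~4.5M
quarters = 13.5               # length of the study window in calendar quarters

velocity = total_flow / avg_supply / quarters
print(round(velocity, 2))     # → 6.97, i.e. just under 7
```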

Is that high or low? As a benchmark, look at US M1 money velocity, which we can get from the St. Louis Fed. The average quarterly US money velocity over the same period was about 8 (it’s currently about 6.9), and this has been on a downward trend since 2008.

We should really work these numbers into a time series, but the average is at least in line with USD velocity numbers, which in itself should cast some doubt on the level of economic activity that gets done in bitcoins. We should also note that the numerator in our rough calculation includes change, which ought to be subtracted out; paying yourself doesn’t exactly count as coin “changing hands”. Devising an estimator for this is a task for a rainy day, but suffice it to say that our velocity estimate of \approx 7 is biased upwards.

The “Shadow” Bitcoin system

There is, however, an offsetting factor that may even bias velocity estimates downward: the shadow Bitcoin system. Exchanges like MtGox, on-line wallets, and some other bitcoin services allow transfers of coin between their users. The service holds coin in many addresses that are in effect “client” accounts, and transfers between such accounts are recorded only on the third party’s servers, not in the bitcoin blockchain.

Even though these transfers take place within a trusted third party rather than on the blockchain, presumably they should still be included in the velocity figure. But we have no way of knowing directly what these volumes are, so I am going to set this question aside in this post.

How should we measure “dormant” coins?

Velocity is a basic concept in monetary economics and is easy to calculate. But the key statistic in the R&S paper is the percentage of “old coins”. This is a related but different concept.

If every address spends its entire balance 7 times over the quarter, velocity is 7. But if two addresses ping 1 BTC between each other 63 million times over the quarter whilst the remaining 8,999,999 coins aren’t spent at all, velocity is still 7. In the first case there are no “old coins”; in the second, all but one coin is an “old coin”. Let’s call this concept dormancy.
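A quick check that both scenarios yield the same velocity, using 63 million round trips so the arithmetic comes out exactly (63M / 9M = 7):

```python
SUPPLY = 9_000_000  # total BTC in the system

# Scenario A: every coin changes hands 7 times over the quarter.
flow_a = 7 * SUPPLY
# Scenario B: one coin is pinged back and forth 63 million times,
# while the other 8,999,999 coins never move at all.
flow_b = 63_000_000 * 1

velocity_a = flow_a / SUPPLY
velocity_b = flow_b / SUPPLY
assert velocity_a == velocity_b == 7.0  # same velocity, wildly different dormancy
```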

Dormancy is related to velocity. If bitcoin money velocity is 7, then on average a coin sits at an address for about 13 days before it is spent. If dormancy is not commensurate with velocity, then the distribution of dormancy across the money supply must be very wide. For example, an “old coin” is defined by R&S as one that hasn’t been spent in more than 90 days. So if about 60% of bitcoins are old coins, then the remaining 40% of coins must have a velocity of at least 17.5, meaning each of those coins is on average dormant for no more than about 5 days.
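The arithmetic behind these figures, assuming a 91-day quarter:

```python
QUARTER_DAYS = 91   # approximate length of a calendar quarter
velocity = 7

# Average residence time of a coin at an address.
avg_days = QUARTER_DAYS / velocity            # 91 / 7 = 13 days

# If 60% of coins are "old" (velocity ~0), the active 40% must carry
# the entire flow on their own.
active_velocity = velocity / 0.4              # 7 / 0.4 = 17.5
active_days = QUARTER_DAYS / active_velocity  # 91 / 17.5 = 5.2 days

assert avg_days == 13.0
assert active_velocity == 17.5
assert abs(active_days - 5.2) < 0.01
```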

One of the problems with defining “dormant coin” as coin at an address that has not spent or received any coin in the last three months is that a single tx at an address, no matter how small, will put the entire balance of the address outside the set of dormant coins. This identification rule looks like a lower-bound estimate of dormant coin rather than a definition of it.

And anyway, “dormant coin” is a binary attribute resting on an arbitrary duration cut-off, when what we are really interested in is the duration itself. So instead of measuring the percentage of dormant coins, we should measure a coin’s dormancy: the time passed since the coin was last spent.

How do we measure the dormancy of a coin? Strictly speaking, this is nonsense, as the coin input to a transaction is fungible. So dormancy is really a property of a bitcoin address rather than of a bitcoin (or some fraction thereof). We can define it as the weighted average of the time since coin was paid into the address.

For example, if a new address A is created and 10 BTC is paid into it at noon on Monday, the dormancy of A is 0. At noon on Tuesday the dormancy of A is 1 (taking a day as the unit of time), and by Wednesday it is 2. But suppose another 20 BTC is paid into A on Wednesday. Dormancy goes down to 2/3: the coins with dormancy 2 are now only 1/3 of the address balance, and the other 2/3 of the coins have zero dormancy. By noon on Thursday, dormancy is 1 2/3.

In other words, an address’s dormancy increases by 1 every 24 hours. Whenever R coins are paid into an address, dormancy is multiplied by the factor 1 - \frac{R}{B}, where B is the address balance after the coins are paid in. Whenever coins are spent by an address, its dormancy is unchanged (dormancy is a property of the remaining coins). But spends reduce the address balance, so subsequent coins received will reduce dormancy all the more.
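These update rules can be sketched as a small bookkeeping class (a sketch only, with time measured in days as in the example above):

```python
class AddressDormancy:
    """Tracks the dormancy of a single address, per the rules above:
    time ages the coins, receives dilute dormancy, spends leave it alone."""

    def __init__(self):
        self.balance = 0.0
        self.dormancy = 0.0

    def advance(self, days):
        if self.balance > 0:
            self.dormancy += days

    def receive(self, amount):
        new_balance = self.balance + amount
        # New coins arrive with dormancy 0, so the weighted average is
        # multiplied by the factor (1 - amount / new_balance).
        self.dormancy *= (1 - amount / new_balance)
        self.balance = new_balance

    def spend(self, amount):
        # Dormancy describes the coins that remain, so it is unchanged;
        # but the smaller balance means future receives dilute it more.
        self.balance -= amount

# The worked example from the text: 10 BTC on Monday, 20 more on Wednesday.
a = AddressDormancy()
a.receive(10)   # Monday noon: dormancy 0
a.advance(2)    # Wednesday noon: dormancy 2
a.receive(20)   # drops to 2 * (1 - 20/30) = 2/3
assert abs(a.dormancy - 2/3) < 1e-9
a.advance(1)    # Thursday noon: 1 2/3
assert abs(a.dormancy - 5/3) < 1e-9
```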

So what this definition gives us is a distribution of dormancies over every address in the blockchain at a given point in time. The dormancy of the Bitcoin network at a given point in time is simply the weighted average of the address dormancies, where the weight for an address is its balance.
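The network-level statistic is then a one-liner over (balance, dormancy) pairs; the function name and example figures are my own:

```python
def network_dormancy(addresses):
    """Balance-weighted average of per-address dormancies.

    `addresses` is an iterable of (balance, dormancy) pairs, e.g. as
    computed by walking the blockchain up to some point in time.
    """
    addresses = list(addresses)
    total = sum(balance for balance, _ in addresses)
    if total == 0:
        return 0.0
    return sum(balance * dormancy for balance, dormancy in addresses) / total

# Two addresses: 30 BTC dormant for 90 days, 10 BTC dormant for 2 days.
assert network_dormancy([(30, 90), (10, 2)]) == 68.0
```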