Tag Archives: design

The dampened contagion: spreading memes in an economy of attention

I really enjoyed a recent paper by Nathan Hodas and Kristina Lerman called The Simple Rules of Social Contagion. It resonates strikingly with my own work. They start by asking why “social contagion” (the spreading of memes) does not behave like contagion proper as described by SIR models – in the sense that, for a given network of interactions, social contagion spreads more slowly and less far than actual epidemics. The way they answer this question is really nice, as are their results.

Their result is the following: social contagion effects can be broken down into two components. One is indeed a simple SIR-style epidemic model; the other is a dampening factor that takes into account the cognitive limits of highly connected individuals. The idea here is that catching the flu does not require any expenditure of energy, whereas resharing something on the web does: you had to devote some attention to it before you could decide it was worth resharing. The critical point is this: highly connected individuals (network hubs) are exposed to more information than less connected ones, because their richer web of relationships entails more exposure. Therefore, they end up with a higher attention threshold. So, in contagion proper, whenever the infection hits a network hub diffusion skyrockets: hubs unambiguously help the infection spread. In social contagion, on hitting a hub, diffusion can still skyrocket if the meme makes it past the hub’s attention threshold, but it can also stall if it does not. Hubs are both enhancers (via connectivity) and dampeners (via attention deficit) of contagion. This way of looking at things will resonate with economists: their models work well only where there is a scarce resource, and here the scarce resource is attention.
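
To make the hub-as-dampener intuition concrete, here is a toy simulation sketch. It is entirely my own illustration, not the paper’s model: each node reshares a meme only if the meme’s interestingness clears an attention threshold that grows with the node’s degree, and setting the threshold scale to zero recovers a plain SIR-style cascade. The graph generator, parameter names and functional forms are all assumptions.

```python
import random
import networkx as nx

def spread(G, seed, interestingness, threshold_scale=0.0, p_transmit=0.3):
    """Simulate one cascade. A node adopts the meme only if its interestingness
    clears the node's attention threshold, which grows with degree.
    threshold_scale = 0 reduces this to a plain SIR-style cascade."""
    infected = {seed}
    frontier = [seed]
    while frontier:
        new_frontier = []
        for node in frontier:
            for nb in G.neighbors(node):
                if nb in infected:
                    continue
                # Hubs see more, so they pay less attention to any one item.
                threshold = threshold_scale * G.degree(nb)
                if random.random() < p_transmit and interestingness > threshold:
                    infected.add(nb)
                    new_frontier.append(nb)
        frontier = new_frontier
    return len(infected)

G = nx.barabasi_albert_graph(1000, 3)   # scale-free toy network with hubs
print("SIR-like cascade:", spread(G, 0, interestingness=1.0, threshold_scale=0.0))
print("dampened cascade:", spread(G, 0, interestingness=1.0, threshold_scale=0.05))
```

With these made-up numbers, any node whose degree exceeds twenty blocks the meme in the dampened run, so the cascade reaches far fewer nodes than the SIR-like one.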

Their method is also sweet. They consider two social networks, Twitter and Digg. For each they build an exposure response function, which gives the probability that a user exposed to a certain URL retweets it (Twitter) or votes for it (Digg). This function is in turn broken into two components: the visibility of incoming messages (exposures) and a social enhancement factor – if you know that your friends are spreading a certain content, you might be more likely to spread it yourself. So, the paper tracks the visibility of each exposure through a time response function (the probability that a user retweets or votes for a URL as a function of the time elapsed since exposure and of their number of friends). At the highest level, this is modeled as a multiplication: the probability of becoming infected by the meme, for an individual with n_f friends after n_e exposures, is the product of the social enhancement factor times the probability of finding n of the n_e exposures occurring during the time interval considered.
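
Here is how I read that multiplication, as a minimal sketch with placeholder functional forms: the exponential decay, the parameter names and the collapse of the “n of the n_e exposures” term into “at least one exposure still visible” are my simplifications, not the paper’s fitted model.

```python
import math

def visibility(dt, n_friends, decay=0.05):
    """Illustrative time-response function: probability that an exposure received
    dt time units ago is still visible to a user with n_friends friends. The more
    friends, the faster any single message gets buried in the stream.
    (Placeholder functional form, not the one fitted in the paper.)"""
    return math.exp(-decay * n_friends * dt)

def exposure_response(exposure_ages, n_friends, social_enhancement=1.0):
    """Sketch of the multiplicative decomposition:
    P(adopt) = social enhancement factor x P(at least one exposure is visible)."""
    p_all_buried = 1.0
    for dt in exposure_ages:
        p_all_buried *= 1.0 - visibility(dt, n_friends)
    return social_enhancement * (1.0 - p_all_buried)

# A user with 50 friends who saw the same URL twice, 1 and 3 time units ago:
print(exposure_response([1.0, 3.0], n_friends=50, social_enhancement=1.2))
```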

At this point, the authors do something neat: they model the precise form of the user response function based on the specific characteristics of the user interfaces of, respectively, Twitter and Digg. For example, in Twitter, they reason, the user is going to scan the screen top to bottom. Her probability of becoming infected by one tweet can reasonably be assumed to be independent of her probability of becoming infected by any other tweet. Suppose the same URL appears twice in the user’s feed (which would mean two of the people she follows have retweeted the same URL): then, the overall probability that the user does not become infected is the probability of not becoming infected by the first tweet times that of not becoming infected by the second. For Digg, they model explicitly the social signal given by “a badge next to the URL that shows the number of friends who voted for the URL”. So, they are accounting for design choices in social software to model how information spreads across it – something I have myself been going on about for a few years now.
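
A sketch of the two interface-driven models as I understand them: the independence product for Twitter follows directly from the reasoning above, while the linear badge boost for Digg is just my placeholder for the paper’s fitted social enhancement curve.

```python
def p_adopt_twitter(per_exposure_probs):
    """Twitter-style model: the user scans the feed top to bottom and each tweet
    infects independently, so P(adopt) = 1 - prod_i (1 - p_i)."""
    p_miss_all = 1.0
    for p in per_exposure_probs:
        p_miss_all *= 1.0 - p
    return 1.0 - p_miss_all

def p_adopt_digg(base_prob, n_friend_votes, enhancement_per_vote=0.1):
    """Digg-style sketch: the badge showing how many friends voted acts as an
    explicit social signal. The linear boost is illustrative only."""
    return min(1.0, base_prob * (1.0 + enhancement_per_vote * n_friend_votes))

# Same URL retweeted by two followees, each tweet with a 5% chance of infecting:
print(p_adopt_twitter([0.05, 0.05]))          # 1 - 0.95 * 0.95 = 0.0975
# On Digg, a 5% baseline boosted by a badge showing 3 friends' votes:
print(p_adopt_digg(0.05, n_friend_votes=3))   # 0.065
```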

This kind of research can be elusive: for example, Twitter is at its core a set of APIs that can be queried in a zillion different ways. Accounting for the user interfaces of the different apps people use to look at the Twitter stream can be challenging: the paper itself at some point mentions that “the Twitter user interface offered no explicit social feedback”, and that is not quite the way I perceive it. But never mind that: the route is traced. If you can quantify the effects of user interfaces on the spreading of information in networks, you can also design for the desired effects in a rigorous way. The implications for those of us who care about collective intelligence are straightforward: important conversations should be moved online, where stewardship is easy(-ier) and cheap(-er).

Noted: there are some brackets missing from equation (2).

The apprentice crowdsorcerer: learning to hatch online communities

I am working on the construction of a new online community, to be called Edgeryders. This is still a relatively new kind of activity, one that deploys knowledge that has not been fully codified yet. There is no instruction manual that, when adhered to, guarantees good results: some things work, but not every time; others work more or less every time, but we don’t know why.

This is not the first time I have done this, and I am discovering that, even in such a wonderfully complex and unpredictable field, one can learn from experience. A lot. Some Edgeryders stuff we imported from the Kublai experience, like logo crowdsourcing and recruiting staff from the fledgling community. Other design decisions are inspired by projects of people I admire, projects like Evoke or CriticalCity Upload; and many are inspired by mistakes, both my own and other people’s.

It is a strange experience, both exalting and humiliating. You are the crowdsorcerer, the expert, the person who can evoke order and meaning from the Great Net’s social magma. You try: you say your incantations, wave your magic wand and… something happens. Or not. Sometimes everything works just fine, and it’s hard to resist the temptation of claiming credit for it; other times everything you do backfires or fizzles out, and you can’t figure out what you are doing wrong to save your life. Maybe there is no mistake – and no credit to claim when things go well. Social dynamics is not deterministic, and even our best efforts cannot guarantee good results in every case.

As far as I can see, the skill I am trying to develop – let’s call it crowdsorcery – requires:

  1. thinking in probabilities (with high variance) rather than deterministically. An effective action is not the one that is sure to recruit ten good contributors, but the one that reaches out to one thousand random strangers. Nine hundred will ignore you, ninety will contribute really lame stuff, nine will give you good contributions, and one will have a stroke of genius that turns the project on its head and influences the remaining ninety-nine (the nine hundred are probably a lost cause in every scenario). The trick is that no one, not even the genius in question, knows in advance who that random genius is: you just need to move in that general direction and hope he or she will find you.
  2. monitoring and reacting rather than planning and controlling (an adaptive stance). It is cheaper and more effective: if a community displays a natural tropism, it makes more sense to encourage it and try to figure out how to use it for your purposes than to fight it. In the online world, monitoring is practically free (even “deep monitoring” à la Dragon Trainer), so don’t be stingy with web analytics.
  3. building a redundant theoretical arsenal instead of going purely pragmatic (“I do this because it works”). Theory asks interesting questions, and I find that trying to read your own work in the light of theory helps crowdsorcerers and -sorceresses build themselves better tools and sharpens their awareness of what they do. I am thinking a lot along complexity science lines and using a little run-of-the-mill network math. For now.

These general principles translate into design choices. I have decided to devote a series of posts to the choices my team and I are making in the building of Edgeryders. You can find them here (for now, only the first one is online). If you find errors or have suggestions, we are listening.

Dragon Trainer begins

Some good news: a research project I helped write has been approved for funding under the European Commission’s Future and Emerging Technologies programme. The project is led by one of the scientists I admire most, David Lane, and sits squarely in the complexity science tradition associated with the Santa Fe Institute. We intend to attack a very big, very fundamental problem: innovation is out of control. Humanity invents to solve problems, but ends up creating new ones: the car improves mobility, but brings global warming and the isolation of suburban life; hi-tech agribusiness eases food scarcity, but spawns the obesity epidemic. One of our working documents puts it like this:

While newly invented artifacts are designed, innovation as a process is emergent. It happens in the context of ongoing interaction between agents that attribute new meanings to existing things and highlight new needs to be satisfied by new things. This process displays a positive feedback […] and is clearly not controlled by any one agent or restricted set of agents. As a consequence, the history of innovation is ripe with stories of completely unexpected turns. Some of these turns are toxic for humanity: phenomena like global warming or the obesity epidemics can be directly traced back to innovative activities. We try to address these phenomena by innovation, but we can’t control for more unintended consequences, perhaps even more lethal, stemming from this new innovation.

We want to (1) build a solid theory linking design and emergence in innovation, and (2) use it to build tools that civil society can use to prevent the negative consequences of technical progress. No big deal! And indeed the project’s evaluation is stellar: 4.5 out of 5 for scientific and technical excellence, and 5 out of 5 for social impact.

The project includes building Dragon Trainer, a piece of software meant to help online community managers “train” their communities the way one would train a very large, very strong animal (a dragon, in fact) that cannot be coerced by force, only influenced. I am the person responsible for creating Dragon Trainer, and that is quite a responsibility.

I am very happy, but also worried. This is public research money, which makes it all the more important to deliver the best result we can bring home. I will have to study like mad. I am seriously thinking of devoting myself to research full time for a couple of years starting in 2012. What do you say, is that the right call?