Category Archives: complexity economics

Leaving innocence behind: why open government is failing, and how it will still win in the end

Many people dislike and mistrust backroom deals and old boys’ networks in government. They prefer open and transparent governance. It makes for better institutions, and a better human condition. I am one of those people. Since you’re reading this, chances are you are, too.

Like many of those people, I watched the global Internet rise and saw an opportunity.  I put a lot of work into exploring how Internet-enabled communication could make democracy better and smarter. Most of that work was practical. It consisted of designing and delivering open government projects, first in Italy (Kublai, Visioni Urbane, both links in Italian) and later in Europe (Edgeryders). Since 2006, I have kept in touch with my peers who, all over the world, were working on these topics. Well, I have news: this debate is now shifting. We are no longer talking about the things we talked about in 2009. If you care about democracy, this is both good and bad news. Either way, it’s big and exciting.

Back in the late 2000s, we thought the Internet would improve the way democracy worked by lowering the costs of coordination across citizens. This worked across the board. It made everything easier. Transparency? Just put the information on a website and make it findable by search engines. Participation? Online surveys are dirt cheap. Citizen-government collaboration? Deploy fora and wikis; take advantage of the Net’s ubiquity to attract the people with the relevant expertise to them. We had the theory; we had (some) practice. We were surfing the wave of the Internet’s unstoppable diffusion. When Barack Obama became President of the United States in 2008, we also had the first global leader who stood by these principles, in word and deed. We were winning.

We expected to continue winning. We had a major advantage: open government did not need a cultural shift to get implemented. Adoption of the new practices was not a revolution: it was a retrofit. We would use words familiar to the old guard: transparency, accountability and participation. They were like talismans. Senior management would not always show enthusiasm, but they could hardly take a position against those values. Once our projects were under way, they caused cultural shifts. Public servants learned to work in an open, collaborative way. Later, they found it hard to go back to the old ways of information control and need-to-know. So, we concluded, this can only go one way: towards more Internet in government, more transparency, participation, collaboration. The debate reflected this, with works like Beth Noveck’s The Wiki Government (2009) and my own Wikicrazia (2010).

All that’s changed now.

What brought the change home was reading two recent books. One is Beth Noveck’s Smart Citizens, Smarter Governance. The other is Complexity and the Art of Public Policy, by David Colander and Roland Kupers. I consider these two books an advance on anything written before on the matter.

Beth is a beacon for us opengov types. She pioneered open government practices in a project called Peer-to-Patent. Because of that, President Obama recruited her first for his transition team, and later for the White House proper. She has a ton of experience at all levels, from theory to project delivery to national policy making. And she has a message for us: open government is failing. Here’s the money quote:

Despite all the enthusiasm for and widespread recognition of the potential benefits of more open governance, the open government movement has had remarkably little effect on how we make public decisions, solve problems, and allocate public goods.

Why is that? The most important proximate cause is that government practices are encoded in law. Changing them is difficult, and does need a cultural shift so that lawmakers can pass reforms. The ultimate cause is what she calls professionalized government. The reasoning goes like this:

  1. Aligning information with decision making requires curation of information, hence expertise.
  2. The professions have long served as a proxy for expertise. Professionalized government is new in historical terms, but it has now set in.
  3. So, “going open is a call to exercise civic muscles that have atrophied”.
  4. When professions set in, they move to exclude non-members from what they consider their turf. Everybody important in government is by now a professional, and mistrusts the potential of common citizens to contribute. And everybody reinforces everybody else’s convictions in this sense. So, you get a lot of  “meaningless lip service to the notion of engagement”, but little real sharing of power.

We now take professionalized government for granted, almost as if it were a law of nature. But it is not. Part of Beth’s book is a detailed account of how government became professionalized in the United States. At its onset, the US was governed by gentlemen farmers. Public service was guided by a corpus of practice-derived lore (called citizen’s literature) and learned on the job. But over time, more and more people were hired into the civil service. As this happened, a new class of government professionals grew in numbers and influence. It used part of that influence to secure its position, making bureaucracy more and more into a profession. Codes of conduct were drawn up. Universities spawned law and political science departments as the training and recruiting grounds of the new breed of bureaucrats. All this happened in sync with a society-wide movement towards measurement, standardization and administrative ordering.

Beth paints a rich, powerful picture of this movement in one of my favourite parts of the book. She then explains that new ways of channeling expertise to policy makers are illegal in the United States. Why? Because of a law drafted with a completely unrelated purpose, the Paperwork Reduction Act. And how did that come about? Lawmakers were trying to shield the bureaucracy from interference and pressure from the regulated. To do this, they relegated non-government professionals to the role of interest representation. In other words, citizens are important not because of what they know, but because of who they speak for. A self-enforcing architecture of professionalized government had emerged from the state’s activities, without an architect.

Wait. Architecture with no architect? That’s complexity. Beth’s intellectual journey has led her to complex systems dynamics. She does not actually say this, but it’s clear enough. Her story has positive feedback loops, lock-in effects, emergence. She has had to learn to think in complex systems terms to navigate real-world policy making. I resonate with this, because the same thing happened to me. I taught myself network math as my main tool into complexity thinking. And I needed complexity thinking because I was doing policy, and it just would not damn work in any other way.

David Colander and Roland Kupers start from complex systems science. Their question is this: what would policy look like if it were designed with a complex systems perspective from the ground up?

They come up with fascinating answers. The “free market vs. state intervention” polarization would disappear. So would the dominance of economics, as economic policy becomes a part of social policy. The state would try to underpin beneficial social norms, so that people would want to do things that are good for them and others instead of needing to be regulated into them. Policy making agencies would be interdisciplinary. Experiments and reversibility would be built into all policies.

As they wrote, Colander and Kupers were not aware of Beth’s work, and vice versa. Still, the two books converge on the same conclusion: modern policy making is a complex systems problem. Without complexity thinking, policy is bound to fail. I resonate with this conclusion, because I share it. I started to study complexity science in 2009. For four years now I have been on a deep dive into network science. I did this because I, too, was trying to do policy, and I was drawn to the explanatory power of the complexity paradigm. I take solace and pride in finding myself on the same path as smart people like Beth, Colander and Kupers.

But one thing is missing. Complexity thinking makes us better at understanding why policy fails. I am not yet convinced that it also makes us better at actually making policy. You see, complexity science has so far performed best in the natural sciences. Physics and biology aim to understand nature, not to change it. There is no policy there. Nature makes no mistakes.

So, understanding a social phenomenon in depth means, to some extent, respecting it. Try showing a complexity scientist a social problem, for example wealth inequality. She will show you the power-law behaviour of wealth distribution; explain it with success-breeds-success replicator dynamics; point out that this happens a lot in nature; and describe how difficult it is to steer a complex system away from its attractor. Complexity thinking is great at warning you against enacting ineffective, counterproductive policy. So far, it has not been as good at delivering stuff that you can actually do.
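To make the success-breeds-success mechanism concrete, here is a toy simulation (my own illustration, not taken from either book): each new unit of wealth goes to an agent with probability proportional to the wealth that agent already holds. Starting from perfect equality, a heavily skewed distribution emerges on its own.

```python
import random

def simulate_wealth(n_agents=1000, n_rounds=100_000, seed=42):
    """Urn-style rich-get-richer dynamics: every unit of wealth is a
    token, and each new unit goes to the owner of a uniformly drawn
    existing token -- i.e. with probability proportional to wealth."""
    rng = random.Random(seed)
    tokens = list(range(n_agents))  # every agent starts with one unit
    for _ in range(n_rounds):
        tokens.append(rng.choice(tokens))  # winner chosen in proportion to current wealth
    wealth = [0] * n_agents
    for owner in tokens:
        wealth[owner] += 1
    return sorted(wealth, reverse=True)

wealth = simulate_wealth()
share_top_10pct = sum(wealth[:100]) / sum(wealth)
print(f"top 10% of agents hold {share_top_10pct:.0%} of all wealth")
```

Everyone plays by identical rules from identical starting positions, yet the top decile typically ends up holding far more than a tenth of the total: the inequality is produced by the replicator dynamics alone, which is exactly the complexity scientist’s point.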

The authors of both books do come up with recommendations to policy makers. But they are not completely convincing.

Beth’s main solution is a sort of searchable database of experts. A policy maker in need of expertise could type “linked data” into a search box and connect with people who know a lot about linked data. This will work for well-defined problems, when the policy maker knows with certainty where to look for the solution. But most interesting policy problems are not well defined at all. Is air pollution in cities a technological problem? Then we should regulate the car industry to make cleaner cars. Is it an urban planning problem? Then we should change the zoning regulation to build workplaces near homes and reduce commuting. Is it a labour organization issue? Should we encourage employers to ditch offices and give workers groupware so they can work from home? Wait, maybe it’s a lifestyle problem: just make bicycles popular. No one knows. It’s probably all of these, and others, and any move you make will feed back onto the other dimensions of the problem.

It gets worse: the expertise categories themselves are socially determined and in flux. Can you imagine a policy maker in 1996 looking for an expert in open data? Of course not: the concept was not around. Beth’s database can, today, list experts in open data only because someone repurposed existing technologies, standards, licenses etc. to face some pressing problems. This worked so well that it earned a label, which you can now put on your resumé and which can be searched for in a database. Whatever the merits of Beth’s solution, I don’t see how you can use it to find expertise for these groundbreaking activities. But they are the ones that matter.

Colander and Kupers have their own set of solutions, as mentioned above. They are a clean break with the way government works today. It is unlikely they would just emerge. Anyone who tried to innovate government knows how damn hard it is to get any change through, however small. How is such a full redesign of the policy machinery supposed to happen? By fiat of some visionary leader? Possible, but remember: the current way of doing things did emerge. “Architecture with no architect”, remember? Both books offer sophisticated accounts of that emergence. For all my admiration for the work of these authors, I can’t help seeing an inconsistency here.

So, where is 21st-century policy making going? At the moment, I do not see any alternative to embracing complexity. It delivers killer analysis, and once you see it you can’t unsee it. It also delivers advice which is actionable locally. For example, sometimes you can persuade the state to do something courageous and imaginative in some kind of sandbox, and hope that what happens in the sandbox gets imitated. For now, this will have to be enough. But that’s OK. The age of innocence is over: we now know there is no easy-and-fast fix. Maybe one day we will have system-wide solutions that are not utopian; if we ever do, chances are Beth Noveck, David Colander and Roland Kupers will be among the first to find them.

Photo credit: Cathy Davey on

Masters of Networks 3: designing the future of online debate

Back in the day, the emergence of the global Internet was saluted with joy and hope by lovers of democracy. Many activists saw an opportunity for an electronic agora, endowed with always-on operation and total recall, that would finally deliver an Athenian-style participatory democracy at the planetary scale, and hand power to the collective intelligence of the people. It turned out things were not so simple. Online communities have been around for at least 30 years: some of them have led interesting, deep debates, and even built amazing things like Wikipedia or OpenStreetMap; others, not so much. A large-scale participatory democracy is very far from being realized.

Masters of Networks 3: communities is an event that tries to learn from the experience of 30 years of online debate. Why is debate fruitful and creative in some contexts, sterile and conflictual in others? Are there reliable tests for a debate’s good health? Can we predict how conversations will evolve? We will tackle these questions starting from a key idea: any conversation, both on- and offline, is a network of interactions across humans, i.e. a social network. In the course of the CATALYST project, Wikitalia and its partners have built Edgesense, a simple software for real-time, interactive network analysis of online communities (video demo, example).

Masters of Networks 3: communities is a two-day hackathon for network scientists, active members of online communities and people interested in participatory democracy to get together, discuss these themes and make sense of what we already know about them. We will visualize and analyze the networks of several online communities, using the deep knowledge of their active members and moderators as our guiding star; our goal is to figure out what a “healthy” conversation network looks like, and whether we can tell such networks apart from those of “sick” conversations (too conflictual, superficial, polarized etc.).

Masters of Networks 3: communities happens in Rome on 10-11 March 2015. Several scientists, developers and community managers from the CATALYST project will attend, but we have set aside about ten places to allow any interested person to participate. In particular, if you are running an online community and would like to visualize and analyze its interaction network, we can probably help – get in touch and we will see what we can do. Participation is free, but registration is necessary – go here to register. The working language will be English.

I will be there. I think this is a central issue; I tried to argue as much in the video below.

Learning from the Twitterstorm: an architecture for effortless collaboration

“We have no idea how a press conference on Twitter is going to pan out, of course. But it sounds like fun, so we’ll try it anyway.” In their typical just-trying-stuff-out style, about a month ago, a bunch of people over at Edgeryders invented, more or less accidentally, a format we now call the Twitterstorm (how-to). The idea is to coordinate loosely in pushing out some kind of content or call to action using Twitter. The first Twitterstorm was aimed at raising awareness of the unMonastery and its call for residencies; it worked so well that the community immediately scheduled another one, this time to promote the upcoming Living On The Edge conference, affectionately known as LOTE3.

LOTE3 is to take place in Matera, Italy: the same city is to host the unMonastery prototype. So, we thought we would try to get people in Matera involved in the Twitterstorm, as an excuse to build some common ground with the “neighbors”. This second Twitterstorm took place on October 14th at 11.00 CET: like the first, it was a success, involving 187 Twitter users and 800 tweets (in English, Italian, Portuguese, Russian, Swedish, French, German and Romanian) in the space of two hours. Apparently we reached 120,000 people worldwide, with almost 800,000 timeline deliveries (source). We hit number 1 trending topic in Italy (in the first Twitterstorm we hit number 1 in Italy and Belgium). Traffic to the conference website spiked. All of this was achieved by a truly global group of people: I have counted 23 nationalities. We promoted an event in Italy, but Italian accounts were less than 40% of those involved.


All this came at surprisingly little effort. People came out of the woodwork and participated, each with their own style, language and social media presence. Despite all the diversity, the T-storm seemed to have some sort of coherence that made it simple to understand: people would notice the hashtag popping up in their timelines and go “Whoa, something’s going on here”. How is it possible that people with minimal coordination over the Internet; with such diverse backgrounds and communication styles; who don’t speak the same language and don’t even know each other, can cohere into an instant smart swarm and deliver a result? And, just as important: did we build community?

As I am fond of saying when complex questions are asked about online social interaction: turns out I can measure that. Let’s start with the first question: how can such coherence arise from so little coordination? The picture above (hi-res image) visualizes the Twitterstorm as a network, where nodes are Twitter accounts and edges represent relationships between them. Relationships can be of three kinds, all represented by edges. An edge from Alice to Bob is added to the network if:

  • Alice follows Bob;
  • Alice retweets one of Bob’s tweets that includes the hashtag #LOTE3;
  • Alice mentions Bob in a tweet that includes the hashtag #LOTE3.

Tweets containing the hashtag that are neither replies nor retweets are represented in the network as loops (edges going from Alice back to Alice herself). Multiple relationships between the same pair of accounts map onto weighted edges, represented by thicker edge lines.
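As a sketch of the bookkeeping involved, here is how such a network could be assembled with networkx; the records are hypothetical toy data, not the actual #LOTE3 dataset, and the original analysis may well have used different tooling:

```python
import networkx as nx

# Hypothetical records: (source, target, kind). Not the real #LOTE3 data.
records = [
    ("alice", "bob",   "follows"),
    ("alice", "bob",   "retweet"),   # second relation on the same pair
    ("alice", "carol", "mention"),
    ("bob",   "carol", "follows"),
    ("carol", "carol", "tweet"),     # plain hashtag tweet -> self-loop
]

G = nx.DiGraph()
for src, dst, kind in records:
    if G.has_edge(src, dst):
        # Multiple relations between the same pair map onto edge weight.
        G[src][dst]["weight"] += 1
        G[src][dst]["kinds"].append(kind)
    else:
        G.add_edge(src, dst, weight=1, kinds=[kind])

print(G["alice"]["bob"]["weight"])  # 2: follow + retweet, drawn thicker
```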

In the visualization, the size of the fonts represents a node’s betweenness centrality; the size of the dot its eigenvector centrality; the color-coding represents subcommunities in the modularity-maximizing partition computed with the Louvain algorithm (this network is highly modular, with Q = ~ 0.3). The picture tells a simple, strong story: the Twitterstorm group consists of three subcommunities. The green people on the left are almost all Italians, living in or near Matera, or with a strong relationship to the city. The blue people on the right are mostly active members of Ouishare, a community based in Paris. The red people in the middle are the Edgeryders/unMonastery community (note: the algorithm is not deterministic. In some runs the red subcommunity breaks down into two, much like in the network of follow relationships described below). Coordination across different subcommunities is achieved by information relaying and relationship brokerage at two levels:

  1. at the individual level, some “bridging” people connect subcommunities to each other. For example, alberto_cottica, noemisalantiu, i_dauria and rita_orlando all play a role in connecting the Materans to the edgeryders crowd. On the other side, ladyniasan and elfpavlik are the main connectors between the latter and the Ouishare group.
  2. at the subcommunity level, the ER-uM subcommunity is clearly intermediating between the Materans and Ouisharers.
  3. Finally, each subcommunity is held together by some locally central, active individuals. You can see them clearly: piersoft, matera2019 and ida_leone for the Materans; edgeryders and unmonastery for the ER-uM crowd (these have many edges connecting them to the Materans); ouishare and antoleonard for the Ouishare group.
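The metrics driving the visualization – betweenness centrality for font size, eigenvector centrality for dot size, a Louvain partition for the colors – can all be reproduced with networkx (version 2.8 or later, which ships a Louvain implementation). The six-node graph below is only a stand-in for the Twitterstorm data: two tight clusters joined by a single bridge.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# Toy graph: two triangles bridged by one edge (not the real data).
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),  # cluster 1
    ("x", "y"), ("y", "z"), ("x", "z"),  # cluster 2
    ("c", "x"),                          # the bridge
])

betweenness = nx.betweenness_centrality(G)  # font size in the picture
eigenvector = nx.eigenvector_centrality(G)  # dot size
communities = louvain_communities(G, seed=1)
Q = modularity(G, communities)

# The bridge endpoints c and x carry all the cross-cluster shortest
# paths, so they get the highest betweenness -- they are the "brokers".
assert betweenness["c"] == max(betweenness.values())
print(f"{len(communities)} communities, Q = {Q:.2f}")
```

On this toy graph Louvain recovers the two triangles as communities, with a comfortably positive modularity, mirroring the subcommunity structure described above.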

So, this is why doing the Twitterstorm seemed so effortless: this architecture allows each participant to focus on her immediate circle of friends, with no need to keep track of what the whole group is doing. Bridging and brokering structures ensure group-level coherence.

To answer the second question, “did we build community?”, I need to look at the data with some more granularity. We can distinguish the edges that convey static, long-term social relationships from those that represent active relationships. Following someone on Twitter is a static relationship: Alice follows Bob if she thinks Bob is an interesting person who shares good content. Typically, she will follow him over a long time. Mentioning or retweeting someone, on the other hand, is an action that happens at a precise point in time. Based on this reasoning, I can resolve the overall Twitterstorm network into a “static” network of follower relationships – representing, more or less, endorsement and trust – and an “instant” network of mentions and retweets – representing, more or less, active collaboration in the Twitterstorm. The first of the two can be assumed to represent the pattern of trust that was built: it would be nice to compare it with the same network as it was before the Twitterstorm, but unfortunately our data do not allow us to do that. We can think of the second network as the act in which community was (or was not) built.
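Splitting the overall network into the two is mechanical once every edge carries a type label; a minimal sketch, again with made-up records rather than the real data:

```python
import networkx as nx

# Hypothetical labelled edge list: (source, target, kind).
edges = [
    ("alice", "bob",   "follows"),
    ("alice", "bob",   "retweet"),
    ("bob",   "carol", "follows"),
    ("dave",  "alice", "mention"),
]

# "Static" network: long-lived follow relationships (endorsement, trust).
static = nx.DiGraph((s, t) for s, t, k in edges if k == "follows")
# "Instant" network: mentions and retweets during the storm (collaboration).
instant = nx.DiGraph((s, t) for s, t, k in edges if k in ("mention", "retweet"))

print(sorted(static.edges()))   # [('alice', 'bob'), ('bob', 'carol')]
print(sorted(instant.edges()))  # [('alice', 'bob'), ('dave', 'alice')]
```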

The network of trust is not so different from the overall one, but now there are four subcommunities instead of three (Q = ~ 0.33):

Twitterstorm – Follow Relationships

As before, the Matera group is clearly visible and depicted in green; the Ouishare group is also recognizable in blue. The red subcommunity now consists almost exclusively of Italians – most of them not in Matera – with strong international ties. The ER-uM group is now depicted in purple. In terms of the static network, then, the coordination between Materans on one end and Ouisharers on the other was intermediated twice: first by a (red) group of internationally connected Italians, then by a (purple) pan-European community gathered around Edgeryders. This is another legitimate interpretation of the overall T-storm network.

When we consider the “active” network of mentions and retweets that developed on Monday 14th, we find a rather different situation. Again, there are four subcommunities: but this time, the two central ones seem more a mathematical effect of the high level of activity of the most active users, chiefly edgeryders, unmonastery and alberto_cottica, than clearly delimited subcommunities. Most of the modularity (which is even higher than in the two previous networks, Q = ~ 0.35) stems from the very clearly marked subcommunities to the left (Materans) and right (Ouisharers). No surprises here.

Twitterstorm – Mentions

The picture below visualizes a higher weighted outdegree in redder colors on a blue-red spectrum. Redder nodes have been more active in mentioning and engaging the nodes they are connected to. The redder areas in the network are not within the subcommunities, but across them: most of the orange and red edges connect the Matera subcommunity (i_dauria, matera2019, rita_orlando, piersoft) with the ER-uM one (edgeryders, alberto_cottica). On the right, elfpavlik is busy building bridges between Ouisharers and ER-uM. So yes, we did build community. You are looking at community building in action!

Twitterstorm mentions outdegree heat map
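The coloring in the heat map reduces to weighted outdegree, normalized onto a 0-to-1 (blue-to-red) scale; here is a sketch with account names taken from the post but invented weights:

```python
import networkx as nx

# Toy weighted mention network; the weights are invented for illustration.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("i_dauria",   "edgeryders",  5),
    ("i_dauria",   "unmonastery", 3),
    ("edgeryders", "i_dauria",    2),
    ("elfpavlik",  "ouishare",    4),
])

# Weighted outdegree: total mentioning/retweeting activity of each node.
outdeg = dict(G.out_degree(weight="weight"))
max_w = max(outdeg.values())
heat = {node: w / max_w for node, w in outdeg.items()}  # 0 = blue, 1 = red

print(outdeg["i_dauria"], heat["elfpavlik"])  # 8 0.5
```

Nodes that only receive mentions (like ouishare here) end up fully blue; the most active mentioners end up fully red, which is what the heat map above highlights across, rather than within, subcommunities.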

Provisionally, as we wait for better data, we conclude that the Twitterstorm was not only dirt cheap, fun and good publicity: it also left behind semi-permanent social effects, pulling the three communities involved closer together. Doesn’t get much better than that!