Tag Archives: complexity economics

Leaving innocence behind: why open government is failing, and how it will still win in the end

Many people dislike and mistrust backroom deals and old boys’ networks in government. They prefer open and transparent governance. It makes for better institutions, and a better human condition. I am one of those people. Since you’re reading this, chances are you are, too.

Like many of those people, I watched the global Internet rise and saw an opportunity.  I put a lot of work into exploring how Internet-enabled communication could make democracy better and smarter. Most of that work was practical. It consisted of designing and delivering open government projects, first in Italy (Kublai, Visioni Urbane, both links in Italian) and later in Europe (Edgeryders). Since 2006, I have kept in touch with my peers who, all over the world, were working on these topics. Well, I have news: this debate is now shifting. We are no longer talking about the things we talked about in 2009. If you care about democracy, this is both good and bad news. Either way, it’s big and exciting.

Back in the late 2000s, we thought the Internet would improve the way democracy worked by lowering the costs of coordination across citizens. This worked across the board. It made everything easier. Transparency? Just put the information on a website and make it findable by search engines. Participation? Online surveys are dirt cheap. Citizen-government collaboration? Deploy fora and wikis; take advantage of the Net’s ubiquity to attract the people with the relevant expertise to them. We had the theory; we had (some) practice. We were surfing the wave of the Internet’s unstoppable diffusion. When Barack Obama was elected President of the United States in 2008, we also had the first global leader who stood by these principles, in word and deed. We were winning.

We expected to continue winning. We had a major advantage: open government did not need a cultural shift to get implemented. Adoption of the new practices was not a revolution: it was a retrofit. We would use words familiar to the old guard: transparency, accountability and participation. They were like talismans. Senior management would not always show enthusiasm, but they could hardly take a position against those values. Once our projects were under way, they caused cultural shifts. Public servants learned to work in an open, collaborative way. Later, they found it hard to go back to the old ways of information control and need-to-know. So, we concluded, this can only go one way: towards more Internet in government, more transparency, participation, collaboration. The debate reflected this, with works like Beth Noveck’s Wiki Government (2009) and my own Wikicrazia (2010).

All that’s changed now.

What brought the change home was reading two recent books. One is Beth Noveck’s Smart Citizens, Smarter Governance. The other is Complexity and the Art of Public Policy, by David Colander and Roland Kupers. I consider these two books an advance on anything written before on the matter.

Beth is a beacon for us opengov types. She pioneered open government practices with a project called Peer-to-Patent. Because of it, President Obama recruited her first onto his transition team and later into the White House proper. She has a ton of experience at all levels, from theory to project delivery to national policy making. And she has a message for us: Open Government is failing. Here’s the money quote:

Despite all the enthusiasm for and widespread recognition of the potential benefits of more open governance, the open government movement has had remarkably little effect on how we make public decisions, solve problems, and allocate public goods.

Why is that? The most important proximate cause is that government practices are encoded in law. Changing them is difficult, and does need a cultural shift so that lawmakers can pass reforms. The ultimate cause is what she calls professionalized government. The reasoning goes like this:

  1. Aligning information with decision making requires curation of information, hence expertise.
  2. The professions have long served as a proxy for expertise. Professionalized government is new in historical terms, but it has now set in.
  3. So, “going open is a call to exercise civic muscles that have atrophied”.
  4. When professions set in, they move to exclude non-members from what they consider their turf. Everybody important in government is by now a professional, and mistrusts the potential of common citizens to contribute. And everybody reinforces everybody else’s convictions in this sense. So, you get a lot of  “meaningless lip service to the notion of engagement”, but little real sharing of power.

We now take professionalized government for granted, almost as if it were a law of nature. But it is not. Part of Beth’s book is a detailed account of how government became professionalized in the United States. At its onset, the US was governed by gentleman farmers. Public service was guided by a corpus of practice-derived lore (called citizen’s literature) and learned on the job. But over time, more and more people were hired into the civil service. As this happened, a new class of government professionals grew in numbers and influence. It used part of that influence to secure its position, making bureaucracy more and more into a profession. Codes of conduct were drawn up. Universities spawned law and political science departments as the training and recruiting grounds of the new breed of bureaucrats. All this happened in sync with a society-wide movement towards measurement, standardization and administrative ordering.

Beth paints a rich, powerful picture of this movement in one of my favourite parts of the book. She then explains that new ways of channeling expertise to policy makers are illegal in the United States. Why? Because of a law drafted for a completely unrelated purpose, the Paperwork Reduction Act. And how did that come about? Lawmakers were trying to shield the bureaucracy from interference and pressure from the regulated. To do this, they relegated non-government professionals to the role of interest representation. In other words, citizens are important not because of what they know, but because of whom they speak for. A self-enforcing architecture of professionalized government had emerged from the state’s activities, without an architect.

Wait. Architecture with no architect? That’s complexity. Beth’s intellectual journey has led her to complex systems dynamics. She does not actually say this, but it’s clear enough. Her story has positive feedback loops, lock-in effects, emergence. She has had to learn to think in complex systems terms to navigate real-world policy making. I resonate with this, because the same thing happened to me. I taught myself network math as my main tool into complexity thinking. And I needed complexity thinking because I was doing policy, and it just would not damn work in any other way.

David Colander and Roland Kupers start from complex systems science. Their question is this: what would policy look like if it were designed with a complex systems perspective from the ground up?

They come up with fascinating answers. The “free market vs. state intervention” polarization would disappear. So would the dominance of economics, as economic policy becomes a part of social policy. The state would try to underpin beneficial social norms, so that people would want to do things that are good for them and others instead of needing to be regulated into them. Policy making agencies would be interdisciplinary. Experiments and reversibility would be built into all policies.

As they wrote, Colander and Kupers were not aware of Beth’s work, and vice versa. Still, the two books converge on the same conclusion: modern policy making is a complex systems problem. Without complexity thinking, policy is bound to fail. I resonate with this conclusion, because I share it. I started to study complexity science in 2009. For four years now I have been deep-diving into network science. I did this because I, too, was trying to do policy, and I was drawn to the explanatory power of the complexity paradigm. I take solace and pride in finding myself on the same path as smart people like Beth, Colander and Kupers.

But one thing is missing. Complexity thinking makes us better at understanding why policy fails. I am not yet convinced that it also makes us better at actually making policy. You see, complexity science has so far performed best in the natural sciences. Physics and biology aim to understand nature, not to change it. There is no policy there. Nature makes no mistakes.

So, understanding a social phenomenon in depth means, to some extent, respecting it. Try showing a complexity scientist a social problem, for example wealth inequality. She will show you the power-law behaviour of wealth distribution; explain it with success-breeds-success replicator dynamics; point out that this happens a lot in nature; and describe how difficult it is to steer a complex system away from its attractor. Complexity thinking is great at warning you against enacting ineffective, counterproductive policy. So far, it has not been as good at delivering stuff that you can actually do.
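To make the success-breeds-success point concrete, here is a minimal simulation sketch (the function and parameter names are mine, not from either book): each round, one unit of wealth goes to an agent chosen with probability proportional to current wealth. Even among identical agents, pure luck compounds into strong concentration; adding population growth to the same mechanism (the Yule process) is what produces the full power-law tail.

```python
import random

def simulate(n_agents=1000, n_rounds=200_000, seed=42):
    """Success-breeds-success toy model (a Polya urn): each round one
    unit of wealth is handed out, landing on agent i with probability
    proportional to i's current wealth."""
    rng = random.Random(seed)
    # One "ticket" per unit of wealth: a uniform draw from this list
    # is a draw proportional to wealth.
    tickets = list(range(n_agents))          # everyone starts with 1 unit
    for _ in range(n_rounds):
        tickets.append(rng.choice(tickets))  # the winner's odds rise next round
    wealth = [0] * n_agents
    for t in tickets:
        wealth[t] += 1
    return sorted(wealth, reverse=True)

wealth = simulate()
top_decile = sum(wealth[:len(wealth) // 10]) / sum(wealth)
print(f"top 10% of agents hold {top_decile:.0%} of total wealth")
```

Running this, the top decile ends up holding far more than the 10% it would hold under equality, despite every agent starting from the same position: replicator dynamics at work.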

The authors of both books do come up with recommendations to policy makers. But they are not completely convincing.

Beth’s main solution is a sort of searchable database of experts. A policy maker in need of expertise could type “linked data” into a search box and connect with people who know a lot about linked data. This will work for well-defined problems, when the policy maker knows with certainty where to look for the solution. But most interesting policy problems are not well defined at all. Is air pollution in cities a technological problem? Then we should regulate the car industry to make cleaner cars. Is it an urban planning problem? Then we should change the zoning regulation to build workplaces near homes and reduce commuting. Is it a labour organization issue? Then we should encourage employers to ditch offices and give workers groupware so they can work from home. Wait, maybe it’s a lifestyle problem: just make bicycles popular. No one knows. It’s probably all of these, and more, and any move you make will feed back onto the other dimensions of the problem.

It gets worse: the expertise categories themselves are socially determined and in flux. Can you imagine a policy maker in 1996 looking for an expert in open data? Of course not; the concept was not around. Beth’s database can, today, list experts in open data only because someone repurposed existing technologies, standards, licenses and so on to face some pressing problems. This worked so well that it earned a label, which you can now put on your résumé and which can be searched for in a database. Whatever the merits of Beth’s solution, I don’t see how you can use it to find expertise for these groundbreaking activities. But they are the ones that matter.

Colander and Kupers have their own set of solutions, as mentioned above. They are a clean break with the way government works today, and it is unlikely they would just emerge. Anyone who has tried to innovate government knows how damn hard it is to get any change through, however small. How is such a full redesign of the policy machinery supposed to happen? By fiat of some visionary leader? Possible, but remember: the current way of doing things did emerge. “Architecture with no architect”, remember? Both books offer sophisticated accounts of that emergence. For all my admiration for these authors’ work, I can’t help seeing an inconsistency here.

So, where is 21st-century policy making going? At the moment, I do not see any alternative to embracing complexity. It delivers killer analysis, and once you see it you can’t unsee it. It also delivers advice that is actionable locally. For example, sometimes you can persuade the state to do something courageous and imaginative in some kind of sandbox, and hope that what happens in the sandbox gets imitated. For now, this will have to be enough. But that’s OK. The age of innocence is over: we now know there is no easy, fast fix. Maybe one day we will have system-wide solutions that are not utopian; if we ever do, chances are Beth Noveck, David Colander and Roland Kupers will be among the first to find them.

Photo credit: Cathy Davey on flickr.com

What we mean by “smart” in “smart cities”

There’s lots of talk about smart cities. There are two reasons for such attention.

The first one is structural: cities are our future as a species. Already, for the first time in history, over half of the world population lives in cities. Every week, 1.3 million people relocate from rural areas to the cities of planet Earth. It’s plain common sense to apply our best smarts to our dominant habitat. The second one is contingent: there’s money up for grabs if you hack smart cities. In Italy, the government is throwing over 600 million euro at research-and-deploy projects to “solve problems at the urban and metropolitan scale” in spaces like safety, aging, technologies for welfare, domotics (home automation), smart grids and so on.

Interference between the two causes the expression “smart cities” to be interpreted in different ways. Simplifying a bit, there are two main interpretations. The dominant one (also the first to be proposed) is associated with some large tech corporates: IBM and Cisco were the prime movers, but Google is in there too with projects like Latitude. The idea is to use networked sensors to increase the density of the flow of information that cities generate, and then to use this information to adapt our behavior and redesign the places we live in. “Redesign”, in this case, is an ambitious project: it aims to deploy new infrastructure (example: curbside recharging stations for electric cars), in turn connected to more sensors. The most important sensors would live on our smartphones, which feed a non-stop stream of information about our surroundings into large datasets. Technology and interdependence are the lynchpin of this vision. Its symbol is MIT’s Copenhagen Wheel.

The second interpretation is associated with hacker culture and the social innovation world. The idea here is to redesign cities to make them more comfortable, simple and sustainable – financially sustainable too. Sometimes this will mean introducing advanced technology (examples: microsolar and LED street lighting); at other times it will favour low-tech solutions (examples: bicycles and urban farming). Social relationships, community building and awareness of the natural environment’s fragility are the lynchpin of this vision. Its symbol is the hackerspace. I will call cities evolving according to these two interpretations “Type 1” and “Type 2” smart cities respectively.

Type 1 smart cities have advanced technologies, cool design, researchers of proven excellence. Each component, taken individually, is definitely smart. And then a funny thing happens: once you piece them together you get a whole that does not look smart to me. Not at all. Take, for example, electric cars. They are silent, and don’t spew out greenhouse gases. But:

  • the electricity that powers them has to be produced somehow. In a world in which hydro is at capacity, nuclear is politically dead and solar not developed (yet?), installing additional capacity means burning fossil fuels. Cars’ emissions, then, are not eliminated, just moved where you can’t see them. A shift to electric cars would increase or decrease total emissions depending on the existing power stations and the grid: fossil fuel power stations typically convert only about 50% of the energy harvested from combustion into electricity (the rest becomes heat), and roughly another 5% of the original energy is lost in transmission. So, of 100 kWh embedded in fuel, only about 45 are actually available to recharge that new, shiny electric car.
  • they require a costly infrastructure of recharging stations
  • electric cars are still cars. They embody the idea of associating to each human being a tin box of four meters by one and a half by one, that gets driven on average one hour a day and spends the remaining twenty-three squatting precious urban space. As such, they don’t solve mobility problems. They might even make them worse, since they are allowed into restricted entry areas.
  • they are a nonpermissive technology. You are not allowed to hack them, you are not allowed to charge them any way other than connecting them to the power grid. You are allowed to choose what color you want them, and how to pay for them. They relegate us to a passive role – the same we have with respect to internal combustion cars.
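The well-to-wheel arithmetic in the first bullet can be spelled out in a few lines (a sketch using the text’s own rough figures; the function name, and reading the 5% grid loss as a share of the original fuel energy, are my assumptions):

```python
def energy_at_the_plug(fuel_kwh, plant_efficiency=0.50, grid_loss=0.05):
    """Rough well-to-plug arithmetic: a fossil plant converts ~50% of the
    fuel's energy into electricity, and ~5% of the original energy is
    then lost in transmission over the grid."""
    generated = fuel_kwh * plant_efficiency       # 100 kWh -> 50 kWh
    delivered = generated - fuel_kwh * grid_loss  # 50 - 5  -> 45 kWh
    return delivered

print(energy_at_the_plug(100))  # -> 45.0 kWh available to charge the car
```

Under these assumptions, less than half of the fuel’s energy ever reaches the car’s battery charger.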


Now let’s look at another approach to mobility, not as innovative at first sight: congestion charges – schemes whereby drivers are charged some money to enter a city center. I had the privilege of witnessing the launch of a congestion charge scheme in Milan, Area C. Its results are impressive: a 34% reduction in vehicles accessing the area (49% for high-emission vehicles); a 5% increase in the commercial speed of public transport; a 23% reduction in driving accidents (24% for injuries); 15 to 23% reductions across the spectrum of the main pollutants (source – Italian).

But the real advantage of Area C is that it creates space rather than occupying it. Looking ahead, it makes the central streets of Milan available as a platform for social interaction, play, trade, food consumption, lifestyle innovation. Since fast, heavy (hence dangerous) vehicles no longer claim most of their surface, people can attempt new, interesting things with city streets. They can and do explore other ways to move about – bicycles, rollerskates, running. Talented hobbyists and crafty mechanics can create new ecosystems of urban light mobility: in countries that have already made this transition you can see it in the sheer variety of bicycles – bicycles with trailers, bicycles with loading platforms for small freight. You can see children walking to school in safety, a big taboo in Italy (roads are perceived to be so dangerous that only nonconformist parents let their children walk to school; many schools go as far as to forbid it).

So, what do we mean by “smart” in “smart city”? The two approaches I tried to cover here are not clearly outlined in the current debate. And yet, it seems to me they are not only different, but mutually incompatible. Type 1 smart cities are centralized: all smarts are concentrated in the technologists in corporate and university labs, and the role of citizens is to consume their various gadgets. Type 2 ones are full of networks for purchasing locally produced food, urban farming, sewing cafés, hackerspaces, fablabs. Type 1s invest huge amounts of money in ultrafast mobile networks. Type 2s conjure, as if from thin air, citywide wifi networks that ride on the back of routers already installed in cafés, public libraries and our own homes (this happened in a matter of hours during the earthquake in spring 2012). Students in Type 1 smart cities go to school with tablets. Those in Type 2s use Creative Commons syllabi – and can probably mix and match the lecturing of their local teachers with that of the Khan Academy or similar experiences. Type 1 smart cities concentrate production (agriculture, manufacturing, finance) in large companies, organized to take advantage of increasing returns to scale. Type 2s distribute it, at least in part, across many small entities: permaculturists, makers, community lending agencies.

By now you will have figured out that I find decentralization much smarter and more modern. But there is a problem: almost anything that is smarter in that sense reduces GDP. If public transport works better, more people use it: traffic decreases, but so does the consumption of fuel and vehicles. If people engage more in sports and outdoor activities, GDP goes down via the reduction in health care costs (health care is a gigantic business). Area C in Milan, by reducing driving accidents, is a scourge on GDP (fewer medical treatments, less rehab, fewer car repairs). Type 1 smart cities have no such problem: the Copenhagen Wheel costs 600 dollars and needs an onboard iPhone to work. In fact, the Guardian ended up wondering how smart it is to put over a thousand euro worth of sophisticated circuitry on a bicycle – an eminently stealable contraption.

Corporates love centralization. And so they should, because it gives them a pivotal role and lots of headroom to monetize what they do (when everything is centralized, people on the periphery have to buy everything from the center). I have no doubt that they will be the protagonists of the government’s smart cities call for projects. And still, I have a hunch that in the last few months the voices of the supporters of decentralized solutions have started to be heard. Such voices come, as usual, from that most decentralized of places: the Internet.

What fascinates me in the discussion on smart cities is that it twists our arm into asking really relevant questions. What does GDP really measure? What is this thing called growth that we are trying to drive? How do we want to live with each other in our cities? Whatever the outcome (or the lack of one), I do hope we will take the time and effort to go deep into the debate. It’s not every day that we get to make collective decisions of such broad scope, forcing us to ask ourselves what we really want and how we really expect to live together. To fully rise to this challenge, I hope that the prime sensors of the new smart cities are deployed to listen to citizens (and by that I mean individuals, not just stakeholders); and that their prime enabling technologies are safe, detoxified, rational-argument-oriented environments – located both online and offline – in which we can talk things through and make the relevant decisions together. Even those of us who like centralized systems will surely agree that making collective decisions on our common future should stay decentralized. You see, we even have a name for decentralized public decision making: we call it democracy.

Debugging democracy with network analysis

A few weeks ago I presented at TEDx Bologna. I used the opportunity to try and stitch together the pieces of my intellectual journey of the past five or six years, and see whether they form a coherent pattern.

The result was the video above. In a nutshell: collective intelligence is the most promising weapon we have against the many dire problems threatening our species – problems that transcend the individual scale. Climate change, feral finance, mounting inequalities: we can’t touch these (and others), because they don’t exist at the same scale as us – they emerge from the interaction of billions of us. To address them, it seems intuitive that we should deploy an equally emergent, same-level collective intelligence. Unfortunately, representative democracy is constructed in such a way that it does not allow solutions to emerge within democratic institutions (in the sense that Wikipedia is an emergent solution to the problem of writing an encyclopedia). Participatory democracy could, in theory, lead to such a result, but it does not scale. To make it scale we can use the Internet. First, we design online interaction environments from which we think collective solutions might emerge; then we measure the social dynamics that these environments host and foster, as if they were coral reefs colonized by many species. I wrote “measure” because, thanks to network analysis, social interaction dynamics have become measurable, even for large communities.
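As a toy illustration of what “measuring social dynamics” can mean, here is a stdlib-only sketch: given a (wholly invented) log of who replied to whom in an online community, it computes two elementary network measures – the most replied-to member, and the density of the interaction graph. Real analyses would run much richer metrics on far larger graphs, typically with a library like networkx.

```python
from collections import defaultdict

# Hypothetical interaction log of an online community: (author, replied_to).
# On a real platform you would extract these edges from the comment threads.
replies = [
    ("alice", "bob"), ("carol", "bob"), ("dave", "alice"),
    ("bob", "alice"), ("erin", "bob"), ("alice", "carol"),
    ("frank", "erin"), ("bob", "carol"),
]

def interaction_stats(edges):
    """Basic social-network measurements: who receives attention
    (in-degree) and how densely the community talks to itself."""
    in_degree = defaultdict(int)
    members = set()
    for author, target in edges:
        in_degree[target] += 1
        members.update((author, target))
    n = len(members)
    density = len(edges) / (n * (n - 1))  # share of possible directed ties
    hub = max(in_degree, key=in_degree.get)
    return hub, in_degree[hub], round(density, 3)

hub, d, density = interaction_stats(replies)
print(f"most replied-to member: {hub} ({d} replies); network density {density}")
```

Even this crude measurement already distinguishes a community clustered around a few hubs from one where attention is spread evenly – exactly the kind of structural signal the coral-reef metaphor points at.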

Maybe, using these techniques, we’ll finally be able to make the dream of a working, large-scale participatory democracy come true. We have held on to it for twenty-five centuries! It’s certainly worth trying.