Category Archives: complexity economics

Photo credit: McTrent on flickr.com

What counts as evidence in interdisciplinary research? Combining anthropology and network science

Intro: why bother?

Over the past few years, it turns out, three of the books that most influenced my intellectual journey were written by anthropologists. This comes as something of a surprise, as I find myself in the final stages of a highly quantitative, data- and network-science-heavy Ph.D. programme. The better I become at constructing mathematical models and building quantitatively testable hypotheses around them, the more I find myself fascinated by the (usually non-quantitative) way of thinking that great anthro research deploys.

This raises two questions. The first one is: why? What is calling to me from in there? The second one is: can I use it? Could one, at least in principle, see the human world simultaneously as a network scientist and as an anthropologist? Can I do it in practice?

The two questions are related at a deep level. The second one is hard, because the two disciplines simplify human issues in very different ways: they each filter out, and zoom in on, different things. Also, what counts as truth is different. Philosophers would say that network science and anthropology have different ontologies and different epistemologies. In other words, on paper, a bad match. The answer to the first question, of course, is that this same difference makes for some kind of added value. Good anthro people see on a wavelength that I, as a network scientist, am blind to. And I long for it… but I do not want to lose my own discipline’s wavelength.

Before I attempt to answer these questions, I need to take a step back, and explain why I chose network science as my main tool to look at social and economic phenomena in the first place. I’m supposed to be an economist. Mainstream economists do not, in general, use networks much. They imagine that economic agents (consumers, firms, labourers, employers…) are faced with something called objective functions. For example, if you are a consumer, your objective is pleasure (“utility”). The arguments of this function are things that give you pleasure, like holidays, concert tickets and strawberries. Your job is, given how much money you have, to figure out exactly which combination of concert tickets and strawberries will yield the most pleasure. The operative word is “most”: formally, you are maximising your pleasure function, subject to your budget constraint. The mathematical tool for maximising functions is calculus: and calculus is what most economists do best and trust the most.
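
To make this concrete, here is a minimal sketch of that textbook consumer problem, written in Python. The Cobb-Douglas utility function, prices and budget are assumptions I am making purely for illustration, not taken from any particular model.

```python
# A minimal sketch of the textbook consumer problem: maximise utility
# subject to a budget constraint. Utility function, prices and budget
# are assumed purely for illustration.
import numpy as np
from scipy.optimize import minimize

prices = np.array([20.0, 3.0])   # say, concert tickets and strawberries
budget = 100.0

def utility(x):
    tickets, strawberries = x
    return tickets ** 0.5 * strawberries ** 0.5   # Cobb-Douglas, assumed

# scipy minimises, so we minimise the negative of the pleasure function
result = minimize(
    lambda x: -utility(x),
    x0=np.array([1.0, 1.0]),
    bounds=[(1e-6, None), (1e-6, None)],
    constraints=[{"type": "ineq", "fun": lambda x: budget - prices @ x}],
)

print("Utility-maximising bundle:", result.x)
```

With equal exponents the optimiser ends up splitting the budget evenly between the two goods, which is exactly what the calculus-based solution predicts.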

This way of working is mathematically plastic. It allows scholars to build a consistent array of models covering just about any economic phenomenon. But it has a steep price: economic agents are cast as isolated. They do not interact with each other: instead, they explore their own objective functions, looking for maxima. Other economic agents are buried deep inside the picture, in that they influence the function’s parameters (not even its variables). Not good enough. The whole point of economic and social behaviour is that it involves many people who coordinate, fight, trade and seduce each other in an eternal dance. The vision of isolated monads duly maximising functions just won’t cut it. It also flies in the face of everything we know about cognition, and of decades of experimental psychology.

The networks revolution

You might ask how it is that economics insists on such a subpar theoretical framework. Colander and Kupers have a great reconstruction of the historical context in which this happened, and how it got locked in with university departments and policy makers. What matters to the present argument is this: I grasped at network science because it promised a radical fix to all this. Networks have their own branch of math: per se, they are no more relevant to the social world than calculus is. But in the 1930s, a Romanian psychiatrist called Jacob Moreno came up with the idea that the shape of relationships between people could be the object of systematic analysis. We now call this analysis social network analysis, or SNA.

Take a moment to consider the radicality and elegance of this intellectual move. Important information about a person is captured by the pattern of her relationships with others, whoever the people in question are. Does this mean, then, that individual differences are unimportant? It seems unlikely that Moreno, a practicing psychiatrist, could ever hold such a bizarre belief. A much more likely interpretation of social networks is that an individual’s pattern of linking to others, in a sense, is her identity. That’s what a person is.

Three considerations:

  1. The ontological implications of SNA are the polar opposite of those of economics. Economists embrace methodological individualism: everything important about identity (individual preferences, in consumer theory; a firm’s technology, in production theory) is given a priori with respect to economic activity. In sociometry, identity is constantly recreated by economic and social interaction.
  2. The SNA approach does not rule out the presence of irreducible differences across individuals. A few lines above I stated that an individual’s pattern of linking to others, in a sense, is her identity. By “in a sense” I mean this: it is the part of the identity that is observable. This is a game changer: in economics, individual preferences are blackboxed. This introduces the risk of economic analysis becoming tautological. If you observe an economic system that seems to plunge people into misery and anxiety, you can always claim this springs directly from people maximising their own objective functions because, after all, you can’t know what they are. This kind of criticism is often levelled at neoliberal thinkers. But social networks? They are observable. They are data. No fooling around, no handwaving. And even though there remains an unobservable component of identity, modern statistical techniques like fixed effects estimation can make system-level inferences from what is observable (though they were invented after Moreno’s time).
  3. Moreno’s work is all the more impressive because the mathematical arsenal around networks was then in its infancy. The very first network paper was published by Euler in 1736, but it seems to have been considered a kind of amusing puzzle, and left brewing for over a century. By Moreno’s time there had been significant progress in the study of trees, a particular class of graphs used in chemistry. But basically Moreno relied on the visual representation of his social networks, which he called sociograms, to draw systematic conclusions.

By Martin Grandjean (Own work), strictly based on Moreno, 1934 [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

With SNA, we have a way of looking at social and economic phenomena that is much more appealing than that of standard economics. It puts relationships, surely the main raw material of societies and economies, right under the spotlight. And it is just as mathematically plastic – more, in fact, because you can more legitimately make the assumption that all nodes in a social network are identical, except for the links connecting them to other nodes. I embraced it enthusiastically, and spent ten years teaching myself the new (to me) math and other relevant skills, like programming and agent-based modelling.
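
As a toy illustration of this assumption (mine, not a canonical SNA example), here is a sketch using the networkx library: the nodes carry no attributes at all, and the invented names and ties are there only to show that the pattern of linking alone is enough to tell nodes apart.

```python
# A toy sketch of "identical nodes, different links": no node attributes,
# only ties. The tiny example graph and the names are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Anna", "Bea"), ("Anna", "Carl"), ("Anna", "Dina"),
    ("Bea", "Carl"), ("Dina", "Elio"), ("Elio", "Fred"),
])

# Everything we can say about each node comes from its pattern of links.
print(nx.degree_centrality(G))       # how connected each node is
print(nx.betweenness_centrality(G))  # how much each node bridges the others
```

In the SNA reading, that printout is the observable part of each node’s identity: the nodes are interchangeable in themselves, yet their positions already differentiate them.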

Understanding research methods in anthropology

As novel as network science felt to me, anthropology is far stranger. From where I stand, it breaks with scholarship as I was trained to understand it in three places: how it treats individuals, how it treats questions, and what counts as a legitimate answer.

Spotlight on individuals

A book written by an anthropologist is alive with actual people. It resonates with their voices, with plenty of quotations; the reader is constantly informed of their whereabouts and even their names. Graeber, for example, towards the beginning of Debt introduces a fictitious example of a barter deal between two men, Henry and Joshua; a hundred pages later he shows us a token of credit issued by an actual 17th-century English shopkeeper, who really was called Henry. This historical Henry did his business in a village called Stony Stratford, in Buckinghamshire. The token is there to make the case that the real Henry would do business in a completely different way from the fictional one (credit, not barter). Three hundred pages later (after sweeping over five millennia of economic, religious and cultural history in two continents) he informs us that Henry’s last name was Coward, that he also engaged in some honourable money lending, and that he was held in high regard by his neighbours. To prove the case, he quotes the writings of one William Stout, a Quaker businessman from Lancashire, who started off his career as Henry’s apprentice.

To an economist, this is theatrical, even bizarre. The author’s point is that it was normal for early modern trade in European villages to take place on credit, rather than in cash. Why do we need to know this particular shopkeeper’s name and place of establishment, and the name and birthplace of his apprentice as well? Would the argument not be even stronger if it applied to general trends, to the average shopkeeper, instead of this particular man?

I am not entirely sure what is going on here. But I think it is this: to build his case, the author had to enter into dialogue with real people, and make an effort to see things through their eyes. Ethnographers do this by actually spending time with living members of the groups they wish to study; in the case of works like Debt, the author appears to have spent a great deal of time reading letters and diaries, and piecing things together (“Let me tell you how Cortés had gotten to be in that predicament…”). If the reader wishes to fully understand and appreciate the argument, she, too, needs to make that effort. And that means spending time with the informants, even in the abridged form of reading the essay, and getting to know them. So, detailed descriptions of individual people are a device for empathy and understanding.

All this makes reading a good anthro book great fun. It also is the opposite of what network scientists do: we build models with identical agents to tease out the effect of the pattern of linking. Anthropologists zoom in on individual agents and make a point of keeping track of their unique trajectories and predicaments.

Asking big questions

Good anthropologists are ambitious, fearless. They zero in on big, hairy, super-relevant questions and lay siege to them. Look at James Scott:

I aim, in what follows, to provide a convincing account of the logic behind the failure  of some of the great utopian social engineering schemes of the twentieth century.

That’s a big claim right there. It means debugging the whole of development policy, most urban regeneration projects, villagization-of-agriculture schemes, and the building of utopian “model cities” like Chandigarh or Brasilia. It means explaining why large, benevolent, evidence-based bureaucracies like the United Nations, the International Monetary Fund and the World Bank fail so often and so predictably. Yet Scott, in his magisterial Seeing Like a State, pushes on – and, as far as I am concerned, delivers the goods. David Graeber’s own ambition is in the title: Debt: The First 5,000 Years.

Economists don’t do that anymore. You need to be very, very senior (Nobel-grade, or close) to feel like you can tackle a big question. Researchers are encouraged to act as laser beams rather than searchlights, focusing tightly on well-defined problems. It was not always like that: Keynes’s masterpiece is immodestly titled The General Theory of Employment, Interest and Money. But that was then, and this is now.

What counts as “evidence”?

Ethnographic analysis – the main tool in the anthropologist’s arsenal – is not exactly science. Science is about building a testable hypothesis, and then testing it. But testing implies reproducibility of experiments, and that is generally impossible for meso- and macroscale social phenomena, because they have no control group. You cannot re-run the Roman Empire 20 times to see what would have happened if Constantine had not embraced the Christian faith. This kind of research is more like diagnosis in medicine: pathologies exist as mesoscale phenomena, and studying them helps. But in the end each patient is different, and doctors want to get it right this time, to heal this patient.

How do you do rigorous analysis when you can’t do science? When I first became intrigued with ethnography, someone pointed me to Michael Agar’s The Professional Stranger. This book started out as a methodological treatise for anthropologists in the field; much later, Agar revisited it and added a long chapter to account for how the discipline had evolved since its original publication. This makes it a sort of meta-methodological guide. Much of Agar’s argument in the additional chapter is dedicated to cautiously suggesting that ethnographers can maintain some kind of a priori categories as they start their work. This, he claims, does not make an ethnographer a “hypothesis-testing researcher”, which would obviously be really bad. When I first read this expression, I did a double take: how could a researcher do anything other than test hypotheses? But no: a “hypothesis-testing researcher” is, to ethnographers, some kind of epistemological fascist. What they think of as good epistemology is to let patterns emerge from immersion in, and identification with, the world in which informants live. They are interested in finding out “what things look like from out here”.

It sounds pretty vague. And yet, good anthropologists get results. They make fantastic applied analysts, able to process diverse sources of evidence from archaeological remains to statistical data, and tie them up into deep, compelling arguments about what we are really looking at when we consider debt, or the metric system, or the particular pattern with which cypress trees are planted in certain areas. A hard-nosed scientist will scoff at many of the pieces (for example, Graeber writes things like “you can’t help feeling that there’s more to this story”. Good luck getting a sentence like that past my thesis supervisor), but those pieces make a very convincing whole. To anthropologists, evidence comes in many flavours.

Coda: where does it all go?

You can see why interdisciplinary research is avoided like the plague by researchers who wish to publish a lot. Different disciplines see the world with very different eyes; combining them requires methodological innovation, with a high risk of displeasing practitioners of both.

But I have no particular need to publish, and remain fascinated by the potential of combining ethnography with network science for empirical research. I have a specific combination in mind: large-scale online conversations, harvested with ethnographic analysis. The harvested content is then rendered as a type of graph called a semantic social network, and reduced and analysed via standard quantitative methods from network science. Some brilliant colleagues and I have outlined this vision in a paper (a second one is in the pipeline), so I won’t repeat it here.
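
To give a feel for what “rendering a conversation as a semantic social network” could look like in practice, here is a rough sketch. It is my own illustration rather than the actual pipeline described in the paper; the posts, authors and ethnographic codes are invented.

```python
# A rough sketch (not the paper's actual pipeline) of turning
# ethnographically coded conversation into a semantic social network.
# Posts, authors and codes below are invented for illustration.
import networkx as nx
from networkx.algorithms import bipartite

posts = [
    {"author": "alice", "codes": ["debt", "trust"]},
    {"author": "bob",   "codes": ["trust", "community"]},
    {"author": "carla", "codes": ["debt", "community"]},
]

# Bipartite graph: people on one side, ethnographic codes on the other.
B = nx.Graph()
for post in posts:
    for code in post["codes"]:
        B.add_node(post["author"], kind="person")
        B.add_node(code, kind="code")
        B.add_edge(post["author"], code)

# Project onto the codes: two codes are linked if somebody used both.
codes = [n for n, d in B.nodes(data=True) if d["kind"] == "code"]
code_network = bipartite.weighted_projected_graph(B, codes)

print(code_network.edges(data=True))
print(nx.degree_centrality(code_network))
```

From the projected code-to-code network (or, symmetrically, the person-to-person one), the standard quantitative toolbox of network science takes over for reduction and analysis.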

I want, instead, to remark how this type of work is, to me, incredibly exciting. I see a potential to combine ethnography’s empathy and human centricity, anthropology’s fearlessness and network science’s exactness, scalability and emphasis on the mesoscale social system. The idea of “linking as identity” is a good example of methodological innovation: it reconciles the idea of identity as all-important with that of interdependence within the social context, and it enables simple(r) quantitative analysis. All this implies irreducible methodological tensions, but I think in most cases they can be managed (not solved) by paying attention to the context. The work is hard, but the rewards are substantial. For all the bumps in the road, I am delighted that I can walk this path, and look forward to what lies beyond the next turns.


Photo credit: Gerald Grote on flickr.com

Complexity and public policy: a very short reading list

I have a new talk out, sort of. So far I have delivered it only in Italian (slides with notes), and it’s still a work in progress. But getting there. It addresses the following question: can we reform government by making it more open and smart? If so, how?

I know. It sounds like something from a B-list TEDx event. You can almost picture some eager junior civil servant talking about “innovation” and “design” and “disruption”, the sort of disruption that does not destroy anyone’s job, civil rights, or democratic institutions. What could possibly go wrong?

It turns out to be much more difficult than that. Even talking about it is difficult. To even address the question, I had to ask myself: what is government? Why did it come into existence? Which evolutionary pressures now constrain its evolution? Doing so set me on a strange journey. I have been on it for about ten years now. It led me to uncover relevant stuff in many disciplines: history, economics, anthropology, network science, sociology, math, philosophy, archaeology, experimental psychology, biology (lots of biology). It does not look like it’ll be over any time soon.

I still don’t know if and how we can make government more open and smarter. But I did get something in return for ten years of hard thinking: my brain is now rewired. I now look at administrative action from a perspective borrowed from complexity science. I draw most of my metaphors from biology. I have (somewhat) learned to look for emergence and self-organisation, and I can’t unsee it. I have become (somewhat) aware of my own psychological biases and cognitive limits. This transformation has been so profound that I can barely hold a discussion with my former war buddies anymore.

And what I see is not cute. It’s strong stuff, inebriating and scary. So: last week I did this talk to open the School of Civic Technologies in Torino, and some students asked me for a reading list. Here it is, but don’t say you have not been warned. This is a red pill-blue pill situation. “There’s no turning back.”

So, here’s a barebones reading list in chronological order. If your interests center on public policy, start from the end. If you are more curious about complexity science, skip Ostrom, read Waldrop first and work your way up. Whatever you do, read Scott.

  1. Elinor Ostrom, 1990, Governing the Commons. People can and do steward common resources over the long run, with no central control and no definition of property rights. Great example, solid theorizing.
  2. Mitchell Waldrop, 1992, Complexity: the Emerging Science at the Edge of Order and Chaos. Still the best account of the story of the Santa Fe Institute in the early days. Functions as an introduction to the main intuitions behind complexity science.
  3. James Scott, 1998, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Essential reading. It shows how statecraft and legibility are tightly coupled. Casts a dark light on the emphasis on “evidence based”  and “data driven” when the guy speaking these words is also the guy with the gun.
  4. David Graeber, 2011, Debt: The First 5,000 Years. A long-term history of money and debt (it turns out they are the same thing). The book is very rich, and most of its value is not in its main thesis. For my purposes, the main teaching lies in the incredible value brought to the table by disciplines apparently quite far removed from policy issues – and, conversely, of the intellectual danger of not being interdisciplinary.
  5. Duncan Watts, 2011, Everything Is Obvious (Once You Know the Answer). One of my favourite network scientists sets out an ambitious (but achievable) research plan for the social sciences. Its take on what constitutes “data” and “evidence”, and their limits, is typical of complexity science. Vanilla policy people tend not to understand data that has been even minimally crunched.
  6. David Colander and Roland Kupers, 2014, Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up. An account of what public policies would look like if both the government and the governed knew complexity science, and were prepared to take it seriously. Review, in English and Italian.
  7. Beth Noveck, 2015, Smart Citizens, Smarter State. An authoritative take on why open government is failing. My favourite part is the treatment of how government became professionalized (and therefore exclusionary) in the USA. Review, in English and Italian.

Leaving innocence behind: why open government is failing, and how it will still win in the end

Many people dislike and mistrust backroom deals and old boys networks in government. They prefer open and transparent governance. It makes for better institutions, and a better human condition. I am one of those people. Since you’re reading this, chances are you are, too.

Like many of those people, I watched the global Internet rise and saw an opportunity.  I put a lot of work into exploring how Internet-enabled communication could make democracy better and smarter. Most of that work was practical. It consisted of designing and delivering open government projects, first in Italy (Kublai, Visioni Urbane, both links in Italian) and later in Europe (Edgeryders). Since 2006, I have kept in touch with my peers who, all over the world, were working on these topics. Well, I have news: this debate is now shifting. We are no longer talking about the things we talked about in 2009. If you care about democracy, this is both good and bad news. Either way, it’s big and exciting.

Back in the late 2000s, we thought the Internet would improve the way democracy worked by lowering the costs of coordination across citizens. This worked across the board. It made everything easier. Transparency? Just put the information on a website, make it findable by search engines. Participation? Online surveys are dirt cheap. Citizen-government collaboration? Deploy fora and wikis; take advantage of the Net’s ubiquity to attract to them the people with the relevant expertise. We had the theory; we had (some) practice. We were surfing the wave of the Internet’s unstoppable diffusion. When Barack Obama became President of the United States in 2008, we also had the first global leader who stood by these principles, in word and deed. We were winning.

We expected to continue winning. We had a major advantage: open government did not need a cultural shift to get implemented. Adoption of new practices was not a revolution: it was a retrofit. We would use words familiar to the old guard: transparency, accountability and participation. They were like talismans. Senior management would not always show enthusiasm, but they could hardly take position against those values. Once our projects were under way, they caused cultural shifts. Public servants learned to work in an open, collaborative way. Later, they found it hard to go back to the old ways of information control and need-to-know. So, we concluded, this could only go one way: towards more Internet in government, more transparency, participation, collaboration. The debate reflected this, with works like Beth Noveck’s Wiki Government (2009) and my own Wikicrazia (2010).

All that’s changed now.

What brought the change home was reading two recent books. One is Beth Noveck’s Smart Citizens, Smarter State. The other is Complexity and the Art of Public Policy, by David Colander and Roland Kupers. I consider these two books an advance on anything written before on the matter.

Beth is a beacon for us opengov types. She pioneered open government practices in a project called Peer-to-Patent. Because of that, President Obama recruited her first to his transition team, and later to the White House proper. She has a ton of experience at all levels, from theory to project delivery to national policy making. And she has a message for us: open government is failing. Here’s the money quote:

Despite all the enthusiasm for and widespread recognition of the potential benefits of more open governance, the open government movement has had remarkably little effect on how we make public decisions, solve problems, and allocate public goods.

Why is that? The most important proximate cause is that government practices are encoded in law. Changing them is difficult, and does need a cultural shift so that lawmakers can pass reforms. The ultimate cause is what she calls professionalized government. The reasoning goes like this:

  1. Aligning information with decision making requires curation of information, hence expertise.
  2. The professions have long served as a proxy for expertise. Professionalized government is new in historical terms, but it has now set in.
  3. So, “going open is a call to exercise civic muscles that have atrophied”.
  4. When professions set in, they move to exclude non-members from what they consider their turf. Everybody important in government is by now a professional, and mistrusts the potential of common citizens to contribute. And everybody reinforces everybody else’s convictions in this sense. So, you get a lot of  “meaningless lip service to the notion of engagement”, but little real sharing of power.

We now take professionalized government for granted, almost as if it were a law of nature. But it is not. Part of Beth’s book is a detailed account of how government became professionalized in the United States. At their onset, the US were governed by gentlemen farmers. Public service was guided by a corpus of practice-derived lore (called citizen’s literature) and learned on the job. But over time, more and more people were hired into the civil service. As this happened, a new class of government professionals grew in numbers and influence. It used part of that influence to secure its position, making bureaucracy more and more into a profession. Codes of conduct were drawn up. Universities spawned law and political science departments as the training and recruiting grounds of the new breed of bureaucrats. All this happened in sync with a society-wide movement towards measurement, standardization and administrative ordering.

Beth paints a rich, powerful picture of this movement in one of my favourite parts of the book. She then explains that new ways of channeling expertise to policy makers are illegal in the United States. Why? Because of a law drafted with a completely unrelated purpose, the Paperwork Reduction Act. And how did that come about? Lawmakers were trying to shield the bureaucracy from interference and pressure from the regulated. To do this, they relegated non-government professionals to the role of interest representation. In other words, citizens are important not because of what they know, but because of who they speak for. A self-enforcing architecture of professionalized government had emerged from the state’s activities, without an architect.

Wait. Architecture with no architect? That’s complexity. Beth’s intellectual journey has led her to complex systems dynamics. She does not actually say this, but it’s clear enough. Her story has positive feedback loops, lock-in effects, emergence. She has had to learn to think in complex systems terms to navigate real-world policy making. I resonate with this, because the same thing happened to me. I taught myself network math as my main tool into complexity thinking. And I needed complexity thinking because I was doing policy, and it just would not damn work in any other way.

David Colander and Roland Kupers start from complex systems science. Their question is this: what would policy look like if it were designed with a complex systems perspective from the ground up?

They come up with fascinating answers. The “free market vs. state intervention” polarization would disappear. So would the dominance of economics, as economic policy becomes a part of social policy. The state would try to underpin beneficial social norms, so that people would want to do things that are good for them and others instead of needing to be regulated into them. Policy making agencies would be interdisciplinary. Experiments and reversibility would be built into all policies.

As they wrote, Colander and Kupers were not aware of Beth’s work, and vice versa. Still, the two books converge on the same conclusion: modern policy making is a complex systems problem. Without complexity thinking, policy is bound to fail. I resonate with this conclusion, because I share it. I started to study complexity science in 2009. For four years now I have been in a deep dive into network science. I did this because I, too, was trying to do policy, and I was drawn to the explanatory power of the complexity paradigm. I take solace and pride in finding myself on the same path as smart people like Beth, Colander and Kupers.

But one thing is missing. Complexity thinking makes us better at understanding why policy fails. I am not yet convinced that it also makes us better at actually making policy. You see, complexity science has so far performed best in the natural sciences. Physics and biology aim to understand nature, not to change it. There is no policy there. Nature makes no mistakes.

So, understanding a social phenomenon in depth means, to some extent, respecting it. Try showing a complexity scientist a social problem, for example wealth inequality. She will show you the power-law behaviour of wealth distribution; explain it with success-breeds-success replicator dynamics; point out that this happens a lot in nature; and describe how difficult it is to steer a complex system away from its attractor. Complexity thinking is great at warning you against enacting ineffective, counterproductive policy. So far, it has not been as good at delivering stuff that you can actually do.
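
To see why the complexity scientist reaches for success-breeds-success dynamics, here is a toy simulation: my own illustration of a Simon-style rich-get-richer process, with arbitrary parameters, not a model taken from either book.

```python
# A toy "success breeds success" process, invented for illustration.
# Each unit of wealth is a "ticket" labelled with its owner; drawing a
# random ticket picks an owner with probability proportional to the
# wealth she already holds.
import random
from collections import Counter

random.seed(42)
p, n_steps = 0.1, 200_000   # p = chance that a unit starts a new fortune

owners = [0]                 # one ticket so far, held by fortune 0
n_fortunes = 1

for _ in range(n_steps):
    if random.random() < p:
        owners.append(n_fortunes)              # a new small fortune appears
        n_fortunes += 1
    else:
        owners.append(random.choice(owners))   # the rich get richer

wealth = sorted(Counter(owners).values(), reverse=True)
top_1pct = wealth[: max(1, len(wealth) // 100)]
print(f"{len(wealth)} fortunes; the top 1% hold "
      f"{sum(top_1pct) / len(owners):.0%} of all wealth")
```

Nobody in the simulation cheats, and yet a heavy-tailed distribution of fortunes emerges; that is the kind of attractor the complexity scientist warns is so hard to steer away from.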

The authors of both books do come up with recommendations to policy makers. But they are not completely convincing.

Beth’s main solution is a sort of searchable database of experts. A policy maker in need of expertise could type “linked data” into a search box and connect with people who know a lot about linked data. This will work for well-defined problems, when the policy maker knows with certainty where to look for the solution. But most interesting policy problems are not well defined at all. Is air pollution in cities a technological problem? Then we should regulate the car industry to make cleaner cars. Is it an urban planning problem? Then we should change the zoning regulation to build workplaces near homes and reduce commuting. Is it a labour organization issue? Then we should encourage employers to ditch offices and give workers groupware so they can work from home. Wait, maybe it’s a lifestyle problem: just make bicycles popular. No one knows. It’s probably all of these, and others, and any move you make will feed back onto the other dimensions of the problem.

It gets worse: the expertise categories themselves are socially determined and in flux. Can you imagine a policy maker in 1996 looking for an expert in open data? Of course not, the concept was not around. Beth’s database can, today, list experts in open data only because someone repurposed existing technologies, standards, licenses etc. to face some pressing problems. This worked so well that it received a label, which you can now put on your resumé and search for in a database. Whatever the merits of Beth’s solution, I don’t see how you can use it to find expertise for these groundbreaking activities. But they are the ones that matter.

Colander and Kupers have their own set of solutions, as mentioned above. They are a clean break with the way government works today. It is unlikely they would just emerge. Anyone who tried to innovate government knows how damn hard it is to get any change through, however small. How is such a full redesign of the policy machinery supposed to happen? By fiat of some visionary leader? Possible, but remember: the current way of doing things did emerge. “Architecture with no architect”, remember? Both books offer sophisticated accounts of that emergence. For all my admiration for the work of these authors, I can’t help seeing an inconsistency here.

So, where is 21st-century policy making going? At the moment, I do not see any alternative to embracing complexity. It delivers killer analysis, and once you see it you can’t unsee it. It also delivers advice which is actionable locally. For example, sometimes you can persuade the state to do something courageous and imaginative in some kind of sandbox, and hope that what happens in the sandbox gets imitated. For now, this will have to be enough. But that’s OK. The age of innocence is over: we now know there is no quick and easy fix. Maybe one day we will have system-wide solutions that are not utopian; if we ever do, chances are Beth Noveck, David Colander and Roland Kupers will be among the first to find them.

Photo credit: Cathy Davey on flickr.com