Tag Archives: Kublai

Leaving innocence behind: why open government is failing, and how it will still win in the end

Many people dislike and mistrust backroom deals and old boys’ networks in government. They prefer open and transparent governance. It makes for better institutions, and a better human condition. I am one of those people. Since you’re reading this, chances are you are, too.

Like many of those people, I watched the global Internet rise and saw an opportunity.  I put a lot of work into exploring how Internet-enabled communication could make democracy better and smarter. Most of that work was practical. It consisted of designing and delivering open government projects, first in Italy (Kublai, Visioni Urbane, both links in Italian) and later in Europe (Edgeryders). Since 2006, I have kept in touch with my peers who, all over the world, were working on these topics. Well, I have news: this debate is now shifting. We are no longer talking about the things we talked about in 2009. If you care about democracy, this is both good and bad news. Either way, it’s big and exciting.

Back in the late 2000s, we thought the Internet would improve the way democracy worked by lowering the costs of coordination across citizens. This worked across the board. It made everything easier. Transparency? Just put the information on a website, make it findable by search engines. Participation? Online surveys are dirt cheap. Citizens-government collaboration? Deploy fora and wikis; take advantage of the Net’s ubiquity to attract to them the people with the relevant expertise. We had the theory; we had (some) practice. We were surfing the wave of the Internet’s unstoppable diffusion. When Barack Obama became President of the United States in 2008, we also had the first global leader who stood by these principles, in word and deed. We were winning.

We expected to continue winning. We had a major advantage: open government did not need a cultural shift to get implemented. Adoption of new practices was not a revolution: it was a retrofit. We would use words familiar to the old guard: transparency, accountability and participation. They were like talismans. Senior management would not always show enthusiasm, but they could hardly take a position against those values. Once our projects were under way, they caused cultural shifts. Public servants learned to work in an open, collaborative way. Later, they found it hard to go back to the old ways of information control and need-to-know. So, we concluded, this can only go one way: towards more Internet in government, more transparency, participation, collaboration. The debate reflected this, with works like Beth Noveck’s The Wiki Government (2009) and my own Wikicrazia (2010).

All that’s changed now.

What brought the change home was reading two recent books. One is Beth Noveck’s Smart Citizens, Smarter Governance. The other is Complexity and the Art of Public Policy, by David Colander and Roland Kupers. I consider these two books an advance on anything written before on the matter.

Beth is a beacon for us opengov types. She pioneered open government practices in a project called Peer2Patents. Because of that, President Obama recruited her first to his transition team, and later to the White House proper. She has a ton of experience at all levels, from theory to project delivery to national policy making. And she has a message for us: Open Government is failing. Here’s the money quote:

Despite all the enthusiasm for and widespread recognition of the potential benefits of more open governance, the open government movement has had remarkably little effect on how we make public decisions, solve problems, and allocate public goods.

Why is that? The most important proximate cause is that government practices are encoded in law. Changing them is difficult, and does need a cultural shift so that lawmakers can pass reforms. The ultimate cause is what she calls professionalized government. The reasoning goes like this:

  1. Aligning information with decision making requires curation of information, hence expertise.
  2. The professions have long served as a proxy for expertise. Professionalized government is new in historical terms, but it has now set in.
  3. So, “going open is a call to exercise civic muscles that have atrophied”.
  4. When professions set in, they move to exclude non-members from what they consider their turf. Everybody important in government is by now a professional, and mistrusts the potential of common citizens to contribute. And everybody reinforces everybody else’s convictions in this sense. So, you get a lot of “meaningless lip service to the notion of engagement”, but little real sharing of power.

We now take professionalized government for granted, almost as if it were a law of nature. But it is not. Part of Beth’s book is a detailed account of how government became professionalized in the United States. At its onset, the US was governed by gentlemen farmers. Public service was guided by a corpus of practice-derived lore (called citizen’s literature) and learned on the job. But over time, more and more people were hired into the civil service. As this happened, a new class of government professionals grew in numbers and influence. It used part of that influence to secure its position, making bureaucracy more and more into a profession. Codes of conduct were drawn up. Universities spawned law and political science departments, as the training and recruiting grounds of the new breed of bureaucrats. All this happened in sync with a society-wide movement towards measurement, standardization and administrative ordering.

Beth paints a rich, powerful picture of this movement in one of my favourite parts of the book. She then explains that new ways of channeling expertise to policy makers are illegal in the United States. Why? Because of a law drafted with a completely unrelated purpose, the Paperwork Reduction Act. And how did that come about? Lawmakers were trying to preserve the bureaucracy from interference and pressure from the regulated. To do this, the law relegated non-government professionals to the role of interest representation. In other words, citizens are important not because of what they know, but because of who they speak for. A self-enforcing architecture of professionalized government had emerged from the state’s activities, without an architect.

Wait. Architecture with no architect? That’s complexity. Beth’s intellectual journey has led her to complex systems dynamics. She does not actually say this, but it’s clear enough. Her story has positive feedback loops, lock-in effects, emergence. She has had to learn to think in complex systems terms to navigate real-world policy making. I resonate with this, because the same thing happened to me. I taught myself network math as my main tool into complexity thinking. And I needed complexity thinking because I was doing policy, and it just would not damn work in any other way.

David Colander and Roland Kupers start from complex systems science. Their question is this: what would policy look like if it were designed with a complex systems perspective from the ground up?

They come up with fascinating answers. The “free market vs. state intervention” polarization would disappear. So would the dominance of economics, as economic policy becomes a part of social policy. The state would try to underpin beneficial social norms, so that people would want to do things that are good for them and others instead of needing to be regulated into them. Policy making agencies would be interdisciplinary. Experiments and reversibility would be built into all policies.

As they wrote, Colander and Kupers were not aware of Beth’s work, and vice versa. Still, the two books converge on the same conclusion: modern policy making is a complex systems problem. Without complexity thinking, policy is bound to fail. I resonate with this conclusion, because I share it. I started to study complexity science in 2009. For four years now I have been in a deep dive into network science. I did this because I, too, was trying to do policy, and I was drawn to the explanatory power of the complexity paradigm. I take solace and pride in finding myself on the same path as smart people like Beth, Colander and Kupers.

But one thing is missing. Complexity thinking makes us better at understanding why policy fails. I am not yet convinced that it also makes us better at actually making policy. You see, complexity science has so far performed best in the natural sciences. Physics and biology aim to understand nature, not to change it. There is no policy there. Nature makes no mistakes.

So, understanding a social phenomenon in depth means, to some extent, respecting it. Try showing a complexity scientist a social problem, for example wealth inequality. She will show you the power-law behaviour of wealth distribution; explain it with success-breeds-success replicator dynamics; point out that this happens a lot in nature; and describe how difficult it is to steer a complex system away from its attractor. Complexity thinking is great at warning you against enacting ineffective, counterproductive policy. So far, it has not been as good at delivering stuff that you can actually do.

The authors of both books do come up with recommendations to policy makers. But they are not completely convincing.

Beth’s main solution is a sort of searchable database for experts. A policy maker in need of expertise could type “linked data” into a search box and connect with people who know a lot about linked data. This will work for well-defined problems, when the policy maker knows with certainty where to look for the solution. But most interesting policy problems are not well defined at all. Is air pollution in cities a technological problem? Then we should regulate the car industry to make cleaner cars. Is it an urban planning problem? Then we should change the zoning regulation to build workplaces near to homes to reduce commuting. Is it a labour organization issue? Should we encourage employers to ditch offices and give workers groupware so they can work from home? Wait, maybe it’s a lifestyle problem: just make bicycles popular. No one knows. It’s probably all of these, and others, and any move you make will feed back onto the other dimensions of the problem.

It gets worse: the expertise categories themselves are socially determined and in flux. Can you imagine a policy maker in 1996 looking for an expert in open data? Of course not, the concept was not around. Beth’s database can, today, list experts in open data only because someone repurposed existing technologies, standards, licenses, etc. to face some pressing problems. This worked so well that it received a label, which you can now put on your resumé and which can be searched for in a database. Whatever the merits of Beth’s solution, I don’t see how you can use it to find expertise for these groundbreaking activities. But they are the ones that matter.

Colander and Kupers have their own set of solutions, as mentioned above. They are a clean break with the way government works today. It is unlikely they would just emerge. Anyone who has tried to innovate government knows how damn hard it is to get any change through, however small. How is such a full redesign of the policy machinery supposed to happen? By fiat of some visionary leader? Possible, but the current way of doing things did emerge: “architecture with no architect”. Both books offer sophisticated accounts of that emergence. For all my admiration for the work of these authors, I can’t help seeing an inconsistency here.

So, where is 21st century policy making going? At the moment, I do not see any alternatives to embracing complexity. It delivers killer analysis, and once you see it you can’t unsee it. It also delivers advice which is actionable locally. For example, sometimes you can persuade the state to do something courageous and imaginative in some kind of sandbox, and hope that what happens in the sandbox gets imitated. For now, this will have to be enough. But that’s OK. The age of innocence is over: we now know there is no easy-and-fast fix. Maybe one day we will have system-wide solutions that are not utopian; if we ever do, chances are Beth Noveck, David Colander and Roland Kupers will be among the first to find them.

Photo credit: Cathy Davey on flickr.com

Meet the dragon hatchling: herding emergent behavior in an online community (long)

This is not a post, but rather an essay, longer than my normal posts. It concerns the Dragon Trainer project, which I have already mentioned on this blog: together with my colleagues at the University of Alicante and the European Center for Living Technology, I am researching software for the early-stage diagnosis of emergent social dynamics in online communities, meant to help community managers make informed decisions. I think this will be only the first essay in a series.

Why is this important?

For some time now policy makers have been fascinated by entities like Wikipedia: non-organizations, loose communities of individuals with almost no money and no command structure that manage, despite this apparent lack of cohesion, to collaborate every day in producing complex, coherent artifacts. Such phenomena are made even more tantalizing by the uncanny speed and efficiency with which they do what they do. Can we summon Wikipedia-like entities into existence, and order them to produce public goods? Can we steer them? Can we do public policy with them?

In order to do so, we will need to learn to craft policy in a new space, which – following Lane and others – we call the meso level. Such policies will not be targeted at individual behavioral change (micro policies); nor at the economy or the whole society (macro policies). They will be targeted at achieving certain patterns of interaction between a large group of people. Individuals may and will move in and out of these patterns, just like individual water molecules move in and out of clouds; but this does not much affect the behavior of the cloud.

Operating at the meso level – running an online community of innovators, for example – entails managing a paradox. Structuring interaction among participants as a network of relationships, of which participants themselves are the nodes, can result in extremely effective and rewarding participation, because – under certain circumstances – each participant is exposed to information that is relevant to them, while not having to browse all the information the community knows. This results in a very high signal to noise ratio from the point of view of the participants; they often report experiences of greatly enhanced serendipity, as they seem to stumble into useful information that they did not know they were looking for and was sent their way by other participants.

This extraordinary efficiency cannot be planned a priori by community managers, who – after all – do not and cannot know what each individual participant knows and what she wants to know. The desirable properties of networks as information sharing tools arise from the link structure being emergent from the community’s endogenous social dynamics. The paradox stems from the fact that endogenous social dynamics can and often do steer online communities away from their goals and onto idle chitchat or “hanging out”, which seems to be the default attractor for large online networks. So, managers of communities of innovators need to let endogenous dynamics create a link structure to transport information efficiently across the network, while ensuring that the community does not lose its focus on helping members to do what they participate in it to do.

Building Dragon Trainer: a case study

With this in mind, I joined forces with emergence theorists, network scientists and developers to build the prototype of Dragon Trainer, an online community management augmentation tool. It models an online community as a network of relationships, and uses network analysis as its main tool for drawing inferences about what goes on in the community. Generally speaking community managers build knowledge of their communities by spending a lot of time participating rather than using formal analysis; and they act on the basis of that knowledge by resorting to a repertoire of steering techniques learned by trial and error. The error component in trial-and-error is usually fairly large, because by construction there is no top-down control in online communities; the community manager can only attempt to direct emergent social dynamics towards the result that she sees as desirable. Control over the software does give her top-down control in the trivial case of prohibition: by disabling access, or comments, she can always dampen activity directly. What she cannot do without directing emergence is enhancing activity – which is what online communities of innovators are for.

DT aims at augmenting this approach in two ways. Firstly, it allows the community manager to enrich the “local” knowledge she acquires by simply spending time interacting with the community. Such knowledge is extremely rich and fairly accurate for small communities, but it does not scale well as the network grows. Network analysis, on the other hand, scales well: computing network metrics on large networks is conceptually not harder than doing it on small ones, though it can get computationally more intensive. In an ideal situation, a community manager might start to use DT when her network is still small and she has a good informal understanding of what goes on therein simply by participating in it; she could then build a repertoire of recipes. We define recipes as formalisms that map from changes in the mathematical characteristics of the network to social phenomena in the community represented by that network. Recipes of this kind enhance the community manager’s diagnostic abilities, and take the form:

Network metric A is a signature of social phenomenon B.

As she tries out different management techniques to yield her desired results, she would then proceed to build more recipes, this time mapping from management techniques to their outcomes – the latter also being measured in terms of changes in the metrics of the network representing the community. Recipes of this kind enhance the community manager’s policy making, and take the form:

To get to social outcome C, try doing D. Success or failure would show up in network metric E.

She then might be able to lean on repertoires of recipes of both kinds to run the network as it gets larger, because the software does not lose its ability to monitor those changes. These repertoires of correspondences are going to be built by integrating inputs from two different sources. The first one is theoretical: the systemic theory of emergence in the social world that some of my colleagues are engaged in developing. The second one is practical: the firsthand experience of community managers, myself included. Once built, the two repertoires would make up DT’s knowledge base, its computational intelligence core.
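The two kinds of recipes lend themselves naturally to a software representation. Here is a minimal sketch in Python, assuming the networkx library; the specific metric tests, thresholds and phenomenon labels are invented for illustration and are not part of DT’s actual knowledge base:

```python
import networkx as nx

def diagnose(graph, recipes):
    """Apply each recipe's metric test to the graph and collect the
    social phenomena whose signatures are present."""
    return [phenomenon for test, phenomenon in recipes if test(graph)]

# Illustrative recipes of the form
# "network metric A is a signature of social phenomenon B".
recipes = [
    # If the giant component holds less than 80% of the nodes,
    # the community may be fragmenting into islands.
    (lambda g: len(max(nx.connected_components(g), key=len)) / g.number_of_nodes() < 0.8,
     "community is fragmenting"),
    # If one node is linked to more than half of all the others,
    # activity may depend too heavily on a few hubs.
    (lambda g: max(dict(g.degree()).values()) / (g.number_of_nodes() - 1) > 0.5,
     "activity depends on a few hubs"),
]

g = nx.karate_club_graph()  # stand-in for a community graph
print(diagnose(g, recipes))  # → ['activity depends on a few hubs']
```

The point of encoding recipes as data is that the same diagnostic battery can be re-run as the network grows, which is exactly where informal, participation-based knowledge stops scaling.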

What follows is an account of a concrete case in which network science helped formulate a policy goal, the attainment of which could then be monitored through, again, network analysis. It is only a small example, but we believe directed emergence is at work in it. And if emergence can really be directed then yes, in principle public policies can happen at the meso level and come closer to Wikipedia.

Context

In March 2009 I was the director of an online community called Kublai, a project of the Italian Ministry of Economic Development. People use Kublai to develop business plans for creative industries projects or companies. At the time it was about 10 months old; we had about 600 registered members working on 80 projects. I directed a small team that did its best to encourage people to try and think out new things, and to help each other to do so. Most creatives find it hard to achieve critical distance from their pet ideas, and an external, disenchanted eye might help them become aware of weaknesses or untapped sources of strength. Even simple questions like “why do we need your project?” or “how do I know this is better than the competition?” can help.

These conversations happen online, in the context of a small, dedicated social network that used the Ning software-as-a-service platform. We customized Ning’s translation to change the name of the “groups” functionality into “projects”: the object in the database was still the same, but rhetorically we were encouraging people to come together to collaborate on a project’s business plan. Ning groups lent themselves well to the task, because each sports (1) a project wall for comments; (2) a forum for threaded discussion; (3) group-wide broadcast message functionalities at the disposal of the group creator. In March 2009 the largest projects/groups in Kublai had about 60-70 members.

Ruggero Rossi, like me, is passionate about self-organizing behavior in the social world. When he proposed to do his thesis by running a network analysis of the Kublai social graph, I supported him in every way I could. The thesis was supervised by David Lane, a complexity economist I admire, which was an added bonus.

March 2009: diagnosis

The first problem was to specify our network. We decided nodes would correspond to people: each user is a node. The links could be several things, since there are several types of relationships between members of a Ning social network: relationships might be created by adding someone as a friend, leaving a comment on her wall, sending her a message, joining the same group/project etcetera. We decided to focus on collaboration in writing business plans, which is Kublai’s core business; we also decided that, in the context of Kublai, only writing in the context of a group/project counts as collaboration.

So we defined the link as follows: Alice is connected to Bob if they both have posted a comment on the same project. This is a somewhat bold assumption, because positing some kind of communication between the two implies that everybody who ever posted anything within a project reads absolutely everything that is posted in that project. I thought that was reasonable in the context of Kublai, also given the short time frame in which the comments had been piling up. This implies a bidirectional relationship: in network parlance, the graph is called undirected, and its links are called edges. The edges are weighted: the edge connecting Alice to Bob has an intensity, or weight, equal to the number of comments posted by Alice on the project times the number of comments posted by Bob on the same project. If they collaborate on more than one project, we simply add the weight of the link created across all projects on which they are both posting comments.
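Under these definitions, the graph can be built from a plain list of (commenter, project) records. A minimal sketch in Python with networkx; the comment records here are invented for illustration:

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# One (commenter, project) pair per comment posted.
comments = [
    ("alice", "p1"), ("alice", "p1"), ("bob", "p1"),
    ("bob", "p2"), ("carol", "p2"), ("carol", "p2"),
]

# Count comments per (person, project).
counts = Counter(comments)

# Group commenters by project.
by_project = {}
for (person, project), n in counts.items():
    by_project.setdefault(project, {})[person] = n

g = nx.Graph()  # undirected: collaboration is taken as bidirectional
for project, people in by_project.items():
    # Every pair of commenters on the same project gets an edge whose
    # weight is the product of their comment counts; weights add up
    # across shared projects.
    for (a, na), (b, nb) in combinations(people.items(), 2):
        w = na * nb
        if g.has_edge(a, b):
            g[a][b]["weight"] += w
        else:
            g.add_edge(a, b, weight=w)

print(g.edges(data=True))
```

With these toy records, alice and bob share project p1 (2 × 1 comments, edge weight 2), and bob and carol share p2 (1 × 2 comments, edge weight 2), while alice and carol are not directly linked.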

Eventually Ruggero crunched the data and showed me his results, which boil down to this:

  • all active (posting) members of Kublai were connected in a giant component: there was no “island”.
  • a kernel of people who were highly connected to each other acted as Kublai’s hub, connecting each participant to everybody else in the network. All of the paid team members were part of this kernel: no surprise here. More surprisingly, many non-team members were also part of the kernel. So many, in fact, that if you removed all of the team members from the graph it would still not break down; everybody would still be connected to everybody else.
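The robustness claim in the second point can be checked mechanically: drop the team’s nodes and test whether the rest of the graph still forms a single connected component. A sketch, assuming networkx; the toy graph and team list are invented stand-ins for the Kublai data:

```python
import networkx as nx

def still_connected_without(graph, team):
    """Return True if the community graph stays a single connected
    component after removing the given team members' nodes."""
    rest = graph.copy()
    rest.remove_nodes_from(team)
    return rest.number_of_nodes() > 0 and nx.is_connected(rest)

# Toy community: a densely linked kernel (nodes 0-4) plus three
# peripheral members (5-7), each attached to one kernel node.
g = nx.complete_graph(5)
g.add_edges_from([(2, 5), (3, 6), (4, 7)])

print(still_connected_without(g, team=[0, 1]))  # → True
```

In this toy case removing two kernel nodes leaves the graph connected, but removing the three nodes that peripheral members depend on would break it, which is the distinction the March 2009 finding turned on.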

Summer-fall 2009: policy

This was an epiphany for me. I discussed these results with the team, and our interpretation was this: a core of dedicated community members was forming that was buying into Kublai’s peering ethics. They took time off developing their own projects to help others with theirs. This was a very good thing, in two ways.

  1. it implied efficiency. With more people participating in more than one project, Kublai could do a better job of transporting information from one project to another, and that is the whole point of the exercise. Alice is stuck with her project on some issue, and it turns out that Charlie, somewhere else in the network, has run into the same problem before. Alice does not know this, but she does collaborate with Bob, and Bob is a collaborator on Charlie’s project. So Bob can point Alice to what Charlie already did: Alice need not walk Charlie’s learning curve all over again.
  2. it implied resilience. If enough people do this, we thought, maybe the Ministry can turn Kublai over to the community, which will keep running it at little or no cost to the taxpayer. This would have created a public good out of thin air. Not bad!

So, we decided to encourage this self-organizing feature. How to do it? One way to go about it was to encourage especially the people developing projects (progettisti) to interact more. What could bring them together? Purely social stuff like football or celebrity discussion groups were not even considered: they would mar the informal, yet hardworking atmosphere of Kublai. According to my readings on the early days of online communities, something that any community loves to do is discussing itself. So, we thought we would turn over some of the control over the rules of Kublai to the community, and we would put significant effort into it. We created a special group in Kublai, the only one that was not a project at that point, and called it “Club dei Progettisti”. Joining was unrestricted; also, we actively invited everybody in the kernel and everybody who had started a project up to that point. We did things like coordinate to welcome newcomers and discuss the renovation of the Second Life island we used for synchronous meetings. The atmosphere was that of the inner circle, the “tribe elders” of our community. This went on from about May to the end of 2009.

December 2009: policy assessment

Was the policy working? It was hard to say. The Club dei Progettisti grew to be by far the largest and most active group in Kublai, but that does not mean that people interacting more in that context would then go on to collaborate on business plans in the context of individual projects, which was our real goal. It did feel like we had a vibrant community going – but not all the time. And then vibrant with respect to what? And how does vibrancy translate into effectiveness? We spent a lot of time online, and sailed by instinct. Instinct checked green, but let’s face it – after one thousand users and 150 projects it was hard not to lose the overview.

With another round of network analysis I would have been able to have a stab at policy assessment. In network terms, I wanted the kernel to be bigger: more people not from the Kublai team, collaborating across more projects, would facilitate the information flow across projects and improve efficiency. But Ruggero had finished his thesis, and the administrative structure running Kublai was at this point so rigid that contracting him was next to impossible.

Only recently, two years later, did I get the chance to crunch an export of the Kublai database. We at the Dragon Trainer group extracted a snapshot of Kublai on March 23rd 2009 (the same day Ruggero scraped it for his thesis) and one on December 31st 2009.

In these two images the nodes, representing members of the Kublai community, have been color coded according to a measure called betweenness centrality, which indicates how often a node sits on the shortest paths connecting other nodes (it is often interpreted as an indicator of brokerage efficacy). Yellow nodes are the least central and blue nodes the most central, with orange ones in an intermediate position; nodes representing the Kublai team employees (typically very central) have been dropped from the graph altogether. In March 2009, a handful of community members, fewer than ten, collaborated on several projects on a regular basis – and, as a result, did most of the brokerage of information across the network. By December, however, their number had about doubled, despite the fact that attaining orange or blue “status” required a lot more work (the most central node in the March network has betweenness centrality 1791; the one in the December network, 7740). At the end of the year, Kublai’s kernel was both larger and more connected than it had been in March. This growth is an emergent social dynamic: there is no top-down control in these graphs, as anybody I could tell “go form a link with X” has been dropped from the dataset. But this emergence is somehow directed: we wanted to get to a social arrangement whose graph looks like this.
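The color coding rests on a standard computation, which can be sketched as follows with networkx. The three-tier cutoffs below are arbitrary choices for illustration, not the ones used for the Kublai images:

```python
import networkx as nx

def centrality_tiers(graph, team=()):
    """Compute betweenness centrality with team members dropped, then
    bin the remaining nodes into yellow / orange / blue tiers."""
    g = graph.copy()
    g.remove_nodes_from(team)
    bc = nx.betweenness_centrality(g, normalized=False)
    lo = max(bc.values()) / 3   # illustrative cutoffs: thirds of the max
    hi = 2 * lo
    tiers = {}
    for node, score in bc.items():
        if score >= hi:
            tiers[node] = "blue"
        elif score >= lo:
            tiers[node] = "orange"
        else:
            tiers[node] = "yellow"
    return bc, tiers

g = nx.karate_club_graph()  # stand-in for a Kublai snapshot
bc, tiers = centrality_tiers(g)
print(tiers[0])  # the hub of this toy graph comes out blue
```

Because the same function can be run on any snapshot, the March and December graphs become directly comparable: the policy question "did the kernel grow?" turns into counting blue and orange nodes.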

You can see how powerful this thing is. We can already say something just by looking at the graph; we have not even started to crunch the numbers, let alone do more sophisticated things. We could (and we will) compute and compare measures of network centralization; respecify the network in many different ways, allowing for link impermanence (if Alice and Bob are linked but don’t keep interacting, after a while the edge fades out), bipartite networks (what about a people-project graph?), or directed graphs (links representing monodirectional help rather than bidirectional collaboration: if Alice posts on Bob’s project, she is helping him, but Bob might not reciprocate); and play with the data in as many ways as we can think of.
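Two of these respecifications are straightforward to express in code. A sketch with invented comment records; the project-owner mapping is hypothetical, since ownership would come from the database export:

```python
import networkx as nx

# One (commenter, project) pair per comment; owners are invented here.
comments = [("alice", "p1"), ("bob", "p1"), ("bob", "p2"), ("carol", "p2")]
owners = {"p1": "bob", "p2": "carol"}

# Bipartite people-project graph: edges only between a person and the
# projects they commented on, never person-person or project-project.
bg = nx.Graph()
bg.add_nodes_from({p for p, _ in comments}, bipartite="people")
bg.add_nodes_from({prj for _, prj in comments}, bipartite="projects")
bg.add_edges_from(comments)

# Directed help graph: a comment on someone else's project is an arc
# from the commenter to the project owner, not necessarily reciprocated.
dg = nx.DiGraph()
for person, project in comments:
    owner = owners[project]
    if person != owner:
        dg.add_edge(person, owner)

print(sorted(dg.edges()))  # → [('alice', 'bob'), ('bob', 'carol')]
```

Note how the directed version breaks the symmetry of the original specification: alice helps bob, but nothing here says bob helps alice back.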

We keep working on this, and we will continue to share our results and our thoughts. If you want to be a part of this effort, you can, and are absolutely welcome. Everything we do (the data, the code, the papers, even the mailing list) is open source and reusable. Download what takes your fancy and let us know what you are doing. We are looking forward to learning from you.

  • Download the raw data (database dump, anonymized).
  • Download and improve the code to export the data into file formats supported by the main network analysis software.
  • Download the exported data if you would like to jump right into the network analysis. .net files (Pajek projects) can also be imported into Gephi and Tulip.
  • Join our mailing list if you want to be involved in our discussion. Everyone is welcome; no technical background is required. We are online community managers, mathematicians, coders and public policy practitioners, committed to being interdisciplinary and therefore to going out of our way to make anyone feel welcome.

Is evaluation overrated?

Public policy experts insist on quantitative evaluation as an accountability mechanism. The European Commission has taken the lead in a campaign for the adoption of quantitative evaluation techniques even in traditionally “soft” areas like social cohesion or social innovation. The message is simple: these are hard times for public budgets. If you want us to fund something, you have to explain why it matters more than other things. It makes sense. How could it be wrong?

And yet I am not convinced. Evaluation rests on solid theoretical foundations when it measures outcomes in the same unit as the input. The gold standard here is the famous return on investment (ROI): invest dollars. Harvest dollars. Divide the dollars harvested by the dollars invested. Easy. If you invest dollars to obtain, say, an increase in the heron population of a wetland, or an expected reduction in lung cancer mortality, things start to get more complicated. And if you try to compare an increase in the heron population with a reduction in lung cancer mortality, they get much more complicated.

I know this well. I am a veteran of a very similar battle.

In the 1980s, environmental economists, led by influential thinkers like David Pearce, Mrs Thatcher’s environmental advisor, tried to quantify the economic value of environmental goods. The goal was to teach humanity to abandon the idea that the environment can be taken for granted, and to start treating it as a scarce resource. The stronghold of this scene was University College London, where Pearce directed a research centre and a Master’s programme. I enrolled in the latter in 1992. Our main tool was an extension of cost-benefit analysis, the well-tested instrument of New Deal-era evaluators. We had a whole array of clever tricks for translating environmental benefits into dollars or pounds: hedonic prices, contingent valuation, the travel cost method. Once converted into monetary units, environmental costs and benefits could be compared with anything, making rigorous evaluation possible. Or could they?

Moving from our London classrooms into practice, we discovered that things were far more complicated. First of all, there was a major theoretical problem: we were trying to emulate markets to value environmental benefits because, according to standard economic theory, well-functioning markets assign to goods exactly the prices that maximize collective welfare. Unfortunately, the mathematical conditions for this to happen are so restrictive that they practically never hold in real life. Joseph Stiglitz, one of my favourite economists, won a Nobel by proving that removing just one of those conditions (perfect, symmetric information) makes the virtuous properties of markets collapse completely. Secondly, even if you are willing to take a leap of faith on the theoretical foundations, actually quantifying anything is hard. Very hard. The necessary data are usually unavailable, and generating them is very expensive, so many researchers took refuge in opinion surveys (called "contingent valuations", which sounds more scientific). Wrong move: we promptly got bogged down in the cognitive psychology paradoxes explored in depth by Daniel Kahneman and Amos Tversky, who showed conclusively that humans do not value things the way markets do, and won yet another Nobel in the process.

On top of that, the political situation was very unfavourable for this kind of research. The only parties willing to fund environmental valuation generously were the largest, most aggressive polluting companies. A whole branch of the literature flourished in the shadow of the infamous wreck of the Exxon Valdez oil tanker: in London we studied the papers of the experts Exxon had commissioned to assess the damage done to the Arctic environment by a hundred million litres of crude oil spilled into the sea. Those experts had the means to do a real valuation, but the people paying them were not exactly neutral about the results. It cannot have been an easy position to be in.

And yet, evaluate we must. So we tried. And we discovered something interesting: for all its limits, carrying out an evaluation exercise on a project leads you to understand that project much better. In the end you get a result, and you are able to defend it. Unfortunately, that result is never a scalar (like "this lake is worth 20 million euro"); it almost always takes the form "if you carry out this project you will gain A but lose B and C", with A, B and C measured in completely different, irreducible units. Moreover, the only people who really learn from an evaluation are the evaluators: everybody else sees only the final result, not the sophisticated reasoning needed to produce it.

The cause of evaluation as a requirement for public works has made undeniable progress. Environmental impact assessment, used in America since the 1960s, was made mandatory in Europe for many public projects by a 1985 directive. Money was invested. Many consultants took some improbable course and started selling environmental impact assessments. Did this usher in objective, evidence-based evaluation? I don't think so. Even now, twenty-five years later, environmentalists and contractors keep fighting each other in court, each brandishing their own environmental impact assessment, or simply insisting that the other side commissioned a biased EIA to support its position (this is what is happening with the Turin-Lyon high-speed rail link). This does not mean EIA is useless: but it does mean it is not objective. The promise of quantitative, and therefore impartial, evaluation was illusory. I suspect this is no accident, but part of the fundamental structure of evaluation: evaluating, after all, implies values. Even ROI embeds a set of values: in particular, it implies that all relevant information is contained in price signals, so that if you are making money you must be increasing society's welfare.

I would be curious to try an alternative approach to evaluation: the emergence of a community that participates in a project, contributes time, brings gifts. For example, in the course of a Council of Europe project called Edgeryders, I recorded a short introductory video in English. A member of our community uploaded it to Universal Subtitles, transcribed the audio into English subtitles and translated them into Spanish. Two weeks later, it had been translated into nine languages. Things like this do not happen to civil servants every day: our small group of Eurocrats was delighted, but above all we took it, together with the engagement on our online platform, the constant appreciation on Twitter and other community initiatives like the civic engagement map, as a signal that we were doing something good. Like an evaluation; a vote cast in person-hours and commitment. An evaluation of this kind is not an activity performed by an evaluator, but an emergent property of the project itself; and therefore fast, cheap, and merciless towards projects that fail to make their usefulness clear to citizens.

Of course, projects built around online communities like Edgeryders or Kublai lend themselves particularly well to being evaluated this way: they contain thousands of hours of high-quality human work donated by citizens, a natural unit of account for evaluation. But the criterion may be more generalizable than it seems. Recently a friend of mine, who runs a small software company, surprised me with this remark:

These days, half of a programmer's job consists of growing and motivating a community on Github.

So it is not just a bias in my own perspective: in a growing number of fields, the complexity of problems has become unmanageable unless you tackle it with the tools of collective, swarm intelligence. More and more problems can, and perhaps must, be conceived in terms of an online community growing around them. If this is true, that community can serve as the basis for an evaluation. It really should be obvious: I have never met an ecologist or a social worker who thinks that evaluating an environmental or social impact in terms of ROI makes any sense at all. If we can invent a theoretically sound, low-cost path to evaluation, we can and should get rid of ROI for nonprofit activities. I don't think we will miss it.
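A minimal sketch of what such an emergent evaluation could look like, using donated person-hours as the unit of account. The activity names and hour counts below are hypothetical, invented purely to illustrate the idea:

```python
# Hypothetical sketch: score a project by the human time its community donates.
# Activities and hours are invented for illustration; in a real project they
# would come from platform logs (edits, translations, posts, mapped entries).
donated_hours = {
    "subtitle transcription and translation": 35,
    "online platform discussions": 410,
    "civic engagement map contributions": 120,
}

total_hours = sum(donated_hours.values())
print(f"Community vote: {total_hours} person-hours donated")

# The signal is merciless by design: a project nobody contributes to
# scores zero, and no external evaluator is required to say so.
```

Unlike ROI, this keeps the evaluation in a single, honest unit (human time freely given) without pretending to convert herons or lives into dollars.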