Pedro Prieto-Martin, a Spanish researcher and occasional commentator on this blog, has just published a paper on the state of the art of e-participation in Europe. It turns out to be pretty grim:
- The European Commission has taken the lead in promoting the discipline, launching several dedicated research programmes
- Since 2000 the EC has funded at least 74 e-participation projects, at a total cost of about 187 million euro; a network of excellence for another 6 million; and, later, a batch of initiatives to evaluate and network the existing experiences
- One of these programmes, the eParticipation Preparatory Action, has been the object of a systematic evaluation. Projects funded: 20. Average project cost: 715,000 euro. Average number of participants: 450. Average number of user-generated contributions (posts or petition signatures): 1,300. Average cost of one post or signature: 550 euro.
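The 550 euro figure is simply the ratio of the reported averages; a back-of-the-envelope check in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the per-contribution cost,
# using the averages reported for the eParticipation Preparatory Action.
avg_project_cost_eur = 715_000  # average cost per funded project
avg_contributions = 1_300       # average posts or petition signatures per project

cost_per_contribution = avg_project_cost_eur / avg_contributions
print(round(cost_per_contribution))  # 550 euro per post or signature
```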
The e-participation research community has managed to ignore these figures. The evaluation studies of the Preparatory Action’s projects are “unanimously positive”. Despite the Commission’s request for a rigorous cost-benefit analysis, none of these studies quotes the 550 euro figure. And the Commission itself has decided, albeit with some corrections, to go ahead: according to the paper, the main difference between this first batch of projects and the next (projects approved for funding in 2009 and 2010) is their budget, which increased to an average of 2.8 million euro. How could the research community overlook these data? According to the author:
Handling this kind of “elephant in the living room”-issues is always problematic, as their very existence tends to be denied because of their complexity or the embarrassment they cause and, as a result, they cannot be acknowledged or discussed, let alone get properly sorted out.
Prieto-Martin thinks that the reason for the not-so-great performance of e-participation projects is essentially this: in line with the tradition of European innovation policy, they have been deployed according to a “push” logic. This means incentivizing technology producers to push innovation out to more or less acquiescent users, in the form best suited to the producers’ interests. And the producers did respond with enthusiasm: unfortunately – partly because of the relative lavishness of the funding – they were in general the wrong applicants. Not the best and most innovative, but the “usual suspects”: organizations that confidently navigate the bureaucratic requirements of European funded research. These requirements are designed to guarantee bang for the taxpayers’ buck and an impartial allocation of resources, but – as CriticalCity’s Augusto Pirovano explains in this short video – they ended up excluding from the game small businesses and civil society organizations, the true innovators.
Prieto-Martin is very critical of the situation, and rightly so. On the other hand, I am not convinced it is fair to blame the European Commission for the fiasco. It is a Weberian bureaucracy: its discretionary power is limited by design. As I have written before, all bureaucracies have trouble relating to networked communities: the latter are made of people and find their meaning in the web of diverse person-to-person relationships, whereas Weberian bureaucracies act on the basis of rational, standardized rules applied to all. I still find convincing what I wrote on that occasion:
I see only one way: a new deal between government and the women and men who work for it. Such a new deal would work like this: administrations give trust and breathing space to their servants, and then assess their results, rewarding people who deliver and sanctioning people who don’t. Abuses of that trust are dealt with on a case-by-case basis: designing an entire system to prevent abuse runs a high risk of making it so rigid that people cannot offer their best ideas.
I am no lawyer, but I expect Weberian bureaucracies to be prevented by design from reforming themselves along these lines. Some kind of external legal provision will be necessary for this to happen. Until then, I guess, we’ll have to cope with a certain number of elephants camping out in the living room.
Hi Alberto, thanks for your mention and your reflections. 🙂
The article is, certainly, very critical… but it aims to be friendly and constructive.
The emphasis is not as much to blame the EC or anybody else (as you say… given the circumstances we had, it is hard to believe things could have happened differently), but to help the EC to improve the efficacy of its programmes in the future.
Pedro, this goes without saying. I am a big supporter of the European common project, and extremely grateful to the European institutions for the stabilizing role they play with respect to Italy’s political cycles. We are all doing our best to lead the damn elephant away from our common living room! 🙂
Well, this was predictable: the numbers were simply taken from Pedro’s paper and thrown into a “the-EU-is-so-stupid” context. But the calculation is completely weird! The funding was never meant to enable operational eParticipation systems and to attract as many citizens as possible. It was spent to enable experiments, developments, studies, research and pilots! And: all projects were only partly funded by the EC and were co-financed by the beneficiaries.
Rolf, this is not what my post is saying. If anything, compared to the original paper, I take a more sympathetic view of the European Commission’s work, which I respect and admire. I am a little less sympathetic towards the beneficiaries’ failure to raise the issue, though, especially after CriticalCity’s experience of doing, in the same area, much more with much less.
Furthermore, for a pilot to be relevant, it has to emulate the conditions that the real thing will operate in. Everything we know about systems thinking teaches us that more is actually different. You cannot study the behavior of ants in an anthill by looking at a single ant in isolation. Ten water molecules do not shimmer and slosh, because they are not a liquid: liquidness is an emergent property of systems consisting of billions of water molecules. Similarly, a system with one million users does not behave like one with a hundred users, only bigger and faster: altogether different rules apply to it.
Finally, there is nothing wrong with failing. It is the nature of the beast. If success were certain, we would not need research. If failure is not recognized, however, we can’t make progress, and the whole research effort is invalidated. Pedro thinks this failure has not been sufficiently addressed. Do you disagree with him? And why?
Some little things:
– Why do you think that the calculation is weird, Rolf? It is clearly a very crude way to measure performance. But it is valid. It’s the basic starting point: OUTPUT/COST.
It could be complemented with further information, but it cannot be simply denied or disregarded.
And the fact is that the projects expected to attract much, much more participation than they did. Everybody was “surprised” (not me) by the very low rate of participation. To understand why one doesn’t get what one expected, a deep and honest analysis must be the starting point. And what was done was to deny the problem. That 500 EUR/signature figure was so embarrassing… that everybody pretended (officially, at least) not to see it.
If you want to create eParticipation, you have to aim for “sustainable eParticipation”. Anything else makes no sense. And as Alberto suggests: are these weird, isolated 500 EUR/signature experiments a good way to achieve it? It doesn’t seem the best. The EC call actually financed projects aimed at investigating “unsustainable eParticipation”. And that’s what we got. 🙁
– But Alberto: I don’t think we should respect the work of somebody just because we are sympathetic to him or her. We can admire someone, for sure; but each piece of work he or she produces must be judged critically, based on facts. And for the failure of eParticipation policies… we are all responsible. Beneficiaries as well as the EC. Actually, the paper argues that the EC is responsible in the first place, because it established programmes whose conditions made it almost impossible to achieve any significant innovation in this specific field: Why force huge international consortia? Why force “raw experimentation (non-research!)” in a field where there was still NO theoretical basis? Why approve many different projects that were at the same time non-comparable but very similar? Why not consider external evaluation? Why force co-financing (which, as Augusto Pirovano’s video explains, is usually not REAL co-financing)? Why not think of reaching out to new, innovative actors?…
The EC put the cart before the horse, this is clear. So… it shouldn’t be a surprise that it didn’t move that far.
Actually, the EC should also not be surprised that the ones it contracted to move the cart in that way… didn’t tell it “this doesn’t make any sense!!”. If somebody is paying you, asking you to do a job, and at the same time imposes a lot of conditions that render your work almost useless… well, even if it makes little sense, you tend to think: “This is what they want me to do!! They must know why they want it done this way!!”.
– And actually (I’m finishing): there were some reactions to my paper (received through personal contacts) that were especially “painful” for me. It was when some beneficiaries of the projects told me: “This is what we all have been saying for the last couple of years!”. For sure, nobody took the care to write it down in a paper and shout it loud at PeP-Net (it is bad to bite the hand that feeds 🙁 ). But it seems that the “elephant” was already perceived and talked about.
So the EC cannot simply say: “nobody told me!”. First: it is the EC’s responsibility to see it on its own, to evaluate the effect of the money it invests; but second: possibly, it was told.
– The question, for me, remains: is anything going to change? Is even London burning not enough to make them change their minds?
SO PLEASE, anybody who feels they have anything to say to enrich this discussion (EC officials included, for sure 😉 ), do it now!
Hi Pedro and Alberto, maybe my criticism was too harsh, but I was sort of shocked to see this calculation taken out of its context.
I still think it is misleading in several regards:
– EU funding should pave the way for innovative approaches & pilots. This does not work without experiments, and failure is always an option. Saying that this could be avoided by (better) applied research is hard to believe. And isn’t failure the precondition to finally getting it right?
– EU funding should not be misunderstood as a means to set up operational services. It is always about prototypes or pilots which could serve as the basis for further developments. And I think the EU has to find out whether these pilots work in different environments, i.e. countries.
– It is not irrelevant, though, whether or not any evidence can be found that citizens will like and use these applications. But to break down the funding of projects to the number of posts in the pilot phase is absurd. Quantity is just one criterion, and probably not the most important one. What about the quality of the debate? The integration of the results into the decision-making process? (I wrote about the “tyranny of scale” on the PEP-NET blog some time ago http://tinyurl.com/y93lqqc and don’t want to repeat all the arguments here.) If the cost of one post were the ultimate criterion, the EU should support the setup of eParticipation discounters or reinvent Facebook. There you have mass participation via “Like” buttons. But a meaningful use of the new media in order to improve the political process is still something we all have to work on. And this work has to be (partly) funded by the public because it is in the public interest. The way the funds are allocated can be improved – no doubt about this. But I can’t believe that a tabloid-like, eye-catching headline like “The 500€ blog posts” will do any good in this regard.
Hi Rolf. I do not think Pedro’s calculation was taken out of context. It is legitimate, and it is shocking. It even allows for some comparisons, and CriticalCity’s Augusto has done just that in a two-minute video. Whether you like it or not, people are going to do more of the same with Pedro’s data. I find that wholly consistent with his effort to expose the elephant in the living room, an effort I am sympathetic towards (hence what you call my “tabloid headline”).
As for the “quality of participation” argument, I don’t think it is valid. If – as is reasonable to expect – the quality of contributions follows a power law, with a tiny minority of posts carrying a disproportionate amount of significance and making it all worthwhile, it follows that quality and quantity are strictly correlated (once again, more is different). Suppose one contribution in 10,000 is pure genius and makes the whole participation effort go forward: if you have fewer than 10,000 posts, your number of pure-genius contributions is, in all probability, zero. An observer who does not see the power-law argument might conclude (with Pedro’s 1,300 posts and zero brilliant contributions) that the whole exercise is not worthwhile, whereas the project being observed might simply be operating at the wrong scale.
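The scale argument can be made concrete with a toy calculation. The one-in-10,000 “genius rate” below is purely the illustrative assumption from the paragraph above, not an empirical figure:

```python
# Toy model: if one contribution in 10,000 is "pure genius",
# what is the probability that a project sees at least one such post?
def p_at_least_one(n_posts: int, genius_rate: float = 1 / 10_000) -> float:
    """Probability of at least one genius post among n_posts independent draws."""
    return 1 - (1 - genius_rate) ** n_posts

# At the Preparatory Action's average of 1,300 posts, the expected number
# of genius posts is 0.13 -- most projects would see none at all.
print(f"{p_at_least_one(1_300):.2f}")    # ~0.12
# At a hundred thousand posts, a breakthrough is all but certain.
print(f"{p_at_least_one(100_000):.2f}")  # ~1.00
```

The point of the sketch is only that the chance of a standout contribution grows non-linearly with volume, so judging quality on a sample of 1,300 posts mostly measures scale, not potential.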
I have to reiterate Pedro’s argument: acknowledging disappointing data is the first step towards learning from experience. And the fact that we are even discussing average quality, when it is well known that the average is not representative of anything in social media (and many other complex systems), is not a good sign of health in the e-participation debate.
Hi Rolf, I understand what you say. And I think we all agree that innovation means experimenting and failing.
But what the article argues is precisely that if you don’t frame this experimentation well – for example, by relating it to some sound theory that suggests “what to experiment with” and “how to do the experiment” so you can learn from it – then no matter how much you experiment and fail, no matter how much money you waste… you may NOT LEARN anything.
And the way the EU framed experimentation in this field was – for the many reasons suggested in the paper – highly conducive to this “NON-LEARNING” result.
Many of the conditions imposed on the trials were simply absurd, counterproductive. It must be said like this, in plain English. They were more related to the “EU short-sighted mindset” than to any attempt to produce results and value in the specific eParticipation field. One example: you say that the “EU had to find out if these pilots work in different countries”. But… this is not true. What the EU had to find out first was whether the pilots actually work in ONE country. Requesting (or favoring) multi-country pilots means that energies are put where they shouldn’t be. It is like asking somebody to run before he can walk: it will surely end up causing injuries.
Another example: to experiment in the eParticipation field… you definitely do not need big “consortia” coming from many countries; but again, that was requested. And so on… many other failures.
So… in this case I think that the “tabloid-like” headlines are not superfluous. Because apparently nobody was critically analysing failure. The problems have, instead, been systematically denied, or counted as “reasonable success”. And in order to catch attention… you sometimes need this kind of “eye-catching headline”.
As you say, the cost of each signature shouldn’t be the only thing measured. But even if a lot of different things should be measured… significant participation is a prerequisite for any of these other measurements to be possible. And the fact is that the EU FINANCED A LOT OF eParticipation PROJECTS that promised to attain a high level of participation… but DIDN’T ATTRACT ANY. This is what the 500 euro figure actually shows and calls attention to: if the cost of each contribution was so high… it was because there were hardly any contributions.
If you don’t achieve participation… it’s very difficult to evaluate things like “quality of debate”, because in most cases there was NO debate (see the case of VoicE, where the most used participative feature was the poll).
Integration of the results into the decision-making process? Well, in many cases there were no results to integrate; but even if there had been, they wouldn’t have been integrated, because there was no connection with the decision-making process.
These are all things that are also exemplified in the paper.
The EU has been paying for projects that couldn’t advance the field, because of the way they were framed. It must now do things differently, if it cares about eParticipation. The paper suggests several things they could now try.
For sure, this doesn’t mean reinventing Facebook, but putting more effort into doing things intelligently. Stop paying for rose plantations in the middle of the desert. [Actually, we already discussed this in your post about the “tyranny of scale” 🙂 ]
This discussion is very instructive for me.
You know what I would do if I were the EU?
I would commission a study of the existing communities that already effectively promote some public good through public participation. Then I would experiment with ways to reproduce the conditions, values and contexts in which they developed. My hypothesis, based on observing many failed attempts at creating platforms for the participation of publicly minded citizens, is that: 1) there aren’t many, so the scouting part should be very careful and pervasive; 2) they did not develop with much public financial support.
So I expect that the most useful thing the EU could do to promote this outcome is probably not to pay the bills, but some other soft, value-related things.
Thanks, Tito. What you propose is actually included among the recommendations of the paper, together with many other complementary actions. You should have a look at the article. It takes some time… but if you liked the discussion I’m sure you’ll enjoy the paper’s reflections.
The approach you propose could be understood as an attempt to apply to Innovation Support Programmes what is commonly termed “positive deviance” (a notion developed in the fields of health care and international cooperation):
“Positive Deviance is based on the observation that in every community there are certain individuals or groups whose uncommon behaviors and strategies enable them to find better solutions to problems than their peers, while having access to the same resources and facing similar or worse challenges.
The Positive Deviance approach is an asset-based, problem-solving, and community-driven approach that enables the community to discover these successful behaviors and strategies and develop a plan of action to promote their adoption by all concerned.”
You can get more information on “Positive deviance” at: http://www.positivedeviance.org/
Thanks Pedro, I am going to read your paper carefully even though I am not a real expert on the subject. And hopefully offer some useful comments.
Positive deviance has been at the core of my method of enquiry for a long time, even though I didn’t call it that. Thanks for sharing the link.
I wonder if you have taken into account the impact study on the eParticipation Preparatory Action? The trouble is that it was for internal circulation only…
While I agree that the ROI is bad – and that the balance of stakeholders was off – the action was not about mainstreaming (hence “preparatory”). That said, there are some spin-offs, such as the continuation of local (e)Petitioning in certain cities such as Malmo.
The paper finally found its way to publication. It has been included in the European Journal of ePractice, no. 15, a special issue on “Policy lessons from a decade of eGovernment, eHealth & eInclusion”.
The whole journal is available here:
The version included in the journal is just slightly shorter than the one available from this post. If you want to cite the article, please use something like:
Prieto-Martín, P., de Marcos, L., & Martínez, J.J. (2012). The e-(R)evolution will not be funded. An interdisciplinary and critical analysis of the developments and troubles of EU-funded eParticipation. European Journal of ePractice, 15, 62-89.
Thanks to all that shared with us their interest in the paper! Let’s try together to learn from the past and finally get eParticipation right!
Dear Fraser, sorry for not answering earlier: I didn’t see your comment until now. Thanks for your comment.
I’m not sure which document you are referring to (that’s the problem with “internal circulation” documents 🙂 ). The document we were using, which in its 2.0 version has almost 198 pages, provides a lot of detail about the evaluation work performed:
There is something related to the Momentum reports that surprised me. After all these discussions on PeP-Net and Alberto’s blog took place, I invested a lot of energy and time trying to get the paper published (in the ePractice Journal and as a chapter of a Springer book, so far), and to bring the analysis and discussion to EC officials in Brussels, so they could act on them.
Well, after all these endeavors… one day I realised that many of the people who most defended our analysis, and who made the publication of the paper possible… had been involved in the Momentum project in one way or another (as experts, writers, etc.).
For this reason, I don’t expect that the conclusions included in that “impact report” would change the analysis much.
That being said… I agree with you that some spin-offs and valuable systems have resulted from the EU’s projects. This is recognized in the paper, at the same time that we question the poor effectiveness of most of the actions undertaken… and analyse the systemic reasons that would explain it.