

You did WHAT? The Italian revenue agency infringes OpenStreetMap’s copyright

A couple of months ago, Simone Cortesi, deputy president of Wikimedia Italia and the primus inter mappers of Italy’s geohackers, noticed an oddity in the maps of the Revenue Agency’s property market dataset. How could they know about the walkways in his own garden? He realized he himself had uploaded those data, not into any government dataset but into the “wikipedia of maps”, OpenStreetMap. Since the maps did not credit OSM as the data source, the Revenue Agency was technically infringing on OSM’s intellectual property rights. OSM maps are free for everyone to use, but anyone who does use them must respect the terms of the Open Database License protecting the data. If Simone’s allegations proved correct, this would be the largest copyright infringement ever committed against OpenStreetMap. And by the tax authority of a G8 country, no less.

A group of expert Italian contributors to OSM coded a website exposing the problem, complete with a tool for comparing the Revenue Agency’s “proprietary” maps with OpenStreetMap. Hundreds of eyeballs were put on the case, and sure enough: the data were the same, and the copyright infringement was real.

On July 8th 2014, after the Italian Twittersphere had put the word out, the Revenue Agency tweeted back that it had “demanded an explanation” from its technology provider, a company called Sogei. This is an in-house company, 100% owned by the Ministry of the Treasury. Later in the day, Sogei complied with the terms of the OpenStreetMap license and issued an apology. With this, the generous Italian mappers declared themselves vindicated. Simone, bless him, rose to the occasion to demand that the Agency open up its own data, specifically those of the real estate registry, as he and many of us in the Italian open data community have been advocating for years.

Over and above the embarrassment, there is a deeper lesson to learn here. Sogei is a monopolist: the Revenue Agency had no choice but to get its tech from it. Sogei, in turn, ostensibly acquired its geodata from a company called Navteq (source, in Italian), owned by Nokia (wikipedia), which appears since to have been renamed Here.

So what happened, really? Did Navteq repackage free and open data and sell them as proprietary to Sogei, who resold them back to the Italian state? How much money was spent on this procurement process? Was there financial damage to the public purse, and was it intentional (hence an offence)? How much money could we have saved, and keep saving, if smart communities like the OSM, open source and open data communities were involved in public procurement?


Open data comes of age

If you live in Italy and are curious about your local authority’s pattern of spending and taxing, you are in luck. Since last week, OpenBilanci has published on the web detailed financial data for all 8,092 Italian local authorities over the past ten years. Both budgets and closed ex post accounts are available, along with a wealth of indicators like financial autonomy or spending velocity. Not only are all the data downloadable and open: OpenBilanci sports a nifty web interface for preliminary data exploration. The latter is a feature found also in other highly successful Italian open data projects like the mighty OpenCoesione, which released spending data on 749,112 projects funded by the country’s cohesion policy. And no surprise: though OpenCoesione is a government initiative and OpenBilanci a not-for-profit one, the same team of visionary coders stands behind both projects, through both a non-profit and a for-profit arm.

In the space of only a few years, open data have become a formidable force for openness, transparency and even data literacy in a country that badly needs all three. Forward-thinking civil servants and political leaders in some of Italy’s 20 regions (and some cities) have been working together with civic hackers for years now: Lazio has funded OpenBilanci through its SME-centred innovation policy, whereas Emilia Romagna has successfully built a partnership with the largest Italian open data community, Spaghetti Open Data. In a veritable stroke of genius, the city of Matera has decided to host on its own open data portal any open dataset produced by the local community.

When public authorities do not play ball, Italian civic hackers simply proceed to open up government data anyway. One of my favourite projects in this vein is Confiscati bene, started during an epic Spaghetti Open Data hackathon. The group wrote a crawler to extract data from the (non-open) website of ANBSC, a government agency tasked with reallocating assets confiscated from Mafia bosses and other assorted mobsters (the Italian police are doing a sterling job there, since ANBSC is juggling over 11,000 such assets). They cleaned the data up, geocoded them, made them downloadable, built the customary sleek interface for web exploration, embedded them into a brand new website and released everything as a gift to ANBSC. OpenBilanci itself entailed scraping over two million web pages.

I know Italy’s scene best, but exciting open data projects are appearing everywhere. My absolute favourite is British: OpenCorporates gathers data on over 60 million corporations all over the planet. Using unique identifiers and information on ownership structure, OpenCorporates shines a light on the corporate world, which is subject to far looser legal requirements on transparency than government. This OpenCorporates-based visualization, for example, will teach you much about Goldman Sachs.

It looks like the open data movement has come of age. It was surprisingly fast: in less than four years we went from a small cadre of nerds obsessing over Tim Berners-Lee’s famous “raw data now” speech to a strong community (there are almost 1,000 subscribers to the Spaghetti Open Data mailing list, churning out twenty messages a day 365 days a year) and a phalanx of young decision makers who understand the issue and are plugged into the community. I am proud of you all, my sisters and brothers in arms. And the best is yet to come – especially as we come together all across Europe, as I am sure we will soon, since the times are ripe for this to happen. Who knows, data culture might even be able to shift European politics away from populism and onto evidence-based debate.


Algorithmic detection of specialization in online conversations

This is a writeup of the Team 1 hackathon at Masters of Networks 2. Participants were: Benjamin Renoust, Khatuna Sandroshvili, Luca Mearelli, Federico Bo, Gaia Marcus, Kei Kreutler, Jonne Catshoek and myself. I promise you it was great fun!

The goal

We would like to learn whether groups of users in Edgeryders are self-organizing in specialized conversations, in which (a) people gravitate towards one or two topics, rather than spreading their participation effort across all topics, and (b) the people that gravitate towards a certain topic also gravitate towards each other.

Why is this relevant?

Understanding social network dynamics and learning to see the patterns in their structure can give policy makers a useful tool for rethinking the way policies are developed and implemented. Furthermore, it could ensure that policies reflect both the needs and the possible solutions put forward by people themselves. The ability to decode linkages between members of social networks, based on their areas of specialization, can allow decision makers and development organisations to:

  1. Tap into existing networks of knowledge and expertise to gain increased understanding of a policy issue and of the groups most affected (i.e. the target population of a policy)
  2. Identify pre-existing bottom-up (ideas for) solutions relevant to the policy issue at hand
  3. Bring together networks with a proven interest in a policy issue and leverage their engagement to design new solutions and bring about change

Compared to traditional models of policy development, this method can allow for more effective and accountable policy interventions. Rather than spending considerable resources on developing a knowledge base and building new communities around a policy theme, the methodology would enable decision makers and development organisations alike to tap into available knowledge bases and to work with these existing networks of interested specialists, saving time and resources. Moreover, pre-existing networks of specialists are expected to be more sustainable as a resource of information and collective action than ad-hoc networks built around emerging policy issues.

The data

Edgeryders is a project rolled out by the Council of Europe and the European Commission in late 2011. Its goal was to generate a proposal for the reform of European youth policy that encoded the point of view of youth themselves. This was done by launching an open conversation on an online platform (more information).

The conversation was hosted on a Drupal 6 platform. Using a Drupal module called Views Datasource, we exported three JSON files encoding, respectively, information about users, posts, and comments.
These data are sufficient to build the social network of the conversation. In it, users are nodes and comments are edges: Anna and Bob are connected by an edge if Anna has written at least one comment on a piece of content authored by Bob. We used a Python script with the Tulip library for network analysis to build the graph and analyze it. The result was a network with 260 active people and about 1,600 directed edges, encoding about 4,000 comments.
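The construction can be sketched with plain standard-library Python (the original used the Tulip library; the field names in the sample records below are hypothetical):

```python
from collections import Counter

# Hypothetical records from the Drupal JSON exports: each comment links
# its author to the author of the content it replies to.
comments = [
    {"author": "anna", "replied_to": "bob"},
    {"author": "anna", "replied_to": "bob"},
    {"author": "bob", "replied_to": "carol"},
]

# Directed edges (commenter -> commented-on author), weighted by the
# number of comments backing each edge.
edges = Counter((c["author"], c["replied_to"]) for c in comments)
nodes = {user for edge in edges for user in edge}

print(len(nodes), len(edges))  # 3 nodes, 2 directed edges
print(edges[("anna", "bob")])  # 2 comments behind the anna -> bob edge
```

On the real dataset, the same construction yields the 260-node, roughly 1,600-edge network described above.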

To move towards our goal, we needed to enrich this dataset with extra information concerning the semantics of that conversation (see below).

What we did

To define to which degree people gravitate towards certain topics, and towards each other, we carried out “entanglement analysis” on a dataset containing all conversations carried out between members of the Edgeryders network. Entanglement analysis was proposed by Benjamin Renoust in 2013; we performed it using a program called Data Detangler (accessible at http://tulipposy.labri.fr:31497/).

1. Understanding Edgeryders as a social network of comments

These data can be interpreted as a social network: people write posts and comment on them; moreover, they can comment on other people’s comments. Within this dataset, each comment can be interpreted as an edge, connecting the author of the comment to the author of the post or comment she is commenting on. Alternatively, we could interpret the data as a bipartite network that connects people to content: comments are edges that connect their authors to the unit of content they are commenting on.
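The two readings can be sketched side by side (the field names are assumed for illustration):

```python
# Hypothetical comments: each links its author to a unit of content,
# which in turn has an author of its own.
comments = [
    {"author": "anna", "content_id": "post-1", "content_author": "bob"},
    {"author": "carol", "content_id": "post-1", "content_author": "bob"},
]

# Bipartite reading: edges connect people to units of content.
bipartite_edges = {(c["author"], c["content_id"]) for c in comments}

# Direct social reading: each comment becomes a person-to-person edge.
social_edges = {(c["author"], c["content_author"]) for c in comments}

print(sorted(bipartite_edges))  # [('anna', 'post-1'), ('carol', 'post-1')]
print(sorted(social_edges))     # [('anna', 'bob'), ('carol', 'bob')]
```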

2. Posts are written in response to briefs

Each of the posts written on Edgeryders is a response to set briefs, or missions, that sit under higher-level campaigns. This means that many posts – and their associated comments – live under the higher-level ‘topic’ of one of nine campaigns.

3. Keywords indexing briefs

In order to understand how the various topics and briefs connect to each other, we analysed the keywords that defined each mission/brief. This was done by manually assessing the significance of word frequency for each post. Word frequency was ascertained using the in-browser software http://tagcrowd.com/faq.html#whatis to work out the top 12-15 words per mission. We then manually verified these words and kept those that are semantically relevant (removing, for example, names, words that were too general, or words that were a function of the Edgeryders platform itself – e.g. ‘comment’ or ‘add post’).
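The frequency-count step can be approximated in a few lines of standard-library Python (the actual counting was done in the browser with tagcrowd.com, and the real stopword list and thresholds differed):

```python
import re
from collections import Counter

# Illustrative brief text and a tiny stopword list.
brief = """Share your story about learning outside formal education.
How does open, self-directed learning shape your skills?"""
stopwords = {"your", "about", "does", "how", "the", "a", "and"}

tokens = re.findall(r"[a-z]+", brief.lower())
counts = Counter(t for t in tokens if t not in stopwords and len(t) > 3)

# Top candidate keywords, to be verified manually for semantic relevance.
print(counts.most_common(5))
```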

The combination of these three elements gives us a multiplex social network indexed by keywords. A multiplex social network is one where there are multiple relations among the same set of actors. The process can be visualized in Figure 1.

Fig. 1 – Building a multiplex social network where edges carry semantics.

4. Drop one-off interactions

We dropped edges that are linked to only one brief. These are edges of “degenerate specialistic” interactions: since the two users only interact in the context of a single brief, they are specialistic only by default.

5. Remove generalist conversations

At this point, we had a multiplex social network of users and keywords. Users were connected by edges carrying different keywords – indeed, each keyword can be seen as a “layer” of the multiplex network, inducing its own social network: the network of the conversation about employment, the network of the conversation about education, and so on. Many of the interactions going on are non-specialized: the same two users talk about several different things. In order to isolate specialized conversation, for each individual edge of the multiplex we remove all keywords except those that appear in all interactions between the two users. In other words, we rebuild the network by assigning to each edge the intersection of the sets of keywords encoded in each of the individual interactions. In many cases, the intersection is empty: it only takes two interactions happening in the context of two briefs with no keywords in common for this to happen. In that case, the edge is dropped altogether.
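Steps 4 and 5 can be sketched together; the edge data below are illustrative:

```python
# Each edge maps to its interactions; each interaction records the brief
# it happened under and that brief's keyword set.
edge_interactions = {
    ("anna", "bob"): [
        {"brief": "b1", "keywords": {"education", "learning", "open"}},
        {"brief": "b2", "keywords": {"learning", "open", "jobs"}},
    ],
    ("anna", "dan"): [  # one brief only: "degenerate specialistic"
        {"brief": "b1", "keywords": {"education", "learning", "open"}},
    ],
    ("bob", "carol"): [  # two briefs, but no keyword shared by all
        {"brief": "b1", "keywords": {"education"}},
        {"brief": "b3", "keywords": {"housing"}},
    ],
}

specialized = {}
for edge, interactions in edge_interactions.items():
    briefs = {i["brief"] for i in interactions}
    if len(briefs) < 2:  # step 4: drop edges linked to a single brief
        continue
    shared = set.intersection(*(i["keywords"] for i in interactions))
    if shared:  # step 5: keep edges whose interactions share keywords
        specialized[edge] = shared

# Only the anna-bob edge survives, carrying the shared keywords
# "learning" and "open".
print(specialized)
```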

A nice side-effect of steps 4 and 5 is that they greatly reduce the influence of the Edgeryders team of moderators on the results. Moderators are among the most active users; while this is as it should be, they tend to “skew” the behaviour of the online community. However, step 4 removes all the one-off interactions they tend to have with users that are not very active; and step 5 removes all the edges connecting moderators to each other, because they – by virtue of being very active – interact with one another across many different briefs, and as a result the intersection of keywords across all their interactions tends to be empty.

6. Look for groups of specialists

We then identified groups of specialists by finding those users who interact together solely around a small number of keywords (in our example, n(keywords) = 2).
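A minimal sketch of this final filter, starting from specialized edges like those produced in step 5 (the data are illustrative):

```python
# Each surviving edge carries the keyword set shared across all
# interactions between its two users.
specialized = {
    ("anna", "bob"): {"education", "learning"},
    ("bob", "carol"): {"education", "learning"},
    ("dan", "eve"): {"open", "jobs", "housing", "energy"},
}

MAX_KEYWORDS = 2  # the n(keywords) = 2 threshold from the example

# Specialists: users whose mutual edges carry at most MAX_KEYWORDS keywords.
specialist_edges = {e: k for e, k in specialized.items() if len(k) <= MAX_KEYWORDS}
specialists = {user for edge in specialist_edges for user in edge}

print(sorted(specialists))  # ['anna', 'bob', 'carol']
```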

Figure 1. Detecting specialized conversations on education and learning.

Conclusions

The method does indeed seem to be able to identify groups of specialists. “Groups” is used here in the social sense of a collection of people who not only write content related to the keywords, but interact with one another in doing so – this is to capture the collective intelligence dimension of large-scale conversations. Figure 1 shows some conversations between people (highlighted on the left) who only interact on the “education” and “learning” keywords (shown on the right). Highlighted individuals that are not connected to any highlighted edges are users who do write contributions related to those keywords, but are not party to specialized interactions on them.

Once a group of specialists is identified, the next step is to look for the keywords that co-occur on the edges connecting them. An example is Figure 2, which shows the keywords co-occurring on the edges of the conversations involving our specialist group on education and learning. The size of the edge on the right-hand part of the figure indicates that keyword’s contribution to entanglement, i.e. to making that group of keywords a cohesive one. Unsurprisingly, “education” and “learning” are among the most important ones. More interestingly, there is another keyword that is deeply entangled with these two: “open”. We can interpret this as follows: specialized interaction on education and learning is deeply entangled with the notion of openness. The education specialists in this community think that openness is important when talking about education.
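The entanglement contribution itself is computed by the Data Detangler; as a rough stand-in, one can simply count how often each other keyword co-occurs with the seed topics on the group’s edges (illustrative data):

```python
from collections import Counter

seed = {"education", "learning"}

# Keyword sets carried by the edges of the specialist group's conversations.
group_edge_keywords = [
    {"education", "learning", "open"},
    {"education", "open"},
    {"learning", "open", "skills"},
]

# Count keywords co-occurring with the seed topics on these edges.
cooccurring = Counter()
for keywords in group_edge_keywords:
    if keywords & seed:  # the edge touches the seed topics
        cooccurring.update(keywords - seed)

print(cooccurring.most_common())  # [('open', 3), ('skills', 1)]
```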

Figure 2. Discovering more keywords entangled with the original two in the specialized conversation.

This method is clearly scalable. It can be used to identify “surprising” patterns of entanglement, which can then be further investigated by qualitative research.

Scope for improvement

The main problem with our method is that it is quite sensitive to the coding by keyword. Assigning the keywords was done by way of a quick hack based on occurrence counts. The method should work much better with proper ethnographic coding. Note that folksonomies (unstructured tagging) typically won’t work, as they introduce a lot of noise into the system (for example, with no stemming you get a lot of false (“degenerate”) specialists).