Tag Archives: network science

One Love: the missing paper on the network of romantic partnership in polyamory

Is polyamory a single, global web of love?

Polyamory is a relationship style where people engage in multiple, committed, romantic relationships at the same time. In the West, more and more people who practice it have been coming together in communities arranged roughly by country. A good resource for people who want to know more is Franklin Veaux’s website.

In this post, I do not talk about polyamory per se. Rather, I want to remark on the insights you can get if you think about it as a social network. Each polyamorous person is a node in the network. Nodes are connected by edges, encoding the romantic relationships across people. Now, in 1959, Paul Erdös and Alfred Rényi wrote a famous graph theory paper. Among other results, they proved that:

  • If you take a network consisting of a number of disconnected nodes;
  • And then start adding one link at a time, each edge connecting any two nodes picked at random, then;
  • When the number of links of the average node exceeds 1, a giant component emerges in the network. In a network, a component is a group of nodes that are all reachable from each other, directly or indirectly. A giant component is a component that consists of a large proportion of all nodes in the network. If a network has a giant component, most of its nodes are reachable from most other nodes.
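The 1959 threshold result is easy to check numerically. The sketch below (my own illustration in pure Python, using a union-find structure; not code from the Erdös-Rényi paper) grows a random graph edge by edge and measures the largest component:

```python
import random
from collections import Counter

def giant_component_fraction(n, avg_degree, seed=0):
    """Share of nodes in the largest component of a random graph with
    n nodes and enough random edges to give each node avg_degree
    links on average (union-find, no external libraries)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):  # find the root of x's component, compressing the path
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    m = int(n * avg_degree / 2)  # m edges give mean degree 2m/n
    for _ in range(m):
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)  # merge the two components

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# Below mean degree 1, components stay tiny; above it, a giant
# component swallows a large share of the nodes.
print(giant_component_fraction(10000, 0.5))  # a small fraction
print(giant_component_fraction(10000, 1.5))  # well over half
```

Sweeping avg_degree from 0 to 2 reproduces the characteristic phase transition around the critical value of 1.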

Almost by definition, the average polyamorist has more than one relationship. Granted, some people will only have one: maybe they are monogamous partners of a polyamorous person, or maybe they are still building their own constellation. There are even people who identify as polyamorous but are currently single. But there is also quite a high proportion of people with two or more partners. So, under most real-world conditions, the number of partners of the average person in the polyamory community is greater than one.

So, we have a mathematical theorem about random graphs and an educated guess about polyamory. When we put the two together, we obtain a sweeping hypothesis: most polyamorists in the world are connected to each other by a single web of love. Everyone is everyone else’s lover’s lover’s lover’s lover – six degrees of separation, but in romance. This would be an impressive macrostructure in society. Is it really there? There is a missing paper here. How would one put the hypothesis to the test and write it?

A statistical physics-ish approach

The hypothesis is quite precise, and in principle testable. But there are substantial practical difficulties. You’d need unique identifiers for every poly person in the world, to make sure that Alice, Bob’s lover, is not the same person as Alice, Chris’s lover. Some people perceive a stigma around polyamory, and even many of those who don’t perceive one prefer to keep their relationship choices to themselves. So, the issues around research ethics, privacy and data protection are formidable.

So, maybe we can take a page out of the statistical physics playbook. The idea of statistical physics is to infer a property of the whole system (in this case, the property is the existence of a giant component in the polyamory network) from statistical, rather than deterministic, information on the system’s components (in this case, the average number of partners per person). In our case, you could:

  1. Build computer simulations of polyamorous networks, and see if, for a realistic set of assumptions, there is a value of the average number of relationships R that triggers a phase transition where a giant component emerges in the network.
  2. Run a simple survey (anonymous, sidestepping the ethics/privacy/data protection problems) to ask polyamorists how many relationships they have. Try also to validate the assumptions underpinning your simulations.
  3. Compare the average number of relationships R’ as it results from the survey with the trigger value R as it results from the simulation. If R’>R, then most polyamorists are indeed connected to each other by a single web of love.

In the rest of this post, I am going to think aloud around step 1. At the very end I add a few considerations on step 2.

The model

Models are supposed to be abstract, not realistic. But the assumptions behind the Erdös-Rényi random graph model (start with disconnected nodes, add edges at random) are a bit too unrealistic for our case. I tried to build my own model starting from a different set of assumptions:

  • I assume that, as is often the case in real life, people move into polyamory by opening their previously monogamous relationships. So, I start from a set of couples, not of individuals. In network terms, this means starting with a network with N nodes, organized into N/2 separate components with 2 nodes each. N/2 is also the number of edges in the initial network.
  • At every time step, I add an edge between two random individuals that are not already connected. I ignore gender, and assume any person can fall in love with any other person.

Notice that, at time 0, all nodes of the network have one incident edge. In an Erdös-Rényi graph, we would already see a giant component, but this is not an Erdös-Rényi graph. In fact, we can think of it in a different way: we can redefine nodes as couples, rather than individuals. This way, we obtain a completely disconnected network with N/2 nodes. As we add edges in the original network of individuals, we now connect couples; and we are back to the Erdös-Rényi model, except with N/2 couples instead of N individuals. By the 1959 result, a giant component connecting most couples emerges when the average couple has over one incident edge: in other words, when there are N/4 inter-couple edges (since one edge always connects two couples, or, more precisely, two people in two different couples).

How many edges do individual people have on average at this point? There were N/2 intra-couple edges at the beginning; we then added N/4 inter-couple ones. This means our network now has 3N/4 edges. Each edge is incident to two individuals; so, the average individual has 1.5 incident edges.

Let us restate our result in polyamory terms.

Start from a number of monogamous couples. At each time period, add a new romantic relationship between two randomly chosen individuals that are not already in a relationship with each other. When the average number of relationships exceeds 1.5, a large share of individuals are connected to each other by an unbroken path of romantic relationships.

I have created a simple NetLogo model to illustrate the mechanics of my reasoning. You are welcome to play with it yourself. Starting from a population of 200 couples (and ignoring gender), and running it 100 times, I obtain the familiar phase transition, with the share of individuals in the largest component rising rapidly after the average number of partners per person crosses the 1.5 threshold. The vertical red line in the figure shows the threshold; the horizontal one is drawn at 0.5. Above that line, the majority of individuals are in the “one love” giant component. Notice also that, when the average number of partners reaches 2, about 80% of all polyamorists are part of the giant component.

Phase transition in size of the giant component as the average number of partners crosses the 1.5 line
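The same mechanics can be sketched in a few lines of Python (my own illustration, independent of the NetLogo model linked above): start from couples, add random relationships, and measure the largest component.

```python
import random
from collections import Counter

def couples_model(n_couples, avg_partners, seed=0):
    """Start from n_couples disjoint couples, add random extra
    relationships until the mean number of partners per person
    reaches avg_partners, and return the share of individuals in
    the largest connected component (union-find)."""
    rng = random.Random(seed)
    n = 2 * n_couples
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # intra-couple edges: individuals 2k and 2k+1 form couple k
    for k in range(n_couples):
        parent[find(2 * k)] = find(2 * k + 1)

    # total edges needed for mean degree avg_partners is
    # n * avg_partners / 2; the couples already supply n/2 of them
    extra = int(n * avg_partners / 2) - n_couples
    for _ in range(extra):
        a, b = rng.sample(range(n), 2)
        parent[find(a)] = find(b)

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

print(couples_model(5000, 1.3))  # below the 1.5 threshold: fragmented
print(couples_model(5000, 2.0))  # above it: most people connected
```

At 2 partners per person the simulated giant component holds roughly 80% of individuals, consistent with the figure quoted above.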

Obtaining data

Now the question is: do polyamorous people actually have over 1.5 relationships on average? Like so many empirical questions, this one looks simple, but it is not. To answer it, you first have to define what “a relationship” is. Humans entertain a bewildering array of relationships, each one of which can imply, or not, romance (and how do you even define that?), sex, living together, parenting together, sharing finances and so on. They differ by duration, time spent together per week or per year, and so on. Coming up with a meaningful definition is not easy.

Supposing you do hammer out a definition, then you have to get yourself some serious data. Again, this is difficult. To quote the authors of the 2012 Loving More survey:

Truly randomized surveys […] are difficult, if not impossible, to obtain among hidden populations.

And, sure enough, I have been unable to find solid data about the average number of partners in polyamorous individuals.

The Loving More survey itself, for all its limitations, is one of the richest sources of empirical data on human behavior in polyamory: it involved over 4,000 Americans who self-identify as polyamorous. But the question “how many relationships do you have” was not asked. We do know that respondents reported an average of 4 sexual partners in the year previous to the survey. The exact same question is also asked in the (statistically legit) General Social Survey: there, a random sample of the U.S. population reported 3.5 sexual partners on average during the year previous to the survey. The difference is statistically significant, but it is not large, and anyway it only refers to recent sexual partners. Frankly, I have no idea how to infer values for the average number of romantic relationships from these numbers.

So, there is a significant empirical challenge here. But, if you solve it, you get to write the missing paper on polyamory, with the exciting conclusion that the “one love global network” exists, or not. I am looking forward to reading it!

Photo credit: unknown author from this site (Google says it’s labelled for reuse)

Photo credit: McTrent on flickr.com

What counts as evidence in interdisciplinary research? Combining anthropology and network science

Intro: why bother?

Over the past few years, it turns out, three of the books that most influenced my intellectual journey were written by anthropologists. This comes as something of a surprise, as I find myself in the final stages of a highly quantitative, data- and network science heavy Ph.D. programme. The better I become at constructing mathematical models and building quantitatively testable hypotheses around them, the more I find myself fascinated by the (usually un-quantitative) way of thinking great anthro research deploys.

This raises two questions. The first one is: why? What is calling to me from in there? The second one is: can I use it? Could one, at least in principle, see the human world simultaneously as a network scientist and as an anthropologist? Can I do it in practice?

The two questions are related at a deep level. The second one is hard, because the two disciplines simplify human issues in very different ways: each filters out and zooms in on different things. Also, what counts as truth is different. Philosophers would say that network science and anthropology have different ontologies and different epistemologies. In other words, on paper, a bad match. The answer to the first question, of course, is that this same difference makes for some kind of added value. Good anthro people see on a wavelength that I, as a network scientist, am blind to. And I long for it… but I do not want to lose my own discipline’s wavelength.

Before I attempt to answer these questions, I need to take a step back, and explain why I chose network science as my main tool to look at social and economic phenomena in the first place. I’m supposed to be an economist. Mainstream economists do not, in general, use networks much. They imagine that economic agents (consumers, firms, labourers, employers…) are faced with something called objective functions. For example, if you are a consumer, your objective is pleasure (“utility”). The arguments of this function are things that give you pleasure, like holidays, concert tickets and strawberries. Your job is, given how much money you have, to figure out exactly which combination of concert tickets and strawberries will yield the most pleasure. The operative word is “most”: formally, you are maximising your pleasure function, subject to your budget constraint. The mathematical tool for maximising functions is calculus: and calculus is what most economists do best and trust the most.
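To make the mechanics concrete, here is the standard textbook case (a Cobb-Douglas utility function; my illustration, not an example taken from the economists discussed here):

```latex
\max_{x,y}\; u(x,y) = x^{\alpha} y^{1-\alpha}
\quad \text{subject to} \quad p_x x + p_y y \le m
```

Solving the first-order conditions yields the demands $x^* = \alpha m / p_x$ and $y^* = (1-\alpha) m / p_y$: a fixed share of the budget $m$ goes to each good, regardless of what anyone else in the economy is doing.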

This way of working is mathematically plastic. It allows scholars to build a consistent array of models covering just about any economic phenomenon. But it has a steep price: economic agents are cast as isolated. They do not interact with each other: instead, they explore their own objective functions, looking for maxima. Other economic agents are buried deep inside the picture, in that they influence the function’s parameters (not even its variables). Not good enough. The whole point of economic and social behaviour is that it involves many people who coordinate, fight, trade, seduce each other in an eternal dance. The vision of isolated monads duly maximising functions just won’t cut it. Also, it flies in the face of everything we know about cognition, and of decades of experimental psychology.

The networks revolution

You might ask how it is that economics insists on such a subpar theoretical framework. Colander and Kupers have a great reconstruction of the historical context in which this happened, and of how it got locked in with university departments and policy makers. What matters to the present argument is this: I grasped at network science because it promised a radical fix to all this. Networks have their own branch of math: per se, they are no more relevant to the social world than calculus is. But in the 1930s, a Romanian psychiatrist called Jacob Moreno came up with the idea that the shape of relationships between people could be the object of systematic analysis. We now call this analysis social network analysis, or SNA.

Take a moment to consider the radicality and elegance of this intellectual move. Important information about a person is captured by the pattern of her relationships with others, whoever the people in question are. Does this mean, then, that individual differences are unimportant? It seems unlikely that Moreno, a practicing psychiatrist, could ever hold such a bizarre belief. A much more likely interpretation of social networks is that an individual’s pattern of linking to others, in a sense, is her identity. That’s what a person is.

Three considerations:

  1. The ontological implications of SNA are polar opposites of those of economics. Economists embrace methodological individualism: everything important in identity (individual preferences, for consumer theory; a firm’s technology, in production theory) is given a priori with respect to economic activity. In sociometry, identity is constantly recreated by economic and social interaction.
  2. The SNA approach does not rule out the presence of irreducible differences across individuals. A few lines above I stated that an individual’s pattern of linking to others, in a sense, is her identity. By “in a sense” I mean this: it is the part of identity that is observable. This is a game changer: in economics, individual preferences are blackboxed. This introduces the risk of economic analysis becoming tautological. If you observe an economic system that seems to plunge people into misery and anxiety, you can always claim this springs directly from people maximising their own objective functions because, after all, you can’t know what those functions are. This kind of criticism is often levelled at neoliberal thinkers. But social networks? They are observable. They are data. No fooling around, no handwaving. And even though there remains an unobservable component of identity, modern statistical techniques like fixed effects estimation can make system-level inferences on what is observable (though they were invented after Moreno’s time).
  3. Moreno’s work is all the more impressive because the mathematical arsenal around networks was then in its infancy. The very first network paper was published by Euler in 1736, but it seems to have been considered a kind of amusing puzzle, and left brewing for over a century. By Moreno’s time there had been significant progress in the study of trees, a particular class of graphs used in chemistry. But Moreno basically relied on visual representations of his social networks, which he called sociograms, to draw systematic conclusions.

By Martin Grandjean (Own work), strictly based on Moreno, 1934 [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

With SNA, we have a way of looking at social and economic phenomena that is much more appealing than that of standard economics. It puts relationships, surely the main raw material of societies and economies, right under the spotlight. And it is just as mathematically plastic – more, in fact, because you can more legitimately make the assumption that all nodes in a social network are identical, except for the links connecting them to other nodes. I embraced it enthusiastically, and spent ten years teaching myself the new (to me) math and other relevant skills, like programming and agent-based modelling.

Understanding research methods in anthropology

As novel as network science felt to me, anthropology is far stranger. From where I stand, it breaks with scholarship as I was trained to understand it in three places. These are: how it treats individuals; how it treats questions; and what counts as a legitimate answer.

Spotlight on individuals

A book written by an anthropologist is alive with actual people. It resonates with their voices, with plenty of quotations; the reader is constantly informed of their whereabouts and even names. Graeber, for example, towards the beginning of Debt introduces a fictitious example of a bartering deal between two men, Henry and Joshua; a hundred pages later he shows us a token of credit issued by an actual 17th century English shopkeeper, actually called Henry. This historical Henry did his business in a village called Stony Stratford, in Buckinghamshire. The token is there to make the case that the real Henry would do business in a completely different way than the fictional one (credit, not barter). 300 pages later (after sweeping over five millennia of economic, religious and cultural history in two continents) he informs us that Henry’s last name was Coward, that he also engaged in some honourable money lending, and that he was held in high standing by his neighbours. To prove the case, he quotes the writings of one William Stout, a Quaker businessman from Lancashire, who started off his career as Henry’s apprentice.

To an economist, this is theatrical, even bizarre. The author’s point is that it was normal for early modern trade in European villages to take place on credit, rather than in cash. Why do we need to know this particular shopkeeper’s name and place of establishment, and the name and birthplace of his apprentice as well? Would the argument not be even stronger if it applied to general trends, to the average shopkeeper, instead of this particular man?

I am not entirely sure what is going on here. But I think it is this: to build his case, the author had to enter into dialogue with real people, and make an effort to see things through their eyes. Ethnographers do this by actually spending time with living members of the groups they wish to study; in the case of works like Debt, Graeber appears to spend a great deal of time reading letters and diaries, and piecing things together (“Let me tell you how Cortés had gotten to be in that predicament…”). If the reader wishes to fully understand and appreciate the argument, she, too, needs to make that effort. And that means spending time with informants, even in the abridged form of reading the essay, and getting to know them. So, detailed descriptions of individual people are a device for empathy and understanding.

All this makes reading a good anthro book great fun. It also is the opposite of what network scientists do: we build models with identical agents to tease out the effect of the pattern of linking. Anthropologists zoom in on individual agents and make a point of keeping track of their unique trajectories and predicaments.

Asking big questions

Good anthropologists are ambitious, fearless. They zero in on big, hairy, super-relevant questions and lay siege to them. Look at James Scott:

I aim, in what follows, to provide a convincing account of the logic behind the failure of some of the great utopian social engineering schemes of the twentieth century.

That’s a big claim right there. It means debugging the whole of development policies, most urban regeneration projects, villagization of agriculture schemes, and the building of utopian “model cities” like Chandigarh or Brasilia. It means explaining why large, benevolent, evidence-based bureaucracies like the United Nations, the International Monetary Fund and the World Bank fail so often and so predictably. Yet Scott, in his magisterial Seeing Like a State, pushes on – and, as far as I am concerned, delivers the goods. David Graeber’s own ambition is in the title: Debt – The First 5,000 Years.

Economists don’t do that anymore. You need to be very, very senior (Nobel-grade, or close) to feel like you can tackle a big question. Researchers are encouraged to act as laser beams rather than searchlights, focusing tightly on well-defined problems. It was not always like that: Keynes’s masterpiece is immodestly titled The General Theory of Employment, Interest and Money. But that was then, and this is now.

What counts as “evidence”?

Ethnographic analysis – the main tool in the anthropologist’s arsenal – is not exactly science. Science is about building a testable hypothesis, and then testing it. But testing implies reproducibility of experiments, and that is generally impossible for meso- and macroscale social phenomena, because they have no control group. You cannot re-run the Roman Empire 20 times to see what would have happened if Constantine had not embraced the Christian faith. This kind of research is more like diagnosis in medicine: pathologies exist as mesoscale phenomena, and studying them helps. But in the end each patient is different, and doctors want to get it right this time, to heal this patient.

How do you do rigorous analysis when you can’t do science? When I first became intrigued with ethnography, someone pointed me to Michael Agar’s The Professional Stranger. This book started out as a methodological treatise for anthropologists in the field; much later, Agar revisited it and added a long chapter to account for how the discipline had evolved since its original publication. This makes it a sort of meta-methodological guide. Much of Agar’s argument in the additional chapter is dedicated to cautiously suggesting that ethnographers can maintain some kind of a priori categories as they start their work. This, he claims, does not make an ethnographer a “hypothesis-testing researcher”, which would obviously be really bad. When I first read this expression, I did a double take: how could a researcher do anything other than test hypotheses? But no: a “hypothesis-testing researcher” is, to ethnographers, some kind of epistemological fascist. What they think of as good epistemology is to let patterns emerge from immersion in, and identification with, the world in which informants live. They are interested in finding out “what things look like from out here”.

It sounds pretty vague. And yet, good anthropologists get results. They make fantastic applied analysts, able to process diverse sources of evidence from archaeological remains to statistical data, and tie them up into deep, compelling arguments about what we are really looking at when we consider debt, or the metric system, or the particular pattern with which cypress trees are planted in certain areas. A hard-nosed scientist will scoff at many of the pieces (for example, Graeber writes things like “you can’t help feeling that there’s more to this story”. Good luck getting a sentence like that past my thesis supervisor), but those pieces make a very convincing whole. To anthropologists, evidence comes in many flavours.

Coda: where does it all go?

You can see why interdisciplinary research is avoided like the plague by researchers who wish to publish a lot. Different disciplines see the world with very different eyes; combining them requires methodological innovation, with a high risk of displeasing practitioners of both.

But I have no particular need to publish, and I remain fascinated by the potential of combining ethnography with network science for empirical research. I have a specific combination in mind: large-scale online conversations, to be harvested with ethnographic analysis. Harvested content is then rendered as a type of graph called a semantic social network, and reduced and analysed via standard quantitative methods from network science. Some brilliant colleagues and I have outlined this vision in a paper (a second one is in the pipeline), so I won’t repeat it here.

I want, instead, to remark how this type of work is, to me, incredibly exciting. I see a potential to combine ethnography’s empathy and human centricity, anthropology’s fearlessness and network science’s exactness, scalability and emphasis on the mesoscale social system. The idea of “linking as identity” is a good example of methodological innovation: it reconciles the idea of identity as all-important with that of interdependence within the social context, and it enables simple(r) quantitative analysis. All this implies irreducible methodological tensions, but I think in most cases they can be managed (not solved) by paying attention to the context. The work is hard, but the rewards are substantial. For all the bumps in the road, I am delighted that I can walk this path, and look forward to what lies beyond the next turns.

Photo credit: McTrent on flickr.com

 

The quest for collective intelligence: a research agenda

I am knee-deep in the research work for opencare. I think I am learning new things on how to use collective intelligence in practice. This has far-reaching implications for my own work in Edgeryders, and beyond. Far beyond, in fact. If we crack collective intelligence, we gain access to a new source of cognition. Forget my own work; this has profound implications for the future of our species. If you think that’s radical, go read the work of cultural evolution scholars, like Boyd, Richerson or Henrich. They think homo sapiens has started a major transition: evolutionary forces are pulling us towards a larger, more integrated “collective brain”. We are en route to becoming to primates what ants are to flies.

Collective intelligence is an elusive concept. It appeals to intuition, but it is hard to define and harder to measure and model. And yet, model it we must if we are to go forward. The good news is: I think I see a possible way. What follows is just a back-of-the-envelope note, plotting a rough course for the next three years or so.

1. Data model: semantic social networks

I submit that the raw data of collective intelligence are in the form of semantic social networks. By this term I mean a way to represent human conversation. The representation is a social network, because it involves humans connected to each other by interactions. And it is semantic, because those interactions encode meaning.
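As a toy illustration of what such a data structure might look like (the class and all names below are my own invention, not an existing tool): people are nodes, and each interaction is an edge annotated with the codes that capture its meaning.

```python
from collections import defaultdict

class SemanticSocialNetwork:
    """Minimal sketch: a social network whose edges carry meaning,
    in the form of ethnographic codes attached to each interaction."""

    def __init__(self):
        self.edges = []  # (speaker, addressee, codes) triples

    def add_interaction(self, speaker, addressee, codes):
        self.edges.append((speaker, addressee, frozenset(codes)))

    def co_occurrence(self):
        """Count how often each pair of codes appears on the same
        interaction; this is the 'semantic' layer, which can be
        analysed as a network in its own right."""
        counts = defaultdict(int)
        for _, _, codes in self.edges:
            for a in codes:
                for b in codes:
                    if a < b:
                        counts[(a, b)] += 1
        return dict(counts)

ssn = SemanticSocialNetwork()
ssn.add_interaction("alice", "bob", {"care", "community"})
ssn.add_interaction("bob", "alice", {"care", "funding"})
print(ssn.co_occurrence())
```

The same edge list supports both views: project onto people and you get the social network; project onto codes and you get the semantic one.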

2. Network science: it’s all in the links

Collective intelligence is not additive: it’s interactional. We can only generate new insight when the information in my head comes into contact with the information in yours. So, what makes a collectivity more or less smart is the pattern of linking across its members. Network science is what allows a rigorous study of that linking, looking for the patterns of interaction which associate to the smartest behaviors.

3. Ethnography: harvesting smart outcomes

Suppose we accept that the hive mind can generate powerful insights and breakthroughs. How can we, individual human beings, lift them from the surrounding noise? Looking at what individual members of the community say and do would likely be fruitless. The problem is understanding how the group represents to itself the issue at hand; no individual you ask will be able to hold all the complexity in her head. We do have a discipline that specializes in this task: ethnography. Ethnographers are good at representing a collective point of view on something. Their skills are useful to understand just what the collective intelligence is saying.

4. “Shallow” text analytics: casting your net wider

Ethnography is like a surgical knife: super sharp and precise. But sometimes what you need is a machete. As I write this, the opencare conversation consists of over 300,000 words, authored by 137 people. This is a very big study by ethnography standards, and these numbers are likely to double again. We are already pushing the envelope of what ethnographers can process.

So, the next step is giving them prosthetics. The natural tool is text analytics, a branch of data analysis centered on text-as-data. It comes in two flavors: shallow-and-robust and deep-and-ad-hoc. I like the shallow flavor best: it is intuitive and relatively easy to make into standard tools. When the time of your ethnographers is scarce and the raw data is abundant, you can use text analysis to find and discard contributions that are likely to be irrelevant or off topic.
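A deliberately crude sketch of such a shallow filter (the keyword list, threshold and example posts are invented for illustration; a real pipeline would be more careful):

```python
import re

# on-topic vocabulary, hand-picked for the purpose of this example
TOPIC_KEYWORDS = {"care", "health", "community", "patient", "support"}

def tokenize(text):
    """Lowercase a text and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def relevance_score(text):
    """Share of tokens that belong to the on-topic keyword set."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in TOPIC_KEYWORDS)
    return hits / len(tokens)

posts = [
    "Our community organises peer support for patient care.",
    "Cheap watches for sale, buy now!",
]
for p in posts:
    flag = "keep" if relevance_score(p) > 0.1 else "discard"
    print(flag, "-", p)
```

Shallow as it is, a filter like this lets scarce ethnographer time concentrate on the contributions most likely to matter.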

5. Machine learning: weak AI for more cost-effective analysis

Beyond the simplest levels, text analytics uses a lot of machine learning techniques. It comes with the territory: human speech does not come easy to machines. At best, computers can evolve algorithms that mimic classification decisions made by skilled humans. A close cooperation between humans and machines just makes sense.
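As a minimal illustration of a machine mimicking human classification decisions, here is a toy naive-Bayes-style classifier trained on a few hand-labelled snippets (all labels and examples are invented; a real pipeline would use a proper ML library):

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def train(labelled):
    """Learn word counts per label from (text, label) pairs,
    plus the vocabulary size used for add-one smoothing."""
    counts = {"relevant": Counter(), "irrelevant": Counter()}
    vocab = set()
    for text, label in labelled:
        tokens = tokenize(text)
        counts[label].update(tokens)
        vocab.update(tokens)
    return counts, len(vocab)

def classify(counts, vocab_size, text):
    """Sum the smoothed log-odds of each word; positive means
    the text looks more like the 'relevant' training snippets."""
    score = 0.0
    for w in tokenize(text):
        p_rel = (counts["relevant"][w] + 1) / (sum(counts["relevant"].values()) + vocab_size)
        p_irr = (counts["irrelevant"][w] + 1) / (sum(counts["irrelevant"].values()) + vocab_size)
        score += math.log(p_rel / p_irr)
    return "relevant" if score > 0 else "irrelevant"

labelled = [
    ("community care for elderly neighbours", "relevant"),
    ("peer support groups for mental health", "relevant"),
    ("buy cheap watches online", "irrelevant"),
]
model, vocab_size = train(labelled)
print(classify(model, vocab_size, "support for community health"))
```

The human codes a small sample; the machine extrapolates to the rest; the human then audits the machine's worst mistakes. That loop is the cooperation the paragraph above argues for.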

6. Agent-based modelling: understanding emergence by simulation

We do not yet have a strong intuition for how interacting individuals give rise to emergent collective intelligence. Agent-based models can help us build that intuition, as they have done in the past for other emergent phenomena. For example, Craig Reynolds’s Boids model explains flocking behaviour very well.
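A toy model far simpler than Boids can still illustrate emergence (my own illustration, with invented parameters): agents copy the majority opinion among a few randomly sampled peers, and global consensus arises from purely local rules, with no central coordinator.

```python
import random

def run_abm(n_agents=101, steps=10000, sample=5, seed=1):
    """Agents hold a binary opinion; at each step one random agent
    adopts the majority opinion among `sample` random peers.
    Returns the final share of agents holding opinion 1."""
    rng = random.Random(seed)
    # start with a slight majority of opinion 1 (61 vs 40)
    opinions = [1] * 61 + [0] * (n_agents - 61)
    rng.shuffle(opinions)
    for _ in range(steps):
        i = rng.randrange(n_agents)
        peers = rng.sample(range(n_agents), sample)
        # adopt the majority opinion among the sampled peers
        opinions[i] = round(sum(opinions[p] for p in peers) / sample)
    return sum(opinions) / n_agents

print(run_abm())  # the slight initial majority snowballs toward consensus
```

No agent intends consensus, yet the population converges: the system-level outcome is a property of the interaction pattern, not of any individual rule.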

The above defines the “long game” research agenda for Edgeryders. And it’s already under way.

  • I have been knee-deep in network science since 2009. We run real-time social network analysis on Edgeryders with Edgesense. We have developed an event format called Masters of Networks to spread the culture beyond the usual network nerds like myself. All good.
  • We have been collaborating with ethnographers since 2012. We have developed OpenEthnographer, our own tool to do in-database ethno coding. I’d love to have a blanket agreement with an anthropology department: there is potential for groundbreaking methodological innovation in the discipline.
  • We are working with the University of Bordeaux to build a dashboard for semantic social network analysis.
  • I still need to learn a lot. I am studying agent-based modelling right now. Text analytics and machine learning are next, probably starting towards the end of 2016.

With that said, it’s early days. We are several breakthroughs short of a real mastery of collective intelligence. And without a lot of hard, thankless wrangling with the data, we will have no breakthrough at all. So… better get down to it. It is a super-interesting journey, and I am delighted and honoured to be along for the ride. I look forward to making whatever modest contribution I can.

Photo credit: jbdodane on flickr.com CC-BY-NC