Monthly Archives: October 2015

Yesterday, I attended a great talk on Phase Transitions in Random Graphs, the second lecture by visiting scholar Cris Moore of the Santa Fe Institute.

Now, you may be wondering, “Phase Transitions in Random Graphs”? What does that even mean?

Well, I’m glad you asked.

First, “graph” is the technical math term for a network. So we’re talking about networks here, not about random bar charts or something. The most common random graph is the Erdős–Rényi model developed by Paul Erdős and Alfréd Rényi. (Interestingly, a similar model was developed separately and simultaneously by Edgar Gilbert, who gets none of the credit, but that is a different post.)

The Erdős–Rényi model is simple: you have a set number of vertices and you connect two vertices with an edge with probability p.

Imagine you are a really terrible executive for a new airline company: there are a set number of airports in the world, and you randomly assign direct flights between cities. If you don’t have much start-up capital, you might have a low probability of connecting two cities – resulting in a random smattering of flights. So maybe a customer could fly between Boston and New York or between San Francisco and Chicago, but not between Boston and Chicago. If your airline has plenty of capital, though, you might have a high probability of flying between two cities, resulting in a connected network that allows a customer to fly from anywhere to anywhere.
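As a concrete illustration, here is a minimal sketch of the model in Python, using only the standard library (the function name and parameters are my own, not a standard API):

```python
import random

def erdos_renyi(n, p, seed=None):
    """Build an Erdős–Rényi G(n, p) graph as an adjacency list:
    each of the n*(n-1)/2 possible edges is included independently
    with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

# The cash-strapped airline: 20 airports, few random routes
sparse = erdos_renyi(20, 0.05, seed=1)
# The well-funded airline: same airports, many more routes
dense = erdos_renyi(20, 0.5, seed=1)
```
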

The random network is a helpful baseline for understanding what network characteristics are likely to occur “by chance,” but as you may gather from the example above – real networks aren’t random. A new airline would presumably have a strategy for deciding where to fly – focusing on a region and connecting to at least a few major airports.

A phase transition in a network is similar conceptually to a phase transition in a physical system: ice undergoes a phase transition to become a liquid and can undergo another phase transition to become a gas.

A random network undergoes a phase transition when it goes from having lots of disconnected little bits to having a large component.

But when/why does this happen?

Let’s imagine a random network with nodes connected with probability p. In this network, p = k/n, where k is a constant and n is the number of nodes in the network. We would then expect each node to have an average degree of k.

So if I’m a random node in this network, I can calculate the average size of the component I’m in. I am one node, connected to k nodes. Since each of those nodes is also connected to k nodes, that makes k^2 nodes connected back to me at the next step. This continues outwards as a geometric series: 1 + k + k^2 + …. For k < 1, the geometric series formula tells us that this sum converges to 1 / (1 – k).
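As a quick numerical check of that claim, we can compare a partial sum of the series against the closed form (a throwaway sketch of my own, not code from the lecture):

```python
def component_size_estimate(k, terms=200):
    """Partial sum of the geometric series 1 + k + k^2 + ...,
    the expected component size seen from a random node."""
    return sum(k ** i for i in range(terms))

# For k < 1, the partial sum matches the closed form 1 / (1 - k)
for k in (0.2, 0.5, 0.9):
    assert abs(component_size_estimate(k) - 1 / (1 - k)) < 1e-6
```

Note how the closed form blows up as k approaches 1 – a first hint that something dramatic happens there.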

So we would expect something wild and crazy to happen when k = 1.

And it does.

This is called the “critical point” of a random network. It is at this point when a network goes from a random collection of disconnected nodes and small components to having a large component. This is the random network’s phase transition.
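We can watch the phase transition happen in a simulation. The sketch below (my own illustration, not code from the lecture) builds G(n, k/n) and measures the largest component below and above the critical point k = 1:

```python
import random

def largest_component_fraction(n, k, seed=0):
    """Size of the largest connected component in G(n, k/n),
    as a fraction of n."""
    rng = random.Random(seed)
    p = k / n
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen = [False] * n
    best = 0
    for start in range(n):
        if seen[start]:
            continue
        # depth-first search to measure this component
        stack, size = [start], 0
        seen[start] = True
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        best = max(best, size)
    return best / n

subcritical = largest_component_fraction(1000, 0.5, seed=42)    # k < 1
supercritical = largest_component_fraction(1000, 2.0, seed=42)  # k > 1
```

Below the critical point the largest component is a tiny sliver of the network; above it, a giant component containing a large fraction of all nodes appears.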

Nearly every mother I know has described herself as a terrible mother.

I find that disconcerting.

Not that these mothers actually are terrible mothers – oh dear, you finally caved to your crazy toddler’s demands and let them watch TV so you could get a moment of rest? – rather, I find it concerning that so many mothers have the self-perception of failing.

To be clear, I imagine there’s a similar phenomenon for fathers, though I don’t have the personal experience to validate that. I also get the sense that this is a gendered issue which disproportionately affects women.

Interestingly, when I tried to find data on this, I mostly found self-help articles geared towards helping women be better mothers. So, that’s telling. Also, there’s an interesting study from the UK that found “70% of mothers are left feeling like the ‘bad cop’ while fathers become the centre of attention and known as the ‘fun one’ in family.”

But while some of the sense of “terribleness” may arise from such inner family dynamics, I don’t think that’s the only source of this phenomenon.

I cringe a little when I read one of those stories about a family handing out goodie bags before a flight with their infant.

Let’s be clear: you are rearing a small person; you don’t have to apologize to me.

I imagine much of this “bad parent” anxiety comes from idealized images of the 50s. After all, we all know that everything was just hunky-dory then.

But I’m not convinced that challenge gets the rest of us off the hook.

That is to say – imagine a parent: she thinks she’s a terrible mother because she thought it would be possible to “have it all” if she was good enough. She thought she would never get mad at her child whom she loves so much. She thought with a little patience and care, she could easily raise a child who was always polite and reasonable.

Even if a woman has this misguided and idealized vision of what parenthood should be – why are the rest of us playing into it?

Shouldn’t we all be working to dispel those myths?

I’m impressed by any person who takes on the challenge of raising a child. It is hard, exhausting, and trying work. There will be food in your hair and too many sleepless nights. I am amazed by anyone who can handle all that and still get out of bed in the morning. …Even if you only got up because your child, who insisted on sleeping with you, somehow kicked you in the face.

So parents, don’t feel guilty when your kid cries on the train or throws groceries around in the store. You are doing something harder than anything I will ever do, and you are amazing at it.

I just wish there was some universal hand sign that could convey all that to the next frazzled mother I see nearing the point of breakdown because her toddler refuses to listen to reason.

I had the opportunity today to attend a talk by Jeffrey Boase of the University of Toronto. Boase has done extensive work around mobile log data – having research participants install apps that gather their (anonymized) call data and engaging participants in short, mobile-based surveys.

The motivation for this work can be seen in part from his earlier research – while 40% of mobile phone use studies base their findings on self-reported data, that data correlates only moderately with server log data. In other words, self-reported data has notable validity issues while log data provides a much more accurate picture.

Of course, phone records of call time and duration lack the context needed to make useful inferences. So Boase works to supplement log data with more traditional data collection techniques.

A research participant, for example, may complete a daily survey asking them to self-report data on how they know a certain person in their address book. Researchers can also probe further, not only getting at familial and social relationships but also asking whether a participant enjoys discussing politics with someone.

By using this survey data in concert with log data, Boase can build real-time social networks and track how they change.

His current work, the E-Rhythms Project, seeks to provide a rich understanding of mobile phone based peer bonding during adolescence and its consequences for social capital using an innovative data collection technique that triangulates smartphone log data, on-screen survey questions, and in-depth interviews.

I’m rather enjoying being back to daily commutes downtown. It’s been particularly interesting being primarily on the Green Line – having previously commuted primarily by Red.

The Red Line is perhaps the most polished of the MBTA lines. It is certainly the youngest: the section of track from Park Street to Harvard opened in 1912, followed by the extension to Porter, Davis, and Alewife as recently as the early ’80s.

The Green Line, on the other hand, is the MBTA’s oldest line – making it the single oldest public transit line in the United States. While the Lechmere terminus – where I now board the train – didn’t open until 1922, much of the tunnel work was initially done in the late 1890s.

So every time the train squeals while going through one of those sharp turns I think of it as a little bit of history.

Perhaps even more interesting about the Green Line tunnels, though, is that there are places where you can actually see good portions of the track. With the current reconstruction of Government Center station, there’s a whole section that’s well lit and open for viewing. It’s fascinating.

The MBTA boldly claims to have “a history longer than that of American independence,” dating the city’s history of public transportation back to the family-operated ferries of 1631.

Public transit as we know it, though, really started to emerge in the mid-1800s. While initially stage coaches carried passengers between Boston and surrounding cities, in the 1820s “omnibus” (OMNI – a bus for all, everywhere) service emerged. “Longer than a conventional stagecoach, it had lengthwise seats along either side rather than cross seats, and a door at either end. Stagecoaches went directly from one city or town; omnibuses made several stops along an assigned route.” The OMNIs were, of course, still horse-drawn.

Around the same time, street railways – horse-drawn cars on tracks – began to appear in concert with the nation’s railroad boom. As one historian notes, these street railways “were not envisioned purely to provide transportation within the city, a need as yet not pressing or obvious in the small, compact cities of that day.”

By the 1850s, there were horse-drawn street cars throughout Boston and nearby cities, with service that competed with, and sometimes duplicated, that of the OMNIs. In 1887 the state consolidated all lines into the West End Street Railway, which then oversaw the expansion of the system and the introduction of electrified street cars.

Eventually, the streets became so crowded with street cars that the “Tremont Street Subway” – now the Green Line – was conceived to alleviate traffic in the city’s busiest districts.

As I’ve waded through the literature on deliberative theory, I’ve been struck by two disparate schools of thought: some authors focus their attention on political deliberation while others focus on moral deliberation.

The difference in focus is not trivial. Consider Amy Gutmann and Dennis Thompson’s explanation of where deliberation thrives:

In politics, disagreements often run deep. If they did not, there would be no need for argument. But if they ran too deep, there would be no point in argument. Deliberative disagreements lie in the depths between simple misunderstanding and immutable irreconcilability.

Gutmann and Thompson do write extensively about moral deliberation, but this passage hints at incentive for avoiding moral discussion in politics: moral disagreements seem more intractable.

Two people of opposing beliefs may never find common moral ground on an issue, but that doesn’t necessarily preclude the possibility of reasonable people having a productive political debate on it.

To what extent, though, is it possible to disentangle moral and political interests?

On the topic of abortion, for example, even if people tried to restrict their comments to the political issues of funding and access, I suspect that most dialogues would still find their way to a fundamental, moral impasse.

Perhaps not all issues are so morally charged, though – in discussions of education, healthcare, and the environment are morality and politics so inseparable? Either way, I’m actually more interested in the related question: in general, should morality and politics be so intertwined?

Our political sensibilities seem to say no – as good citizens, we ought to have rousing debates over politics while also embracing the pluralistic nature of our fellow citizens’ views.

Yet separating morality from politics seems undesirable, even if it were possible. Discussing the environment without discussing environmental justice is inauthentic and unproductive. Discussing education without tackling the moral issues raised by deep, educational inequality fails to get to the heart of the matter.

The personal is indeed political and the political is fundamentally moral.

This brings me back to the excellent work of Diana Mutz in Hearing the Other Side.

Mutz illustrates an inherent tension in political theory: should a citizen’s social network be composed of people who are “politically like-minded or have opposing views?” While the ideals of political theory seem to indicate that the answer should be “both,” Mutz shows that this is not possible.

We must choose, she argues, between a homogenous network of people who agree politically or a heterogenous network of people who are apolitical:

A highly politicized mindset of “us” versus “them” is easy so long as we do not work with “them” and our kids do not play with their kids. But how do we maintain this same fervor and political drive against “them” when we carpool together?

Her analysis is compelling, but I find myself fighting against believing it. I don’t want to think that political agency requires self-sorting into like minded groups and I don’t want to think that political action is impossible in heterogenous groups.

One thing I struggled with as I read her work is exactly what it means to be “like-minded.” I have many productive debates with my equally liberal friends. We agree – but we don’t agree. Does that make us like-minded?

This idea of political “difference” here can perhaps be better understood as one of different moral views. That is, we can have productive political debate among people of different views, as long as they are morally “like-minded.”

Again, this may provide incentive for avoiding moral debate – just as Mutz demonstrates that social interaction across political difference must be apolitical. That would be a depressing conclusion – although, perhaps, an inevitable one.

But if taking morality out of politics lessens the value of political dialogue, we must find some way of overcoming that challenge. Or, at least, as Mutz argues, we must greatly rethink our ideals of deliberation.

I am very excited that Northeastern is hosting a series of lectures by Cris Moore of the Santa Fe Institute. Moore has his background in physics, but he has also distinguished himself in the fields of mathematics, computer science, and network science.

Basically, he knows pretty much everything.

He began his lecture series yesterday with a talk on “computational complexity and landscapes.” Exploring qualitative differences between different types of algorithms, he essentially sought to answer the question: why are some problems “hard”?

Of course, “hard” is a relative term. In computer science, the word has a very specific meaning which requires some history to explain.

In 1971, Stephen Cook posed a question which remains open in computer science today: if a solution to a problem can be (quickly) verified by a computer can that problem also be (quickly) solved by a computer?

This question naturally sorts problems into two categories: those that can be solved in polynomial time (P) and those for which an answer can be checked in polynomial time (NP, for “nondeterministic polynomial time”).

Consider the famous Bridges of Königsberg problem. Euler, living in Königsberg in 1736, wanted to find a path that would allow him to cross each of the city’s bridges exactly once.

Now you may start to appreciate the difference between finding a solution and checking a solution. Given a map of Königsberg, you might try a path and then try another path. If you find that neither of these paths work, you have accomplished exactly nothing: you still don’t have a solution but neither have you proven that none exists.

Only if you spent all day trying each and every possibility would you know for certain if a solution existed.

Now that is hard.

If, on the other hand, you were equipped with both a map of Königsberg and a Marauder’s map indicating the perfect route, it would be relatively easy to verify that you’d been handed a perfect solution. Once you have the right route it is easy to show that it works.
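That asymmetry is easy to see in code: verifying a proposed route takes just one pass over it. Here is a sketch (the helper name and the toy bridge map are my own inventions, not from the lecture):

```python
from collections import Counter

def is_eulerian_path(edges, path):
    """Check that `path` (a sequence of vertices) crosses every bridge
    in `edges` exactly once. Bridges are undirected (u, v) pairs;
    repeated pairs represent parallel bridges."""
    remaining = Counter(frozenset(e) for e in edges)
    for u, v in zip(path, path[1:]):
        step = frozenset((u, v))
        if remaining[step] == 0:
            return False  # bridge doesn't exist or was already crossed
        remaining[step] -= 1
    return sum(remaining.values()) == 0  # every bridge must be used

# A toy four-bridge town (not Königsberg) where a route does exist
toy_bridges = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D")]
```

Checking a candidate like `["D", "A", "B", "C", "A"]` takes time proportional to the number of bridges – finding such a route by brute force means trying factorially many orderings.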

Of course, if the story were that simple, “P versus NP” wouldn’t have remained an open problem in computer science for forty years. Euler famously solved the Bridges of Königsberg problem without using the exhaustive technique described above. Instead, in a proof that laid the foundations of network science, Euler reduced the problem to one of vertices and edges. Since the connected path that he was looking for requires that you both enter and leave a land mass, an odd number of bridges meeting in one place poses problems. Eventually you will get stuck.

In this way, Euler was able to prove that there was no such path without having to try every possible solution. In computer science terms, he was able to solve it in polynomial time.
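Euler’s criterion is equally compact in code. This sketch assumes the graph is connected, and the labels I use for Königsberg’s two river banks, central island, and eastern bank are my own shorthand:

```python
from collections import Counter

def has_eulerian_path(edges):
    """Euler's criterion: a connected multigraph contains a path that
    uses every edge exactly once iff it has zero or two vertices of
    odd degree. (Connectivity is assumed for brevity.)"""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd_vertices = sum(1 for d in degree.values() if d % 2 == 1)
    return odd_vertices in (0, 2)

# Königsberg's seven bridges: north bank (N), south bank (S),
# central island (I), and eastern bank (E)
konigsberg = [("N", "I"), ("N", "I"), ("N", "E"),
              ("S", "I"), ("S", "I"), ("S", "E"),
              ("I", "E")]
```

All four land masses touch an odd number of bridges, so the test fails – exactly Euler’s conclusion, reached without enumerating a single candidate path.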

This is the fundamental challenge of the P versus NP dilemma: a problem which initially appears to be “NP” may be reducible to something that is simply “P.”

There are many problems which have not been successfully reduced, and there’s good reason to think that they never will be. But, of course, this has the flavor of a typical NP problem: you can’t prove that these problems can’t be reduced until you’ve exhausted every possible reduction.

As Cris Moore said, “the sky would fall” if it turned out that all NP problems are reducible to P.

To get a sense of the repercussions of such a discovery: a problem is defined as NP-hard if it is “at least as hard as the hardest problems in NP.” That is, other NP problems can be reduced, in polynomial time, to NP-hard problems.

That is to say, if you found a quick solution to an NP-hard problem it could also be applied as a quick solution to every NP problem. Thousands of problems which currently require exhaustive permutations to solve could suddenly be solved in polynomial time.

In computer science, there is a common mode of thinking: you are trying to solve a problem and your adversary, virtually interacting through your computer, does everything in their power to stop you.

This isn’t the problem-solving approach they taught us in physics or marketing, so I always found it a little odd when I encountered it. Why would I have an evil adversary inside the computer? Can’t I just write code that tries to solve a problem in the most efficient way possible? How does having an adversary help me do that?

I’m still not sure whether to find that mode of problem-solving helpful, but at least I know now where the idea came from.

Computer science was born out of World War II cryptography efforts. In cryptography, your aim is to design algorithms that can’t be broken by some adversary or, alternatively, to break an algorithm designed by someone trying to make your job as hard as possible.

In short, in cryptography, there really are adversaries.

This makes the association of “The Adversary” with computer-based thinking even more intriguing – during World War II, it wasn’t just impressive algorithms, but the real complexity of language that created the most “unbreakable” codes.

On the front lines of WWII were “code talkers” – bilingual speakers of English and Native American languages. Most notably, Navajo code talkers played a critical role in cryptography during the war. Each English letter was associated with an English word, that English word was translated into Navajo, and many common expressions were given shorter nicknames. Additionally, Navajo has complex tonal qualities and syntax structures, making the language “unintelligible to all other tribes and all other people,” according to one Major General Clayton B. Vogel.

Importantly, one of the reasons Navajo was particularly good for cryptography was that, at the time, it only existed as a spoken language. This was because, as Vogel pointed out, “Navaho [sic] is the largest tribe, but the lowest in literacy.”

These Navajo, many forced into boarding schools where they had been forbidden to speak their native language, then used their language as a crucial tool in American war and defense efforts.

Declassified in 1968, the Navajo code remains the only oral military code that has never been broken.

After much debate, the City of Somerville announced a change in policy earlier this week: snow emergency parking will now alternate sides of the street.

For those of you who don’t live in snow-laden cities, there is a general principle of winter snow removal: it is particularly hard to do when there are parked cars in the way. For that reason, municipalities typically restrict parking during winter storms.

Different communities have different strategies, but in Somerville, it has always been the rule that during a snow emergency you can only park on the odd side of the street.

The result of this, from an even-side resident’s perspective, is that snow plows favor the even side of the street and push more snow onto even-side sidewalks.

In a normal year, this is annoying. When you get a record breaking 110.6 inches of snow, it is a problem.

The real problem with snow plows, you see, is that they have a remarkable ability to destroy hours’ worth of labor in mere minutes. I myself live on a corner – so I’m on the even side of one street and the odd side of another.

On the even side it is a constant battle – you go out and shovel, the plow snows you back in. You go out and shovel, the plow snows you back in. While the struggle itself might be enough to fill a man’s heart, it’s also incredibly frustrating.

And, it makes me feel like I should leave a note for passing pedestrians: sorry, I’m trying to do my civic duty, but the snow plow keeps ruining it. Apologies in advance if you come through at an un-shoveled time.

Last year, there was great tumult around the odd-only snow policy. Every time the City of Somerville posted an announcement about a new winter storm, there was a flurry of Facebook comments: Why can’t we alternate the ban?

This is, I believe, a fairly common practice among municipalities. If the winter starts in an odd year, snow parking is odd only. If the winter starts in an even year, snow parking is even only.

This is, arguably, more fair – a point that was raised many times by many citizens over many years, but perhaps most vociferously last year.

So it was exciting to see Mayor Joe Curtatone announce:

After careful review, and with significant community feedback, we have determined that the best and most logical next step in our ongoing efforts to provide excellent snow removal operations in Somerville is to alleviate some of the traditional hardships for residents living on the even side of the street where snow and ice buildup from plowing operations.

On the train home this evening, there were two women having an impassioned discussion about how to best organize books. In particular, one was debating the best organizational scheme for her collection – should she mix her fiction and non-fiction? Which books did she want to have more accessible?

This is a big dilemma.

When I was young and fancy free, I used to take great care in organizing my bookshelf – a habit I have since not found sufficient time for. The last time I organized my bookshelves, I realized my shelf space was tragically insufficient for my book collection – a problem I solved by acquiring more books.

Regardless, strategies for organizing books fascinate me. Most of us don’t use the Dewey Decimal system at home, leaving many questions on organizational schema.

Personally, I like to organize my books first by topic, then by author, but – just for fun – I like to put a little randomness in there so you never know what you might find next to each other.

Tracking people’s political action and political deliberation, Mutz comes to a disconcerting conclusion: the two are not readily compatible. On the one hand:

In studies of mass behavior, partisans are typically the “good guys.” They are the ones who always score highest on political knowledge tests, who vote most frequently, volunteer their time and money for campaigns, and basically embody everything that social scientists say they want all citizens to be.

But these hyper-partisans are sorely lacking in other civic areas. Most notably, as Mutz documents, they are significantly less likely to have productive political conversations with people who have different opinions. Why would they, when they already know they are “right”?

The strength of a person’s partisanship may have direct implications for their ability to interact with those who are different. As Mutz explains:

A highly politicized mindset of “us” versus “them” is easy so long as we do not work with “them” and our kids do not play with their kids. But how do we maintain this same fervor and political drive against “them” when we carpool together?

This is the crux of her argument – that a person cannot simultaneously maintain a diverse political network and a robust social network. We each must choose: be political and talk to those who agree with us or be apolitical and interact with people of many views.

The challenge she raises is an important one, and, I agree, one that has generally been overlooked by political theorists and deliberative theorists alike. Reconciling these two types of “ideal citizens” is no small task, but it is not, I believe, an impossible goal.

Mutz sees it as a flaw that our political system asks citizens to be both advocates and thoughtful observers; but I wonder if the flaw is that we set these traits up as opposites.

Our socialization tells us that in polite company we ought to avoid such contentious topics as politics. But is it really so hard to imagine that with well-developed social skills people could regularly have engaging conversations across difference?

I’m not sure that is so impossible as Mutz imagines.

Certainly, as social beings we are likely to avoid engaging in irreparable fights with our friends and family. And it may be easier to be apolitical in social settings, but it is far from required.

Perhaps this dilemma points to another debate about deliberation – ought it to end in consensus?

If you assume that political talk needs to end in agreement, then debate among partisans who are not of like mind becomes nearly impossible indeed. By definition, each partisan is entrenched in their own view and the requirement of consensus leaves little room for anything but persuasion.

But two partisans, fully embracing an “us” versus “them” mentality, are unlikely to agree.

Modern trends in deliberation shy away from this consensus requirement. While many still see this as the ideal outcome of deliberation, there are theorists and practitioners who equally embrace deliberation as a tool for building understanding, not agreement.

Here, then, the nature of partisan deliberation may become quite different. Rather than (fruitlessly) trying to change each other’s minds, partisans could use the opportunity to better understand each other and to sharpen their own thinking (without necessarily changing their own minds).

Such deliberation is certainly no small task, and it rubs against how we’ve been socialized and how we’ve self-segregated into like minded groups. But it is, I think, possible to be both a partisan and fully open to genuinely hearing the other side.