http://www.uvm.edu/~cmplxsys/
Tue, 31 Mar 2015 14:25:49 -0400

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=20199&category=cmplxsys
Mon, 09 Feb 2015

In 1969, two psychologists at the University of Illinois proposed what they called the Pollyanna Hypothesis—the idea that there is a universal human tendency to use positive words more frequently than negative ones. “Put even more simply,” they wrote, “humans tend to look on (and talk about) the bright side of life.” It was a speculation that has provoked debate ever since.

Now a team of scientists at the University of Vermont and The MITRE Corporation has applied a Big Data approach—using a massive data set of many billions of words, based on actual usage rather than “expert” opinion—to confirm the 1960s guess.

Movie subtitles in Arabic, Twitter feeds in Korean, the famously dark literature of Russia, books in Chinese, music lyrics in English, and even the war-torn pages of The New York Times—the researchers found that these, and probably all human language, skew toward the use of happy words.

“We looked at ten languages,” says UVM mathematician Peter Dodds, who co-led the study, “and in every source we looked at, people use more positive words than negative ones.”

But doesn’t our global torrent of cursing on Twitter, horror movies, and endless media stories on the disaster du jour mean this can’t be true? No. This huge study of the “atoms of language—individual words,” Dodds says, indicates that language itself—perhaps humanity’s greatest technology—has a positive outlook. And, therefore, “it seems that positive social interaction,” Dodds says, is built into its fundamental structure.

Above average happiness

To deeply explore this Pollyanna possibility, the team of scientists at UVM’s Computational Story Lab—with support from the National Science Foundation and The MITRE Corporation—gathered billions of words from around the world using 24 types of sources including books, news outlets, social media, websites, television and movie subtitles, and music lyrics. For example, “we collected roughly 100 billion words written in tweets,” says UVM mathematician Chris Danforth, who co-led the new research.

From these sources, the team then identified about 10,000 of the most frequently used words in each of ten languages: English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese, Russian, Indonesian, and Arabic. Next, they paid native speakers to rate all these frequently used words on a nine-point scale from a deeply frowning face to a broadly smiling one. From these native speakers, they gathered five million individual human scores of the words. Averaging these, in English for example, “laughter” rated 8.50, “food” 7.44, “truck” 5.48, “the” 4.98, “greed” 3.06, and “terrorist” 1.30.

A Google Web crawl of Spanish-language sites had the highest average word happiness, and a search of Chinese books had the lowest, but—and here’s the point—all 24 sources of words that they analyzed skewed above the neutral score of five on their one-to-nine scale—regardless of the language. In every language, neutral words like “the” scored just where you would expect: in the middle, near five. And when the team translated words between languages and then back again they found that “the estimated emotional content of words is consistent between languages.”

In all cases, the scientists found “a usage-invariant positivity bias,” as they write in the study. In other words, by looking at the words people actually use most often they found that, on average, we—humanity—“use more happy words than sad words,” Danforth says.

Moby Dick vs. the Count of Monte Cristo

This new research study also describes a larger project that the team of 14 scientists has developed to create “physical-like instruments” for both real-time and offline measurements of the happiness in large-scale texts—“basically, huge bags of words,” Danforth explains.

They call this instrument a “hedonometer”—a happiness meter. It can now trace the global happiness signal from English-language Twitter posts on a near-real-time basis and show differing happiness signals between days. For example, a big drop was noted on the day of the terrorist attack on Charlie Hebdo in Paris, but the signal rebounded over the following three days. The hedonometer can also discern different happiness signals in U.S. states and cities: Vermont currently has the happiest signal, while Louisiana has the saddest. And the latest data puts Boulder, Colo., in the number one spot for happiness, while Racine, Wis., is at the bottom.

But, as the new paper describes, the team is working to apply the hedonometer to explore happiness signals in many other languages—the French signal will be up soon—and from many sources beyond Twitter. For example, the team has applied their technique to more than 10,000 books, inspired by Kurt Vonnegut’s “shapes of stories” idea. Visualizations of the emotional ups and downs of these books can be seen on the hedonometer website; they rise and fall like a stock-market ticker. The new study shows that Moby Dick, at 170,914 words, has four or five major valleys that correspond to low points in the story, and the hedonometer signal drops off dramatically at the end, revealing this classic novel’s darkly enigmatic conclusion. In contrast, Dumas’s Count of Monte Cristo—100,081 words in French—ends on a jubilant note, shown by a strong upward spike on the meter.
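The emotional arcs of these books amount to a sliding-window average of per-word happiness scores across the text. Here is a minimal sketch of that idea; the tiny lexicon reuses the English scores quoted in this article, and the toy text is invented, standing in for the real instrument's roughly 10,000 scored words per language:

```python
# Scores below are the English examples quoted in the article; the
# sample text is invented for illustration.
LEXICON = {"laughter": 8.50, "food": 7.44, "truck": 5.48,
           "the": 4.98, "greed": 3.06, "terrorist": 1.30}

def happiness_arc(words, window):
    """Average happiness of each `window`-word span of the text,
    ignoring words that are not in the scored lexicon."""
    arc = []
    for i in range(len(words) - window + 1):
        scores = [LEXICON[w] for w in words[i:i + window] if w in LEXICON]
        if scores:
            arc.append(sum(scores) / len(scores))
    return arc

# A toy "story" that starts happy and ends darkly:
text = "the laughter the food the greed the terrorist".split()
print(happiness_arc(text, 4))  # the arc falls from ~6.5 toward ~3.6
```

A real version would slide a window of thousands of words over a whole novel, producing the rising-and-falling signal shown on the website.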

The new research “in no way asserts that all natural texts will skew positive,” the researchers write, as these various books reveal. But at a more elemental level, the study brings evidence from Big Data to a long-standing debate about human evolution: our social nature appears to be encoded in the building blocks of language.

The new study as well as the hedonometer is based on the research of Peter Dodds and Chris Danforth and their team in the University of Vermont’s Computational Story Lab, including visualization by Andy Reagan, at UVM’s Complex Systems Center, and the technology of Brian Tivnan, Matt McMahon and their team from The MITRE Corporation.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=20113&category=cmplxsys
Wed, 28 Jan 2015

John Brockman's Edge Question is a major event in the intellectual calendar each year — its roots go back to talks he had with Isaac Asimov and others in 1980. This year's question, "What do you think about machines that think?" drew essays from Daniel C. Dennett, Nicholas Carr, Steven Pinker, Freeman Dyson, George Church and nearly two hundred other luminaries and Nobel Prize winners. Among the contributors is UVM roboticist Josh Bongard.

In his new essay, “Manipulators and Manipulanda,” Bongard asks you to “place a familiar object on a table in front of you, close your eyes, and manipulate that object such that it hangs upside down above the table.” What did you do, and think, to know you were succeeding? Now, Bongard writes, “close your eyes again, and think about manipulating someone you know into doing something they may not want to do.” Did you employ similar thinking and how does the structure of your body — say, the fact that you have two hands (not one or fifty) — shape your thinking about both kinds of manipulation?

Bongard is deeply interested in how our bodies shape the way we think (and has a book on this topic). His essay continues this exploration, arguing machines must act in order to think, and “in order to act, they must have bodies to connect physical and abstract reasoning,” he notes.

But what if these machines do not have bodies shaped like humans? Bongard describes a hypothetical robot shaped like a bush. “Picture a shrub in which each branch is an arm and each twig is a finger,” he writes. “This robot's fractal nature would allow it to manipulate thousands or millions of objects simultaneously. How might such a robot differ in its thinking about manipulating people, compared to how people think about manipulating people?”

To learn Bongard’s answer, read the whole essay. It’s online now and will appear in a printed book as each of the Edge questions — like “What will change everything?” (2009) and “What is your dangerous idea?” (2006) — has for the last decade.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=19213&category=cmplxsys
Mon, 22 Sep 2014

Dr. Josh Payne was recently awarded the highly competitive Ambizione Fellowship, a major three-year grant from the Swiss National Science Foundation that promotes junior scientists into the role of "group leader" prior to their first faculty position. We asked Josh -- who is still in Switzerland -- to write an essay about his time at UVM.

My name is Josh Payne and I graduated with a Ph.D. in Computer Science from UVM in 2009. My trajectory to Vermont was nearly random. As I was finishing my master's degree in applied mathematics at RPI, I received an email announcing a fellowship opportunity at UVM. I had a lot of friends in Burlington and loved snowboarding in the Green Mountains, so I thought I'd apply. When I came to interview, I met my future advisor, Prof. Margaret Eppstein. It was immediately clear that we were a good match, both in terms of our personalities and our research interests. She was doing cool stuff at the interface of computer science and evolutionary biology and I was keen to learn more, so I signed up.

Maggie and I worked together for 5 years and had a very productive (and fun) collaboration. We were broadly interested in understanding how network structure influenced the spread of information. For example, we asked how various patterns of mating interactions affected the spread of advantageous genes in evolving populations, and how the structure of social networks reinforced beliefs (i.e., social contagion) to facilitate dissemination. Maggie and I made a great team. I couldn't have asked for a better mentor.

I was lucky to come to UVM when I did, because just a couple of years into my time there, the College of Engineering and Mathematics hired Drs. Josh Bongard, Chris Danforth, and Peter Dodds, all at once. That was great. Those guys brought so much life into the college, both within the classroom and without. Josh, for instance, was one of the best professors I'd ever taken a class with. He was also an easy going, approachable guy who genuinely cared about his students. Plus, he was interested in evolutionary robotics. (Is there a cooler research program? I'm not sure.) And Peter and Chris, those two were absolutely hilarious. They made academics fun. They were a couple of young guys, asking cool questions about complex systems and having a blast answering them. Their enthusiasm was infectious and I couldn't help but be encouraged.

Inspiration at its finest.

My time at UVM afforded several opportunities for travel, both to attend short scientific conferences and to participate in longer summer courses. I spent time in Seattle, Santa Fe, Boston, D.C., and London. But the most influential of my travels was a 3-month visit to the International Institute for Applied Systems Analysis in Vienna. There, I worked on the evolution of dispersal in an old Hapsburg palace, in the same room that Marie Antoinette supposedly played in as a child. More importantly, that's where I met my future wife, Davnah, who was working on fisheries-induced evolution at the time.

After I graduated from UVM in 2009, Davnah and I moved to Hanover, NH, to a post-doctoral position at Dartmouth. She joined Dr. Ryan Calsbeek's group in the biology department and I teamed up with Dr. Jason Moore in the Geisel School of Medicine. We loved our time in New Hampshire. We're both into outdoor sports and we couldn't believe how good the cycling and cross-country skiing were in the area. Hanover was such a cool spot. But after a couple of years we got cabin fever and decided to venture further afield. I was awarded an International Research Fellowship from the NSF to study with Andreas Wagner at the University of Zurich. So, in 2011, we packed our bags and moved to Switzerland, which is where we've been ever since.

Things have worked out well here. Davnah has happily left the academy, finding a comfortable niche in industry. I'm still afloat in academia and have found my way back to the States a couple of times to teach in Bard College's Citizen Science program, a three-week crash course on the scientific method and its place in society. I've recently been awarded an Ambizione Fellowship via the Swiss NSF to study transcriptional regulation for three years and will start up my own small group here in Zurich beginning in January. Another recent highlight was receiving the Swiss Young Bioinformatician of the Year Award for some work Andreas and I published in Science back in February.

Life is good outside of work as well. Davnah and I had our first child in September of 2013, a baby girl named Inge Marie. She's cool. We're also happy to live in Zurich. The city offers all of the amenities of a major metropolitan area (museums, restaurants, a vibrant downtown, etc.), but also has lots of green space and fresh water, providing plenty of opportunities for outdoor sports. And the city is only an hour from the Alps by train. Needless to say, there are plenty of fun things to do outdoors there.

So that’s it. That’s my story. And it all began with a random email in 2003, asking if I might be interested in taking a Ph.D. at UVM.

I'm glad I said yes.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=17840&category=cmplxsys
Wed, 19 Feb 2014

Selecting a Chevy Volt, Tesla Model S, Nissan Leaf — or one of many other new models — shoppers in the United States bought more than 96,000 plug-in electric cars in 2013. That’s a tiny slice of the auto market, but it’s up eighty-four percent from the year before. In Vermont, as of January 2014, there were 679 plug-in vehicles, according to the Vermont Energy Investment Corporation. That’s two hundred percent growth over 2013.

This is good news in terms of oil consumption and air pollution. But, of course, every plug-in has to be, well, plugged in. And this growing fleet will put a lot of new strain on the nation’s aging electrical distribution systems, like transformers and underground cables, especially at times of peak demand — say, six in the evening when people come home from work.

How to manage all these cars seeking a socket at the same time — without crashing the grid or pushing rates to the roof — has some utilities wondering, if not downright worried.

Now a team of UVM scientists has created a novel solution, which they report in the forthcoming March issue of IEEE Transactions on Smart Grid, a journal of the Institute of Electrical and Electronics Engineers.

Put it in a packet

“The key to our approach is to break up the request for power from each car into multiple small chunks — into packets,” says Jeff Frolik, a professor in the College of Engineering and Mathematical Sciences and co-author on the new study.

By using the nation’s growing network of “smart meters” — a new generation of household electric meters that communicate information back-and-forth between a house and the utility — the new approach would let a car charge for, say, five or ten minutes at a time. And then the car would “get back into the line,” Frolik says, and make another request for power. If demand was low, it would continue charging, but if it was high, the car would have to wait.

“The vehicle doesn't care. And, most of the time, as long as people get charged by morning, they won’t care either,” says UVM’s Paul Hines, an expert on power systems and co-author on the study. “By charging cars in this way, it's really easy to let everybody share the capacity that is available on the grid.”

Taking a page out of how radio and internet communications are distributed, the team’s strategy will allow electric utilities to spread out the demand from plug-in cars over the whole day and night. The information from the smart meter prevents the grid from being overloaded. "And the problem of peaks and valleys is becoming more pronounced as we get more intermittent power — wind and solar — in the system,” says Hines. “There is a growing need to smooth out supply and demand.”

At the same time, the UVM team's invention — patent pending — would protect a car owner’s privacy. A charge management device could be located at the level of, for example, a neighborhood substation. It would assess local strain on the grid. If demand wasn’t too high, it would randomly distribute “charge-packets” of power to those households that were putting in requests.

“Our solution is decentralized,” says Pooya Rezaei, a doctoral student working with Hines and the lead author on the new paper. “The utility doesn't know who is charging.”

Instead, the power would be distributed by a computer algorithm called an “automaton” that is the technical heart of the new approach. The automaton is driven by rising and falling probabilities, which means everyone would eventually get a turn — but the utility wouldn’t know, or need to know, a person’s driving patterns or what house was receiving power when.
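The decentralized, probability-driven idea can be sketched in a few lines: each car flips its own weighted coin every time step, with odds that shrink as measured load approaches local capacity, so no central party needs to track who is charging. This is an illustration of the concept only, not the authors' patented algorithm; the linear load model and all numbers are invented:

```python
import random

def request_probability(load, capacity):
    """Chance a car asks for the next charge packet: high when the local
    grid is lightly loaded, zero at capacity (an invented linear model)."""
    return max(0.0, 1.0 - load / capacity)

def charge_step(cars_waiting, load, capacity, rng):
    """One time step: every waiting car flips its own coin; packets are
    granted only while spare capacity remains."""
    p = request_probability(load, capacity)
    granted = 0
    for _ in range(cars_waiting):
        if rng.random() < p and load + granted < capacity:
            granted += 1
    return granted

rng = random.Random(1)
# Light load: most of the 10 waiting cars get a packet this step.
print(charge_step(cars_waiting=10, load=2, capacity=20, rng=rng))
# At capacity: nobody charges until demand falls.
print(charge_step(cars_waiting=10, load=20, capacity=20, rng=rng))
```

Because each decision is a local coin flip, cars that wait keep re-entering the lottery and eventually charge, which mirrors the "everyone gets a turn" behavior described above.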

Urgent needs

But what if you come home from work and need to charge your plug-in right away to get to your kid’s big basketball game? “We assumed that drivers can decide to choose between urgent and non-urgent charging modes,” the scientists write. In the urgent mode the vehicle requests charge regardless of the price of electricity. In this case, the system gives this car the best odds of getting to the front of the line, almost guaranteeing that it will be charged as soon as possible — but at full market rates instead of the discount rate that would be used as an incentive for those opting in to the new approach.

Why put plug-in cars on “packetized” demand instead of all the other electric demands in a house? Because the new generation of car chargers, so-called “Level 2 PEV chargers,” are likely to be the biggest power load in a home. “The load provided by an electric vehicle and the load provided by a house are basically equivalent,” says Frolik. “If someone gets an electric vehicle it's like adding another house to that neighborhood.”

Imagine a neighborhood where everyone buys a plug-in car. Demand doubles, but it’s over the same wires and transformers. Concern about overload in this kind of scenario has led some researchers and utilities to explore systems where the company has centralized control over who can charge when. This so-called “omniscient centralized optimization” can create a perfectly efficient use of the available power — in theory.

But it also means drivers have to either be willing to provide information about their driving habits or set schedules about when they’ll charge their car. This rubs against the grain of a century’s worth of understanding of the car as a tool of autonomy.

Others have proposed elaborate online auction schemes to manage demand. “Some of the other systems are way too complicated,” says Hines, who has extensive experience working with actual power companies. “In a big city, a utility doesn’t want to be managing millions of tiny auctions. Ours is a much simpler system that gets the job done without overloading the grid and gets people what they want the vast majority of the time.”

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=17304&category=cmplxsys
Thu, 21 Nov 2013

Dr. Chris Danforth, Associate Professor in the Department of Mathematics & Statistics, has been appointed to the Flint Professorship in the College of Engineering and Mathematical Sciences at the University of Vermont.

The Flint Professorship was established in 1885 by Edwin Flint, who named UVM as a beneficiary of his estate. The appointment was to go to a distinguished professor in the fields of “Mathematics, Natural or Technic Science.”

“It is a great pleasure to see Chris recognized for his contributions to the college,” said Dean Luis Garcia, who further commented, “He is a great representative of the energy and creativity that our faculty brings every day, and I am thrilled that our students benefit from his talents.”

Danforth's research focuses on the interface between big data and mathematical models. His scholarly contributions are broad. He has published in the fields of Atmospheric Science, Applied Mathematics, Astronomy, Biology, Complex Systems, Computer Science, Ecology, Engineering, Linguistics, Nonlinear Dynamics, Physics, and Psychology. He has applied principles of chaos theory to improve the algorithms used to make weather forecasts, and developed a new instrument for measuring population level happiness in real-time using social media, the hedonometer.

Danforth, along with colleague Peter Dodds, co-directs the Computational Story Lab, a group of applied mathematicians at the undergraduate, master's, PhD, and postdoctoral levels working on large-scale, system-level problems in a wide variety of disciplines. The lab's research has been featured by the New York Times, Science Magazine, NBC's Today Show, and the BBC, among others.

"I feel fortunate to be given the opportunity to engage in the research process with bright, creative students and colleagues, developing new approaches for describing and understanding the physical and social universe," says Danforth. "This is a team effort."

Danforth joined UVM after receiving a PhD in Applied Mathematics and Scientific Computation from the University of Maryland in 2006. He received a BS with honors in Mathematics and Physics from Bates College in 2001, where he was elected to Phi Beta Kappa, Pi Mu Epsilon, and Sigma Xi.

Danforth will hold the Chair until the end of academic year 2017-2018. During his tenure as the Flint Professor, he’ll receive a stipend to support his graduate program. The Chair was most recently held by Professor Emeritus Robert Jenkins, former Dean of the College of Engineering and Mathematical Sciences.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=16695&category=cmplxsys
Thu, 29 Aug 2013

UVM's Hedonometer project has been featured in a BBC News story on big data, "Tomorrow's cities: How big data is changing the world":

You may not be that bothered about the idea of living in a smart city but I bet you'd love to live in one that was happy.

The data to measure the happiness of a city is already all around us, in the tweets we send on an hourly basis to the profiles we share on Facebook.

And increasingly that data is being captured and analysed to gauge the health and happiness of a nation.

Take the Hedonometer project which this year set out to map happiness levels in cities across the US using data from Twitter.

Using 37 million geolocated tweets from more than 180,000 people in the US, the team from the Advanced Computing Centre at the University of Vermont rated words as either happy or sad.

As well as discovering, somewhat depressingly, that people were happiest when they were further away from home, the study threw up some interesting facts about how healthy they were too.

It found words such as "starving" and "heartburn" were written far more frequently in cities with a high percentage of obese citizens.

Such data could be incredibly useful to city governments, for informing them about what policies were needed in any given area.

"Cities looking to understand changes in the behaviour of their citizens, for example to locate ads for public health programmes, can look to social media for real-time information," said Chris Danforth, one of the project leaders.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=16666&category=cmplxsys
Mon, 26 Aug 2013

Consider an experiment where one draws liquid through a straw by applying a pressure difference between its ends. This is equivalent to opening up a faucet, and the flow across the straw will depend on its size as well as the nature of the atoms making up the fluid inside it. Next, decrease the temperature to near absolute zero, so that the interactions between particles in the straw are dictated by the rules of quantum mechanics. At such low temperatures, only liquid helium is known to exist, because its quantum motion prevents any solidification. This liquid is known as a "superfluid" because it can flow across the straw without any viscosity (friction), regardless of its length. How would superfluid helium change if the diameter of the straw were reduced to the size of a single atom? In this nanoscale limit, the atoms are forced to queue up one by one as they move through the straw, and the enhanced interactions may lead to exciting changes in the properties of the fluid.

To investigate this nanoscale regime, an international team of researchers led by Prof. Del Maestro at UVM has performed a numerical experiment requiring thirty years of computer processing time to measure the superfluid properties of liquid helium flowing through and around straws roughly 10 nm in length and 1-3 nm in diameter. Remarkably, in their work published this week in Physical Review B, they have demonstrated that in the core of the straw the atoms can flow without viscosity, but in a way that is distinct from superflow in wider three-dimensional straws. Instead, the core behaves as a special type of cooperative one-dimensional quantum fluid known as a Luttinger liquid.

The next step will be the experimental observation of a Luttinger liquid of helium; recent advances in nanotechnology, combined with insights from Prof. Del Maestro's simulations, may make this a reality in the near future.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=16567&category=cmplxsys
Wed, 07 Aug 2013

It's "evolution on steroids," according to Morgan Freeman, who explores the robotics work of UVM Professor Josh Bongard and Cornell Professor Hod Lipson in this episode of "Through the Wormhole with Morgan Freeman."

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=15720&category=cmplxsys
Tue, 30 Apr 2013

Pick up your smartphone. How are financial markets faring? Check the Dow Jones or the S&P 500. Average temperature in the United States last July 4? Steer your iPad over to the National Weather Service. OK, so how unhappy was the world after the Boston Marathon bombings on Monday, April 15?

Wait a minute. You can’t measure global happiness, can you? Yep, now there’s a website for that: www.hedonometer.org.

A team of scientists from the University of Vermont and The MITRE Corporation has been gaining international attention over the last few years for the creation of what they’re calling a hedonometer. It’s a happiness sensor.

Now findings from this research are updated every 24 hours (soon to be every hour, and, eventually, every minute) — and are available to the public for free.

The day of the Boston Marathon was the saddest day measured by the scientists in nearly 5 years of observations.

Twitter, BBC, Bitly, and beyond

The new website went public on April 30. On its front page, a wavering graph rises and falls like a ticker at the New York Stock Exchange. Except, instead of averaging the value of thousands of companies, the hedonometer compiles and averages the emotional state of tens of millions of people.

“What it’s doing right now is measuring Twitter, checking the happiness of tweets in English,” says Chris Danforth, a UVM mathematician who co-led the creation of the site with fellow mathematician Peter Dodds.

But soon the hedonometer will be drawing in other data streams, like Google Trends, the New York Times, blogs, CNN transcripts, and text captured by the link-shortening service Bitly. And it will be data-mining in twelve languages.

Hedonometer.org is based on the research of Dodds and Danforth and their team in the Computational Story Lab at the University of Vermont’s Complex Systems Center, and the technology of Brian Tivnan, Matt McMahon and their team from MITRE, a not-for-profit organization that operates federal research and development centers and has expertise in big data analytics.

In February, the research team made headlines with the hedonometer. Studying geo-tagged tweets from cell phones, they reported on the happiest and saddest cities in America: Napa, CA, at the top and Beaumont, TX, at the bottom. In future versions of the new website, the researchers plan to make this kind of geographically linked data available, allowing as-it-happens observation of how a happiness signal varies, say, between Seattle and San Diego.

“Reporters, policymakers, academics — anyone — can come to the site,” says Danforth, “and see population-level responses to major events.”

Like the Boston Marathon bombings.

Boston’s impact

On Monday, April 15, reporters and TV crews from all over the world flocked to Boston to report on what they thought would be stories of athletic triumph. Instead, as the world now knows, two crude bombs near the finish line were detonated, killing three and injuring more than 260. Reporters turned to telling this new, tragic story. Many went out and started interviewing people. The stories were compelling; many people they spoke to around Boston seemed scared, angry and sad.

But suppose reporters wanted to find out how the bombings were affecting the mood of the world — in real-time. Was this horror registering in the global psyche, and how deeply?

“Many of the articles written in response to the bombing have quoted individual tweets reflecting qualitative micro-stories,” says Danforth. But capturing a few online comments or reactions on video does not necessarily reflect the overall mood of the English-speaking world any more than talking to ten people in the park equals the US Census.

What if a reporter had also turned to the hedonometer? First, she’d have seen a dramatic downward spike in happiness for that day. Clearly, the Boston Marathon bombings were registering around the world. “Our instrument reflects a kind of quantitative macro-story,” Danforth says, “one that journalists can use to bring big data into an article attempting to characterize the public response to the incident.”

Then — in the same way that a stockbroker might drill down into a market average to get a sense of which companies are moving the markets the most — a reporter could dig deeper into the hedonometer’s data. There, she could see that “explosion,” “victims,” and “kill” are at the top of a list of trending words pushing the hedonometer down to its lowest ever point on April 15.

“They rise to the top because they are words that are negative,” Danforth says, “but primarily because they appear so much more than they usually do in the background in the ambient chatter of English.”
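That intuition, that a word matters both for how far its score sits from the background average and for how much its usage departs from the ambient chatter, can be sketched as a simple product. This is an illustration only, not the team's published word-shift formula; all scores and frequencies below are invented:

```python
BACKGROUND_AVG = 6.0  # invented background happiness of ambient English

def contribution(word_score, freq_today, freq_background):
    """Signed pull of one word on the day's score: (distance of its
    score from the background average) x (change in its frequency)."""
    return (word_score - BACKGROUND_AVG) * (freq_today - freq_background)

# A negative word that spikes in frequency drags the signal down hard:
print(contribution(word_score=2.0, freq_today=0.004, freq_background=0.0005))
# A positive word that also spikes pushes back up, but by less:
print(contribution(word_score=7.5, freq_today=0.003, freq_background=0.001))
```

Ranking words by the magnitude of such contributions is what puts "explosion," "victims," and "kill" at the top of the trending list for April 15.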

Emotional temperature

The hedonometer draws on what scientists call the “psychological valence” of about 10,000 words. Paid volunteers, using Amazon’s Mechanical Turk service, rated these words for their “emotional temperature,” says Dodds, director of UVM’s Complex Systems Center.

The volunteers ranked words they perceived as the happiest near the top of a 1-9 scale; sad words near the bottom. Averaging the volunteers’ responses, each word received a score: “happy” itself ranked 8.30, “hahaha” 7.94, “cherry” 7.04, and the more-neutral “pancake” 6.96. Truly neutral words, “and” and “the,” scored 5.22 and 4.98. At the bottom: “crash” 2.60, the emoticon “:(” 2.36, “war” 1.80, and “jail” 1.76.

Using these scores, the team collects some fifty million tweets from around the world each day—“then we basically toss all the words into a huge bucket,” says Dodds—and calculates the bucket’s average happiness score. As the site develops, the scientists anticipate that it will be gathering billions of words and sentences daily.
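The bucket-averaging step can be sketched in a few lines. The scores echo the article's published examples; the sample text and the handling of unscored words are assumptions:

```python
# Sketch of the bucket-averaging step: toss every word into one bucket and
# average the happiness scores over all scored words.
# Scores are the article's published examples; the sample text is invented.

scores = {"happy": 8.30, "hahaha": 7.94, "cherry": 7.04, "pancake": 6.96,
          "and": 5.22, "the": 4.98, "crash": 2.60, "war": 1.80, "jail": 1.76}

def bucket_happiness(words, scores):
    """Average score over all scored words; words without a score are skipped."""
    rated = [scores[w] for w in words if w in scores]
    return sum(rated) / len(rated) if rated else None

words = "happy hahaha the pancake and war".split()
print(round(bucket_happiness(words, scores), 2))  # → 5.87
```

Because every occurrence of a word contributes once, a single negative word like "war" barely dents an otherwise upbeat bucket — the aggregate effect Dodds describes.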

"Our method is only reasonable for large-scale texts, like what's available on the Web," Dodds says. "Any word or expression can be used in different ways. There's too much variability in individual expression," to use this approach to understand small groups or small samples. For example, “sick” may mean something radically different to a 14-year-old skateboarder than it does to his pediatrician.

But that's the beauty of big data. Each word is like an atom in the air when you’re trying to figure out the temperature. It’s the aggregate effect that registers, and no individual tweet or word makes much difference. In the Boston Marathon bombings example, positively scored words like “prayers” and “families” also spiked, but, obviously, not for positive reasons.

“If we remove ‘prayers,’ ‘love,’ and ‘families,’” says Chris Danforth, “it’s not going to change the day’s overall deviation from the background, because of all the other words.”

Changing which words are used to assess the overall emotional picture, “is like changing the filter on a lens you’re using,” explains Peter Dodds. “You can take out all the color, or you can turn up the contrast, but you can still see the picture.”
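Dodds's "lens" metaphor amounts to recomputing the average with chosen words excluded. A minimal sketch, with invented counts and scores loosely echoing the Boston Marathon example, shows why removing a few positive words barely moves a large bucket's average:

```python
# Sketch of the "lens" idea: recompute the day's average with chosen words
# excluded and watch how little it moves. Counts and scores are invented.

scores = {"prayers": 7.0, "love": 8.4, "families": 7.9,
          "explosion": 2.0, "victims": 2.3, "kill": 1.6, "the": 4.98}
day = {"prayers": 200, "love": 300, "families": 150,
       "explosion": 900, "victims": 700, "kill": 500, "the": 50000}

def filtered_average(word_counts, scores, exclude=()):
    """Frequency-weighted average happiness, optionally excluding some words."""
    total = weight = 0.0
    for word, count in word_counts.items():
        if word in scores and word not in exclude:
            total += scores[word] * count
            weight += count
    return total / weight

print(round(filtered_average(day, scores), 3))  # ≈ 4.897
print(round(filtered_average(day, scores, exclude={"prayers", "love", "families"}), 3))  # ≈ 4.86
```

Dropping "prayers," "love," and "families" shifts the hypothetical day's score by only a few hundredths, because the other tens of thousands of words dominate the weighted average.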

The verdict of consciousness

In 1881, a little-known book, Mathematical Psychics, published by Francis Edgeworth, asked the reader to “imagine an ideally perfect instrument, a psychophysical machine, continually registering the height of pleasure experienced by an individual, exactly according to the verdict of consciousness.”

In other words, a hedonometer. While Edgeworth’s was a thought experiment, Dodds and Danforth’s hedonometer is a real device. Of course, it doesn’t directly measure “the height of pleasure.” While the team is opening conversations with experts in brain scanning about how fMRI images might corroborate their remote-sensing approach, "we can’t — and really don’t want to — look inside people's heads," says Dodds.

Nor is their hedonometer “ideally perfect.” They’re working now to expand beyond the “atoms” of single words to explore the “molecules” of two-word expressions. But the hedonometer does work.

“The key piece is not whether we’re correctly measuring atoms and molecules,” says Brian Tivnan, a researcher from MITRE. “It’s the relative context that is so important, which is why the sudden drop from the Boston Marathon bombings jumps out at you. The hedonometer shows the pulse of a society.”

Of course, happiness isn’t simple. Plato, Buddha, Freud and Tina Turner all pondered its meaning. Many Americans rank happiness as what they want most in life, but what is it, really?

“We’re not trying to tell you that contentment is better than happiness — we’re not trying to define the word,” says Danforth. The Nasdaq Index doesn’t capture the whole stock market. Gross Domestic Product doesn’t define the meaning of the economy. An EKG doesn’t tell a doctor everything about your heart. But all these aggregate measures, of something remote, are widely studied. The hedonometer may prove to be the same.

“We’re just saying we’re measuring something important and interesting,” says Chris Danforth. “And, now, sharing it with the world.”

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=14604&category=cmplxsys
Thu, 18 Oct 2012 00:00:00 -0400

Positive moods appear to spread through social networks -- up to three links away from the person you interact with -- according to a new study published in the Journal of Computational Science by Christopher Danforth and Peter Dodds, professors in the Department of Mathematics and Statistics, and their research team.

The finding that happiness might be contagious has a caveat, however: the phenomenon is not about simply following someone on Twitter, but two people connecting by directly replying to one another.

Researchers used a "hedonometer" for scoring the happiness level of words used in a tweet, measured on a scale of one to nine, with "love" rating 8.42 and "die" coming in at 1.74. "The study," according to the UK Huffington Post, "computed the happiness of each user by applying this 'hedonometer' to all tweets authored by the user." Using this measure, contentment levels are higher the closer a user is to those who are very happy and decline with the degree of separation.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=14569&category=cmplxsys
Tue, 16 Oct 2012 00:00:00 -0400

Paul Hines, assistant professor in the School of Engineering, received the 2012 Milt Silveira Award from Bernard "Chip" Cole, Interim Dean of the UVM College of Engineering and Mathematical Sciences (CEMS), during the October 16, 2012 CEMS faculty meeting.

This award, established in 2008 by Dr. Milton Silveira, recognizes the junior faculty member in CEMS who "best embodies a 'pioneering spirit,' drive and potential to succeed at the highest levels of his or her profession." Faculty previously recognized include: Frederic Sansoz in 2008, Josh Bongard in 2009, Jane Hill in 2010, and John Voight in 2011.

Hines has a BS from Seattle Pacific University, an MS from the University of Washington and a Ph.D. from Carnegie Mellon University in electrical engineering. He serves as a research scientist within the Computational Science Division at the U.S. Department of Energy (DOE) National Energy Technology Laboratory (NETL), and is leading the NETL component of a research project funded by the DOE Office of Electricity Delivery and Energy Reliability (OE) that aims to use distributed autonomous agents to improve electricity distribution systems.

Hines' scholarly contributions are in the areas of electrical energy systems, decentralized (agent-based) control systems, complex networks and vulnerability, optimization, and energy policy. His goal is to reduce the frequency of 2003-sized blackouts to less than one in 25 years; current statistics indicate that a blackout of that scale occurs about every 25 years.

His research has been featured in publications such as Scientific American, The International Journal of Critical Infrastructures, and IEEE Transactions on Power Systems.

For more information about his research, please visit Paul Hines' website.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=14100&category=cmplxsys
Tue, 14 Aug 2012 00:00:00 -0400

In 1714, the British government held a contest. It offered a large cash prize to anyone who could solve the vexing “longitude problem” — how to determine a ship’s east/west position on the open ocean — since none of its naval experts had been able to do so.

Lots of people gave it a try. One of them, a self-educated carpenter named John Harrison, invented the marine chronometer — a rugged and highly precise clock — which did the trick. For the first time, sailors could accurately determine their location at sea.

A centuries-old problem was solved. And, arguably, crowdsourcing was born.

Crowdsourcing is basically what it sounds like: posing a question or asking for help from a large group of people. Coined as a term in 2006, crowdsourcing has taken off in the internet era. Think of Wikipedia, with its thousands of unpaid contributors, now vastly larger than the Encyclopedia Britannica.

Crowdsourcing has allowed many problems to be solved that would be impossible for experts alone. Astronomers rely on an army of volunteers to scan for new galaxies. At climateprediction.net, citizens have linked their home computers to yield more than a hundred million hours of climate modeling; it’s the world’s largest forecasting experiment.

But what if experts didn’t simply ask the crowd to donate time or answer questions? What if the crowd was asked to decide what questions to ask in the first place?

Could the crowd itself be the expert?

That’s what a team at the University of Vermont decided to explore — and the answer seems to be yes.

Prediction from the people

Josh Bongard and Paul Hines, professors in UVM’s College of Engineering and Mathematical Sciences, and their students set out to discover whether volunteers who visited two different websites could pose, refine, and answer questions of one another, and whether those questions could effectively predict the volunteers’ body weight and home electricity use.

The experiment, the first of its kind, was a success: the self-directed questions and answers by visitors to the websites led to computer models that effectively predict users’ monthly electricity consumption and body mass index.

“It’s proof of concept that a crowd actually can come up with good questions that lead to good hypotheses,” says Bongard, an expert on machine science.

In other words, the UVM project shows that the wisdom of the crowd can be harnessed to determine which variables to study — and that, at the same time, the crowd can provide a pool of data by responding to the questions it asks of itself.

“The result is a crowdsourced predictive model,” the Vermont scientists write.
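One simple way to realize a "crowdsourced predictive model" is to rank the crowd's questions by how strongly their answers correlate with the outcome. The questions, answers, and BMI values below are all invented, and Pearson correlation is an assumed stand-in for the study's actual modeling:

```python
# Sketch: rank crowd-posed questions by predictive power, using the
# Pearson correlation between answers and the outcome. All data invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each crowd-posed question maps to participants' answers; bmi is the outcome.
answers = {
    "self_rated_overweight": [1, 0, 1, 1, 0, 0, 1, 0],
    "meals_per_day":         [3, 2, 4, 3, 3, 2, 5, 2],
    "likes_zombies":         [0, 1, 1, 0, 1, 0, 0, 1],
}
bmi = [31, 22, 29, 33, 24, 21, 35, 23]

ranked = sorted(answers, key=lambda q: abs(pearson(answers[q], bmi)), reverse=True)
print(ranked[0])  # the question whose answers best track the outcome
```

The point is the loop, not the statistic: as volunteers add questions and answers, the ranking updates, and the most predictive questions rise to the top without an expert choosing the variables.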

Unexpected angles

Some of the questions the volunteers posed were obvious. For example, on the website dedicated to exploring body weight, visitors came up with the question: “Do you think of yourself as overweight?” And, no surprise, that proved to be the question with the most power to predict people’s body weight.

But some questions posed by the volunteers were less obvious. “We had some eye-openers,” Bongard says. “How often do you masturbate a month?” might not be the first question asked by weight-loss experts, but it proved to be the second-most-predictive question of the volunteers’ self-reported weights — more predictive than “how often do you eat during a day?”

“Sometimes the general public has intuition about stuff that experts miss — there’s a long literature on this,” Hines says.

“It’s those people who are very underweight or very overweight who might have an explanation for why they’re at these extremes — and some of those explanations might not be a simple combination of diet and exercise,” says Bongard. “There might be other things that experts missed.”

Cause and correlation

The researchers are quick to note that the variables revealed by the evolving Q&A on the experimental websites are simply correlated to outcomes — body weight and electricity use — not necessarily the cause.

"We’re not arguing that this study is actually predictive of the causes,” says Hines, “but improvements to this method may lead in that direction.”

Nor do the scientists claim to be experts on body weight or to be providing recommendations on health or diet (though Hines is an expert on electricity, and the EnergyMinder site he and his students developed for this project has a larger aim: to help citizens understand and reduce their household energy use).

“We’re simply investigating the question: could you involve participants in the hypothesis-generation part of the scientific process?” Bongard says. “Our paper is a demonstration of this methodology.”

“Going forward, this approach may allow us to involve the public in deciding what it is that is interesting to study,” says Hines. “It’s potentially a new way to do science.”

And there are many reasons why this new approach might be helpful. In addition to forces that experts might simply not know about — “can we elicit unexpected predictors that an expert would not have come up with sitting in his office?” Hines asks — experts often have deeply held biases.

Faster discoveries

But the UVM team primarily sees their new approach as potentially helping to accelerate the process of scientific discovery. The need for expert involvement — in shaping, say, what questions to ask on a survey or what variable to change to optimize an engineering design — “can become a bottleneck to new insights,” the scientists write.

“We’re looking for an experimental platform where, instead of waiting to read a journal article every year about what’s been learned about obesity,” Bongard says, “a research site could be changing and updating new findings constantly as people add their questions and insights.”

The goal: “exponential rises,” the UVM scientists write, in the discovery of what causes behaviors and patterns — probably driven by the people who care about them the most. For example, “it might be smokers or people suffering from various diseases,” says Bongard. The team thinks this new approach to science could “mirror the exponential growth found in other online collaborative communities,” they write.

“We’re all problem-solving animals,” says Bongard, “so can we exploit that? Instead of just exploiting the cycles of your computer or your ability to say ‘yes’ or ‘no’ on a survey — can we exploit your creative brain?”

The University of Vermont has been named one of only 18 colleges and universities in the country to receive a highly coveted Integrative Graduate Education and Research Training, or IGERT, grant from the National Science Foundation, the first awarded in the state of Vermont. The UVM proposal was chosen from among 154 IGERT proposals submitted to the NSF in 2012.

UVM will receive approximately $3 million over five years to create an innovative, multi-disciplinary graduate program supporting 22 doctoral students who will be trained to analyze and develop smart grid systems. UVM will also hire two faculty members as part of the grant.

A smart grid is an intelligent, digitally enabled electric grid that gathers, distributes and acts on information about the behavior of consumers and suppliers in order to improve the efficiency, reliability, cost and sustainability of electricity services. A smart grid can also better serve new technologies, such as plug-in hybrid electric vehicles, and more effectively assimilate renewable energy, such as solar and wind power, that is not produced at a uniform, predictable rate.

UVM's new graduate program will create a generation of multidisciplinary scientists who are capable of analyzing the entire smart grid system – integrating technology, human behavior and public policy – to understand the complex dynamics of the next generation of electric power systems. The ultimate goal of the program is to develop the scientific/engineering research workforce necessary to allow intelligent deployment of smart grids that provide efficient power delivery in keeping with society's needs.

The growth of smart grids has been hampered in part by a workforce unable to fully exploit the integrated nature of the intelligent digital technology, a deficiency the new program will directly address, according to Domenico Grasso, vice president for Research and dean of the Graduate College at UVM.

“In the past we might have viewed technical issues like energy loading and consumer concerns like variable pricing as separate,” Grasso said. “Now we know all these factors touch one another and have to be viewed holistically to devise solutions that can move us forward.”

TRI critical to success

Though UVM has tried for IGERT funding in the past, Grasso believes the renewed institutional commitment to investments in innovative and strategic research played a significant role in the success of this grant. In particular, UVM's Transdisciplinary Research Initiative, or TRI (above all its Complex Systems and Neuroscience, Behavior and Health spires), was a critical component of the winning bid, according to Grasso. “Both Complex Systems and NBH were instrumental in our obtaining the grant and will be at the heart of the new curriculum,” he said. The first students will enroll beginning in the fall of 2012.

In addition to the spires, two other factors were important in UVM's successful application for the IGERT, said Jeffrey Marshall, a professor in the School of Engineering in the College of Engineering and Mathematical Sciences, who spearheaded the grant: the Vermont Advanced Computer Center at UVM, which will provide the computational power the initiative will need, and UVM's ongoing partnership with Sandia National Laboratories, one of whose areas of focus is the development and deployment of smart grid technology.

“Multi-disciplinary research and education of the sort sponsored by the IGERT program are an ideal fit with the smaller, more connected structure of UVM, where faculty from all different parts of the university generally know each other and work closely together,” said Marshall.

Marshall also credited Vermont senator Bernie Sanders for helping position the university to win the grant. Sanders has played a critical role in a variety of smart grid initiatives, including facilitating and helping fund the Sandia partnership with the university and the state, and securing funds to establish the Center for Energy Transformation and Innovation housed at UVM.

Advancing Vermont's leadership in smart grid

The grant will significantly advance the state of Vermont's leadership position in smart grid technology and UVM's role in helping propel that advance, Marshall said. In 2009 the state received a $69 million federal grant from the U.S. Department of Energy, matched by the state's electric utilities, for a total of $138 million, to install the country's first statewide smart-metering system.

The IGERT grant is complementary to the DOE grant, producing a professionally trained workforce able to make strategic use of the data gathered from the smart meters. Marshall expects that a significant number of graduates will stay in Vermont. The program is also open to working professionals in the state, who will be able to take courses. Professionals will also teach in the program.

Other UVM faculty who are co-investigators on the IGERT grant with Marshall include Margaret Eppstein in Computer Science, Stephen Higgins in Psychiatry, Paul Hines in Engineering, and Chris Koliba in Community Development and Applied Economics. Diann Gaalema in Psychiatry, Cynthia Forehand in the Graduate College and Grasso were also important contributors in the development of the project. Faculty participating on the project come from many university departments in addition to those above, including Computer Science, Mathematics, Economics, and Psychology as well as researchers and staff from Sandia, the Vermont Law School, Champlain College, and the ECHO Center.

The grant will also help support UVM's strategic initiative to increase diversity in its graduate programs.

Launched in 1997, the IGERT is the National Science Foundation's flagship interdisciplinary training program, educating U.S. Ph.D. scientists and engineers by building on the foundations of their disciplinary knowledge with interdisciplinary training. The IGERT program spans science, technology, engineering, mathematics and social sciences.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=13529&category=cmplxsys
Thu, 05 Apr 2012 00:00:00 -0400

Four leading researchers from Mechanical Engineering, Environmental Engineering and the new Bioengineering programs will be cross-pollinating in a new high-tech laboratory on the second floor of Votey Hall. Drs. Rachael Oldinski, Mary Dunlop, Britt Holmén, and Jane Hill have access to nearly 3,000 square feet of newly renovated space for collaborative research that connects to the existing Hill Lab.

“Our work to develop and test technologies to rapidly diagnose infectious disease is enhanced by this joint facility,” says Dr. Hill, assistant professor in the School of Engineering. “It is also a great environment to do some research cross-fertilization.”

"This renovation greatly improves our bio- and chemical-related engineering research facilities and capabilities for growth. More importantly, by linking faculty with different expertise, the space will foster student and faculty interaction across subdisciplines within engineering," says Dr. Holmén, associate professor in the School of Engineering whose environmental engineering research focuses on characterizing processes affecting the transport and fate of organic chemicals and airborne particles from agriculture and transportation sources. Dr. Holmén’s expertise ranges from the study of nanoparticles in vehicle exhaust to herbicide gas/particle partitioning at the farm scale.

“My research group is pleased to have this space to support our interdisciplinary work on biological feedback control systems,” says Dr. Dunlop, assistant professor in the School of Engineering. “The opportunity for greater interaction with colleagues and students in the other research areas is an important step towards supporting transdisciplinary research within CEMS.”

“The interdisciplinary research of my students will be enhanced by the close proximity of distinguished professors and their respective students,” says Dr. Oldinski, an assistant professor in the School of Engineering and an assistant professor in the Department of Orthopaedics and Rehabilitation in the College of Medicine.

The Dunlop Lab studies how microorganisms use feedback to respond to changes in their environment. The focus is on engineering novel control systems in cells and studying how robust, predictable behavior is achieved with naturally occurring feedback loops. Dr. Dunlop’s interest is in processes that are dynamic or stochastic and the researchers use fluorescent proteins and time-lapse microscopy to image single cells over the course of many hours. Solutions using engineered control systems are being applied to problems in bioenergy and medical research.

The Hill Lab is focused on two primary research areas. The first area centers on the development of technologies to rapidly determine the identity of pathogenic bacteria. Mass spectrometry is the primary tool used to rapidly “fingerprint” bacteria in contexts ranging from food to the human lung. The second area focuses on studying how organic phosphorus compounds are cycled in the environment, with an emphasis on directing the release of phosphate near plant roots rather than in places where it can leach into nearby waterbodies and cause algal blooms.

Dr. Holmén leads the Transportation-Air Quality Laboratory (TAQ Lab) which aims to understand and model factors affecting vehicle exhaust emissions as they relate to effects on human and environmental health. With a focus on unregulated pollutants such as air toxics and nanoparticles, both primary exhaust composition and secondary transformation processes are quantified at high temporal resolution under real-world vehicle operating conditions. Studies on hybrid and conventional gasoline and diesel vehicles operating on diverse fuels, including biodiesel blends from multiple feedstocks, position the TAQ Lab with a unique dataset for modeling tailpipe emissions of the future on-road vehicle fleet.

Dr. Oldinski is the director of the Engineered Biomaterials Research Laboratory in the School of Engineering. Dr. Oldinski’s research encompasses the fundamental understanding and development of polymeric materials for biological applications with a specific emphasis on tissue regeneration and drug delivery. The research in her laboratory involves: (i) developing novel polymeric materials and precursors; (ii) utilizing processing techniques to fabricate scaffolds with the desired micro- and macroscopic structures both spatially and temporally; (iii) investigating the interaction of cells with these materials while developing materials-based techniques to control cell differentiation; and (iv) using polymers to control the delivery of therapeutic molecules.

The renovation and creation of this new laboratory space is the result of a commitment from the UVM College of Engineering and Mathematical Sciences to empower the research faculty and to enhance the undergraduate teaching experience for students.

How brains do that is an area of rapidly expanding research — and Sejnowski is one of the field’s most celebrated investigators.

Sejnowski will speak on one of these brain tricks — a remarkable way that neurons efficiently represent visual information — on Friday, Feb. 24 at 2 p.m. at the Davis Auditorium at Fletcher Allen Health Care.

His lecture, “Suspicious Coincidences in the Brain,” is free and open to the public.

Brain spikes

Sejnowski is the Francis Crick Professor at the Salk Institute for Biological Studies where he directs their Computational Neurobiology Laboratory. He is also an investigator with the Howard Hughes Medical Institute and holds academic appointments at the University of California, San Diego.

In his lecture, Sejnowski will focus on a strange brain phenomenon called a “spike coincidence,” in which a group of brain cells — neurons — fire at the same time.

“I will show how rare spike coincidences can be used efficiently to represent important visual events,” Sejnowski says.

And these coincidences are part of a larger suite of signals, both biochemical and electrical — some “analog,” some “digital,” he says — that the brain uses to efficiently handle visual inputs.

Going further, Sejnowski will describe how this brain architecture can be reproduced with computer technologies to “simplify the early stages of visual processing.”

This work is part of the long-range goal of Sejnowski's laboratory to understand the computational powers of brains and to find the principles that link a brain to behavior.

More information

Terry Sejnowski has published more than 300 scientific papers and 12 books, including The Computational Brain, with Patricia Churchland. He was elected an IEEE Fellow in 2000, an AAAS Fellow in 2006, to the Institute of Medicine in 2008, the National Academy of Sciences in 2010 and the National Academy of Engineering in 2011.

"The afternoon of May 6, 2010 was among the strangest in economic history. Starting at 2:42 p.m. EDT, the Dow Jones stock index fell 600 points in just 6 minutes. Its nadir represented the deepest single-day decline in that market’s 114-year history. By 3:07 p.m., the index had rebounded. The “flash crash,” as it came to be known, was big, unexpected and scary — and a new study says flash events actually happen routinely, at speeds so fast they don’t register on regular market records, with potentially troubling consequences for market stability."

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=13040&category=cmplxsys
Thu, 12 Jan 2012 00:00:00 -0500

“If it bleeds, it leads,” goes the cynical saying among television and newspaper editors. In other words, most news is bad news, and the worst news gets the big story on the front page.

So one might expect the New York Times to contain, on average, more negative and unhappy types of words — like “war,” “funeral,” “cancer,” “murder” — than positive, happy ones — like “love,” “peace” and “hero.”

Or take Twitter. A popular image of what people tweet about may contain a lot of complaints about bad days, worse coffee, busted relationships and lousy sitcoms. Again, it might be reasonable to guess that a giant bag containing all the words from the world’s tweets — on average — would be more negative and unhappy than positive and happy.

But new research shows just the opposite.

“English, it turns out, is strongly biased toward being positive,” said Peter Dodds, an applied mathematician at the University of Vermont.

Two happiness studies

An earlier study by the same team attracted wide media attention by showing that average global happiness, based on Twitter data, had been dropping for the past two years.

Combined, the two studies show that short-term average happiness has dropped — against the backdrop of the long-term fundamental positivity of the English language.

Universal positivity

In the new study, Dodds and his colleagues gathered billions of words from four sources: twenty years of the New York Times, the Google Books Project (with millions of titles going back to 1520), Twitter and a half-century of music lyrics.

“The big surprise is that in each of these four sources it’s the same,” says Dodds. “We looked at the top 5,000 words in each, in terms of frequency, and in all of those words you see a preponderance of happier words.”

Or, as they write in their study, “a positivity bias is universal,” both for very common words and less common ones and across sources as diverse as tweets, lyrics and British literature.

Homo narrativus

Why is this? “It’s not to say that everything is fine and happy,” Dodds says. “It’s just that language is social.”

In contrast to traditional economic theory, which suggests people are inherently and rationally selfish, a wave of new social science and neuroscience data shows something quite different: that we are a pro-social storytelling species. As language emerged and evolved over the last million years, positive words, it seems, have been more widely and deeply engrained into our communications than negative ones.

“If you want to remain in a social contract with other people, you can’t be a…,” well, Dodds here used a word that is rather too negative to be fit to print — which makes the point.

Both studies drew on a service from Amazon called Mechanical Turk. On this website, the UVM researchers paid a group of volunteers to rate, from one to nine, their sense of the “happiness” — the emotional temperature — of the 10,222 most common words gathered from the four sources. Averaging their scores, the volunteers rated, for example, “laughter” at 8.50, “food” 7.44, “truck” 5.48, “greed” 3.06 and “terrorist” 1.30.

The Vermont team — including Dodds, Isabel Kloumann, Chris Danforth, Kameron Harris, and Catherine Bliss — then took these scores and applied them to the huge pools of words they collected. Unlike some other studies — with smaller samples or that elicited strong emotional words from volunteers — the new UVM study, based solely on frequency of use, found that “positive words strongly outnumber negative words overall.”

Confirming Pollyanna

This seems to lend support to the so-called Pollyanna Principle, put forth in 1969, that argues for a universal human tendency to use positive words more often, easily and in more ways than negative words.

Of course, most people would rank some words, like “the,” with the same score: a neutral 5. Other words, like “pregnancy,” have a wide spread, with some people ranking it high and others low. At the top of this list of words that elicited strongly divergent feelings: “profanities, alcohol and tobacco, religion, both capitalism and socialism, sex, marriage, fast foods, climate, and cultural phenomena such as the Beatles, the iPhone, and zombies,” the researchers write.

“A lot of these words — the neutral words or ones that have big standard deviations — get washed out when we use them as a measure,” Dodds notes. Instead, the trends he and his team have observed are driven by the bulk of English words tending to be happy.

If we think of words as atoms and sentences as molecules that combine to form a whole text, “we’re looking at atoms,” says Dodds. “A lot of news is bad,” he says, and short-term happiness may rise and fall like the cycles of the economy, “but the atoms of the story — of language — are, overall, on the positive side.”

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=13004&category=cmplxsys
Tue, 03 Jan 2012 00:00:00 -0500

Research by a team of UVM scientists led by applied mathematician Peter Dodds, analyzing more than 46 billion words chosen by Twitter users around the globe, shows that societal happiness is apparently trending downward.

Dodds was interviewed on NPR's "Marketplace," the social media website Mashable created a video about the research, and stories appeared in a number of other outlets.

The gross domestic product of the United States — that oft-cited measure of economic health — has been ticking upward for the last two years.

But what would you see if you could see a graph of gross domestic happiness?

A team of scientists from the University of Vermont have made such a graph — and the trend is down.

Reporting in the Dec. 7 issue of the journal PLoS ONE, the team writes, “After a gradual upward trend that ran from January to April, 2009, the overall time series has shown a gradual downward trend, accelerating somewhat over the first half of 2011.”

“It appears that happiness is going down,” said Peter Dodds, an applied mathematician at UVM and the lead author on the new study.

Twitteronomics

How does he know this? From Twitter. For three years, he and his colleagues gathered more than 46 billion words written in Twitter tweets by 63 million Twitter users around the globe.

In these billions of words is not a view of any individual’s state of mind. Instead, just as billions of moving atoms add up to the overall temperature of a room, billions of words used to express what people are feeling resolve into a view of the relative mood of large groups.

These billions of words contain everything from “the” to “pancakes” to “suicide.” To get a sense of the emotional gist of various words, the researchers used a service from Amazon called Mechanical Turk. On this website, they paid a group of volunteers to rate, from one to nine, their sense of the “happiness” — the emotional temperature — of the ten thousand most common words in English. Averaging their scores, the volunteers rated, for example, “laughter” at 8.50, “food” 7.44, “truck” 5.48, “greed” 3.06 and “terrorist” 1.30.

The Vermont team then took these scores and applied them to the huge pool of words they collected from Twitter. Because these tweets each have a date and time, and, sometimes, other demographic information — like location — they show changing patterns of word use that provide insights into the way groups of people are feeling.
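In code, the scoring step described above amounts to averaging word-level happiness ratings over a text. This minimal sketch uses only the five example ratings quoted earlier in the article; the real instrument averages over the roughly ten thousand crowd-rated words.

```python
# Example ratings quoted in the article (1-9 scale); the full word list
# covers ~10,000 of the most common English words.
happiness = {
    "laughter": 8.50, "food": 7.44, "truck": 5.48,
    "greed": 3.06, "terrorist": 1.30,
}

def average_happiness(text):
    """Average the happiness scores of the rated words appearing in a text."""
    rated = [w for w in text.lower().split() if w in happiness]
    if not rated:
        return None  # no rated words: no signal
    return sum(happiness[w] for w in rated) / len(rated)

print(round(average_happiness("laughter and food in the truck"), 2))  # → 7.14
```

Unrated words ("and," "in," "the" here) simply drop out, so the measure reflects only the words with known emotional temperature.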

The new approach lets the researchers measure happiness at different scales of time and geography, from global patterns over a workweek to a single day like Christmas.

And stretched out over the last three years, these patterns of word use show a drop in average happiness.

Or at least a drop in happiness for those who use Twitter. “It does skew toward younger people and people with smartphones and so on — but Twitter is nearly universal now,” Dodds said. “Every demographic is represented.”

“Twitter is a signal,” Dodds said, “just like looking at the words in the New York Times or Google Books.” (Word sources that the team is also exploring in related studies). “They’re all a sample,” he says. “And indeed everything we say or write is a distortion of what goes on inside our head.”

But, just as GDP is a distortion of the hugely complex interactions that make up the economy and yet is still useful, the new approach by the UVM team provides a powerful sense of the rising and falling pulse of human feelings.

Getting serious about happiness

“Individual happiness is a fundamental societal metric,” the researchers write in their study. Indeed the ultimate goal of much public policy is to improve and protect happiness. But measuring happiness has been exceedingly difficult by traditional means, like self-reporting in social science surveys. Some of the problems with this approach are that people often don’t tell the truth in surveys and the sample sizes are small.

And so efforts to measure happiness have been “overshadowed by more readily quantifiable economic indicators such as gross domestic product,” the study notes.

The new approach lets the UVM researchers almost instantaneously look over the “collective shoulder of society,” Dodds says. “We get a sense of the aggregate expressions of millions of people,” says Dodds’ colleague Chris Danforth, a mathematician and a co-author of the study, while they are communicating in a “more natural way,” he says. And this opens the possibility of taking regular measures of happiness in near real-time — measurements that could have applications in public policy, marketing and other fields.

The study describes hundreds of insights from the Twitter data, like a clear weekly happiness signal “with the peak generally occurring over the weekend, and the nadir on Monday and Tuesday,” they write. And over each day happiness seems to drop from morning to night. “It’s part of the general unraveling of the mind that happens over the course of the day,” said Dodds.

In the long-term graph that shows an overall drop in happiness, various ups and downs are clearly visible. While the strongest up-trending days are annual holidays like Christmas and Valentine’s Day, “all the most negative days are shocks from outside people’s routines,” Dodds says. Clear drops can be seen with the spread of swine flu, announcement of the U.S. economic bailout, the tsunami in Japan and even the death of actor Patrick Swayze.

On the dashboard

Right now the sensor is only available to the researchers, but Dodds, Danforth and their colleagues have in mind a tool that could go “on the dashboard” of policy makers, Dodds says. Or, perhaps, on a real estate website for people exploring communities into which they might move, or, simply, “if someone is flying in a plane they could look at this dashboard and see how the city below them is feeling,” he says.

Of course feelings change quickly and the nature of happiness itself is one of the most complex, profound issues of human experience.

“There is an important psychological distinction between an individual's current, experiential happiness and their longer term, reflective evaluation of their life,” the scientists write, “and in using Twitter, our approach is tuned to the former kind.”

And looking ahead, the Vermont scientists hope that by following the written expressions of individual Twitter users over long time periods, they’ll be able to infer details of happiness dynamics “such as individual stability, social correlation and contagion and connections to well-being and health.”

Dodds and his colleagues are no strangers to the debates over the role of happiness that can be traced back through Brave New World to Jeremy Bentham, Thomas Aquinas, and Aristotle. “By measuring happiness, we're not saying that maximizing happiness is the goal of society,” Dodds says. “It might well be that we need to have some persistent degree of grumpiness for cultures to flourish.”

Nevertheless, this study provides a new view on a compelling question: why does happiness seem to be declining?

UVM professors Stuart Kauffman, Christopher Koliba and Brian Beckage will discuss the topic Kauffman took on in a post for NPR's 13.7 Cosmos and Culture blog. He writes, "The mixture of quantum and classical is neither deterministic, after Newton and Einstein, nor quantum random, after Schrödinger and von Neumann. The world is new. But what does this mean for the social and natural sciences?"

Information: andrea.elledge@uvm.edu

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=12702&category=cmplxsys
Wed, 02 Nov 2011 00:00:00 -0400

Imagine, for a second, that the 120 people gathered in Davis Auditorium at the TEDxUVM lectures last Friday, plus the 544 people watching the live-stream video, were set to work calculating with paper and pencil what the world’s computers can calculate in one second.

How long do you suppose it would take them to do the same calculations? A thousand years? A million years? Not even close. If they had started calculating at the start of the Big Bang, with nary a snack or bathroom break, they wouldn’t be halfway done, if recent estimates in Science are correct.

And the corollary to this vast computing power is the vast amount of information now being produced and stored by our computers, satellites, cell phones and social media. A recent IBM estimate notes that at least ninety percent of all the information created by humanity has been produced in the last two years.

This is “big data,” according to the organizers of “Big Data, Big Stories,” the TEDx event hosted by UVM’s Complex Systems Center. And the effects, insights, problems and potential of all this information — the “big stories” — were the topic of the 11 ten-minute micro-lectures at the packed event, held at Fletcher Allen Health Care near campus.

Pattern recognition

In these “really data-rich worlds,” said UVM mathematician Peter Dodds — who organized the TEDx event along with his fellow mathematician Chris Danforth, roboticist Josh Bongard, and staffers Andi Elledge and Keri Toksu — something fundamental can change about how scientists do what they do.

When the data set gets big enough and the computers fast enough, Dodds said, instead of starting with a question, “there is a new way to approach things: which is simply that you have to go look for patterns, look for the shades, in these massive data sets.”

For example, in billions of Twitter tweets and blog posts, Dodds and his colleagues have found patterns of language that point to the rise and fall of the world’s mood. This discovery has allowed the scientists to create a near-real-time “hedonometer” — a happiness-measuring tool that can take the emotional pulse of places and groups of people around the world. Wednesday, not Monday, is the nadir of the workweek, it seems. And the overall global “happiness signal” dropped, said Isabel Kloumann ’11 (one of Dodds's former students) during her TEDx talk, “corresponding to the time of the London riots breaking out.”

Call a robot

Who the scientist is may be changing too, under the storm of big data. In his talk, Mike Schmidt, a Cornell researcher, threw a clean curve of data points up on the screen and asked the audience to describe the equation that produced that pattern. The mathematically gifted in the group got it easily. “X squared,” someone shouted.

But as subsequent slides of data points got more complex, the audience was stumped. Into this kind of mess — or collections of data millions of times more messy — Schmidt would have researchers deploy a “new kind of artificial intelligence,” he says, “a robotic scientist” to fish out patterns from the seeming chaos.

As one example of this kind of silicon assistant, Schmidt has created a free software tool, Eureqa, that crunches raw experimental results and “distills out the fundamental mathematical properties of your data,” he says, “so that you come away with the model and deeper understanding of that data to help you ask the right questions.”
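The idea behind such a tool can be illustrated with a deliberately tiny version: score a handful of candidate formulas against the data and keep the best fit. Eureqa itself searches an open-ended expression space with evolutionary methods; the fixed candidate list here is a made-up stand-in for that search.

```python
# Toy "pattern distillation": pick the candidate formula that best
# explains the data. A real symbolic-regression tool evolves formulas
# rather than checking a hand-written list like this one.
candidates = {
    "x": lambda x: x,
    "2x": lambda x: 2 * x,
    "x^2": lambda x: x * x,
    "x^3": lambda x: x ** 3,
}

# Data points drawn from y = x^2 (the audience's "X squared" example).
data = [(x, x * x) for x in range(-5, 6)]

def squared_error(f):
    """Sum of squared residuals of formula f over the data."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print(best)  # → x^2
```

The distilled model here is exact because the data is noiseless; on real experimental data the same idea trades off fit against formula complexity.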

Ceci n’est pas une fish

If big data and fast computers are creating opportunity for new insights, the higher resolution and detail of the vast ocean of big data is also creating new challenges. Even headaches.

“With all this increased computing we are absolutely drowning in data,” said UVM’s Austin Troy, director of UVM’s Spatial Analysis Laboratory, in his TEDx talk, “and one of the types we are most drowning in is remote sensing data,” like that collected by satellites. Geographic features that once could only be resolved to a 30-meter-wide box labeled “forest” can now be discerned as a specific maple tree with a broken branch.

In this case, the intelligence of computers can be the limiting factor rather than the breakthrough assistant. To explain, Troy put up side-by-side images of two bearded men. “I can tell within two seconds that that is George Carlin and that is Sigmund Freud,” he said, as the audience laughed. But for Troy to train a computer to recognize the difference would take “unbelievable amounts of time,” he said. Computers may be good at distilling mathematical approximations of data, but they don’t yet hold an old-fashioned candle to the human capacity for finding the gestalt.

In the same way, high-resolution images from space are providing an incredibly rich portrait of the planet — Troy showed a gorgeous image of the huge shadows of camels flowing across African desert and a tacky swimming pool in Nevada shaped like a tropical fish — but computers have been largely unable to pierce this raw data and find the camel or recognize the fish. Teaching computers to not work pixel by pixel, but, instead, to start to recognize the complex interplay of “shape, size, tone, pattern, texture, sight, and association,” Troy said, is one of the biggest challenges of big data.

Our satellites may be able to view every bumper-sticker on the planet and our computers may be able to complete calculations that would foil all of humanity, and yet, looked at another way, our computers are puny. All the computer storage in the world contains less information than is contained in your DNA. In one second, the 120 people gathered at the TEDxUVM lectures fire off as many neural impulses as all the computers in the world can perform operations.

“I need to teach a computer to see objects,” Troy said, “and to think like me.” That could be a while.

No hurricane in a water molecule

But part of the promise of the big data revolution is looking for patterns and interactions that are beyond or alien to the human mind. And in these patterns may be hidden “a way to solve incredibly hard problems that we need to solve,” Peter Dodds said. Looking for the master variable that controls a hurricane's track, a sudden economic collapse, or an ecosystem breakdown is bound to fail, he thinks.

That’s because many of the most important problems we want to understand are driven by complex systems, “where there is no powerful central control,” Dodds said. Instead there are “lots of localized interactions giving rise to macroscopic behavior, and often the macroscopic behavior is disastrous, like crashes in the stock market or ecosystem crashes.”

“There is no hurricane in a water molecule,” he said, “there is no financial collapse in a dollar bill. It’s all in how these things arise.” And how they arise may yield to the brute and tireless power of a computer in ways that the far more powerful and elegant human brain cannot take in.

“There are compelling reasons for understanding systems, and the reason we haven’t been able to do so for things like social systems and economic systems,” Dodds said, “is because we haven’t been able to describe them.”

But work like that of Rob Axtell, from George Mason University, is getting closer. His TEDx talk described a model of the U.S. economy with 150 million independent “agents,” each with complex — sometimes irrational (i.e., real world!) — rules of behavior representing individual people. By letting these agents all interact in computer simulations he is seeking a view of the larger macro-economy that emerges from millions of micro decisions.
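The flavor of this approach can be shown with a deliberately crude toy, not Axtell's model: agents follow a trivial local rule (hand a random slice of wealth to a random partner), and a macro-level pattern, here a skewed wealth distribution, emerges from nothing but those micro interactions. Every rule and number below is an assumption made for illustration.

```python
import random

random.seed(42)

class Agent:
    """A toy economic agent; real models give agents rich behavioral rules."""

    def __init__(self):
        self.wealth = 100.0  # everyone starts equal

    def trade_with(self, other):
        # Hand over a small random fraction of wealth: a crude stand-in
        # for a transaction between two people.
        amount = random.uniform(0.0, 0.1) * self.wealth
        self.wealth -= amount
        other.wealth += amount

agents = [Agent() for _ in range(1000)]
for _ in range(100):                       # simulation steps
    for agent in agents:
        agent.trade_with(random.choice(agents))

total = sum(a.wealth for a in agents)
print(f"total wealth: {total:.0f}")        # conserved: trades only move wealth
print(f"richest: {max(a.wealth for a in agents):.0f}, "
      f"poorest: {min(a.wealth for a in agents):.0f}")
```

Although no agent is programmed to be rich or poor, inequality emerges from the interactions alone, which is the basic point of letting the macro-economy arise from millions of micro decisions.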

“What we are finding in the last few years is that some things that we thought were beyond measure — like willpower,” or the economic value of nature, or the timing of “random” terrorist attacks, said UVM robotics expert Josh Bongard, “— are not.”

Getting hotter

Still, the fundamental unpredictability of some aspects of the future, like the weather beyond a few weeks, may be intractable, UVM mathematician and climate modeler Chris Danforth reminded the audience, invoking the great chaos theorist Edward Lorenz.

And yet Danforth — who served as the moderator of the TEDx event — says “the most important big data story” is the one coming out of the huge pools of information going into long-term climate forecasts. (Remember: it’s very hard to know if it will rain next Tuesday, but very easy to know it will be colder in January than June.)

If “we get it wrong — or we don’t pay attention to what the models are telling us,” Danforth said, “we could end up in big trouble.”

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=12521&category=cmplxsys
Mon, 10 Oct 2011 00:00:00 -0400

In their September 2011 issues, The New York Times and Science magazine featured results from research on society’s well-being by Peter Sheridan Dodds and Christopher Danforth, professors in the Department of Mathematics and Statistics, Vermont Advanced Computing Center (VACC), and Complex Systems Center in the UVM College of Engineering and Mathematical Sciences (CEMS). Dodds and Danforth’s research examined 4.6 billion tweets over nearly three years.

Their findings suggest that our moods are driven in part by a shared underlying biological rhythm that transcends culture and environment. By analyzing the frequency with which words occurred in their massive database of tweets, they saw patterns including happy weekends, and a morning peak in mood followed by an afternoon decline—“the daily unraveling of the human mind,” Dodds calls it. Other “happy” days often coincided with holidays, whereas especially unhappy days tended to coincide with unexpected events, such as the Japanese earthquake and tsunami. Their findings also hint at a global decline in mood starting in April 2009 that continues at least through the first half of 2011.

Peter Sheridan Dodds received a prestigious five-year, $678,000 National Science Foundation (NSF) Faculty Early Career Development (CAREER) Award for his research entitled "Explorations of Complex Social and Psychological Phenomena through Multiscale Online Sociological Experiments, Empirical Studies, and Theoretical Models." He is the twelfth UVM faculty member to receive an NSF CAREER Award, given for research that meets the highest expectations of colleagues around the world.

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=12438&category=cmplxsys
Tue, 27 Sep 2011 00:00:00 -0400

For his work to understand how to build better robots, Joshua Bongard, a researcher at the University of Vermont, has received the highest award given by the U.S. government to young scientists.

On Sept. 26, President Barack Obama announced Bongard as one of 94 winners of the Presidential Early Career Award for Scientists and Engineers; he will be honored at a White House ceremony in October.

Bongard is only the second researcher in UVM history to receive the PECASE award, which provides $500,000 in research funds over several years.

Inspired by evolution

Bongard’s far-reaching work looks to nature for ideas. “The goal is to borrow ideas from neuroscience and evolution to help us build better and more intelligent robots,” he says.

So far, scientists have had little success in building resilient machines that can continually perform behaviors that are fairly simple but require ongoing adaptation to changing conditions — like paving a road or cleaning up a toxic dump.

But Bongard is on a mission to make them.

“The prevailing approach to create such machines is to copy physiological and neurological systems observed in animals, and build them into robots,” Bongard notes. “This raises the issue, however, of what, from among the infinitude of existing biological structures, should be copied.”

Instead of guessing, Bongard has innovated systems in which computer programs copy the dynamics of biological evolution and replay them in a virtual space with numerous generations of synthetic creatures — something like a highly sophisticated video game.

The resulting algorithm yields ideas for robots that have optimized their neurological structures — and their behaviors and body plans — over many generations of being tested by virtual evolution, instead of human guesswork.
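The evolutionary loop behind this kind of work can be sketched in miniature. Here the "genome" is just a bit string and the fitness function a trivial bit count; in evolutionary robotics the genome encodes a body plan and controller, and fitness comes from scoring the creature's behavior in physics simulation. Everything concrete below (population size, mutation rate, objective) is an assumption for illustration.

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # Stand-in objective: count the 1s. A real system would score how
    # well the simulated creature walks, grasps, adapts, etc.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with small probability: the source of variation.
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of candidate "designs."
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for _ in range(100):                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # selection: keep the fittest
    offspring = [mutate(random.choice(parents)) for _ in range(20)]
    population = parents + offspring       # elitism plus mutated children

best = max(population, key=fitness)
print(fitness(best))
```

Because the fittest designs survive unchanged while their mutated copies explore nearby variants, the best fitness never decreases and climbs toward the optimum over generations, the same dynamic, at vastly larger scale, that replaces human guesswork in the virtual-evolution systems described above.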

With these ideas in hand, Bongard and his students can then build actual robots in their workshop that are adaptable and capable of responding to novel challenges.

“My long-term goal is to give back to neuroscience and evolutionary biology, to give us a different tool to investigate: why does intelligence evolve?” Bongard says. “Under what conditions will intelligence evolve? Could we ever consider a machine to be intelligent, or is intelligence something limited to biological organisms?”

Presidential vision

Recognizing this kind of innovative work, the PECASE awards “embody the high priority the Obama Administration places on producing outstanding scientists and engineers to advance the Nation’s goals, tackle grand challenges, and contribute to the American economy,” the White House wrote in a press release.

In 1996, the National Science and Technology Council was commissioned by President Clinton to create a program that would support and honor outstanding scientists and engineers early in their research careers — from this council came the PECASE award.

Each year, more than a dozen federal departments and agencies nominate scientists and engineers whose early accomplishments “show the greatest promise for assuring America’s preeminence in science and engineering and contributing to the awarding agencies' missions,” the White House press office wrote.

“It is inspiring to see the innovative work being done by these scientists and engineers as they ramp up their careers — careers that I know will be not only personally rewarding but also invaluable to the Nation,” President Obama said in the White House release. “That so many of them are also devoting time to mentoring and other forms of community service speaks volumes about their potential for leadership, not only as scientists but as model citizens.”

An innovator

Bongard, an assistant professor of computer science in UVM’s College of Engineering and Mathematical Sciences, was one of 21 nominees presented by the National Science Foundation for the most recent round of awards.

Bongard’s research has received national and international attention, and has been featured in Wired magazine, the Boston Globe, The Voice of America, Popular Science, and many other outlets. He also received a fellowship from Microsoft Research in 2007 for research related to self-healing robots — one of five given nationwide. He was named by MIT as one of the world’s top innovators under 35.

Bongard will travel to Washington, D.C., Oct. 13-14, to receive the award and will attend three ceremonies, culminating with a recognition ceremony at the White House with President Obama.

“This award allows me to continue with my basic scientific research, but it also allows me to create tools that draw many people into my research beyond my graduate students,” Josh Bongard says. “Through this award, we’re developing a web interface that will allow people to perform evolutionary robotics experiments without having a background in evolution or robotics."

http://www.uvm.edu/~cmplxsys/?Page=news&storyID=12336&category=cmplxsys
Wed, 07 Sep 2011 00:00:00 -0400

In military planning, it's important to be able to estimate not only the number of fatalities but how often attacks that result in fatalities will take place. "Pattern in Escalations in Insurgent and Terrorist Activity," a study by researchers including Brian Tivnan -- a UVM Complex Systems Center affiliate and chief engineer in Modeling & Simulation for MITRE -- uncovered a simple dynamical pattern that may be used to estimate the escalation rate and timing of fatal attacks.

The study was published in the July 2011 issue of Science magazine and covered by Science Now:

The Taliban-backed suicide bombing that left 21 dead in a hotel in Kabul on Tuesday appeared to come out of nowhere. Insurgent attacks on coalition forces in Iraq and Afghanistan have also proved unpredictable, with weeks or even months between one burst of deadly fighting and the next. But according to a new study, attacks that seem sporadic in the beginning can begin to show a pattern as the aggressors refine their methods. The finding may provide a way for military leaders to gauge the timing of future attacks in a conflict, helping them allocate troops, weapons, and resources more safely and efficiently. The research may even lead to ways of anticipating such seemingly random events as suicide bombings.

The Defense Advanced Research Projects Agency (DARPA) Mathematics of Sensing, Exploitation, and Execution (MSEE) program has awarded $500,000 to Drs. Joshua Bongard and Christopher M. Danforth, assistant professors in the UVM College of Engineering and Mathematical Sciences. The mission of DARPA is to pursue and exploit fundamental science and innovation for National Defense in advanced research and development in enabling technical areas. The goal of their research is to teach sensors how to think.

"Our goal is to develop a novel method to combat the 'data deluge' challenge, which is that modern technology generates far more data than any single human can deal with. More specifically, we will create models that explain the torrent of data coming from imaging studies of the most hierarchical, complex system we know of: the human brain," says Josh Bongard, assistant professor in the Department of Computer Science and Principal Investigator for the grant. To do this, Bongard and Danforth will team up with neuroscientists in the UVM College of Medicine to use their MRI data sets as a starting point.

Background Information

Dr. Bongard is recognized nationally and internationally for his research on evolutionary robotics. He received a National Science Foundation CAREER Award from the Division of Information & Intelligent Systems and the prestigious and highly competitive New Faculty Fellowship from Microsoft Research. He also was named by MIT Technology Review Magazine as one of the world's top innovators under 35. To read more about Dr. Bongard's research visit his website.

Dr. Danforth works on a variety of applied mathematics problems related to large-scale data and modeling. His research has been covered by Science Magazine and The New York Times. To read more, visit his website.