Thursday, February 5, 2015

Friday Thinking, 6 February 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

In the year 1930, John Maynard Keynes predicted that technology would have advanced sufficiently by century’s end that countries like Great Britain or the United States would achieve a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it.

Why did Keynes’ promised utopia – still being eagerly awaited in the ‘60s – never materialise? The standard line today is that he didn’t figure in the massive increase in consumerism. Given the choice between fewer hours and more toys and pleasures, we’ve collectively chosen the latter. This presents a nice morality tale, but even a moment’s reflection shows it can’t really be true. Yes, we have witnessed the creation of an endless variety of new jobs and industries since the ‘20s, but very few have anything to do with the production and distribution of sushi, iPhones, or fancy sneakers.

The League of Legends experiments are the brainchild of Jeffrey “Lyte” Lin, a game designer with a Ph.D. in cognitive neuroscience from the University of Washington in Seattle. He began studying human behavior the way most academic researchers did, arranging lab experiments with small groups of 50 to 60 college students as his test subjects. Today, he heads Riot’s player behavior team of more than 30 researchers working in game design, statistics and data science as they devise social psychology experiments on competitive League of Legends gamers. Riot’s stake in reducing toxic player behavior goes well beyond the simple virtue of sportsmanship—the company’s “free-to-play” business model of selling non-essential game items depends on keeping players happy and invested in the game.

Lin and the Riot team wondered what would happen if they took known psychology findings and applied them on a massive scale to improve player behavior. “When we first started, applying classic psychology theories was the most logical approach,” Lin says. “But as we settled in and better understood how to look at human behaviors online, we started digging in more and more into bleeding edge stuff.” They soon realized that they could do more than just replicate classic experiments; they could do scientific research on human behavior that had never been possible before in an academic lab.

...Though Riot’s experiments lack the pristine conditions of a traditional academic psychology experiment, the sheer volume of behavioral data channeling through Riot’s game servers every day — chat messages and in-game actions from an estimated 27 million daily players — allows the Riot team to collect vast amounts of data very quickly. “It’s not about precision in any data point; it’s all about quantity,” Lin explains. They can test many different experimental conditions simultaneously. For instance, the Optimus Experiment tested 217 unique conditions across more than 10 million games worth of data, with 10 percent of all games acting as controls.
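Purely as illustration, here is how an assignment scheme of that shape might look in code. Everything below (function names, the hashing trick) is my assumption, not Riot's actual implementation; only the 217 conditions and the 10 percent control share come from the excerpt above.

```python
import hashlib

NUM_CONDITIONS = 217      # unique conditions, as in the Optimus Experiment
CONTROL_FRACTION = 0.10   # 10 percent of games act as controls

def assign_condition(game_id: str) -> str:
    """Hash the game id into [0, 1) and bucket it (hypothetical scheme)."""
    digest = hashlib.sha256(game_id.encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform value in [0, 1]
    if u < CONTROL_FRACTION:
        return "control"
    # Spread the remaining games evenly across the treatment conditions.
    idx = int((u - CONTROL_FRACTION) / (1 - CONTROL_FRACTION) * NUM_CONDITIONS)
    return f"condition_{min(idx, NUM_CONDITIONS - 1)}"

# Simulate 100,000 games and tally the buckets.
counts = {}
for i in range(100_000):
    c = assign_condition(f"game-{i}")
    counts[c] = counts.get(c, 0) + 1

print(round(counts["control"] / 100_000, 2))  # roughly 0.10
```

Because the bucket comes from a hash of the game id rather than a coin flip, the same game always lands in the same condition, which is one common way large-scale experiments keep assignments stable.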

...Last year, the company launched six research collaborations with universities, including a project with the University of York in the UK that looked at how the names of League of Legends gamers reflected real-life characteristics. A collaboration with MIT aims to measure teamwork among five strangers on the same League of Legends team and develop a “collective intelligence” test that can predict performance on certain tasks.

Agile and the Creative Economy thus comprise a large core idea, with many different implementations. The core idea is not just a new process or methodology, but a different ideology – a different way of viewing and acting in the world. Instead of an ideology of control with a focus on efficiency and predictability and detailed plans and internal focus, it’s an ideology of enablement, with a focus on self-organization, continuous improvement, an iterative approach, and above all, the customer is now central.

A shift in ideology isn’t a little fix, like adding a marketing department.

It’s more like the Copernican revolution in astronomy. Prior to Copernicus in the 16th century, people imagined that the sun revolved around the earth. It was self-evident, confirmed every day by what everyone could see. Everyone knew that the sun revolved around the earth. After Copernicus, people realized that it was the other way around. Despite appearances, it’s the Earth that revolves around the Sun.

And as Thomas Kuhn pointed out in his book The Structure of Scientific Revolutions (1962), this wasn’t just a discovery in astronomy. It had vast economic, social, and political consequences. People began to ask: is it really plausible that the Roman Catholic Church is the center of the universe? Is it plausible that kings and queens govern by divine right? It began to put in question and eventually alter the entire structure of society.

The ongoing shift in management today is of a similar nature and magnitude.

In traditional management, the firm was the stable center of the universe and the customer revolved around the firm. The customer was taken for granted.

In the new scheme of things, the customer is the center of the universe and the firm revolves around the customer.

Norman Mailer and Marshall McLuhan expound on violence, alienation and the electronic envelope. The clash of two great minds. (1968) www.cbc.ca

Being able to see an interview (with the simple ‘click’ of a YouTube search) between Norman Mailer and Marshall McLuhan almost 50 years after it was recorded ... is ‘Awesome’. The next article is longish but worth the read.

Just how enduring is the externalized memory of the Web? How enduring should it be? Will Big Data be just as ephemeral and mercurial? And for word lovers, here are a few new ‘must know’ terms: link rot, content drift, and reference rot. As bad as this is, we have to remember how often works go out of print and how hard it is to find physical references that are years old. If we depended on print at the accelerating rate at which knowledge and information is doubling, what local or even central library could hold copies of all the references? The traditional solution involves citing title and author and then searching more deeply; would it be possible in the 21st century for digital publications to include not only all the data but also copies of all references as indexes?

The average life of a Web page is about a hundred days. Strelkov’s “We just downed a plane” post lasted barely two hours. It might seem, and it often feels, as though stuff on the Web lasts forever, for better and frequently for worse: the embarrassing photograph, the regretted blog (more usually regrettable not in the way the slaughter of civilians is regrettable but in the way that bad hair is regrettable). No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t. In 2006, David Cameron gave a speech in which he said that Google was democratizing the world, because “making more information available to more people” was providing “the power for anyone to hold to account those who in the past might have had a monopoly of power.” Seven years later, Britain’s Conservative Party scrubbed from its Web site ten years’ worth of Tory speeches, including that one. Last year, BuzzFeed deleted more than four thousand of its staff writers’ early posts, apparently because, as time passed, they looked stupider and stupider. Social media, public records, junk: in the end, everything goes.

Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. (Some of those companies may have notified users, but Jason Scott, who started an outfit called Archive Team—its motto is “We are going to rescue your shit”—says that such notification is usually purely notional: “They were sending e-mail to dead e-mail addresses, saying, ‘Hello, Arthur Dent, your house is going to be crushed.’ ”) Facebook has been around for only a decade; it won’t be around forever. Twitter is a rare case: it has arranged to archive all of its tweets at the Library of Congress. In 2010, after the announcement, Andy Borowitz tweeted, “Library of Congress to acquire entire Twitter archive—will rename itself Museum of Crap.” Not long after that, Borowitz abandoned that Twitter account. You might, one day, be able to find his old tweets at the Library of Congress, but not anytime soon: the Twitter Archive is not yet open for research. Meanwhile, on the Web, if you click on a link to Borowitz’s tweet about the Museum of Crap, you get this message: “Sorry, that page doesn’t exist!”

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible. For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous.
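The distinction the excerpt draws can be made concrete with a toy sketch (mine, not any real tool's): link rot is a request that fails outright, while content drift can only be detected if you kept a fingerprint of the page you originally cited.

```python
import hashlib
from typing import Optional

def fingerprint(body: str) -> str:
    """A stable hash of a page's content, recorded at citation time."""
    return hashlib.sha256(body.encode()).hexdigest()

def classify_reference(status_code: int, cited_hash: str,
                       current_body: Optional[str]) -> str:
    """Toy classifier for the fate of a cited link."""
    if status_code == 404 or current_body is None:
        return "link rot"        # the page is simply gone
    if fingerprint(current_body) != cited_hash:
        return "content drift"   # something loads, but not what was cited
    return "intact"

original = "Smoking gun memo, paragraph 3..."
cited = fingerprint(original)

print(classify_reference(404, cited, None))                       # link rot
print(classify_reference(200, cited, "Updated page, memo gone"))  # content drift
print(classify_reference(200, cited, original))                   # intact
```

The asymmetry the article describes falls out directly: the first case announces itself with an error, while the second looks perfectly normal unless the fingerprint from citation time was preserved.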

The footnote problem, though, stands a good chance of being fixed. Last year, a tool called Perma.cc was launched. It was developed by the Harvard Library Innovation Lab, and its founding supporters included more than sixty law-school libraries, along with the Harvard Berkman Center for Internet and Society, the Internet Archive, the Legal Information Preservation Alliance, and the Digital Public Library of America. Perma.cc promises “to create citation links that will never break.” It works something like the Wayback Machine’s “Save Page Now.” If you’re writing a scholarly paper and want to use a link in your footnotes, you can create an archived version of the page you’re linking to, a “permalink,” and anyone later reading your footnotes will, when clicking on that link, be brought to the permanently archived version. Perma.cc has already been adopted by law reviews and state courts; it’s only a matter of time before it’s universally adopted as the standard in legal, scientific, and scholarly citation.

The next few videos are really worth the time to view; they bring together some game scholars to discuss the role of games in education, learning and civic engagement.

Here’s a very interesting 4 min slice of a conversation between John Seely Brown and Constance Steinkuehler, a games and learning scholar, about games and intelligence. Anyone interested in education should find this worthwhile.

Constance Steinkuehler, a games and learning scholar, discusses her firsthand experiences seeing how youth-centered learning and online gaming lead to compelling turnarounds in youth engagement.

Constance's research-based practices focus on solving the disconnect between learning that takes place in gaming environments and in-class learning: "When I started doing studies around science and literacy and civic engagement around online games, when the biggest population of those games is teenage guys and they're not faring well in school, it begs a real question. What's happening in this space between kids and their school world and their school identities instead of their game identities and what they're doing around games?

Here is a longer 29 min presentation by her - this is well worth the view.

She is a tenured professor and from 2011 to 2012 she took public service leave and worked as a Senior Policy Analyst in the Office of Science and Technology Policy (OSTP) at the White House Executive Office, advising on policy matters about video games and learning.

Kurt Squire, video game designer and assistant professor at the University of Wisconsin-Madison, breaks down the educational power of video games for digital-age learners.

Kurt expresses a deep interest in "this idea that when you play through a game world--let's say that you're playing as a role-playing game and you're a character who's shaped this entire world--we are starting to see some evidence that players, after playing games like this, will start to [...] look outside and say 'Well, why are things the way they are?'"

Squire goes on to explain how this evidence can be applied to more immediate, local issues: "So, what we want to do with educational games, are design games that try to do that but really build them around critical, kind of current issues and then get kids to be motivated and have the skills to go out and start to solve these problems as a direct result of having played the game."

One such application was a Madison, Wisconsin-focused game called "Citizen Science" that promoted limnology concepts and encouraged users to take actions to improve the biological makeup of local lakes. "Now, our hope is that you play the game, you put it down, you say: 'Oh! Well why don't we do something about it?'" Kurt says. "So we've basically taught you how to do everything that you're going to need to do to change the lake."

This is a long (1 hr 45 min) but excellent analysis and critique of video games by one of the ‘giants’ of ‘independent’ game development. For anyone interested in video games - this is a MUST VIEW.

Video games have evolved tremendously over the past few decades; they're much more entertaining than they used to be. That is not by accident; we, the community of game designers, have been continuously refining our techniques. The most common way we do this is by testing out our games on you, the players, and optimizing for the "best" result (where "best" is defined by us). As this process is ongoing, what kind of relationship exists between the designer and the player? Is it artist/audience, experimenter/subject, entrepreneur/customer, or tycoon/resource? Invariably it's some admixture of these things, the particular ratios for a given game being chosen by its designers (usually without awareness that a decision is being made). Today, due to the way the Internet is widely used, and because game designers are becoming more serious about certain aspects of their craft, the iteration time of this game design optimization process is shorter than ever before: designers can observe their players much more thoroughly, and more quickly, than they ever have in the past.

At some point a quantitative change becomes a qualitative one: the result of all this competency may be heavily destructive. Some aspects of the current notion of "good game design" may in fact be very bad, or at least indefensible, from an ethical standpoint. Today's "better" video games spend a great deal of effort to undermine defenses that took you tens of millennia to evolve. They tend to be successful at this. As designers keep evolving their craft and gain greater analytical power, what will happen?

Speaking of learning and games here’s something very interesting that not only indicates the speed of change but confirms that information overload is not just ‘filter failure’ but requires a shift to pattern recognition (as McLuhan noted in the 1950s).

Reading and writing gave us external and distributable storage. Coding gives us external and distributable computation. It allows us to offload the thinking we have to do in order to execute some process. To achieve this, it seems like all we need is to show people how to give the computer instructions, but that's teaching people how to put words on the page. We need the equivalent of composition, the skill that allows us to think about how things are computed. This time, we're not recording our thoughts, but instead the models of the world that allow us to have thoughts in the first place.

We build mental models of everything - from how to tie our shoes to the way macro-economic systems work. With these, we make decisions, predictions, and understand our experiences. If we want computers to be able to compute for us, then we have to accurately extract these models from our heads and record them. Writing Python isn't the fundamental skill we need to teach people. Modeling systems is.

In the same way that composition and comprehension are not tied to paper, modeling is not tied to computers. It can be both physical and mental. It takes place on paper and in Excel or with Legos and balsa wood airplanes. It is an incredibly powerful skill which we can make even greater use of by transposing our models to computers. To understand how we do that, we have to look more deeply at what it means to model.

Modeling is creating a representation of a system (or process) that can be explored or used.
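As a minimal, made-up example of that definition: the Python below is incidental; the point is that a mental model (here, compound growth) becomes explicit, explorable, and usable once it is written down.

```python
def compound(balance: float, rate: float, years: int) -> float:
    """The model: each year the balance grows by a fixed rate."""
    for _ in range(years):
        balance *= 1 + rate
    return balance

# Once the model is externalized, the computer does the exploring:
# e.g., how many years until an investment doubles at 5% a year?
years = 0
balance = 100.0
while balance < 200.0:
    balance = compound(balance, 0.05, 1)
    years += 1

print(years)  # 15 (close to the rule-of-72 estimate: 72/5 = 14.4)
```

The same representation answers questions nobody anticipated when it was written, which is what "explored or used" means in practice.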

Speaking of modeling, quite a while ago I included some articles from an economist who had been hired by Valve (a gaming company - Google their “Handbook for New Employees”). The economist was Yanis Varoufakis. He’s now looking to be part of a government that reforms Greece. This is an interesting 5 min video.

Yanis Varoufakis, tipped to be Syriza's new finance minister, tells Paul Mason what his party would do if it gets into government in Greece, and admits the prospect of power in Europe is "scary".

This is a fantastic video by one of Valve’s game designers - a MUST VIEW for a number of reasons. First, as an approach to designing an economy - these are certainly design principles Yanis (see above) would have been exposed to. These principles in turn are backed by real Big Data derived from millions of hours of game play. Second, these principles are completely applicable to the design of workplaces that engage workers and to the development of incentive structures.

A look at the multi-year history and development of the in-game economies and microtransaction systems in Team Fortress and Dota, including some of the surprises we encountered and some of the lessons we've learned that we think are applicable to a wide range of products.

Now let’s extend the game - this may be the future of the movie, the game and the Big Screen Home Theatre.

The most buzzworthy feature of 2015’s Sundance Film Festival isn’t a film at all, but a pair of virtual reality goggles you strap to your head. Three years after VR made its debut at Sundance, the technology has fully established itself. An entire section of the festival is now devoted to VR experiences, many of them more interactive than what we’ve seen to date. Talk to filmmakers and they’ll tell you they can’t remember being so excited: some say it’s like they’re present at the dawn of a new medium.

Take Birdly, a full-body VR experiment that turns you into a bird flying above the streets of San Francisco, soaring higher with every flap of your arms. Or Project Syria, which throws the viewer in the middle of a harrowing rocket attack. Or, perhaps most darkly, Perspective; Chapter I: The Party, which lets you see the world through the eyes of a man, and then a woman, as an encounter at a college party turns into sexual assault. All are on display at New Frontier, Sundance’s annual showcase for works at the intersection of art and technology. And they’ve quickly become the talk of the festival.

Virtual reality debuted at Sundance in 2012 with Nonny de la Peña’s Hunger in Los Angeles, which used an early head-mounted display to place viewers in the middle of a food line outside a church. That display was developed by then-19-year-old Palmer Luckey, and the success of Hunger spurred him on to build a consumer version of his VR headset. He called it Oculus Rift, and launched it successfully on Kickstarter; last year, Facebook bought his company for $2 billion.

In the wake of Oculus’ success, and under the direction of curator Shari Frilot, VR dominates New Frontier this year. "I think what’s behind the explosion is the marketplace embracing it," Frilot says. Of the 14 projects in the showcase, 11 are enhanced by virtual reality. Most are independent art projects, but not all: Fox Searchlight is also here with Wild — The Experience, putting users in between actresses Reese Witherspoon and Laura Dern in a scene inspired by the recent movie of the same name.

Story Studio is the company’s internal team exploring what it calls 'VR cinema'

The prominence of virtual reality has been one of the biggest stories of the 2015 Sundance Film Festival, and now Oculus itself is stepping into the fray to highlight the importance of storytelling in VR. The company has pulled back the curtain on Oculus Story Studio, an internal team focused on exploring the potential of what it calls "VR cinema" — and the group's first movie is debuting this week.

Called Lost, the project is a real-time computer generated VR experience for the Crescent Bay prototype, and is directed by Saschka Unseld, a former Pixar animator who created the 2013 short The Blue Umbrella. Lost runs roughly five minutes in length, but in what Unseld touts as one of the project’s innovations, it changes the pace of its storytelling based on the action taken by the viewer. "It could be three-and-a-half minutes and it could be 10," he says. "It all depends on you."

Watch this instructional video to see how the Galaxy Note 4 and Oculus™ power an amazing virtual reality experience. Gear VR Innovator Edition features incredibly responsive head tracking, a side-mounted touchpad, and a full 360° field of view. It truly has to be experienced to be believed.

Here is something about the future of music and the performing arts. The article includes two short video demonstrations (2 min and 7 min) - very pleasant to listen to, besides being fascinating.

Researcher Develops Jazz Music-Playing Robots That Can Improvise With a Human Musician

Musician and researcher Mason Bretan plays jazz with a backup band of four robots in what at first glance appears to be a clever application of dancing robots and pre-recorded tracks, but is actually something much more remarkable. While two of the robots do indeed play prerecorded percussion, the other two are actually improvising along with Bretan’s playing. And all of them are swaying and dancing in response to the music. The largest robot, named Shimon, improvises on a marimba - a complex task that requires the robot to pre-position its arms in anticipation of the next note. You can see Bretan and his robo-band in action in the six-minute improvisational piece, “What You Say.” Bretan developed the robots as part of his PhD research at the Georgia Institute of Technology.

Here’s the acerbic Bruce Sterling talking in July 2014, about the emerging potential of the Smart City. He asks some very important questions about how we want to shape our cities in the coming decade. The real point is made in the final 5 min - he predicts that within 3 years there will be an epic struggle. The video is 22 min.

One key question: where is the chart of all the disrupted ‘dead’ businesses that the smart city will leave in its wake? Or what happens when a consortium of apps goes on strike and holds the city hostage?

What started as a service to help customers buy goods from Alibaba retailers has grown into a serious finance business all its own.

Not many years ago, Jane Yang, a 26-year-old civil servant in Beijing, paid her landlord in three-month installments with a stack of 100-yuan notes. To pay her utilities—water, electricity, and home Internet bill—she went to three separate banks, where she handed cash to a teller. The process was “very time-consuming and irritating,” she remembers. Even as skyscrapers and gleaming shopping malls cropped up around China’s capital, most middle-class residents had never seen or used a simple checkbook.

Today she uses the Alipay app, China’s most popular online payment service, on her smartphone to transfer money directly to her landlord’s account. She pays for her utilities and her mobile-phone account through Alipay as well. Yang even keeps savings in her Alipay Yu’ebao money market account, where money accrues higher interest than it does in a traditional bank account. Yang hadn’t set out to deliberately overhaul her financial habits, but new mobile technology, she says, “made it so easy.”

As of October 2014, Alipay had more than 300 million registered users in China (and 17 million overseas), according to the most recent figures the company has made public.

Between July 2013 and June 30, 2014, Alipay handled $778 billion (4.8 trillion yuan) in transactions, according to the company. It is able to process more than 10 billion transactions per day. During the popular “Singles’ Day” annual sale—which is like Black Friday in the U.S. but on overdrive—Alipay handled up to 2.85 million transactions per minute, and 54 percent of its transactions are made via mobile device. With these new mobile payment technologies, China has leapfrogged both checkbooks and desktop banking.

Here is a fascinating conversation between two great thought leaders - the conversation is 80 min. David Graeber’s book “Debt: The First 5,000 Years” is a MUST READ for anyone who wants a truly deep and historical understanding of the roots of economics. Brian Eno is a world-leading musician whom I’m sure everyone has heard of.

The 2014 Artangel Longplayer Conversation between Brian Eno and David Graeber took place 7pm, Tuesday 7 October 2014 at the Royal Geographical Society, London SW7.

Longplayer is a thousand-year-long musical composition conceived and composed by Jem Finer. The Longplayer Conversations began with a meeting in 2005 between New York artist and musician Laurie Anderson and Nobel prize-winning author Doris Lessing; they continue to take place in the context of this project.

Talking about change and resistance to change - here’s a fascinating study.

The first network analysis of the entire body of European Community legislation reveals the pattern of links between laws and their resilience to change.

One of the more fascinating areas of science that has emerged in recent years is the study of networks and their application to everyday life. It turns out that many important properties of our world are governed by networks with very specific properties.

These networks are not random by any means. Instead, they are often connected in the now famous small world pattern in which any part of the network can be reached in a relatively small number of steps. These kinds of networks lie behind many natural phenomena such as earthquakes, epidemics and forest fires and are equally ubiquitous in social phenomena such as the spread of fashions, languages, and even wars.

So it should come as no surprise that the same kind of network should exist in the legal world. Today, Marios Koniaris and pals at the National Technical University of Athens in Greece show that the network of links between laws follows exactly the same pattern. They say their network approach provides a unique insight into the nature of the law, the way it has emerged and how changes may influence it in the future.
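A hedged sketch of the small-world effect mentioned above, using only made-up parameters: start with a ring lattice, add a handful of random shortcuts, and the average number of steps between nodes collapses.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Average shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, k=2):
    """Ring lattice: each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

random.seed(0)
g = ring(200)
before = avg_path_length(g)
for _ in range(20):                       # a handful of random shortcuts
    a, b = random.sample(range(200), 2)
    g[a].add(b); g[b].add(a)
after = avg_path_length(g)
print(round(before, 1), round(after, 1))  # path lengths shrink dramatically
```

Twenty shortcuts among 400 lattice edges is a tiny perturbation, yet it cuts the typical distance between nodes severalfold - the signature of the small-world pattern the article describes.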

And speaking of network science - here’s a vision of how the digital environment, Social Physics and Big Data provide a platform for transparent experimentation.

Riot Games wants you to behave yourself when you play League of Legends, so it’s turned the game into a virtual lab

League of Legends is often called the world’s most popular video game—it draws enough online spectators during championship events to rival the millions who watch the World Series and NBA Finals. But it’s also a virtual lab capable of running experiments with thousands or even millions of human players, collecting data around the clock from time zones scattered across North America, Asia and Europe. Such a “big data” approach to studying human behavior could lead to new psychological insights that would be impossible to achieve in the confines of a university lab.

Riot takes great pains to point out how its experiments benefit the entire League of Legends community. The game company is likely reaping the rewards of this publicity campaign; experimentation in a similar vein by Facebook in 2014 showed that public opinion can quickly turn sour when people feel emotionally manipulated for corporate interests. Facebook’s failure to explain its motives up front allowed users to draw their own conclusions and imagine the worst.

Riot Games and Facebook are not alone in toying with user behavior. Many companies routinely do A/B testing to see how people respond to slightly different presentations of material on a Web site, tweaking text or images, for example, to get visitors to stick around longer or spend more money. Riot’s experiments are also in its self-interest—to keep players from quitting and to attract new customers who might otherwise be scared away by the toxic reputation of multiplayer online battle arena (MOBA) games, says Jamie Madigan, a psychologist who studies video games. “In terms of using big data, I doubt Riot is the only game company using player tracking and so forth,” Madigan says. “But I think they are unique in how they’re taking an experimental approach that is more scientific.”
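To make the A/B testing point concrete, here is a standard two-proportion z-test on invented numbers - one common way (not necessarily what any of these companies use) to check that a difference between two variants isn't just noise.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5,000 visitors per arm; variant B "converts"
# (sticks around, spends, etc.) at 11.2% versus variant A's 10.0%.
z, p_value = two_proportion_z(500, 5000, 560, 5000)
print(round(z, 2), round(p_value, 3))
```

With these invented numbers the difference sits right around the conventional 5 percent significance threshold, which is exactly why companies running such tests at scale care about sample size as much as effect size.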

Riot’s relative transparency about its aims puts it ahead of the pack, as most companies don’t publicize how they tinker with the online experiences of millions of customers. As a result, Riot’s experiments also offer a rare glimpse into the ways that companies nudge our behavior online, every minute of every day.

From the smart city to the smart home? This is interesting, with a 3 min video and links to the Kickstarter project. Maybe ‘She’s’ not here yet, but where will she be in the next 15 years?

She's a personal assistant, photographer, butler and home security guard all in one — and she's a robot.

Robotbase, a robotics company headquartered in New York City, is developing a personal robot that can perform a variety of daily functions at home or at work – everything from turning on lights to managing social calendars.

The bot can even read a bedtime story to your kids, and adjust the color of the room's lighting based on the story's mood, the makers said.

The first prototype was launched earlier this month at CES 2015 in Las Vegas, and the company has already raised more than $127,000 through a crowdfunding campaign.

"Twenty years ago, personal computers came along and changed everything. Ten years ago, we had the smartphone," said Duy Huynh, founder and CEO of Robotbase. "We look at our product as the next device after the computer and the smartphone."

While this doesn’t look like a home robot, it surely will be part of the extension of the home - a ‘homebot’? - or a redefinition of the ‘home computer’ for the Home-IoT. There is a 3 min video that everyone should watch.

This winter I bought an electric mattress cover and I LOVE being able to climb into a snuggly pre-warmed bed - an amazing sensation that makes me wonder why I waited so long. - This is a smart one.

There's nothing like slipping into bed on a cold night when your electric blanket has been hard at work, but warming up the linen to create that toasty sleeping cocoon of course requires you to flick it on in advance. The makers of Luna believe that poor foresight isn't worth losing sleep over, so they've created an internet-connected mattress cover that adjusts to your lifestyle. This means automatically setting the bed's temperature, tracking the quality of your rest, and even kicking your coffee machine into action when you wake up.

Once you nod off, Luna can set the temperature in your home through a Nest smart thermostat (provided you have one). Other possibilities include flicking off your Emberlight smart globes, securing the house with your Lockitron smart locks, shutting down your Beep-enabled stereo system and firing up your Wi-Fi coffee machine in the morning.
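
The integrations above amount to event-driven automation: a sleep-tracker event triggers a list of registered device actions. Here is a minimal sketch of that pattern. All of the device "calls" are hypothetical stand-ins, not the real Luna, Nest, or Lockitron APIs.

```python
# Minimal event-driven sketch of bed-triggered home automation.
# Every device action here is a placeholder for a real network call.
actions = []  # records what "ran", standing in for actual device requests

RULES = {
    "asleep": [
        lambda: actions.append("thermostat: set 18C"),
        lambda: actions.append("lights: off"),
        lambda: actions.append("locks: engage"),
    ],
    "waking": [
        lambda: actions.append("coffee: brew"),
    ],
}

def on_sleep_event(event: str) -> None:
    """Run every action registered for a sleep-tracker event."""
    for action in RULES.get(event, []):
        action()

on_sleep_event("asleep")   # the bed detects you've nodded off
on_sleep_event("waking")   # the smart alarm fires in the morning
```

The rule table makes the behavior declarative: adding a new smart device means appending one callback, with no change to the event-dispatch logic.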

Other noteworthy features of Luna's connected mattress cover include the ability to use your smartphone as a remote to set your bed's temperature ahead of time, and a smart alarm designed to wake you up during a light phase of sleep. It is available for Queen, King and California King mattresses and is washing machine friendly.
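
Luna's actual alarm algorithm isn't public, but the usual approach to "wake during light sleep" is simple: given per-minute sleep-stage estimates, pick the lightest-sleep minute inside a window before the set alarm time. A hypothetical sketch, where the stage encoding and window size are my assumptions:

```python
def best_wake_minute(stages, alarm_minute, window=30):
    """
    Pick a wake time inside [alarm_minute - window, alarm_minute].

    `stages` maps minute-of-night -> estimated sleep depth
    (e.g. 1 = light, 2 = deep). We wake at the lightest minute
    in the window, breaking ties toward the set alarm time.
    """
    candidates = [m for m in range(alarm_minute - window, alarm_minute + 1)
                  if m in stages]
    if not candidates:
        return alarm_minute  # no sensor data: fall back to the set alarm
    return max(candidates, key=lambda m: (-stages[m], m))

# Toy night: light sleep until minute 465, deep sleep after;
# with the alarm set for minute 480, we wake early, at minute 464.
night = {m: (1 if m < 465 else 2) for m in range(420, 481)}
wake = best_wake_minute(night, alarm_minute=480)
```

The tie-breaking toward the alarm time matters: among equally light minutes, the sleeper gets the maximum possible rest.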

Prices range from US$199 to $229 depending on sizing, but those looking to smarten up their sleeping will have to wait a little while yet. Luna is available for preorder through a campaign on Indiegogo, where the $100,000 funding goal was put to bed in just six hours. If the remainder of the project runs as planned, the company will begin shipping out its mattress covers to weary-eyed backers in August 2015.

Mechanic Advisors’ new product is designed to make the appearance of the dreaded check engine light that little bit less disheartening, by giving car owners a portal into the health of their vehicle. The device provides drivers with the same information available to their local auto repair shop, but what makes it truly unique is its ability to put them in touch with a suitable, trustworthy mechanic.

The device plugs into the vehicle's standard On-Board Diagnostics (OBD) port, linking to iOS and Android smartphones to provide real-time vehicle data and decipher more than 20,000 error codes – the exact same diagnostic tools available to mechanics. It’ll also provide alerts when it’s time to change oil or replace tires, and according to the company, will work with almost any vehicle manufactured from 1996 onwards.
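
The error codes mentioned follow the standard OBD-II diagnostic trouble code (DTC) layout from SAE J2012: each code is packed into two bytes, with the top two bits selecting the system letter (P, C, B or U). A short decoder, as a sketch of what any OBD dongle does under the hood (the example byte values are arbitrary):

```python
def decode_dtc(high: int, low: int) -> str:
    """
    Decode a two-byte OBD-II diagnostic trouble code (SAE J2012 layout).

    Bits 7-6 of the first byte select the system letter (P/C/B/U),
    bits 5-4 give the first digit (0-3), and the remaining 12 bits
    are read as three hex digits.
    """
    system = "PCBU"[(high >> 6) & 0b11]
    return f"{system}{(high >> 4) & 0b11}{high & 0xF:X}{low:02X}"

code = decode_dtc(0x01, 0x43)  # yields "P0143", a powertrain code
```

Powertrain ("P") codes dominate in practice, since that is what the engine control unit reports when the check engine light comes on.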

If and when a problem arises, the companion app will link you directly to one of more than half a million trusted mechanics, making it easier to get your vehicle into a reliable repair shop. In the future, the company plans to improve the service by using anonymous data to provide useful information such as known issues for specific models of car, breaking down stats based on location and driving habits. The company believes that giving drivers real-time stats will not only help educate them about their vehicles, but should also eliminate the distrust surrounding auto repairs.

Here’s a new approach that may soon be part of the learning environment.

We've seen a good number of electronic gloves before, and now researchers at Georgia Tech have devised one to rehabilitate patients who suffer from paralyzing spinal cord injuries while teaching them how to tickle the ivories. Christened Mobile Music Touch, the black mitt pairs with a keyboard and cues individual fingers with vibrations to play notes. The handgear also buzzes continuously for several hours to stimulate recovery while users go about their day, similar to another Georgia Tech-developed solution. After treatment, some patients could pick up objects and feel textures they hadn't been able to, which is especially remarkable since, according to the university, little improvement is typically seen a year after such injuries are sustained. Patients who learned to play the piano with the device also experienced better results than those who went without it. Project leader Dr. Tanya Markow believes the rehab's success may stem from renewed activity in parts of the brain that had lain dormant.

Good news for people without natural born talent: You don’t need it. Or at least you won’t in the future—DUN DUN DUNNNNN! By then, technology will make up for our humanoid shortcomings. Can’t play the piano? No problem; a glove will force your fingers to play that passage you can’t seem to nail. Not handy with a pen? That’s an easy fix thanks to a robotically controlled accessory that will guide your hand to sketch a perfect circle.

This is hyperbole, but only slightly, as a new project from designer/engineer Saurabh Datta proves. For his thesis project at Copenhagen’s Institute of Interaction Design, Datta created a series of devices that teach people simple tasks like tapping a piano key or drawing basic shapes by using forced haptic feedback. In other words, you don’t control Datta’s machines, they control you.

Neurons, the cells of the nervous system, communicate by transmitting chemical signals to each other through junctions called synapses. This "synaptic transmission" is critical for the brain and the spinal cord to quickly process the huge amount of incoming stimuli and generate outgoing signals. However, studying synaptic transmission in living animals is very difficult, and researchers have to use artificial conditions that don't capture the real-life environment of neurons. Now, EPFL scientists have observed and measured synaptic transmission in a live animal for the first time, using a new approach that combines genetics with the physics of light. Their breakthrough work is published in Neuron.

Wondering how we will power the ever-increasing range of wearables and sensors? Here’s the answer: the human battery.

Using human skin as one of its charge-collectors, a new flexible generator converts muscle movements into enough power for small electronics. The postage-stamp-sized device takes advantage of static electricity to convert mechanical energy into electrical energy.

Such friction-powered generators could usher in new types of wearable sensors that don’t require batteries but instead are powered by the wearer’s daily activities, like walking, talking or holding an object.

The new device, which researchers from the National University of Singapore presented at the IEEE MEMS 2015 conference last week, can generate 90 volts of open circuit voltage when touched gently with a finger. Electrical and computer engineering professor Chengkuo Lee and his colleagues demonstrated that the new device can be used as a wearable self-powered sensor to track the user’s motion and activity.

The clues only get more baffling. The clock is ticking. You rip apart suitcase after suitcase and book after book, trying to find clues while music reminiscent of Mission Impossible plays in the background. You only have seconds left to crack the code on the safe. Will you be able to escape?

This is Escape Room Live D.C., a tourist attraction in Georgetown. “Spies” have two goals: obtain the key to the room and find the location of the “drop” that is scheduled for tomorrow. Nearly everything in the room is searchable, including the suitcases, the wall decor, the books and various articles of clothing. To make the tasks more difficult, some of the clues are red herrings, designed to throw players off track.

The Gamemaster explains the rules to groups of six to eight participants: No cell phones are allowed inside, staff watch the group’s progress from a separate room and the timer will go off at 45 minutes. If a player needs help or breaks a rule, staff members can communicate with the group via intercom.

After the rundown, the group is ushered into the actual Escape Room and the door is locked behind them. Then the game begins.

In four months, Escape Room has earned positive reviews from local newspapers and attracted crowds large enough to ensure each of the rooms is booked Thursday through Sunday. The Escape Room is not open Monday through Wednesday. Currently, there are two puzzle rooms with two levels of difficulty.

The Fleshers create the puzzles that confound their customers. Ginger Flesher, a retired math teacher, said they based their designs on the European escape rooms, which they found either challenging or simply impossible.