'Listening' Table Records and Understands the Conversation You’re Having

Meetings can be horrible, unproductive, and maddening. Can a high-tech conference table make them bearable? The New York Times R&D Lab has tackled the problem with its latest project: the Semantic Listening Table, an attempt to make sense of chaotic conversations.

The smart table combines pervasive data collection and the internet of things into a new concept. As its name implies, the table listens to you, using an array of dynamic microphones enclosed beneath the little perforated grate that sticks out of the table. (The mic array on its own looks a lot like a grey conference-table speakerphone. The grate is an aesthetic consideration that helps break that association for people sitting around the table.)

We recently visited the R&D lab to give it a try. Located in a lofty, well-lit corner on the 28th floor of the Times Building, the New York Times R&D lab is completely dissociated from the publication's actual news-gathering operation. By design, the lab brings in people from diverse backgrounds—these aren't journalists—and tries to engage creatively with the technology that's likely to be emerging in the next three to five years.

The goal isn't to create a product in the traditional sense, or even a tool for the news team, but instead to try to put different technologies in dialogue with each other in a single design. In consumer product development, this process would yield what we might call frankengadgets: products that combine several disparate functions into a single device that's worse for the combination. The Dick Tracy watches some manufacturers are churning out these days are a perfect example: They're terrible watches and terrible phones.

The R&D Lab, however, doesn't have to sell products, and so it can turn these franken-designs into intellectual provocations, which is the idea behind the voice-recognition table.

Whereas wearable tech uses sensors like accelerometers to measure physical data, the Semantic Listening Table attempts to create sensors that extract meaning from the conversation happening around it. The design is meant to maximise communication while creating a record of a meeting or conversation. As you speak, your voice is processed by the voice-recognition software in an Android tablet and transcribed as a record. (Under the hood, the table's many hardware components are connected to a Mac Mini brain by an Arduino board.)

The entire transcript of a meeting isn't very useful—it's just too much information. To glean the meaning of conversations, the table is covered in capacitive strips, which meeting participants press to mark the transcript for a short time before and after that moment. The marker indicates that somebody at the table thought that this moment in the conversation was meaningful.
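The marker behaviour is simple enough to sketch in code. The following Python is purely illustrative, not the lab's actual software: the data structure, the function names, and the size of the marked window are all assumptions made for the example.

```python
from dataclasses import dataclass

# Assumed size of the "short time before and after" a marker press, in
# seconds. The real window used by the table hasn't been published.
WINDOW_SECONDS = 15

@dataclass
class Utterance:
    timestamp: float  # seconds since the meeting started
    text: str

def marked_excerpts(transcript, marker_times, window=WINDOW_SECONDS):
    """Return the utterances that fall within `window` seconds of any
    marker press — the moments somebody flagged as meaningful."""
    return [
        u for u in transcript
        if any(abs(u.timestamp - t) <= window for t in marker_times)
    ]
```

The point of the design is visible in the sketch: the table never stores who pressed a strip, only that a moment was flagged, and everything outside the flagged windows can be ignored when reviewing the meeting.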

The table is particularly compelling because of the simplicity of the design. It's round and about the right size for a small meeting room, but it has a snazzier look than you might expect, more at home in an expensive design catalogue than in a midtown office. It's not an imposing object, but it's clear from the outset that it's different from just any other table. It's got a slick Corian top and a wooden base, plus a weird metallic grille in the middle, which looks a little like the supercharger protruding from the hood of a hot rod. When you talk, an inlaid ring of backlit red plastic flashes in time with your voice.

The transcription works amazingly well, and the marker interface doesn't require any brains at all. The hope is that you would find yourself actually looking at the people you're talking to, and maybe getting something done.

In fact, the R&D team says a big part of the design process was actually stripping features away; at one point, for example, there were concepts for projecting information onto the table. The features that did make it in have been greatly simplified. The table doesn't attempt to record who is saying what or who drops markers at particular times. Who said what and who marked when might be interesting, but it's more information than you really need.

The R&D Lab team isn't actually all that interested in improving the brains of the table to make the transcription flawless. Ultimately, the goal of the table is to produce notes not unlike what you would take if you were in the meeting yourself. You don't need to write down every single word to understand the content of what was said. All you need are little pieces of information to help jog your memory.

Another key feature is the physical On/Off switch, which activates and disables the table's listening. We live in a world where pervasive data collection is seen as the norm, and the team wanted to provide a meaningful counterpoint to that trend. The table only listens when you want it to be listening. And its transcripts are erased from the table's locally stored database on a rolling 28-day basis. Much like you wouldn't keep your notes from every meeting forever, there's no reason for the table to keep an indefinite record.
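A rolling retention window like the one described above is a common pattern, and a minimal sketch is easy to write. This is an assumption-laden illustration, not the table's actual code: the SQLite schema, table name, and column names are all invented for the example; only the 28-day figure comes from the article.

```python
import sqlite3
import time

# 28 days, matching the rolling deletion window described in the article.
RETENTION_SECONDS = 28 * 24 * 60 * 60

def purge_old_transcripts(conn, now=None):
    """Delete transcript rows older than the retention window.

    `conn` is a sqlite3 connection with a hypothetical `transcripts`
    table holding a `created_at` Unix timestamp per row.
    """
    now = time.time() if now is None else now
    cutoff = now - RETENTION_SECONDS
    conn.execute("DELETE FROM transcripts WHERE created_at < ?", (cutoff,))
    conn.commit()
```

Run periodically (say, once a day), this keeps the locally stored database from ever holding more than four weeks of conversation, much as you wouldn't keep notes from every meeting forever.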

What happens next with the table is an open question. It works so well that you could absolutely imagine a bunch of lofty startups investing in the technology to help them keep track of epic brainstorming sessions, but there are no plans to actually bring it to market. Indeed, the table is a perfect example of some of the challenges involved in creating purely conceptual technology: The table's a great listener, but it hasn't figured out how to write the news just yet.