Category Archives: SXSW

“A space station is a rangy monstrosity, a giant erector set built by a madman.”–Mary Roach

Oh, how jaded we’ve become. Remember Skylab? When it became the first orbiting space station to crash back to Earth, way back in 1979, it provoked a wide range of bizarre cultural outcroppings, from Skylab crash parties to insurance against it landing on your head. This time? Not so much. If the cable news channels can’t politicize it, they won’t give it much mention.

While you’re reading about all this week’s future-related news, don’t forget that you can subscribe to Seeking Delphi™ podcasts on iTunes, PlayerFM, or YouTube (audio with slide show), and you can also follow us on Twitter and Facebook.

“The only true disability is a crushed spirit.”–Aimee Mullins

In this final installment from the first Seeking Delphi™ visit to SXSW, we hear from two of the most remarkable individuals I have ever had the pleasure of meeting.

The session, entitled Extreme Bionics: The Future of Human Ability, delved 100 years into the past, covering the history of prosthetic devices from the crude, low-tech devices built for World War I amputees through to the increasingly high-tech devices of today. It also looked to a future that might bridge the final gap to neurological embodiment of artificial limbs, and to technologies that will enhance natural biological human abilities alongside prosthetic devices.

Aimee Mullins was born without shin bones and lost both of her legs below the knee at the age of one. She has hardly let that stop her–she was a paralympian and is a model and actress. Most notably, she had a recurring role in season two of the hit Netflix series, Stranger Things.

Hugh Herr lost both of his legs below the knee at age 18 to frostbite suffered in a mountain climbing mishap. He is an associate professor and head of the biomechatronics group at MIT’s Media Lab.

In keeping with the future theme of Seeking Delphi™, I asked both of them to imagine the future of these technologies. This panel was part of the IEEE Tech for Humanity series at SXSW 2018. Acknowledgements to them, and to Interprose, for arranging these interviews.

“It’s not going to do any good to land on Mars if we’re stupid.”–Ray Bradbury

“You cannot be serious.”–John McEnroe

Is Vladimir Putin serious? He’s really going to put Russians on the moon by next year? Live Russians? Human Russians? Russian manikins, maybe. Or how about those nested Russian dolls? I have my hunches about his obvious hyperbole. Like maybe he’s goading a certain Western leader I won’t name to take it seriously and go broke trying to compete with him. All the while what he’s really doing is focusing his resources on hacking democracy and wreaking havoc.


–Next Big Future reports on the progress–and relative merits–of Ad Astra’s VX-200SS™ VASIMR® prototype space propulsion engine. Recent test firings have brought them one step closer to enabling Earth-to-Mars transit in as little as 4 to 6 weeks. SpaceX, with its BFR, aims to make the transit at similar speeds.
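As a back-of-envelope check on that 4-to-6-week claim, consider the simplest constant-thrust trajectory: accelerate for the first half of the trip, flip, and decelerate for the second half. The sketch below computes the constant acceleration such a trajectory would need; the distance figure (roughly the minimum Earth-Mars separation) and the 39-day transit time are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: constant acceleration needed for an accelerate-halfway,
# flip, decelerate-halfway trajectory to Mars. Numbers are assumptions.

distance_m = 7.8e10       # ~0.52 AU, roughly the minimum Earth-Mars separation
transit_s = 39 * 86400    # ~39 days, the short end of the 4-6 week claim

# Each half covers d/2 = (1/2) * a * (t/2)^2, so a = 4d / t^2
accel = 4 * distance_m / transit_s**2
print(f"Required constant acceleration: ~{accel * 1000:.1f} mm/s^2")
```

The answer comes out to a few hundredths of a meter per second squared, i.e. milli-g thrust sustained for weeks, which is exactly the regime a high-efficiency electric engine like VASIMR targets rather than the short, high-thrust burns of chemical rockets.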

“No problem can be solved from the same level of consciousness that created it.”–Albert Einstein

For anyone who has watched the HBO series Westworld, the questions about creating machine consciousness run much deeper than “can we.” These include: should we? How will we treat it? How will it feel about its station as artificial life? Will we be able to control it, and is that ethical? And most profoundly, how will that change what it means to be human? The questions go beyond ethical to existential, and they were all addressed in the SXSW Intelligent Future track in a panel titled Can We Create Consciousness In A Machine? Not surprisingly, there were two techno-philosophers on the panel to explore these issues: David Chalmers, with NYU’s Center for Mind, Brain, and Consciousness, and Susan Schneider, with the Department of Cognitive Sciences at the University of Connecticut.

In this Seeking Delphi™ minicast, I speak with both of them about some of these issues. The third panelist mentioned in the podcast is Allen Institute neuroscientist Christof Koch.

The experts on the panel agreed: classical digital computers can’t create consciousness. Neural networks? Neuromorphic chips? And what about quantum computing? My interview with whurley on quantum computing, immediately following his SXSW keynote on the subject, is below.

SXSW minicast #3: whurley on quantum computing

In case you missed it, the YouTube slide show link for SXSW 2018 minicast #1, covering sessions on quantum computing and self-driving car safety, is below.

As an introduction to the podcast, some of this material is reprinted from a post earlier today. Scroll down for the audio file or links to access it on iTunes or PlayerFM.

“The promise of autonomous vehicles is great.”–Dan Lipinski

“My opinion is that it’s a bridge too far to go to fully autonomous vehicles.”–Elon Musk

Wait–what? The man who thinks he can send humans on a one-way trip to colonize Mars within 10 years thinks fully autonomous vehicles are out of our reach? The Elon Musk quote above is from 2013. I would be surprised if he still feels that way–but who knows?

Segue to this morning, at the Intelligent Future interactive track at SXSW 2018 in Austin, TX. Nobody on the panel entitled “Who takes the wheel on self-driving car safety” suggested we won’t get there. But there was plenty of caution on how, how fast, and how far we go in doing so.

Most notable were comments by Andrew Reimer of MIT. He foresaw a gap of 50-100 years before fully autonomous cars–no human intervention–take over the lion’s share of driving, globally. His issues were not just technical; they included trust, complexity, infrastructure, and good old-fashioned habit. He was confident that manual driving would never completely go away, citing the example of high-end sports car owners wanting the enjoyment of driving.

“It might just be hobbyists,” he said, but made it clear that in some shape or form, the human factor is likely to survive for a very long time.

Quantum Computing

A session on “Quantum Computing: Science Fiction to Science Fact” was somewhat misnamed. While the history of its theoretical origins was recounted by D-Wave’s Bo Ewald, the session really focused on the current trends and developments leading toward a 10-year or so future horizon.

Bo Ewald talks about meeting Richard Feynman

Ewald recounted how iconic physicist Richard Feynman first imagined quantum computing in 1981, published the first paper on it in 1982, and gave a talk on it at Los Alamos in 1983. Ewald was head of computing at Los Alamos in 1983 and met Feynman at that talk. Sheldon Cooper, eat your heart out.

Humanizing Autonomy

A session on autonomous systems covered much of the same ground that was addressed in Seeking Delphi podcasts with Richard Yonck (#12) and John C. Havens (#17) last year. But one of the presenters, Liesl Yearsley of Akin, had an interesting means of illustrating how the material will affect us.


As for the issue of safety, Cathy Chase of Advocates for Highway and Auto Safety cited three critical areas of consideration for making self-driving cars safe. The first is a morass of no fewer than 400 different laws that could be enacted–now–to make all driving safer. As an example, she mentioned automatic emergency braking. Today it’s mostly found only as a feature in semi-autonomous luxury vehicles; to make it as standard as seat belts would require federal regulation. Second is the need for a shift in public attitudes; there needs to be reassurance. A majority of the public–at least in the US–does not yet trust self-driving cars.** The third is to avoid issue amnesia. In a rush to mainstream autonomous driving, Congress could pass enabling laws prematurely, before all technical and regulatory issues are resolved.

**My two cents on the issue of trust. To better understand why there is mistrust, consider the cognitive bias that Nobel economics laureate Daniel Kahneman calls “what you see is all there is,” or WYSIATI. When one Tesla on auto-pilot is involved in a fatal crash, it makes major national headlines. It’s right in front of us. Yet over 100 people in the US die every day in auto accidents caused by human error. Unless a celebrity is involved, none of them make news beyond their local area. Nobody pays attention unless they are directly affected. Statistically, at some point, self-driving vehicles are likely to be far safer than human-driven ones. But as long as the autonomous accidents make the big news, the public may not perceive them as safe.
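To put a rough number on that statistical argument, the sketch below converts a daily fatality count into a per-mile rate. The 100-deaths-per-day figure comes from the paragraph above; the annual US vehicle-mileage figure is an assumption added here for scale, so treat the result as an order-of-magnitude illustration rather than an official statistic.

```python
# Illustrative rate calculation: how rare is a human-driving fatality per mile?
# Daily deaths figure is from the post; total mileage is an assumed round number.

human_deaths_per_day = 100        # from the post: over 100 US deaths/day
annual_vehicle_miles = 3.2e12     # assumed: ~3.2 trillion US vehicle-miles/year

deaths_per_year = human_deaths_per_day * 365
rate_per_100m_miles = deaths_per_year / (annual_vehicle_miles / 1e8)
print(f"Roughly {rate_per_100m_miles:.2f} fatalities per 100 million miles driven")
```

A rate on the order of one death per hundred million miles is exactly the kind of diffuse, invisible risk WYSIATI describes: no single crash makes national news, so the aggregate toll never registers the way one headline-grabbing autonomous-vehicle fatality does.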


Ewald repeated this story for me in a brief interview which should be available as part of a Seeking Delphi™ minicast later this evening. I also asked him about the notion that we really don’t know for sure everything that quantum computing will be able to do. He agreed.

“For the past ten years, most of the discussion has been about quantum cryptography,” he said. “This has nothing to do with what Feynman was talking about. He was interested in modeling nature.” He cited materials science and systems optimization as areas of great promise for the future.