Thursday, November 20, 2014

Friday Thinking 21 November 2014

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this. :)

I keep saying that telecom policy is blood and guts stuff — giant principles of equity, speech, and the importance of free markets run headlong into the extraordinary political powers wielded by Comcast, Verizon, Time Warner Cable, and AT&T. All too often the drama is buried in an avalanche of acronyms and incremental influence. Then came yesterday’s message from President Obama.

Here was our best Obama, telling the FCC in plain language that it should consider acting like a regulator. The message actually brought a tear to my eye. It’s the equivalent of the moving part of the war movie when the gruff but effective leader calls his troops to their better selves, reminding them why they’re there in the first place.

So although the president sounded like the law-professor-in-chief yesterday (“I believe the FCC should reclassify consumer broadband service under Title II of the Telecommunications Act”), to me it was a General Patton moment. This is a battle cry designed to give heart to his administration — and particularly the corner of the executive branch crouching in terror behind the walls of the FCC.

The president is reminding us that we are a country that does great things.

It’s a big deal; it’s like the leadership that FDR wielded when he took on the giant private electrical companies that were controlling electrification — perfectly legally, at the time, but with terrible consequences for the nation — in the 1930s. The fight over whether high speed Internet access should have a cop on the beat is our version of the battle over electricity that dominated presidential politics nearly a century ago. Left to their own devices, the electrical trusts were systematically gouging richer Americans, leaving out poor and rural markets, and extracting profits wherever possible. There is nothing malign about this behavior — it is the natural tendency of profit-seeking companies to act this way when it comes to high-fixed cost physical infrastructure — but the incentives of the companies involved are not necessarily aligned with the country’s interest in competing and flourishing on the world stage. We are recapitulating the early story of electrification when it comes to high-speed Internet access. It has to stop.

The Internet is an inter-networking agreement. That agreement — a set of rules or protocols — defines how networks will pass information from one to another. If your network doesn’t abide by those rules, your network is not part of the Internet. One of those rules is that participating networks will pass along packets of data without regard for who sent them, where they’re going, what’s in them, or what application they’re supporting.

The Internet’s rules apply whether your network uses copper wire, optical fiber, radio waves, or carrier pigeons. The Internet is logically independent of any of its instantiations.
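The neutrality rule described above — forward packets by destination alone, blind to sender, contents, or application — can be illustrated with a toy sketch. This is purely a hypothetical illustration of the principle, not real router code; all names in it are made up.

```python
# Toy illustration of the neutrality rule: a forwarder that chooses the
# next hop using ONLY the destination address. A neutral network never
# looks at packet["source"] or packet["payload"] to decide how (or
# whether) to pass a packet along.

def forward(packet, routing_table):
    """Pick the next hop by destination alone."""
    return routing_table[packet["destination"]]

routing_table = {"198.51.100.7": "link-A", "203.0.113.9": "link-B"}
pkt = {"source": "192.0.2.1",
       "destination": "203.0.113.9",
       "payload": b"could be video, email, anything"}

print(forward(pkt, routing_table))  # -> link-B
```

The point of the sketch is simply that nothing in the forwarding decision depends on who sent the packet or what it carries.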

The major ISPs typically provide some physical infrastructure — usually via rights of way on public land, granted by local governments — that connects your house or office to some other, more capacious piece of physical infrastructure. But they do not own the Internet.

The Internet cannot be owned any more than the alphabet, good grammar, or politeness can be owned. And anyone who pretends otherwise is being a bully.

So, if the Internet can’t be owned, why is Net Neutrality even an issue? Because the companies that transmit bits to and from us would like to rewrite the rules of the Internet in their own interests…interests that do not always align with ours. For example, it’s cheaper for them to throttle our access (while continuing to charge us more than much of the rest of the developed world) than increase the amount of bandwidth they provide.

One of the mysteries of our species, now approaching eight billion persons, is the very limited way in which the intelligence of this vast number of individuals can be said to add up or combine. On the one hand, every society has a body of knowledge that is passed on from generation to generation, with frequent borrowing from neighbors and gradual enrichment with new information and new understanding. Human history can even be seen as the steady process of discovering ways to store and pass on information and ideas. On the other hand, differing ideas lead to failures to cooperate and misunderstandings, complicated by the emphasis on competition and the notion that knowledge and ideas are a form of property. All these wonderful brains operate in various degrees of separation, so that neither nations nor the species as a whole has the benefit of their combined power. Do we – that is to say, all members of the human species together – have the intelligence to order our living in such a way as to preserve the viability of our planet? Surely. Do we have the capacity to integrate that intelligence, to mobilize its combined potential toward that goal, and to act on it? Not yet.

This is a long video (1 hr 43 min). Despite its length this is a must view - he’s a bit hesitant at the beginning, but he builds a very interesting and solid narrative. The dozen companies that he has researched are also worth hearing about. There is another way to organize people to get things done.

A talk, followed by Q&A, by Frederic Laloux about "Reinventing Organizations", research and a book that are turning into an international phenomenon.

Increasingly, employees and managers (but also doctors, nurses, teachers, etc.) are disillusioned with the way we run organizations today. We all somehow sense that there simply must be better ways to run our businesses, nonprofits, schools and hospitals.

This hopeful talk shares the key insights from groundbreaking research into the emergence, in different parts of the world, of truly powerful and soulful organizations that have made a radical leap beyond today's management thinking.

(It starts with the story of how organizations evolved over time. You can skip to minute 28 to hear the story of Buurtzorg, one of the extraordinary pioneering organizations, which revolutionized home care in the Netherlands. At minute 36, the talk goes into self-management. At minute 52, into wholeness at work. And at 1:06, into evolutionary purpose in organizations. The Q&A starts at 1:21.)

For anyone who enjoyed Kahneman’s “Thinking Fast and Slow” - this is a simple, clear 20 min video presentation of the fast & slow thinking. This is also vital to anyone interested in Artificial Intelligence. Well worth the view.

Monica Anderson is CTO and co-founder of Sensai Corporation, founder of Syntience Inc., and originator of a theory for learning called "Artificial Intuition" that may allow us to create computer based systems that can understand the meaning of language in the form of text.

Here she discusses Dual Process Theory, The Frame Problem, and some consequences of these for AI research.

Dual Process Theory is the idea that the human mind has two disparate modes of thinking - Subconscious Intuitive Understanding on one hand and Conscious Logical Reasoning on the other.

The Frame Problem is the idea that we cannot make comprehensive Models of the World because the world changes behind our backs and any Model we make is immediately obsolete.

The conclusion is that AI research since the 1950s has been solving the wrong problem. She also introduces Model Free Methods as an alternative path to AI, capable of sidestepping the Frame Problem.

Here is a wonderful paper (9 pages) by the same author. It goes a bit deeper into the issues of the presentation above, is also relevant to science and Knowledge Management, and brilliantly models great science writing. This is really a must read.

The goal of any science and engineering education is to give the student the ability to “perform Reduction”. Some of you may not be familiar with this term, but you have all done it. It is the most commonly used process in science and engineering and we tacitly assume we will use it at every opportunity. Therefore there has been little need to discuss Reduction as a topic outside of epistemology and philosophy of science.

In what follows, I will be making the claim that for the limited purpose of creating an Artificial General Intelligence (AGI) we must avoid this common kind of Reduction. This article (second in a series) will discuss what Reduction is and why it is useless in the domains where AGI is expected to operate. The third article will discuss why it is also unnecessary. The fourth article will discuss available alternatives. As a bonus, we will come to Understand what it means to Understand something.

Here’s a Canadian futurist who deserves great respect - and is always a must read - you can also download the article as a pdf.

Being without existing: the futures community at a turning point? A comment on Jay Ogilvy's "Facing the fold"

The lack of a contemporary philosophy of the future has contributed to fragmenting the global community of futurists, many of whom go off in very different philosophical and professional directions (Millet and Staley, 2010).

Futures studies is haunted by an unresolved problem – how to deal with the unknowable and novelty-rich future. For a long time now most futurists have accepted that prediction and probability are limited ways of thinking about the future. But knowing what does not work is not the same as knowing what does. In ‘‘Facing the fold’’ Ogilvy (2011) provides a concise and powerful response – the ‘‘scenaric stance’’. His solution, as I will argue in this brief commentary, is based on reformulating the problem – the challenge is not that we must find ways to ‘‘know’’ the future, rather we need to find ways to live and act with not-knowing the future.

For Ogilvy the ‘‘scenaric stance’’ is a ‘‘state of mind’’ that offers an ‘‘acute sense of freedom’’ by holding ‘‘both the both/and and the either/or’’ points-of-view continuously. By taking the ‘‘scenaric stance’’ we reframe our intent and volition (Ogilvy, 2010) to ‘‘see both threats and opportunities shining forth in rich and vivid scenarios’’.

By taking this posture to the future we ‘‘face the fold’’ – embracing liberation and responsibility, abandoning the false choices between pessimism and optimism, hope and fear, we grasp indeterminacy without eschewing the closure of the now. To me the ‘‘scenaric stance’’ achieves something that so far has largely escaped the futures studies community – the combination of a focus on the ‘‘capacity to be free’’ (Sen, 1999) and a decisive break with the ‘‘probabilistic stance’’ on the basis of an ontological rather than epistemological point of departure. Furthermore, although Ogilvy does not explicitly situate his argument in the type of anticipatory systems perspective pioneered by Robert Rosen and recently developed in a Special Issue of Foresight (Miller and Poli, 2010; FuMee 3, n.d.; Butz et al., 2003), I believe the ‘‘scenaric stance’’ requires such a point-of-view, as I will explain shortly.

Since we’re speaking about the future - here’s an interesting 150 page pdf by Ray Kurzweil. This is long - but he does have a section of graphs at the end of the piece that is a must view. What he does is re-examine the predictions he made in his first two books for 2009 and 2010. He gives himself a B.

In this essay I review the accuracy of my predictions going back a quarter of a century. Included herein is a discussion of my predictions from The Age of Intelligent Machines (which I wrote in the 1980s), all 147 predictions for 2009 in The Age of Spiritual Machines (which I wrote in the 1990s), plus others. Perhaps my most important predictions are implicit in my exponential graphs.

These trajectories have indeed continued on course and I discuss these updated graphs below. My core thesis, which I call the “law of accelerating returns,” is that fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that “you can’t predict the future.” There are still many things — which project, company or technical standard will prevail in the marketplace, or when peace will come to the Middle East — that remain unpredictable, but the underlying price/performance and capacity of information is nonetheless remarkably predictable. Surprisingly, these trends are unperturbed by conditions such as war or peace and prosperity or recession.

Here is a very short (7 min) video on the future of education - a must view.

This video is about the current transformation happening in higher education, and an invitation to join in prototyping a new type of university education for the 21st century. More information here: https://www.edx.org/course/mitx/mitx

Universities must evolve if they are to survive. A special issue of Nature examines the many ways to build a modern campus.

When the first universities emerged in eleventh-century Europe, their mission was education, scholarship and nothing else. They housed bright young clerics, studying the newly rediscovered works of ancient thinkers such as Aristotle and Euclid. Only in the nineteenth century, following the lead of Britain and Germany, did universities begin to give equal weight to a second mission: scientific research.

But in the past few decades, universities around the world have begun to take on further missions. Today they are supposed to be not only centres of education and discovery, but also engines of economic growth, beacons of social justice and laboratories for new modes of learning.

In the face of these sometimes conflicting requirements — not to mention financial pressure from cash-strapped governments — today's universities are evolving and changing at an unprecedented pace. In this special issue, Nature looks at some of the myriad ways in which universities around the world are trying to free themselves from old habits of thought, and to explore new ways of doing things.

In this series there is a good article on efforts to ‘flip the classroom’.

Innovative ways of teaching, learning and doing research are helping universities around the globe to adapt to the modern world.

Speaking about learning - this is a very interesting article by Mary Catherine Bateson, the daughter of Gregory Bateson and Margaret Mead. The article can be downloaded for free from the inaugural issue of a new journal, “Human Computation - A Transdisciplinary Journal”. I think there is someone we all know who has been going on and on about ‘Social Computing’ as the message of the Digital Environment. :) This may just be the future of Knowledge Management. This is an important concept that has to be integrated with our understanding of Artificial Intelligence - which fundamentally enables the emergence of an augmented intelligence for social computing.

As we build systems collecting and aggregating human contributions, we are well-advised to preserve and recognize the impact of individual voices and actions.

The field of human computation, then, has two faces. On the one hand, there is the aggregation of the effort of many different persons doing the same task or making similar inputs from different places, perhaps in-putting data about observations of threatened species or meteorological phenomena. On the other hand, there is the potential for the integration of multiple different kinds of input coming from diverse individuals to produce new and creative possibilities. This is ideally done in conversation where the participants are stimulated by their diverse points of view, aiming to discover new alternatives or to arrive at a consensus, to become “of one mind.” When the numbers involved make conversation awkward, the integration process can be assisted by technology.

Thus, there have been in recent decades a wide variety of proposed methods for facilitating productive conversation that may then be collated electronically. Even when the inputs are similar in kind, there is the possibility that the aggregation of multiple responses can be an important step toward solving a fundamental ethical problem in human society, namely the increasingly widespread conviction that “nothing I can do will make any difference.” Kant’s Categorical Imperative was an attempt to solve the problem by eliminating the question of scale and proposing that an action be evaluated as if it were universal, but this has not proved particularly effective in ever larger populations. The problem of taking responsibility for individual and local actions is most severe at the global level. Thus, for instance, individuals have difficulty believing that leaving an extra electric light burning in their suburban backyard is connected to the likelihood of lethal storms thousands of miles away. Exactly the same kind of reasoning discourages voters from going to the polls for local elections. How will people learn that what they do “counts”? By counting. Similarly, the endless series of petitions posted on the Internet and the more and more frequent demands to “rate our service,” are intended to give people the sense of contributing to common goals. The Vatican recently invited bishops to poll the faithful, and many responded and hope that their opinions will be heard and integrated in decision-making – that their words would really count.

Its achievements are undeniable. Having hosted what some historians call the greatest creation of wealth in human history, the San Francisco Bay Area had the fastest growth rate in the United States in 2012, the highest per-capita gross domestic product, one of the highest average IQs, and has been called one of the country’s greenest cities. If cities were people, then San Francisco would certainly be called a genius. But are we willing to extend that term to a city, or should we insist that genius is contained within the confines of the human head?

To understand this question, let’s start inside the head. For the most part, there’s no single process in charge. Most parts of our brain work free from any conscious control, and intelligence is an emergent property of neuron behavior: A brain is intelligent, even though the individual neurons that make it up are not. At a higher level, human minds have different functions that are sometimes in competition with each other. One part of the mind might desire cupcakes, but another part of the mind knows that eating them might make us grumpy. One part of our mind knows we’re looking at an optical illusion, but another is still fooled by it. The evolutionarily newer parts of our brain know it’s “just a movie,” but we get scared nonetheless.

These conflicts can affect even our highest faculties. According to neuroscientist Joshua Greene, moral judgments are made according to two separate processes in the brain, what he calls “personal” and “impersonal.” Suppose a train is about to kill five people on a track, and you are asked if it is morally justified to pull a switch that will divert it onto a different track where it would kill only one person. Most people say that pulling the switch in this “impersonal” version of the dilemma is morally justified.

Here’s a great 47 min video from the Santa Fe Institute. Provides some great insight on the increasing returns of larger urban regions.

Speaking of getting smarter - well, maybe just having a better memory can make it seem like we’re smarter. Enhancing memory may enable people to make better connections - to become wiser and smarter?

The science behind total recall: New player in brain function and memory

Is it possible to change the amount of information the brain can store? Maybe, according to a new international study. The research has identified a molecule that puts a brake on brain processing and when removed, brain function and memory recall is improved.

Like the water we drink or the air we breathe, the information we consume feeds the very essence of what it means to be human. Lantern establishes a new baseline of human knowledge. We are not fixing the world for people, we are giving them the information they need to fix it themselves.

Lantern continuously receives radio waves broadcast by Outernet from space. Lantern turns the signal into digital files, like webpages, news articles, ebooks, videos, and music. Lantern can receive and store any type of digital file on its internal drive. To view the content stored in Lantern, turn on the Wi-Fi hotspot and connect to Lantern with any Wi-Fi enabled device. All you need is a browser.

Researchers at the Cockrell School of Engineering at The University of Texas at Austin have achieved a milestone in modern wireless and cellular telecommunications, creating a radically smaller, more efficient radio wave circulator that could be used in cellphones and other wireless devices, as reported in the latest issue of Nature Physics.

The new circulator has the potential to double the useful bandwidth in wireless communications by enabling full-duplex functionality, meaning devices can transmit and receive signals on the same frequency band at the same time.

The key innovation is the creation of a magnetic-free radio wave circulator.

Since the advent of wireless technology 60 years ago, magnetic-based circulators have been in principle able to provide two-way communications on the same frequency channel, but they are not widely adopted because of the large size, weight and cost associated with using magnets and magnetic materials.

Freed from a reliance on magnetic effects, the new circulator has a much smaller footprint while also using less expensive and more common materials. These cost and size efficiencies could lead to the integration of circulators within cellphones and other microelectronic systems, resulting in substantially faster downloads, fewer dropped calls and significantly clearer communications.

The team of researchers, led by Associate Professor Andrea Alu, has developed a prototype circulator that is 2 centimeters in size — more than 75 times smaller than the wavelength of operation. The circulator may be further scaled down to as small as a few microns, according to the researchers. The design is based on materials widely used in integrated circuits such as gold, copper and silicon, making it easier to integrate in the circuit boards of modern communication devices.
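The size claim above can be sanity-checked with a little arithmetic: if the 2 cm prototype is more than 75 times smaller than its operating wavelength, the wavelength is at least 1.5 m, which puts the operating frequency at roughly 200 MHz or below. This is my own back-of-the-envelope check, not a figure from the paper.

```python
# Back-of-the-envelope check: device is 2 cm and "more than 75 times
# smaller than the wavelength of operation", so wavelength >= 1.5 m.

C = 299_792_458                      # speed of light in vacuum, m/s
device_size_m = 0.02                 # 2 cm prototype
wavelength_m = device_size_m * 75    # lower bound implied by "75x smaller"
freq_hz = C / wavelength_m           # upper bound on operating frequency

print(f"wavelength >= {wavelength_m} m, frequency <= {freq_hz / 1e6:.0f} MHz")
```

This is consistent with a sub-wavelength radio-frequency component rather than an optical one.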

To complement the quote of Susan Crawford at the beginning - here’s another great piece by David Weinberger - a must read, for anyone interested in the future of the Internet.

“…data hogs like Netflix might need to bear some of the cost of handling heavy traffic.” — ABCnews

That’s like saying your water utility is a water hog because you take long showers and over-water your lawn.

Streaming a high-def movie does take a whole bunch of bits. But if you hadn’t gone ahead and clicked on Taken 2 [SPOILER: she’s taken again], Netflix would not have sent those bits over the Internet. So Netflix isn’t a data hog. You are.

You’re a data hog.

No, you’re not.

Some people use the Internet ten minutes a day to check their email. Some people leave their computers on 24/7 to download entire video libraries. None of them are data hogs.

How can I say this so unequivocally? Because nobody gets a drop more data than what they pay for. The ISPs make damn sure of that. If you pay for, say, a 10 megabit per second connection, you are not getting any more than 10 megabits of data per second even if you have Bittorrent set to “Stun” all day every day.
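The arithmetic behind that claim is worth spelling out: a rate cap is also a hard ceiling on total volume. A quick calculation (my own illustration, not from Weinberger's piece) shows the absolute maximum a 10 Mbit/s connection can ever deliver in a day:

```python
# A 10 megabit/s cap is a hard ceiling: even running flat out 24/7,
# the subscriber can never receive more than the plan's rated volume.

rate_bits_per_s = 10_000_000       # 10 Mbit/s plan
seconds_per_day = 24 * 60 * 60     # 86,400 s

max_bytes_per_day = rate_bits_per_s / 8 * seconds_per_day
print(f"{max_bytes_per_day / 1e9:.0f} GB/day maximum")  # -> 108 GB/day maximum
```

However you spend it — email or a Bittorrent binge — the total can never exceed what the plan's rate allows.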

The ISPs may not like that you are using all of the Internet you are paying them for. Well, boohoo.

MIT engineers have devised a way to rapidly test hundreds of different drug-delivery vehicles in living animals, making it easier to discover promising new ways to deliver a class of drugs called biologics, which includes antibodies, peptides, RNA, and DNA, to human patients.

In a study appearing in the journal Integrative Biology, the researchers used this technology to identify materials that can efficiently deliver RNA to zebrafish and also to rodents. This type of high-speed screen could help overcome one of the major bottlenecks in developing disease treatments based on biologics: It is challenging to find safe and effective ways to deliver them.

“Biologics is the fastest growing field in biotech, because it gives you the ability to do highly predictive designs with unique targeting capabilities,” says senior author Mehmet Fatih Yanik, an associate professor of electrical engineering and computer science and biological engineering. “However, delivery of biologics to diseased tissues is challenging, because they are significantly larger and more complex than conventional drugs.”

“By combining this work with our previously published high-throughput screening system, we are able to create a drug-discovery pipeline with efficiency we had never imagined before,” adds Tsung-Yao Chang, a recent MIT PhD recipient and one of the paper’s lead authors.

Your genome is the same right now as it was yesterday, last week, last year, or the day you were born. But your microbiomes—the combined genes of all the trillions of microbes that share your body—have shifted since the sun came up this morning. And they will change again before the next sunrise.

Christoph Thaiss from the Weizmann Institute of Science has discovered that the communities of microbes in our guts vary on a daily cycle. Some species rise to the fore during daylight hours and recede into the background at night, while others show the opposite pattern.

These cycles are a lot like our own body clocks, or circadian rhythms. Over a 24 hour period, the levels of many molecules in our body rise and fall in predictable fashion. These rhythms affect everything from our body temperature to our brain activity to how well we respond to medicine. But these clocks tick by themselves. You can reset them by exposing yourself to light at different times of day (which is what we do when we cross time zones and get jetlag), but they are still self-sustaining.

Our microbiome clock is not. The microbes aren’t waxing and waning of their own accord. Their world is completely dark. There’s no way for them to tell what time of the day it is, except for clues provided by us. The most important of these clues is food. Thanks to our own rhythms, we eat at regular times of the day, and it’s these feeding patterns that drive the cycles in our microbiome. Diet is the gear that synchronises the ticks of our clocks with those of our microbes.

By combining efforts and innovations, Wyss Institute scientists develop synthetic gene controls for programmable diagnostics and biosensors, delivered out of the lab on pocket-sized slips of paper

New achievements in synthetic biology announced today by researchers at the Wyss Institute for Biologically Inspired Engineering, which will allow complex cellular recognition reactions to proceed outside of living cells, will dare scientists to dream big: there could one day be inexpensive, shippable and accurate test kits that use saliva or a drop of blood to identify specific disease or infection — a feat that could be accomplished anywhere in the world, within minutes and without laboratory support, just by using a pocket–sized paper diagnostic tool.

That once far–fetched idea seems within closer reach as a result of two new studies describing the advances, published today in Cell, accomplished through extensive cross–team collaboration between two teams at the Wyss Institute headed by Wyss Core Faculty Members James Collins, Ph.D., and Peng Yin, Ph.D.

"In the last fifteen years, there have been exciting advances in synthetic biology," said Collins, who is also Professor of Biomedical Engineering and Medicine at Boston University, and Co–Director and Co–Founder of the Center of Synthetic Biology. "But until now, researchers have been limited in their progress due to the complexity of biological systems and the challenges faced when trying to re–purpose them. Synthetic biology has been confined to the laboratory, operating within living cells or in liquid–solution test tubes."

The conventional process can be thought of through an analogy to computer programming. Synthetic gene networks are built to carry out functions, similar to software applications, within a living cell or in a liquid solution, which is considered the "operating system".

"What we have been able to do is to create an in vitro, sterile, abiotic operating system upon which we can rationally design synthetic, biological mechanisms to carry out specific functions," said Collins, senior author of the first study, "Paper–Based Synthetic Gene Networks".

Speaking of synthetic biology and DIY approaches - here’s an interesting summit - for next month.

‘BIOFABRICATE’ is the world’s first summit dedicated to biofabrication for future industrial and consumer products. Biofabrication comprises highly disruptive technologies enabling design and manufacturing to intersect with the building blocks of life. Computers can now read and write with DNA. This is a world where bacteria, yeast, fungi, algae and mammalian cells grow and shape sustainable new materials.

There’s a bio-revolution on the horizon! We are beginning to design and fabricate with living cells: these are the factories of the future. The new start-ups are growing packaging, furniture, leather, flavours and fragrances, meat, bricks and bones. Hacker spaces are being joined by DIYBio labs. It’s time to discover how the C21st convergence of biology, computation and design will change what we wear, how we build, how we live, even how we will harvest our own bodies for our future healthcare.

Talking about mobile technology and ubiquitous sensors - this may be fun for anyone interested in using their smartphone as a physics-science platform.

Soon, the growing capability of your smartphone could be harnessed to detect cosmic rays in much the same way as high-end, multimillion-dollar observatories.

With a simple app addition, Android phones, and likely other smartphone brands in the not-too-distant future, can be turned into detectors to capture the light particles created when cosmic rays crash into Earth’s atmosphere.

“The apps basically transform the phone into a high-energy particle detector,” explains Justin Vandenbroucke, a University of Wisconsin-Madison assistant professor of physics and a researcher at the Wisconsin IceCube Particle Astrophysics Center (WIPAC). “It uses the same principles as these very large experiments.”

Cosmic rays are energetic subatomic particles created, scientists think, in cosmic accelerators like black holes and exploding stars. When the particles crash into the Earth’s atmosphere, they create showers of secondary particles called muons.

Smartphone cameras use silicon chips that work through what is called the photoelectric effect, in which particles of light, or photons, hit a silicon surface and release an electric charge. The same is true for muons. When a muon strikes the semiconductor that underpins a smartphone camera, it liberates an electric charge and creates a signature in pixels that can be logged, stored and analyzed.
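The detection idea described above is simple enough to sketch: with the camera lens covered, nearly every pixel reads near zero, so a particle hit stands out as a small cluster of unusually bright pixels. The sketch below is a hypothetical illustration of that thresholding step, not the actual DECO app's pipeline.

```python
# Minimal sketch of hit detection in a dark camera frame: with the lens
# covered, background pixels read near zero, so any pixel above a
# threshold is a candidate particle hit. Threshold value is illustrative.

def find_hits(frame, threshold=40):
    """Return (row, col) coordinates of candidate hit pixels."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, value in enumerate(row)
            if value >= threshold]

dark_frame = [[0,  2,  1, 0],
              [1, 90, 75, 2],   # bright cluster: a candidate muon track
              [0,  3,  1, 1]]

print(find_hits(dark_frame))  # -> [(1, 1), (1, 2)]
```

A real pipeline would also reject hot pixels and sensor noise, but the core signature — a localized bright cluster in an otherwise dark frame — is what the app logs and analyzes.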

The information about the apps, Distributed Electronic Cosmic-Ray Observatory (DECO), and where the apps can be downloaded is here:

Lower-cost 3D printers for the consumer market offer only a limited selection of plastic materials, while industrial additive manufacturing (AM) machines can print parts made of high-performance metals. The application of a novel process called Selective Inhibition Sintering (SIS) in a consumer-priced metal AM machine is described in an article in 3D Printing and Additive Manufacturing, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The article is available free on the 3D Printing and Additive Manufacturing website until December 6, 2014.

Payman Torabi, Matthew Petros, and Behrokh Khoshnevis, University of Southern California, Los Angeles, explain this innovative process, present sample parts printed using the technology, and discuss the next steps in research and development in the article "SIS -- The Process for Consumer Metal Additive Manufacturing." The SIS process differs from traditional research in powder sintering, which focuses on enhancing sintering (a process of fusing materials using heat and pressure); instead, SIS prevents sintering in selected regions of each powder layer.

"This technology uses a fundamentally new approach to 3D printing, one that could expand the reach of metal printing," says Editor-in-Chief Hod Lipson, PhD, Professor at Cornell University's Sibley School of Mechanical and Aerospace Engineering, Ithaca, NY.

3D printers that cost less than $5,000 are classified as Consumer printers.

I remember in 2003-4 being really excited about the hydrogen fuel cell car - and then waiting - but here’s something.

Akio Toyoda has seen the future, and it’s called “Mirai”. That’s the name of Toyota’s new fuel cell vehicle, which the company’s president announced in a video released the day before the car’s official launch.