I applaud your efforts, but if Venusians, Martians, Saturnians and other Lyran civilizations can't speed up Mars' habitation rate, what makes you think anyone living on Earth can?

Well, to answer your question as seriously as I can: I believe we can do it because it is in our collective nature to venture beyond what is considered impossible. I can't sit back and believe that we can't do it, when I know for a FACT that we have and can do it. Besides, Volitzer, this isn't just about going to Mars. It's about solving the problems of prolonged durations in space and inhabiting the heavens above.

Our only real hurdle is THE SYSTEM and the dynamics involved with it. If people did what they knew could be done and money were not at the heart of the problem, we would have done it already, and then some.... And that is a fact too. Which means that I have to agree with Rachel for the moment about our current standstill on progressing toward the goals presented here in this thread. It's never been a matter of us not having the know-how. It's merely a problem involving those who control the resources at this time.

4-D Printing: The Solution to a $350 Billion Problem?

By Josh Grasmick, The Daily Reckoning Aug 26, 2013, 11:16 AM

You’ve heard about 3-D printing technology. It’s poised to revolutionize the future of manufacturing globally. We call it the “Click, Print Anything Revolution.”

But do you know about 4-D printing?

4-D printing tech is what happens when 3-D printing gets “smart.” Like a seed that follows the inner instructions of its DNA, 3-D printed materials can be programmed to self-assemble.

These new materials can shift shapes in response to outside forces, such as contact with water, air, gravity, magnets and/or temperature change. “The idea behind 4-D printing,” says director of MIT’s Self-Assembly Lab Skylar Tibbits, “is that you take multimaterial 3-D printing… and you add a new capability, which is transformation.”

Tibbits is collaborating with Stratasys (NASDAQ: SSYS) to further his 4-D printing project at the Self-Assembly Lab. They believe the tech is powerful enough to disrupt “biology, material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts and even space exploration.”

Harsh environments like outer space would, indeed, be made more accommodating. But one sector with more immediate, more practical application is underground…

It’s time to rescue the most vital resource on the planet: water.

We take water for granted all the time. About 60% of your body is made up of it.

You can survive three weeks without food. But without water? Try three days. We don’t recommend it…

Throughout history, the great civilizations understood its value: Egypt, with its pyramids by the Nile, and Rome, with its monumental aqueducts.

But here in the U.S., our modern-day empire is in serious trouble. Most of the big water systems were built within a decade after WWII: 30% of water pipes are 40-80 years old, and 10% are older still.

That’s why if you listen closely, dear reader, you may be able to hear it…

Water mains breaking around the country every two minutes — 700 a day, on average.

A few months ago around our Baltimore office, the city’s main street was flowing like a river. Still, that was nothing compared with what happened on the Potomac. A pipe erupted so fiercely helicopters had to be called in to rescue people before they drowned.

It’s the same everywhere else. In Philadelphia, cars and homes have been flooded. On the West Coast, Los Angeles’ famous Ventura Boulevard has been swamped.

When something like that happens, local officials tend to ask you to stop watering your lawn and washing your car. Cut back on using toilets, they recommend, along with dishwashers and washing machines. The fire departments need all the water they can get in case chaos breaks out.

But it becomes more than an inconvenience when it gets really bad. Even worse than property loss, bacteria and viruses can enter the greater water supply through broken pipes. The 2008 salmonella outbreak that sickened over 250 people in Alamosa, Colo., is a small example.

In fact, the nation’s drinking water system is so troubled the American Society of Civil Engineers gave it a grade of D-plus in its 2013 Report Card for America’s Infrastructure.

“You can’t have jobs, you can’t have businesses, homes, you can’t have hotels if this infrastructure isn’t in place,” says Eric Goldstein of the Natural Resources Defense Council.

And guess where action needs to be taken most?

I’ll give you a clue: It also has among the highest crime rates — official and unofficial.

The average pipe in Washington, D.C., is 77 years old. In the wake of the Great Recession, funds dried up to fix the water problem. Some $10 billion was allocated from the stimulus package. But according to CNN, the funds needed over the next 20 years total $334.8 billion. The longer we wait, the worse it gets.

So much for the government taking care of the public’s single most basic service: drinking water… Fortunately, our friend at MIT, Mr. Tibbits, has shown the potential of 4-D printing as a solution.

Tibbits is working closely with a Boston company called Geosyntec to develop a new paradigm in water infrastructure. Rather than use fixed-capacity water pipes, they’re experimenting with nanoscale adaptive materials built from the environment.

Personally, I believe the best technology is based on the work nature has already spent billions of years refining. Adaptive 4-D-printed pipes for our water infrastructure remind me a lot of how human veins expand and contract to accommodate changing blood flow.

“Imagine if water pipes could expand or contract to change capacity or change flow rate,” Tibbits said in a recent TED talk. “Or maybe [they] undulate like peristaltics to move the water themselves,” he said.

[Image: The Next Generation of Water Infrastructure Could Be 4-D Printed]

“This isn’t expensive pumps or valves,” he continues. “This is a completely programmable and adaptive pipe on its own.”

In a brief demonstration, Tibbits showed how a strand of 4-D-printed material folded itself into the letters M-I-T when placed in water.

Scientists at the University of Darmstadt in Germany have stopped light for one minute. For one whole minute, light, the fastest thing in the known universe, which travels at 300 million meters per second, was stopped dead still inside a crystal. This effectively creates light memory, where the image carried by the light is stored in crystals. Beyond being utterly cool, this breakthrough could lead to the creation of long-range quantum networks — and perhaps, tantalizingly, this research might also give us some clues about accelerating light beyond the universal speed limit.

Back in 1999, scientists slowed light down to just 17 meters per second, and then two years later the same research group stopped light entirely — but only for a few fractions of a second. Earlier this year, the Georgia Institute of Technology stopped light for 16 seconds — and now, the University of Darmstadt has stopped light for a whole minute.

To stop light, the German researchers use a technique called electromagnetically induced transparency (EIT). They start with a cryogenically cooled opaque crystal of yttrium silicate doped with praseodymium. A control laser is fired at the crystal, triggering a complex quantum-level reaction that turns it transparent. A second light source (the data/image source) is then beamed into the now-transparent crystal. The control laser is then turned off, turning the crystal opaque. Not only does this leave the light trapped inside, but the opacity means that the light inside can no longer bounce around — the light, in a word, has been stopped.

Currently, a significant fraction of the engineering cost and launch mass of space systems is required exclusively to enable the system to survive launch. This is particularly true for systems with physically large components, such as antennas, booms, and panels, which must be designed to stow for launch and then reliably deploy on orbit.

Furthermore, the sizes of apertures and spacecraft structures are limited by the requirement to stow them within available launch fairings. Deployable structures and inflatable/rigidizable components have enabled construction of systems with scales of several dozen meters, but their packing efficiency is not sufficient to enable scaling to the kilometer-size baselines desired for applications such as long-baseline interferometry and sparse aperture sensing.

We propose to develop a process for automated on-orbit construction of very large structures and multifunctional components. The foundation of this process is a novel additive manufacturing technique called 'SpiderFab', which combines the techniques of fused deposition modeling (FDM) with methods derived from automated composite layup to enable rapid construction of very large, very high-strength-per-mass, lattice-like structures combining both compressive and tensile elements. This technique can integrate both high-strength structural materials and conducting materials to enable construction of multifunctional space system components such as antennas.

The SpiderFab technique enables the constituent materials for a space structure to be launched in an extremely compact form, approaching perfect packing efficiencies, and processed on-orbit to form structures optimized for the micro-gee space environment, rather than launch environments. The method can also create structures with 2nd and higher orders of hierarchy, such as a 'truss-of-trusses', achieving 30X mass reductions over the 1st order hierarchy structures used in most space applications. This approach can therefore enable deployment of antenna reflectors, phased array antennas, solar panels, and radiators with characteristic sizes one to two orders of magnitude larger than current state-of-the-art deployable-structure technologies can fit within available launch shrouds.

The SpiderFab process for on-orbit construction of large, lightweight structures will dramatically reduce the launch mass and stowed volume of NASA systems for astronomy, Earth-observation, and other missions requiring large apertures or large baselines, enabling them to be deployed using much smaller, less expensive launch vehicles and thereby reducing total life cycle cost for these missions.

Potential applications include construction of multiple high-gain antennas in Earth and solar orbit to support a deep-space communications network, long-baseline interferometry systems for terrestrial planet finder programs, and submillimeter astronomy of cosmic structure. The proposed space system fabrication technologies will also enable these systems to be re-configurable and repairable on orbit, and can evolve to support ISRU of orbital debris in Earth orbit and asteroid materials in deep space exploration missions.

In the proposed effort, we will develop concept designs for space systems that will use the SpiderFab process to create and integrate very large apertures and other structures for NASA space science and exploration missions.

We will develop an architecture concept combining the SpiderFab process with robotic assembly technologies and automated quality control/metrology techniques to enable on-orbit construction of kilometer-scale antenna apertures to enable capabilities such as high-bandwidth communications with deep-space probes and radar imaging of deep-space objects. We will then evaluate the potential performance benefits for candidate missions relative to state-of-the-art deployable technologies.

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.

Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
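As a rough illustration of that doubling arithmetic, a short sketch follows. The 2000 baseline (a Pentium 4-era chip with roughly 42 million transistors) is my assumption for the example, not a figure from the article:

```python
# Illustrative projection of Moore's Law: transistor counts doubling
# every 18 months. The 2000 starting count (~42 million, Pentium 4 era)
# is an assumption for this sketch.
def projected_transistors(start_year, start_count, year, months_per_doubling=18):
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

print(f"2020 projection: {projected_transistors(2000, 42e6, 2020):.2e}")
print(f"2030 projection: {projected_transistors(2000, 42e6, 2030):.2e}")
```

At that pace the count grows by roughly a factor of a hundred per decade, which is why circuits reach atomic dimensions within a couple of decades of the baseline.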

Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine.

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.

Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
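A minimal classical sketch of that exponential state space (pure Python, written for illustration; it shows how a simulator must account for superposition, not how a quantum computer works internally):

```python
import math

# An n-qubit register in superposition is described by 2**n complex
# amplitudes; a classical simulator must track every one of them.
def uniform_superposition(n):
    """State after putting each of n qubits into equal superposition:
    all 2**n basis states share amplitude 1/sqrt(2**n)."""
    amp = 1 / math.sqrt(2 ** n)
    return [complex(amp, 0)] * (2 ** n)

state = uniform_superposition(3)
print(len(state))          # 8 basis states for 3 qubits
print(abs(state[0]) ** 2)  # each outcome has probability 1/8
# At 30 qubits the register needs 2**30 (over a billion) amplitudes,
# which is why even modest quantum computers are hard to simulate.
```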

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, applying an outside force to two atoms can cause them to become entangled, so that the second atom takes on the properties of the first. Left alone, an atom will spin in all directions; the instant it is disturbed, it chooses one spin, or one value, and at the same time the second, entangled atom chooses the opposite spin, or value. This allows scientists to know the value of a qubit without actually looking at it.
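That perfectly anti-correlated behavior can be mimicked with a toy classical sketch (my illustration, not real quantum dynamics): every joint measurement of the entangled pair returns opposite values.

```python
import random

# Toy model of the anti-correlated entangled pair described above:
# a measurement collapses the pair to one of two joint outcomes,
# (0, 1) or (1, 0), so the partners always disagree.
def measure_entangled_pair():
    return random.choice([(0, 1), (1, 0)])

for _ in range(5):
    a, b = measure_entangled_pair()
    print(a, b)  # the second value is always the opposite of the first
```

Reading one member of the pair thus reveals the other's value without disturbing it directly, which is the indirect-measurement trick the paragraph describes.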

Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.

The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let's look at a few of the quantum computers that have been developed.

1998

Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.

2000

In March, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.

In August, researchers at IBM-Almaden Research Center developed what they claimed was the most advanced quantum computer to date. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.

2001

Scientists from IBM and Stanford University successfully demonstrated Shor's Algorithm on a quantum computer. Shor's Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.

2005

The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.

2006

Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.

2007

Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn't scalable, and that many of the claims on D-Wave's Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).

If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.
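The factoring threat comes from Shor's Algorithm, demonstrated above on the number 15. Its speed-up lies entirely in the period-finding step; the surrounding arithmetic is classical number theory, sketched below with the period found by brute force as a stand-in for the quantum step:

```python
from math import gcd

# Classical wrapper of Shor's algorithm. A quantum computer finds the
# period r of f(x) = a**x mod N exponentially faster; here it is
# brute-forced, which only works for tiny N like 15.
def find_period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    r = find_period(a, N)
    if r % 2:
        return None                 # odd period: retry with another a
    x = pow(a, r // 2, N)
    return sorted({gcd(x - 1, N), gcd(x + 1, N)})

print(shor_factor(15, 7))           # the 7-qubit demo's answer: [3, 5]
```

Because the brute-force loop above takes time exponential in the number of digits of N, large RSA-style moduli are safe from classical attack; a quantum period-finder removes exactly that bottleneck.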

But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.