3D Printed Shoes

Adidas to Mass-Produce 3D Printed Shoes in Vats of Warm Liquid Goo

Adidas just announced they’re partnering with 3D printing company Carbon to mass-produce a line of shoes with 3D printed midsoles (the spongy bit that cushions your foot). The line, called Futurecraft 4D, will start with 5,000 pairs by the end of the year, with production ramping up to 100,000 pairs next year.

While 3D printing is often touted for its ability to customize products, Adidas will start with a single design to test the tech. Their ultimate goal, however, is to customize each shoe to fit the unique contours of a person’s foot.

This isn’t the company’s first foray into the world of 3D printing—an earlier model of the Futurecraft shoe, made with Materialise, previously sold for $333—nor are they the only shoe company pursuing the technology.

What’s interesting about this project is the challenges Adidas says Carbon’s technology can solve. And whereas 3D printed shoes have so far arrived mostly in small numbers, Adidas’ commitment to ramp up production is notable.

The idea of printing objects on demand is exciting, but the reality is more nuanced. 3D printing is slow and costly. Traditional manufacturing processes like injection molding still reign supreme for mass manufacturing at cost.

Adidas and Carbon are optimistic this may be changing for some products.

Of the 3D printers we’ve covered over the years, Carbon's is a personal favorite. Instead of stacking layers to make an object, Carbon uses light and heat to selectively harden liquid resin. The result is very sci-fi. A digital design made manifest is hoisted from a vat of high-tech goo in a single finished piece.

Printing soles used to take Adidas 10 hours. Now it takes 90 minutes. And they aim to further reduce print time to 20 minutes. Also, each sole is printed continuously in one piece, which eliminates weak spots where layers meet. And the sole’s honeycomb geometry—the properties of which vary over the sole's length—wouldn’t be possible with injection molding.

“Mechanical engineers have been taunting the world with the properties of these structures for years,” according to Carbon cofounder, Joseph DeSimone. “You can’t injection-mold something like that, because each strut is an individual piece.”

The technology also allows for faster, more complete prototyping. Adidas ran through some 50 designs before landing on their final choice.

A typical process, which would require copious retooling, would try out only a handful of designs before moving on. By 3D printing both the prototypes and the final product, Adidas can skip tooling on both ends. And unlike prior prototypes, the prototype and the final product are made of the same material—limiting the likelihood that the final product will perform differently.

In addition to Adidas, Nike, Under Armour, and New Balance have their own 3D printed shoe projects, but these have mostly been produced in small batches. While 100,000 pairs of shoes is a drop in the ocean relative to the hundreds of millions of pairs Adidas sells each year, it's a lot more than a few hundred pairs.

3D Printed Titanium Ribs and Sternum

It’s a bit like a Marvel superhero comic or a 70s sci-fi TV show—only it actually just happened. After having his sternum and several ribs surgically removed, a Spanish cancer patient took delivery of one titanium 3D printed rib cage—strong, light, and custom fit to his body.

It’s just the latest example of how 3D printing and medicine are a perfect fit.

The list of 3D printed body parts now includes dental, ankle, spinal, tracheal, and even skull implants (among others). Because each body is unique, customization is critical. Medical imaging, digital modeling, and 3D printers allow doctors to fit prosthetics and implants to each person’s anatomy as snugly and comfortably as a well-tailored suit.

In this case, the 54-year-old patient suffered from chest wall sarcoma, a cancer of the rib cage. His doctors determined they would need to remove his sternum and part of several ribs and replace them with a prosthetic sternum and rib cage.

This image shows how the 3D printed titanium implant attaches firmly to the patient's rib cage.

Titanium chest implants aren’t new, but the complicated geometry of the bone structure makes them difficult to build. To date, the flat-plate implants typically used have tended to come loose, raising the risk of complications down the road.

Now, we can do better. We have the technology.

Complexity is free with 3D printing. It’s as easy to print a simple shape as it is to print one with intricate geometry. And with a 3D model based on medical scans, it’s possible to make prosthetics and implants that closely fit a patient’s body.

But it takes more than your average desktop MakerBot to print with titanium.

The finished implant. Image credit: Anatomics.

The surgeons enlisted Australian firm Anatomics—the company that designed a 3D printed skull implant to replace nearly all of a patient’s cranium last year—and CSIRO’s cutting-edge 3D printing workshop, Lab 22, to design and manufacture the implant.

Lab 22 owns and operates a million-dollar Arcam printer. Most 3D printed metal parts use a technology called selective laser sintering, in which layers of powdered metal are fused with a laser beam. Instead of a laser, however, the Arcam printer uses a significantly more powerful electron beam technology developed for aerospace applications. (GE, for example, is printing titanium aluminide turbine blades with the tech.)

The surgeons worked closely with Anatomics to design the implant based on CT scans of the patient’s chest. Using a precise 3D model, the printer built the titanium implant—a sternum and eight rib segments—layer by layer. The final product is firmly attached to the patient's remaining rib cage with screws.

According to CSIRO’s Alex Kingsbury, “It would be an incredibly complex piece to manufacture traditionally, and in fact, almost impossible.”

Once complete, the team flew the implant to Spain for the procedure. All went to plan. The patient left hospital 12 days after the surgery and is recovering well.

While customization is widely used to illustrate 3D printing's power, it can often be more of a perk than a necessity. In many cases, traditional mass manufacturing methods still make more sense because they're cheaper and faster.

In some industries, however, customization is critical.

Aerospace firms, for example, are making 3D printed parts for jet and rocket engines—where rapid prototyping speeds up the design process, and cheap complexity and customization yields parts that can't be made any other way.

And nowhere is customization more useful than in medicine. From affordable custom prosthetics to tailor-made medical implants to bioprinted organs—the potential, in terms of improving and even saving lives, is huge.

We can't rebuild and replace every body part yet, but that's where we're headed.

3D-Printed Bio-Bots

Tiny 3D-Printed Bio-Bots Are Propelled by Muscle Cells

Robots come in all shapes and sizes—some are mechanical, and some aren’t. Last year, a team of scientists from the University of Illinois at Urbana-Champaign made a seven-millimeter-long 3D printed robot powered by the heart cells of a rat.

The device, made of 3D printed hydrogel—a water-based, biologically compatible gel—had two feet, one bigger than the other. The smaller, longer foot was coated in heart cells. Each time the cells contracted, the robot would crawl forward a few millimeters.

3D printing allowed the researchers to quickly fabricate and test new designs. But there was a problem. Because the heart cells beat spontaneously (like in a human heart), they couldn’t control the robot’s motion. So, the scientists designed a new bio-bot.

The new device is also made from a 3D-printed gel scaffold, but instead of heart cells, it uses skeletal muscle cells to move around. The contraction of the muscle cells is controlled by an electric current. By varying the frequency of the current, researchers can make the bio-bots go faster or slower, or, in the absence of a current, turn them off.

The bio-bots’ overall design is also inspired by nature. The hydrogel is rigid enough to provide structural support, and, at the same time, it can flex like a joint. The muscle cells are affixed to two tendon-like posts that serve double duty as the bot’s feet.

The researchers think the bio-bots may prove useful in medicine or in the environment.

“It’s exciting to think that this system could eventually evolve into a generation of biological machines that could aid in drug delivery, surgical robotics, ‘smart’ implants, or mobile environmental analyzers, among countless other applications,” said Caroline Cvetkovic, co-first author of the paper.

In the future, the researchers hope to make the hydrogel backbone capable of motion in multiple directions, instead of just a straight line. And they may integrate neurons to steer the bio-bots using light or chemical gradients.

“Our goal is for these devices to be used as autonomous sensors,” said study leader Rashid Bashir. “We want it to sense a specific chemical and move towards it, then release agents to neutralize the toxin, for example. Being in control of the actuation is a big step forward toward that goal.”

4 Ways Your Competitors Are Stealing Your IT Talent

4 Ways Your Competitors Are Stealing Your IT Talent

Savvy companies are shopping for talent in what is arguably the best place to find it -- their competition. As the talent war heats up, poaching tech professionals is becoming increasingly common. Here's how it's done and how to stop it.

One of the best places for your competitors to find great talent is within the walls of your company. If your best and brightest have been jumping ship to work for your biggest rival, it's important to know how they're being recruited, why they are being targeted and what you can do to stop it. Here's how your competitors may be poaching your talent.

They're Using Professional Search Tactics

Savvy companies know that the best talent is often already employed - with their competitors. Hiring a professional search firm -- or, if that's not financially feasible, copying their subtle approach -- can lure away even the most content employees. As this Inc. Magazine article points out, targeting successful talent and then making contact via social networks like Facebook or LinkedIn, or at professional networking events, conferences or industry events, with the promise of a "great opportunity" can pique their interest and entice them to consider a move.

They're Using Tools Like Poachable or Switch

One of the biggest challenges for hiring managers and recruiters is finding passive candidates, says Tom Leung, founder and CEO of anonymous career matchmaking service Poachable.

"Passive job finding - and searching for passive candidates - has a lot of interest for both candidates and for hiring managers and recruiters. As the economy rebounds and the technology market booms it remains difficult to match potential candidates with key open positions," Leung says. Employees and candidates are demanding higher pay from potential employers while, at the same time, STEM jobs are taking twice as long to fill as non-STEM jobs.

"When we asked hiring managers and recruiters what their biggest challenge was, they told us their weak spot was luring great talent that was already employed. Everybody seems to be doing a decent job of blasting out job postings, targeting active candidates, interviewing them, but this passive recruiting is where people get stuck," says Leung.

Passive candidates are already employed and aren't necessarily unhappy, Leung says, but if the right opportunity came up, they would consider making a move. That's where tools like Leung's Poachable and the new Switch solution come in.

"These folks might want to make a move, but they're too busy to check the job boards every day, and they're content where they are. What we do is help them discover what types of better, more fulfilling jobs are out there by asking them what 'dream job' would be tempting enough for them to move, and we help them find that," says Leung.

They're Offering Better Benefits and Perks

Flexible work schedules, job-sharing, opportunities to work remotely, subsidized child and elder care, employer-paid healthcare packages, on-site gym facilities, a masseuse and unlimited vacation time are all important if you want to attract talented IT professionals.

“Companies that acknowledge and accommodate the fact that their talent has a life separate from work tend to have more engaged, loyal and productive employees,” says Dice.com president Shravan Goli.

A March 2014 study from Dice.com surveyed tech pros and found benefits and perks like flexibility, free food and the ability to work with cutting-edge technology were key drivers of their decision to take a new position. "With approximately 2.9 percent unemployment rate in the IT industry, companies must get creative to attract and keep their top talent. Perks and benefits are one way they are looking beyond compensation," says Goli.

They're Offering Better Monetary Incentives

Your talent is one of your business' greatest assets, and if you're not doing everything you can to ensure they stay happy, especially where compensation is concerned, you could lose them - and be at a competitive disadvantage, according to the U.S. Small Business Administration.

“All companies have valued employees - those they can't afford to lose because of their skill, experience and commitment to their work. One way you can help them resist the temptation to stray is to show that you are invested in their future,” according to the SBA.

The SBA advises giving these employees one-on-one time with management, discussing their professional goals and their importance, and sharing the company's vision for continued growth as well as the employee's role in that growth.

In addition, the SBA says, offering meaningful pay increases, a generous bonus structure and/or compensation like "long-term incentive plans tied to the overall success of the business, not just individual performance, can also send a clear message to your employees that they have a recognized and valuable role to play in your business as a whole."

50 Years of Moore’s Law

50 Years of Moore’s Law

The glorious history and inevitable decline of one of technology’s greatest winning streaks

Fifty years ago this month, Gordon Moore forecast a bright future for electronics. His ideas were later distilled into a single organizing principle—Moore’s Law—that has driven technology forward at a staggering clip. We have all benefited from this miraculous development, which has forcefully shaped our modern world.

In this special report, we find that the end won’t be sudden and apocalyptic but rather gradual and complicated. Moore’s Law truly is the gift that keeps on giving—and surprising, as well.

The Multiple Lives of Moore’s Law

Why Gordon Moore’s grand prediction has endured for 50 years

By Chris Mack

A half century ago, a young engineer named Gordon E. Moore took a look at his fledgling industry and predicted big things to come in the decade ahead. In a four-page article in the trade magazine Electronics, he foresaw a future with home computers, mobile phones, and automatic control systems for cars. All these wonders, he wrote, would be driven by a steady doubling, year after year, in the number of circuit components that could be economically packed on an integrated chip.

A decade later, the exponential progress of the integrated circuit—later dubbed “Moore’s Law”—showed no signs of stopping. And today it describes a remarkable, 50-year-long winning streak that has given us countless forms of computers, personal electronics, and sensors. The impact of Moore’s Law on modern life can’t be overstated. We can’t take a plane ride, make a call, or even turn on our dishwashers without encountering its effects. Without it, we would not have found the Higgs boson or created the Internet.

But what exactly is Moore’s Law, and why has it been so successful? Is it evidence of technology’s inevitable and unstoppable march? Or does it simply reflect a unique time in engineering history, when the special properties of silicon and a steady series of engineering innovations conspired to give us a few decades of staggering computational progress?

I would argue that nothing about Moore’s Law was inevitable. Instead, it’s a testament to hard work, human ingenuity, and the incentives of a free market. Moore’s prediction may have started out as a fairly simple observation of a young industry. But over time it became an expectation and self-fulfilling prophecy—an ongoing act of creation by engineers and companies that saw the benefits of Moore’s Law and did their best to keep it going, or else risk falling behind the competition.

I would also argue that, despite endless paraphrasing, Moore’s Law is not one simple concept. Its meaning has changed repeatedly over the years, and it’s changing even now. If we’re going to draw any lessons from Moore’s Law about the nature of progress and what it can tell us about the future, we have to take a deeper look.

In the early 1960s, before Silicon Valley became known as Silicon Valley, Gordon Moore was director of research and development at Fairchild Semiconductor. He and others had founded the company in 1957 after defecting from Shockley Semiconductor Laboratory, where they’d done some of the early work on silicon electronic devices.

Fairchild was one of a small group of companies working on transistors, the now ubiquitous switches that are built by the billions onto chips and are used to perform computations and store data. And the firm quickly started carving out a niche.

At the time, most circuits were constructed from individual transistors, resistors, capacitors, and diodes that were wired together by hand on a circuit board. But in 1959, Jean Hoerni of Fairchild invented the planar transistor—a form of transistor that was constructed in the plane of the silicon wafer instead of on a raised plateau, or mesa, of silicon.

With this configuration, engineers could build wires above the transistors to connect them and so make an “integrated circuit” in one fell swoop on the same chip. Jack Kilby of Texas Instruments had pioneered an early integration scheme that connected devices with “flying wires” that rose above the surface of the chip. But Moore’s colleague Robert Noyce showed that planar transistors could be used to make an integrated circuit as a solid block, by coating the transistors with an insulating layer of oxide and then adding aluminum to connect the devices. Fairchild used this new architecture to build the first silicon integrated circuit, which was announced in 1961 and contained a whopping four transistors. By 1965, the company was getting ready to release a chip with 64 components.

The Sweet Spot: Economics was at the core of Moore’s 1965 paper. He argued that for any particular generation of manufacturing technology, there is a cost curve. The cost of making a component declines the more you pack onto an integrated circuit, but past a certain point, yields decline and costs rise. The sweet spot, where the cost per component is at a minimum, moves to more and more complex integrated circuits over time.

Armed with this knowledge, Moore opened his 1965 paper with a bold statement: “The future of integrated electronics is the future of electronics itself.” That claim seems self-evident today, but at the time it was controversial. Many people doubted that the integrated circuit would ever fill anything more than a niche role.

You can forgive the skepticism. Although the first integrated chips were more compact than their hand-wired brethren, they cost significantly more—about US $30 per component in today’s dollars compared with less than $10 for stand-alone components. Only a handful of companies were making integrated circuits, and their only real customers were NASA and the U.S. military.

Compounding the problem was the fact that transistors were still unreliable. Of the individual transistors that were made, only a small fraction—just 10 to 20 percent, Moore later recalled—actually worked. Pack a half dozen of those devices together in an integrated circuit and you’d expect those small fractions to multiply, yielding a dismally small number of operational chips.

But this logic was flawed. It turned out that making a chip with eight transistors yields a fraction of operational chips similar to what you’d get by making eight stand-alone transistors. That’s because the probabilities aren’t independent. Defects take up space, and many types are distributed randomly, like paint splatter. If two transistors are placed close together, a single transistor-size flaw can take out both devices. As a result, putting two transistors side by side carries about the same risk of death by defect as one transistor by itself.
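
To make that reasoning concrete, here’s a minimal back-of-the-envelope model in Python. It treats defects as a Poisson “splatter” process whose kill radius is large compared with a transistor; every number in it, from the defect size to the 15 percent single-transistor yield, is an illustrative assumption rather than Fairchild data:

import math

# Toy Poisson defect model: a device dies if a defect lands within `defect_radius`
# of it. When defects are large compared with the spacing between transistors,
# neighbouring devices live or die together, so a multi-transistor chip yields
# far better than the "independent failures" estimate would suggest.

pitch = 1.0           # spacing between adjacent transistors (arbitrary units)
defect_radius = 40.0  # defects are huge relative to a single transistor
n = 8                 # transistors per chip, laid out in a row

single_kill_area = math.pi * defect_radius ** 2
# Union of n overlapping kill disks spaced `pitch` apart along a line (approx.).
chip_kill_area = single_kill_area + (n - 1) * pitch * 2 * defect_radius

# Choose a defect density so a lone transistor works about 15 percent of the time,
# roughly the early-1960s yields Moore recalled.
defect_density = -math.log(0.15) / single_kill_area

yield_single = math.exp(-defect_density * single_kill_area)  # ~15%
yield_chip = math.exp(-defect_density * chip_kill_area)      # ~12%
yield_naive = yield_single ** n                              # ~3e-7

print(f"single transistor yield : {yield_single:.1%}")
print(f"8-transistor chip yield : {yield_chip:.1%}")
print(f"'independent' prediction: {yield_naive:.1e}")

With these assumptions the eight-transistor chip still yields about 12 percent: close to the 15 percent of a lone transistor, and nowhere near the roughly three-in-ten-million odds that multiplying independent probabilities would predict.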

Moore was convinced that integration would ultimately prove economical. In his 1965 paper, as evidence of the integrated circuit’s bright future, he plotted five points over time, beginning with Fairchild’s first planar transistor and followed by a series of the company’s integrated circuit offerings. He used a semilogarithmic plot, in which one axis is logarithmic and the other linear and an exponential function will appear as a straight line. The line he drew through the points was indeed more or less straight, with a slope that corresponded to a doubling of the number of components on an integrated circuit every year.

From this small trend line, he made a daring extrapolation: This doubling would continue for 10 years. By 1975, he predicted, we’d see the number of components on an integrated circuit go from about 64 to 65,000. He got it very nearly right. By 1975, Intel, the company Moore cofounded after leaving Fairchild in 1968, was preparing charge-coupled-device (CCD) memory chips with some 32,000 components—only a factor of two off from his thousandfold prediction.
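
The arithmetic is simple enough to check: ten annual doublings multiply the count by 2^10 = 1,024, so starting from about 64 components gives roughly 64 × 1,024 ≈ 65,500 by 1975, which is the 65,000 figure Moore quoted.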

Looking back on this remarkable paper, I’ll note a few details that are often overlooked. First, Moore’s prediction was about the number of electronic components—not just transistors but also devices such as resistors, capacitors, and diodes. Many early integrated circuits actually had more resistors than transistors. Later, metal-oxide-semiconductor (MOS) circuitry, which relied less on nontransistor components, emerged, and the digital age began. Transistors dominated, and their number became the more useful measure of integrated circuit complexity.

The paper also reveals Moore’s focus on the economics of integration. He defined the number of components per chip not as the maximum or the average number of components but as the number for which the cost per component was at a minimum. He understood that the number of components that you can pack on a chip and the number that makes economic sense are not necessarily the same. Instead, there’s a sweet spot for every generation of chip-fabrication technology. As you add more components, you drive the cost per component down. But past a certain point, attempting to pack even more transistors into a given space will raise the possibility of killer defects and lower the yield of useful chips. At that point, the cost per component will start to rise. The goal of integrated circuit design and manufacturing was—and still is—to hit this sweet spot.
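
One way to picture that sweet spot is with a small Python sketch. The chip cost and defect rate below are made-up constants chosen only to show the shape of the curve, not Moore’s own figures:

import math

fixed_cost_per_chip = 100.0   # fabrication cost per chip, regardless of contents ($, assumed)
defect_rate = 0.02            # chance a given component is ruined by a defect (assumed)

def cost_per_component(n):
    # Yield falls roughly exponentially as more components are packed onto a chip.
    chip_yield = math.exp(-defect_rate * n)
    return fixed_cost_per_chip / (n * chip_yield)

sweet_spot = min(range(1, 201), key=cost_per_component)
print(sweet_spot, round(cost_per_component(sweet_spot), 2))   # 50 components at ~$5.44 each

Packing in more components spreads the fixed cost across more devices, but past the minimum (here, 50 components) yield loss dominates and the per-component cost climbs again. Improve the process by lowering the defect rate, and the sweet spot shifts to larger, more complex chips—exactly the migration Moore described.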

As chip-fabrication technology has improved, the sweet spot has shifted to larger numbers of components and lower costs per component. Over the last 50 years, the cost of a transistor has been reduced from $30 in today’s dollars to a nanodollar or so. Moore could hardly have predicted such a dramatic reduction. But even in 1965, he understood that integrated circuits were about to change from an expensive, high-performance replacement for discrete components to a cheap, high-performance replacement, and that both performance and economics would favor integration.

Ten years later, Moore revisited his prediction and revised it. In an analysis he’d done for the 1975 IEEE International Electron Devices Meeting, he started by tackling the question of how the doubling of components actually happened. He argued that three factors contributed to the trend: decreasing component size, increasing chip area, and “device cleverness,” which referred to how much engineers could reduce the unused area between transistors.

Moore attributed about half of the doubling trend to the first two factors and the rest to “cleverness.” But when he considered the CCD memories that Intel was preparing to release, he decided that cleverness would soon go out the window. In CCD arrays, devices are packed together in tight grids with no wasted space to eliminate. So he predicted the doubling trend would soon be driven only by tinier transistors and bigger chips. As a result it would slow by half, doubling components once every two years instead of every year.
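
Put roughly in numbers: an annual doubling is a factor of 2, and if about half of that (in logarithmic terms) came from cleverness, the remaining contribution from smaller transistors and bigger chips is a factor of about √2 ≈ 1.4 per year. Drop the cleverness term and components grow by only about 1.4× annually, which is the same as doubling every two years.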

Ironically, CCD memory proved to be too error prone, so Intel never shipped any. But the prediction was nonetheless borne out in logic chips, such as microprocessors, which have grown at about a two-year doubling rate since the early 1970s. Memory chips, with their massive arrays of identical transistors, scaled faster, doubling every 18 months or so, mainly because they are simpler to design.

Of the three technology drivers Moore identified, one turned out to be special: decreasing the dimensions of the transistor. For a while at least, shrinking transistors offered something that rarely happens in the world of engineering: no trade-offs. Thanks to a scaling rule named for IBM engineer Robert Dennard, every successive transistor generation was better than the last. A shrinking transistor not only allowed more components to be crammed onto an integrated circuit but also made those transistors faster and less power hungry.
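
Dennard’s rules make that “no trade-offs” claim concrete: shrink a transistor’s linear dimensions and its supply voltage by a factor k, and, to first order, its switching delay falls by k, its power consumption falls by k², and k² times as many devices fit in the same area—so chip-level power density stays constant even as speed and density improve.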

This single factor has been responsible for much of the staying power of Moore’s Law, and it’s lasted through two very different incarnations. In the early days, a phase I call Moore’s Law 1.0, progress came by “scaling up”—adding more components to a chip. At first, the goal was simply to gobble up the discrete components of existing applications and put them in one reliable and inexpensive package. As a result, chips got bigger and more complex. The microprocessor, which emerged in the early 1970s, exemplifies this phase.

But over the last few decades, progress in the semiconductor industry became dominated by Moore’s Law 2.0. This era is all about “scaling down,” driving down the size and cost of transistors even if the number of transistors per chip does not go up.

Although the Moore’s Law 1.0 and 2.0 eras have overlapped, the dominance of scaling down versus scaling up can be seen in the way the semiconductor industry describes itself. In the 1980s and early 1990s, the technology generations, or “nodes,” that define progress in the industry were named after dynamic RAM generations: In 1989, for example, we had the 4-megabit DRAM node; in 1992, the 16-Mb node. Each generation meant greater capability within a single chip as more and more transistors were added without raising the cost.

By the early 1990s, we’d begun to name our nodes after the shrinking features used to make the transistors. This was only natural. Most chips didn’t need to carry as many transistors as possible. Integrated circuits were proliferating, finding their way into cars and appliances and toys, and as they did so, the size of the transistor—with the implications for performance and cost savings—became the more meaningful measure.

Eventually even microprocessors stopped scaling up as fast as manufacturing technology would permit. Manufacturing now allows us to economically place more than 10 billion transistors on a logic chip. But only a few of today’s chips come anywhere close to that total, in large part because our chip designs generally haven’t been able to keep up.

Moore’s Law 1.0 is still alive today in the highest-end graphics processing units, field-programmable gate arrays, and perhaps a handful of the microprocessors aimed at supercomputers. But for everything else, Moore’s Law 2.0 dominates. And now it’s in the process of changing again.

This change is happening because the benefits of miniaturization are progressively falling away. It began in the early 2000s, when an unpleasant reality started to emerge. At that time, transistor sizes began to creep down below 100 nanometers, and Dennard’s simple scaling rule hit its limit. Transistors became so small that it was quite easy for electrons to sneak through them even when the devices were supposed to be off, leaking energy and lowering device reliability. Although new materials and manufacturing techniques helped combat this problem, engineers had to stop the practice of dramatically lowering the voltage supplied to each transistor in order to maintain a strong electrical clamp.

Because of the breakdown of Dennard scaling, miniaturization is now full of trade-offs. Making a transistor smaller no longer makes it both faster and more efficient. In fact, it’s very difficult to shrink today’s transistors and maintain even the same speed and power consumption of the previous generation.

As a result, for the last decade or so, Moore’s Law has been more about cost than performance; we make transistors smaller in order to make them cheaper. That isn’t to say that today’s microprocessors are no better than those of 5 or 10 years ago. There have been design improvements. But many of the performance gains have come from the integration of multiple cores, enabled by cheaper transistors.

The economics has remained compelling because of an important and unheralded feature of Moore’s Law: As transistors have gotten smaller, we’ve been able to keep the cost of manufacturing each square centimeter of finished silicon about the same, year after year after year (at least until recently). Moore has put it at about a billion dollars an acre—although chipmakers seldom think in terms of acreage.

Keeping the cost of finished silicon constant for decades hasn’t been easy. There was steady work to improve yield, which started in the 1970s at around 20 percent and now sits at 80 to 90 percent. Silicon wafers—the round platters of silicon that are eventually cut into chips—also got bigger and bigger. The progressive boost in size lowered the cost of a number of manufacturing steps, such as deposition and etching, that are performed on a whole wafer at once. And crucially, equipment productivity has soared. The tools employed in lithography—the printing technology that’s used to pattern transistors and the interconnections between them—cost 100 times as much today as they did 35 years ago. But these tools pattern wafers 100 times as fast, making up the cost increase while delivering far better resolution.

These three factors—improved yields, larger wafers, and rising equipment productivity—have allowed chipmakers to make chips denser and denser for decades while keeping the cost per area nearly the same and reducing the cost per transistor. But now, this trend may be ending. And it’s largely because lithography has gotten more expensive.

Over the last decade, the difficulties of printing tiny features have raised the manufacturing cost per unit area of finished silicon about 10 percent per year. Since the area per transistor shrank by about 25 percent each year over the same period, the cost of each transistor kept going down. But at some point, manufacturing costs will rise faster than transistor area will fall, and the next generation of transistors will be more expensive than the last.
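
In round numbers: the cost of a transistor is the cost per unit area times the area per transistor. With area cost rising 10 percent a year and area per transistor shrinking 25 percent a year, each year’s transistor costs about 1.10 × 0.75 ≈ 0.83 times as much as the last, a healthy 17 percent annual decline. But if the area shrink slows to less than roughly 9 percent a year (since 1.10 × 0.91 ≈ 1.0) while lithography costs keep climbing at 10 percent, the cost per transistor starts going up.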

If lithography costs rise fast, Moore’s Law as we know it will come to a quick halt. And there are signs that the end could come quite soon. Today’s advanced chips are made with immersion lithography, which makes patterns by exposing water-immersed wafers to 193-nm, deep ultraviolet light. The planned successor is lithography using shorter-wavelength, extreme ultraviolet light. That technology was supposed to come on line as early as 2004. But it’s suffered delay after delay, so chipmakers have had to turn to stopgaps such as double patterning, which doubles up some steps to fashion the finest features. Double patterning takes twice as long as single patterning. Nonetheless, chipmakers are contemplating triple and even quadruple patterning, which will further drive up costs. A few years from now, we may look back on 2015 as the year the tide turned and the cost of transistors stopped falling and started to rise.

I’ve been known for making grand pronouncements at lithography conferences about the coming end of Moore’s Law. But the truth is, I don’t think Moore’s Law is over. Instead, I’d argue it’s on the verge of morphing again.

Going forward, innovations in semiconductors will continue, but they won’t systematically lower transistor costs. Instead, progress will be defined by new forms of integration: gathering together disparate capabilities on a single chip to lower the system cost. This might sound a lot like the Moore’s Law 1.0 era, but in this case, we’re not looking at combining different pieces of logic into one, bigger chip. Rather, we’re talking about uniting the non-logic functions that have historically stayed separate from our silicon chips.

An early example of this is the modern cellphone camera, which incorporates an image sensor directly onto a digital signal processor using large vertical lines of copper wiring called through-silicon vias. But other examples will follow. Chip designers have just begun exploring how to integrate microelectromechanical systems, which can be used to make tiny accelerometers, gyroscopes, and even relay logic. The same goes for microfluidic sensors, which can be used to perform biological assays and environmental tests.

All of these technologies allow you to directly connect a digital CMOS chip with the outside, analog world. This could have a powerful economic effect if the new sensors and actuators can take advantage of the low-cost, mass-production approaches common to silicon manufacturing.

But this new phase of Moore’s Law—what I call Moore’s Law 3.0 and what others in the semiconductor industry call “more than Moore”—may not make economic sense. Integrating nonstandard components onto a chip offers many exciting opportunities for new products and capabilities. What it doesn’t offer is the regular, predictable road map for continued success.

The path forward will be much murkier. Adding a new capability to a chip may make a company money today, but there’s no guarantee that adding another will earn it more money tomorrow. No doubt this transition will be painful for some established semiconductor companies, with the winners and losers yet to be determined.

Still, I think Moore’s Law 3.0 could be the most exciting rendition of the law yet. Once we get past our expectations for easily quantifiable progress, we could see an explosion of creative applications: bionic appendages that operate seamlessly with the body, smartphones that can sniff the air or test the water, tiny sensors that can power themselves from ambient energy sources, and a host of other applications we have yet to imagine. Moore’s Law as we know it might be coming to an end. But its legacy will keep us moving forward for a long time to come.

Moore’s Law Might Be Slowing Down, But Not Energy Efficiency

Miniaturization may be tough, but there's still room to drive down power consumption in modern computers

By Jonathan Koomey & Samuel Naffziger

No one can say exactly when the era of Moore’s Law will come to a close. Nevertheless, semiconductor experts like us can’t resist speculating about that day because it will mark the end of an extraordinary period of history, with uncertain implications for one of the world’s most important industries.

Here’s what we do know. The last 15 years have seen a big falloff in how much performance improves with each new generation of cutting-edge chips. So is the end nigh? Not exactly, because even though the fundamental physics is working against us, it appears we’ll have a reprieve when it comes to energy efficiency.

There are many ways to gauge a computer’s efficiency, but one of the most easily calculated metrics is peak-output efficiency, which measures the efficiency of a processor when it’s running at its fastest.

Peak-output efficiency is typically quoted as the number of computations that can be performed per kilowatt-hour of electricity consumed. And according to a peer-reviewed paper published in 2011 in the IEEE Annals of the History of Computing, it doubled like clockwork every year and a half or so for more than five decades.

This trend started well before the first microprocessor, way back in the mid-1940s. But it began to come to an end around 2000. Growth in both peak-output efficiency and performance started to slow, weighed down by the physical limitations of shrinking transistors. Chipmakers turned to architectural changes—such as putting multiple computing cores in a single microprocessor—but they weren’t able to maintain historical growth rates.

These days, we’ve found, it takes about 2.7 years for peak-output efficiency to double. That’s a substantial slowdown. Historically, a decade of doubling boosted efficiency by a factor of a hundred; at current rates, it would take 18 years to see a hundredfold gain.
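
The numbers behind that comparison: a doubling every 1.5 years means 10/1.5 ≈ 6.7 doublings per decade, and 2^6.7 ≈ 100. At a 2.7-year doubling time, a hundredfold gain takes log2(100) ≈ 6.6 doublings, or about 6.6 × 2.7 ≈ 18 years.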

Fortunately, the news isn’t all bad. Our computing needs have changed. For years after Moore’s landmark 1965 paper, computers were expensive, relatively rare, and regularly pushed to their computing peak. Now that they’re ubiquitous and cheap, the emphasis in chip design has shifted from fast CPUs in stationary machines to ultralow-power processing in mobile appliances, such as laptops, cellphones, and tablets.

Today, most computers run at peak output only a small fraction of the time (a couple of exceptions being high-performance supercomputers and Bitcoin miners). Mobile devices such as smartphones and notebook computers generally operate at their computational peak less than 1 percent of the time based on common industry measurements. Enterprise data servers spend less than 10 percent of the year operating at their peak. Even computers used to provide cloud-based Internet services operate at full blast less than half the time.

In this new regime, a good power-management design is one that minimizes how much energy a device consumes when it’s idle or off. And the better indicator of energy efficiency is how much electricity a computer consumes on average—not when it’s operating at full blast.

We’ve recently defined a measure of efficiency that’s more in sync with how chips are used nowadays, which we call “typical-use efficiency.” Like peak-output efficiency, it’s measured in computations per kilowatt-hour. This time, however, it’s calculated by dividing the number of computations performed over the course of a year by the total electricity consumed—a weighted sum of the energy a processor and its supporting circuitry use in different modes over that same period. For example, a laptop might operate at peak power when its user is playing a game, but this only happens a tiny fraction of the time. Other common activities, such as word processing or video playback, might consume a tenth as much electricity, since only a fraction of the chip is needed for these functions, and smart power management can actively shut off circuitry between keystrokes and video frames.
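
As a rough illustration of that calculation, here is a short Python sketch for a hypothetical laptop. The duty cycles, power draws, and peak efficiency are invented for illustration; they are not AMD measurements:

HOURS_PER_YEAR = 8760
PEAK_EFFICIENCY = 5e12  # computations per kWh when running flat out (assumed)

# (mode, fraction of the year, average power in watts, performing useful work?)
modes = [
    ("peak (gaming)",   0.01, 45.0, True),
    ("active (office)", 0.24,  8.0, True),
    ("idle",            0.25,  3.0, False),
    ("sleep/off",       0.50,  0.3, False),
]

computations = 0.0
energy_kwh = 0.0
for name, share, watts, useful in modes:
    kwh = share * HOURS_PER_YEAR * watts / 1000.0
    energy_kwh += kwh
    if useful:
        # Crude assumption: work delivered in the active modes tracks peak efficiency.
        computations += kwh * PEAK_EFFICIENCY

print(f"annual energy drawn    : {energy_kwh:.0f} kWh")
print(f"typical-use efficiency : {computations / energy_kwh:.1e} computations/kWh")

In this toy example the typical-use figure lands below the peak figure because idle and sleep energy contribute nothing useful, so shaving idle power raises typical-use efficiency even when peak-output efficiency doesn’t budge—which is the trend the authors describe.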

Encouragingly, typical-use efficiency seems to be going strong, based on tests performed since 2008 on Advanced Micro Devices’ chip line. Through 2020, by our calculations for an AMD initiative, typical-use efficiency will double every 1.5 years or so, putting it back to the same rate seen during the heyday of Moore’s Law.

Data sources: AMD, Koomey et al. (2011)

These gains come from aggressive improvements to circuit design, component integration, and software, as well as power-management schemes that put unused circuits into low-power states whenever possible. The integration of specialized accelerators, such as graphics processing units and signal processors that can perform certain computations more efficiently, has also helped keep average power consumption down.

Of course, as with any exponential trend, this one will eventually end, and circuit designers will have become victims of their own success. As idle power approaches zero, it will constitute a smaller and smaller fraction of the energy consumed by a computer. In a decade or so, energy use will once again be dominated by the power consumed when a computer is active. And that active power will still be hostage to the physics behind the slowdown in Moore’s Law.

Over the next few decades, we’ll have to rethink the fundamental design of computers if we want to keep computing moving forward at historical rates. In the meantime, steady improvements in everyday energy efficiency will give us a bit more time to find our way.

This article originally appeared in print as “Efficiency’s Brief Reprieve.”

About the Authors

Jonathan Koomey is a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University. IEEE Fellow Samuel Naffziger is an Advanced Micro Devices corporate fellow. They began collaborating on computing efficiency in 2014, as part of 25x20, an AMD energy-efficiency initiative that is targeting a 25X improvement in PC efficiency by 2020.

8 Ways AI Will Profoundly Change City Life by 2030

8 Ways AI Will Profoundly Change City Life by 2030

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.

The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, a computer scientist, former president of the Association for the Advancement of Artificial Intelligence, and managing director of Microsoft Research's main Redmond lab.

Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how they think it will affect eight key domains of city life in the next fifteen years.

1. Transportation

The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.

Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become time to relax or work productively, encouraging people to live farther from work. Combined with a reduced need for parking, this could drastically change the face of modern cities.

Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.

Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public's first experience with physically embodied AI systems and will strongly influence public perception.

2. Home and Service Robots

Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots' on-board computing capacity.

Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft's Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.

But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.

3. Healthcare

AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.

If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.

At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.

Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.

4. Education

The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.

AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.

Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.

5. Low-Resource Communities

In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.

These applications are underfunded, though, so how quickly they will appear is uncertain. There are fears that machine learning could inadvertently discriminate by correlating outcomes with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.

6. Public Safety and Security

By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.

Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.

7. Employment and Workplace

The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.

AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it's hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.

These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.

8. Entertainment

Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and those systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.

Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.

But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. With the production of high-quality entertainment democratized, it is nearly impossible to predict how humanity’s highly fluid tastes for entertainment will develop.

8 Ways to Enhance Knowledge Management in an Organisation

According to a McKinsey report, information professionals and knowledge workers spend over one-quarter of their time looking for information, writing emails and collaborating internally.

This means that streamlining knowledge management could have a dramatic effect on the productivity of an organisation. Furthermore, making information accessible and well organised helps unlock the value of the collective knowledge held by employees.

Fortunately, this does not require investing in expensive new tools. The same McKinsey report said that most companies could double the current value they get from social tools by removing online hierarchies and creating an environment that is more open, direct, trusting and engaging.

Here are eight ways to enhance knowledge management in an organisation.

1. Embrace the desire to socialise

Humans are social creatures. Employees have a natural tendency to socialise, and this does not have to be treated as slacking off or a distraction. Encouraging employees to form relationships promotes knowledge sharing, because it is through these interactions that employees get to know each other.

Socialising enhances their awareness of each other’s strengths and weaknesses. They will know who to go to with specific queries and feel more comfortable reaching out, which helps them act faster and make better decisions.

2. Encourage dialogue and collaboration

Today’s employee wants to feel that their voice is heard within the organisation and they place a high premium on collaboration. They are active users of mobile and social technology, and do not want to stand on the sidelines – they want to get involved.

Employers cannot and should not fight this. Rather than bosses expounding on their ideas for hours, they should cultivate an atmosphere of open communication. Create opportunities for employees to share their thoughts and ideas with each other and allow for improvisation. Remember that true organisational change has to occur at every level.

3. Solicit feedback and questions

The old adage of “there is no such thing as a bad question” certainly holds true with knowledge management. Questions are how people learn, whether they are a CEO or an intern.

One of the best ways to get employees to share their knowledge and exchange insights is to seek feedback. Ask employees for help and solicit their opinions, expertise, and advice. Invite others to work with you, even to make small contributions. Be transparent by sharing what you are doing and why, and ask your team how they would do it differently. Lead by example.

4. Centralise information

As mentioned above, an organisation has a goldmine of collective knowledge at its disposal. In addition to open communication, a centralised repository where that knowledge can live is important, so employees can access it when they need to.

Take advantage of a platform that facilitates and documents employee interactions. This enables staff to quickly locate conversations and/or colleagues who can provide the insights they need for projects or decisions.

5. Generate new ideas

Good ideas can come from anywhere. Open up crowdstorming and collaborative brainstorming to the entire organisation by crowdsourcing product and service ideas. This allows you to identify potential challenges, collect a broad range of perspectives, and develop solutions in an intuitive, user-friendly forum.

6. Establish immediate communication and sharing

Communication is not just important on the individual level. B2B supply chains also involve various teams, branches, vendors and more.

Part of effective knowledge management is ensuring that all these moving parts are able to easily talk to each other, because otherwise your workflows will hit roadblocks. Remove as many silos as you can and streamline communication. Breaking down barriers will drive productivity.

7. Encourage a change mindset

Someone with a “change” or “growth” mindset approaches problems as opportunities. They embrace challenges, learn from their setbacks, don’t give up, and take control over their actions. For knowledge sharing to have the greatest results, this is the mindset you want to cultivate in your employees.

Leaders can do this by aligning organisational structures and processes to support that vision. Set performance goals for individuals and for the organisation as a whole, and then motivate your team to achieve them. Leaders can also model change by setting examples of desired behaviors in day-to-day interactions, enlisting help from influential people within the organisation and, most importantly, ensuring that teams are held accountable for the changes.

A change mindset involves helping employees grow. Develop their talent and skills by evaluating performance, rewarding high-performing individuals, and offering a range of educational opportunities so they can work on their weaknesses and hone their strengths.

Finally, secure commitment and understanding from your employees by making sure they know why changes need to happen and how they will be supported. Keep track of progress so it stays aligned with the company's overall mission and employees' daily work.

8. Tap into intrinsic motivation

Employees are more motivated to share knowledge when they find their work interesting, stimulating and enjoyable. The more motivated an employee feels, the more likely they are to share knowledge.

Instead of driving motivation through external feedback – which can leave workers feeling manipulated or controlled – inspire your team by encouraging autonomy. Autonomy is an essential part of motivation and job satisfaction, and employees who have some autonomy in what they do are more likely to feel enthusiastic about their work.

Areas such as scheduling, decision making and process management provide excellent opportunities for developing a confident, engaged team.

Ultimately, employees are more likely to share information and grow a company's productivity and competitive advantage when they feel heard, have access to the knowledge and resources they need, and have a positive environment with leaders who are committed to collaboration.