SpiNNaker – HPCwire
https://www.hpcwire.com
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nanotech Grand Challenge & Federal Vision for Future Computing
Mon, 08 Aug 2016 21:41:21 +0000
https://www.hpcwire.com/2016/08/08/nanotech-grand-challenge-federal-grand-vision-future-computing/

What will computing look like in the post Moore’s Law era? That’s probably a bad way to pose the question, and certainly there’s no shortage of ideas. A new federal white paper – A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge – tackles the ‘what’s next’ question, spells out seven specific research and development priorities, and identifies the federal entities responsible.

The document, roughly a year in the making, is from the National Nanotechnology Initiative (NNI). The NNI, you may know, has its roots in discussions arising in the late 1990s and was formally created by the 21st Century Nanotechnology Research and Development Act in 2003. NNI encompasses a large number of activities and has a $1.4B budget request for FY2017.

Intended from the start to be a long-term program with long-term R&D horizons, NNI released the new vision paper on the first anniversary of the National Strategic Computing Initiative (NSCI) – perhaps as encouragement to the NSCI community. Specifically, the vision paper supports the Nanotechnology-Inspired Grand Challenge, announced last fall by the Obama Administration, to develop “transformational computing capabilities by combining innovations in multiple scientific disciplines.”

As described in the latest paper, “The Grand Challenge addresses three Administration priorities—the National Nanotechnology Initiative (NNI); the National Strategic Computing Initiative (NSCI); and the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative—to create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Somewhat soberly, the report says, “While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these characteristics.”

NNI has categorized research and development needed to achieve the Grand Challenge into seven focus areas:

Materials

Devices and Interconnects

Computing Architectures

Brain-Inspired Approaches

Fabrication/Manufacturing

Software, Modeling, and Simulation

Applications

Nanotechnology, of course, is already an area of vigorous R&D. As the list of focus areas illustrates, the program covers a wide swath of technologies. Though brief, much of the directional discussion is fascinating. Here’s an excerpt from the materials section:

“The scaling limits of electron-based devices such as transistors are known to be on the order of 5 nm due to quantum-mechanical tunneling. Smaller devices can be made if information-bearing particles with mass greater than the mass of an electron are used. Therefore, new principles for logic and memory devices, scalable to ~1 nm, could be based on “moving atoms” instead of “moving electrons;” for example, by using nanoionic structures. Examples of solid-state nanoionic devices include memory (ReRAM) and logic (atomic/ionic switches).”

Despite the diversity of topics covered, the goal of emulating human brain-like capabilities runs throughout the document. Indeed, brain-inspired computing R&D is hot right now and making substantial progress.

IBM TrueNorth Platform

In late spring of this year, IBM and Lawrence Livermore National Laboratory announced a collaboration in which LLNL would receive a 16-chip TrueNorth system representing a total of 16 million neurons and 4 billion synapses. At almost the same time in Europe, two large-scale neuromorphic computers, SpiNNaker and BrainScaleS, were put into service and made available to the wider research community.

LLNL will also receive an end-to-end ecosystem to create and program energy-efficient machines that mimic the brain’s abilities for perception, action and cognition. The ecosystem consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.

“Lawrence Livermore computer scientists will collaborate with IBM Research, partners across the Department of Energy complex and universities to expand the frontiers of neurosynaptic architecture, system design, algorithms and software ecosystem,” according to a project description on the LLNL web site.

The SpiNNaker project, run by Steve Furber, one of the inventors of the ARM architecture and a researcher at the University of Manchester, has roughly 500,000 ARM processors. The reason for selecting ARM, said Karlheinz Meier, a leader in the Human Brain Project whose group developed the BrainScaleS machine, is that ARM cores are cheap, at least if you keep them very simple (integer operations only). The challenge is the scale of communication required. “Steve implemented a router on each of his chips, which is able to very efficiently communicate action potentials, called spikes, between individual ARM processors,” said Meier.

The BrainScaleS effort, led by Meier, “makes physical models of cell neurons and synapses. Of course we are not using a biological substrate. We use CMOS. Technically it’s a mixed-signal CMOS approach. In reality it is pretty much how the real brain operates. The big thing is you can automatically scale this by adding synapses. When it is running you can change the parameters,” Meier said.

It will be interesting to track neuromorphic computing’s advance and observe how effective various government programs are (or are not) in moving it forward.

Besides including discussion of technical challenges and promising approaches for each of the seven focus areas, the white paper lays out 5-, 10-, and 15-year goals for each focus. Here’s a partial excerpt from the brain-inspired computing section:

“High-performance computing (HPC) has traditionally been associated with floating point computations and primarily originated from needs in scientific computing, business, and national security. On the other hand, brain-inspired approaches, while at least as old as modern computing, have traditionally aimed at what might be called pattern recognition applications (e.g., recognition/understanding of speech, images, text, human languages, etc., for which the alternative term, knowledge extraction, is preferred in some circles) and have exploited a different set of tools and techniques.

“Recently, convergence of these two computing paths has been mandated by the National Strategic Computing Initiative Strategic Plan, which places due emphasis on brain-inspired computing and pattern recognition or knowledge extraction type applications for enabling inference, prediction, and decision support for big data applications. DOE and NSF have demonstrated significant scientific advancements by investing and supporting HPC resources for open scientific applications. However, it is becoming apparent that brain-like computing capabilities may be necessary to enable scientific advancement, economic growth, and national security applications.

10-year goal: Identify and reverse engineer biological or neuro-inspired computing architectures, and translate results into models and systems that can be prototyped.

15-year goal: Enable large-scale design, development, and simulation tools and environments able to run at exascale computing performance levels or beyond. The results should enable development, testing, and verification of applications, and be able to output designs that can be prototyped in hardware.”

The new document is a fairly quick read and has a fair amount of technical detail. Here’s a link to the white paper: http://www.nano.gov/node/1635

Beyond von Neumann, Neuromorphic Computing Steadily Advances
Mon, 21 Mar 2016 13:00:20 +0000
https://www.hpcwire.com/2016/03/21/lacking-breakthrough-neuromorphic-computing-steadily-advance/

Neuromorphic computing – brain inspired computing – has long been a tantalizing goal. The human brain does with around 20 watts what supercomputers do with megawatts. And power consumption isn’t the only difference. Fundamentally, brains ‘think differently’ than von Neumann architecture-based computers. While neuromorphic computing progress has been intriguing, it has still not proven very practical.

This week neuromorphic computing takes another step forward with a workshop being offered to users from academia, industry and education interested in using two European neuromorphic systems that have been years in development and are coming online for broader use – the BrainScaleS system launching at the Kirchhoff Institute for Physics of Heidelberg University and SpiNNaker, a complementary approach and similarly sized system at the University of Manchester.

Ramping up BrainScaleS and SpiNNaker is an important milestone, strengthening Europe’s position in hardware development for alternative computing. Both projects are part of the European Human Brain Project, originally funded by the European Commission’s Future Emerging Technologies program (2005-2015). The webcast, which will be streamed live on Tuesday, will cover the architecture for both systems and approaches to application development.

BrainScaleS and SpiNNaker take different tacks for modeling neuron activity. One approach is to use traditional analog circuits — like the chips being developed by the BrainScaleS project. Analog circuits can be fast and energy efficient. Conversely, SpiNNaker’s architecture closely links a very large number of digital cores (also fast, and in this case, also energy efficient).

BrainScaleS’s neuromorphic hardware is based around wafer-scale analog, very large scale integration (VLSI). Each 20-cm-diameter silicon wafer contains 384 chips, each of which implements 128,000 synapses and up to 512 spiking neurons[i]. This gives a total of around 200,000 neurons and 49 million synapses per wafer. These VLSI models operate considerably faster than the biological originals and allow the emulated neural networks to evolve tens of thousands of times faster than real time. Put another way, a biological day of learning can be compressed to 100 seconds on the machine.
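The per-wafer totals follow directly from the per-chip figures. A quick back-of-the-envelope check, using only the constants quoted above:

```python
# Per-wafer totals implied by the per-chip figures quoted above:
# 384 chips per wafer, up to 512 neurons and 128,000 synapses per chip.
CHIPS_PER_WAFER = 384
NEURONS_PER_CHIP = 512
SYNAPSES_PER_CHIP = 128_000

neurons_per_wafer = CHIPS_PER_WAFER * NEURONS_PER_CHIP    # 196,608: "around 200,000"
synapses_per_wafer = CHIPS_PER_WAFER * SYNAPSES_PER_CHIP  # 49,152,000: "49 million"

print(f"{neurons_per_wafer:,} neurons, {synapses_per_wafer:,} synapses per wafer")
```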

Leader of the BrainScaleS project, Prof. Dr. Karlheinz Meier (Heidelberg University) explains, “The BrainScaleS system goes beyond the paradigms of a Turing machine and the von Neumann architecture. It is neither executing a sequence of instructions nor is it constructed as a system of physically separated computing and memory units. It is rather a direct, silicon based image of the neuronal networks found in nature, realizing cells, connections and inter-cell communications by means of modern analogue and digital microelectronics.”

Learning – not external programming – is a key guiding principle. Unlike traditional computer architecture in which a structured program explicitly carries out an order of tasks, brains are fundamentally learning machines that turn patterns into programs.

Steve Furber, a professor at the University of Manchester and a co-designer of the ARM chip architecture, leads the SpiNNaker team. SpiNNaker is a contrived acronym derived from Spiking Neural Network Architecture. The machine consists of 57,600 identical 18-core processors, giving it 1,036,800 ARM968 cores in total. The die is fabricated by United Microelectronics Corporation (UMC) on a 130 nm CMOS process. Each System-in-Package (SiP) node has an on-board router to form links with its neighbors, as well as 128 Mbyte off-die SDRAM to hold synaptic weights.

SpiNNaker, too, is built to mimic the brain’s biological structure and behavior. It will exhibit massive parallelism and resilience to failure of individual components. With more than one million cores, and one thousand simulated neurons per core, SpiNNaker should be capable of simulating one billion neurons in real-time. This equates to a little over one percent of the human brain’s estimated 85 billion neurons.
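Those headline numbers are internally consistent, as a back-of-the-envelope check shows (all figures are the ones quoted in this article, including the 85-billion-neuron brain estimate):

```python
# SpiNNaker scale figures from the text: 57,600 chips x 18 cores,
# 1,000 simulated neurons per core, vs. ~85 billion neurons in a human brain.
chips = 57_600
cores_per_chip = 18
neurons_per_core = 1_000

total_cores = chips * cores_per_chip                 # 1,036,800 ARM968 cores
simulated_neurons = total_cores * neurons_per_core   # ~1.04 billion neurons

fraction_of_brain = simulated_neurons / 85e9         # "a little over one percent"
print(f"{total_cores:,} cores -> {simulated_neurons:,} neurons "
      f"({fraction_of_brain:.1%} of a human brain)")
```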

Rather than implement one particular algorithm, SpiNNaker will be a platform on which different algorithms can be tested. Various types of neural networks can be designed and run on the machine, thus simulating different kinds of neurons and connectivity patterns.

Both BrainScaleS and SpiNNaker architectures will be discussed during the Web-based workshop on March 22, scheduled from 3 pm to 6 pm CET. Together, the systems located in Heidelberg and Manchester comprise the “Neuromorphic Computing Platform” of the Human Brain Project.

Much of the early work on both machines will be basic research on self-organization in neural networks. Other potential applications, for example, are in energy and time efficiency optimization, broadly similar to deep learning technology developed by companies like Google and Facebook for the analysis of large data volumes using conventional high performance computers.

IBM’s Dharmendra Modha

Europe, of course, is hardly alone in pursuing neuromorphic computing. Most prominent in the U.S. is IBM Research’s TrueNorth Chip effort. Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing, wrote an interesting commentary on the TrueNorth project that traces development of von Neumann architecture based computing and contrasts it with neuromorphic computing approaches: Introducing a Brain-inspired Computer. Though written in 2014, it remains relevant.

The TrueNorth chip, introduced in August 2014, is a neuromorphic CMOS chip that consists of 4,096 hardware cores, each one simulating 256 programmable silicon “neurons” for a total of just over a million neurons. Each neuron has 256 programmable “synapses” which convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2^28). In terms of basic building blocks, its transistor count is 5.4 billion.

Developed under the DARPA SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project, TrueNorth’s computing power has been characterized as roughly equivalent to the brainpower of a rodent. It also circumvents the von-Neumann-architecture bottlenecks, is very energy-efficient, consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt – literally a synaptic supercomputer in your palm.
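The TrueNorth figures hang together arithmetically. A quick check, using only the numbers quoted above:

```python
# TrueNorth figures from the text: 4,096 cores x 256 neurons, 256 synapses
# per neuron, 70 mW total power, 46 billion synaptic ops per second per watt.
cores = 4_096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # 1,048,576: "just over a million"
synapses = neurons * synapses_per_neuron  # 268,435,456 = 2**28

watts = 0.070
sops_per_watt = 46e9
sops = watts * sops_per_watt              # ~3.2 billion synaptic ops/s at 70 mW

print(f"{neurons:,} neurons, {synapses:,} synapses, {sops:.2e} synaptic ops/s")
```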

BrainScaleS, SpiNNaker, and TrueNorth are just three examples of many ongoing neuromorphic computing projects. Turning them into commercial products or more general purpose computing machines remains a challenge.

Indeed, IBM put together a paper on cognitive computing commercialization and its barriers[ii], which calls for “new thinking, not only on the part of programmers and application developers, but also by organizational decision makers who seek to link technological possibilities to market opportunity.” As the paper notes, “While incremental innovation can be achieved on the basis of existing knowledge in well-charted commercial territory, radical innovation entails far greater uncertainty.”

Among the barriers cited were: formulating business models and predicting future revenue to calibrate investment, defining strategy and structure to execute and finally, overcoming communicative and functional boundaries.

Much of the drive to push neuromorphic computing stems from the ongoing decline of Moore’s law, and this excerpt from a 2014 ACM article[iii] still sums up circumstances today:

“As the long-predicted end of Moore’s Law seems ever more imminent, researchers around the globe are seriously evaluating a profoundly different approach to large-scale computing inspired by biological principles. In the traditional von Neumann architecture, a powerful logic core (or several in parallel) operates sequentially on data fetched from memory. In contrast, “neuromorphic” computing distributes both computation and memory among an enormous number of relatively primitive “neurons,” each communicating with hundreds or thousands of other neurons through “synapses.” Ongoing projects are exploring this architecture at a vastly larger scale than ever before, rivaling mammalian nervous systems, and developing programming environments that take advantage of them. Still, the detailed implementation, such as the use of analog circuits, differs between the projects, and it may be several years before their relative merits can be assessed.

“Researchers have long recognized the extraordinary energy stinginess of biological computing, most clearly in a visionary 1990 paper by the California Institute of Technology (Caltech)’s Carver Mead that established the term “neuromorphic.” Yet industry’s steady success in scaling traditional technology kept the pressure off.”

[i] “Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal. In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron’s state, with incoming spikes pushing this value higher, and then either firing or decaying over time. Various coding methods exist for interpreting the outgoing spike train as a real-value number, either relying on the frequency of spikes, or the timing between spikes, to encode information.” From https://en.wikipedia.org/wiki/Spiking_neural_network.
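The dynamics described in the footnote can be sketched in a few lines. The following is a deliberately minimal leaky integrate-and-fire model with made-up constants (threshold, decay, weight), not the neuron model of any particular platform: the membrane potential decays each step, incoming spikes push it up, and crossing the threshold emits a spike and resets the potential.

```python
# Minimal leaky integrate-and-fire neuron, illustrating the spiking model
# described in the footnote. All constants here are illustrative.

def simulate_lif(input_spikes, threshold=1.0, decay=0.9, weight=0.3):
    """Return the time steps at which the neuron fires.

    input_spikes: iterable of 0/1 values, one per time step.
    """
    v = 0.0
    fired = []
    for t, s in enumerate(input_spikes):
        v = v * decay + weight * s   # leak, then integrate the input spike
        if v >= threshold:           # membrane potential crossed threshold
            fired.append(t)
            v = 0.0                  # reset after the spike
    return fired

# A steady input train periodically drives the neuron over threshold.
print(simulate_lif([1, 1, 1, 1, 1, 1, 1, 1]))   # -> [3, 7]
```

Note how the output spike train encodes the input rate in its timing, the point the footnote makes about spike-based coding.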

This year has seen some notable advancements in computer-based brain mimicry, not just on the artificial intelligence (AI) front, but also related to in silico brain simulations.

Watson’s vanquishing of Jeopardy champions Brad Rutter and Ken Jennings in February set the stage for the year. The now world-famous IBM super exhibited a sophisticated understanding of language semantics along with the ability to integrate that understanding into a complex analytics engine. Since the Jeopardy match, IBM has been looking to take the technology into the commercial realm, most notably in the health care arena.

Meanwhile projects like FACETS (Fast Analog Computing with Emergent Transient States) and SpiNNaker are working to uncover the nature of the brain at the level of the neuron. The goal here is not to create any kind of artificial intelligence system a la Watson, but rather to simulate the neuronal network of the brain for basic science research.

The FACETS project, managed by the University of Heidelberg, actually wrapped up last year. Its sequel, the BrainScaleS project, booted up in January 2011 with the idea of developing a “brain-inspired computer architecture” based on custom-designed neural network hardware. BrainScaleS has links to Henry Markram’s famous Blue Brain work.

Blue Brain, based at the École Polytechnique Fédérale de Lausanne (EPFL), is perhaps the best-known of the brain mimicry projects. The idea is to perform detailed simulations of the brain at the scale of the neuronal network. In this case though, the work was done with conventional supercomputing hardware (if you can call Blue Gene conventional). The project has successfully simulated a rat cortical column.

The follow-on to Blue Brain, also headed by Markram, is the Human Brain Project. The goal here is to move from rats to humans and simulate the entire brain.

The other bookend to the Watson AI story is also from IBM. Last week, the company unveiled their cognitive computing chips. This is basic research as well, but IBM is aiming the technology at developing thinking machines, rather than just using it to elucidate the workings of the brain.

I queried Markram about the significance of IBM’s latest chippery; he responded thusly: “This is a very important technology step. There are still many challenges ahead, but neuromorphic chips like IBM’s are bound to become key processing units in hybrid architectures of future computers.” He also recognized the work at FACETS/BrainScaleS and SpiNNaker as contributing to this growing body of knowledge.

So what does it all mean? For those of you who read about such developments in the popular press, there has been plenty of speculation about the future of artificial brains. A lot of this is centered on how such technology will impact the human condition, particularly how intelligent computers will displace human labor.

The big question is whether such technology will ultimately benefit people or merely make them superfluous. Edward Tenner, a historian of technology and culture with a Ph.D. in European history, believes it will be the former. From a piece he penned in The Atlantic:

Will people be obsolete? I doubt it. The economic theory of comparative advantage explains why. Assuming there will still be people, even if the computers are running everything, it will pay for them to let people do what they are relatively better at. There’s likely to be a higher opportunity cost for computers to do the more intuitive analysis for which the human brain-body system has evolved, and to concentrate instead on tasks at which their abilities are an even higher multiple of people’s. In the case of computers and people, as I suggested about IBM’s Watson and Jeopardy!, there will always be elements of tacit knowledge and common sense that will be extremely expensive to achieve electronically.

His premise is that it will always be cheaper and more effective to have a real live human provide answers that involve intuition. “So even if, for example, computers surpass physicians on diagnostic reasoning,” he writes, “it will be cheaper, more effective, and safer to have their judgment double-checked by a real doctor.”

Maybe. But I think one of the article’s commenters nailed it pretty well when he suggests that the real question is not whether computers will replace all labor, but how many jobs will be displaced by intelligent machines and how that impacts our traditional economic model. He writes:

In classical economics, employers furnish the capital, and workers produce raw materials and finished goods or services. There is tension between worker and management: both need each other, but both want a bigger piece of the profits from work; each has a strong bargaining position, and the compromise they reach determines wages and benefits. But what’s playing out on the world stage isn’t classical economics at all. With every passing year, owners of capital are relying less on workers and more on machines. The balance has shifted in favor of owners of capital.

We don’t have to wait for the future to see this play out. It’s been happening for decades, as businesses large and small have adopted IT. The commenter notes that multinational tech manufacturer Foxconn will be shedding a million of its million and a half workers manufacturing circuit boards over the next two years, thanks to assembly line robotics.

We’ve certainly seen similar downsizing across the manufacturing sector in general. A century ago, the same process happened in agriculture, a sector whose labor base continues to decline. It’s not that the industries are shrinking, just their labor force.

With the introduction of more sophisticated computing, machines are moving higher up the food chain. For example, over the last three decades at JP Morgan, profitability has risen by a factor of 30, but employee head count has only doubled. That’s directly attributable to computer technology raising productivity.

The advent of really intelligent machines like Watson and its neuromorphic brethren will accelerate all this, in ways we can only imagine. Even industries that are enjoying relatively rapid job growth today, like professional services, education, and health care, will eventually be impacted.

From my perspective, the key problem is that our social and economic systems are not ready for this. While everyone is fixated on globalization, I think that’s a side show compared to what will happen — and is happening — as intelligent technology displaces human labor worldwide.

It’s not just that people who have invested years of specialized training will find their jobs threatened. As the commenter noted above, the balance between capital and labor is shifting rapidly in favor of capital as the labor force is squeezed into fewer and fewer jobs that resist automation. The hope is that other industries will emerge to engage the masses again, as happened after the agricultural and industrial revolutions. But this time may be different.

Under the category of “Grand Challenge” applications, perhaps none is grander than simulation of the human brain. Reflecting the complexity and scale of the brain with current computer technology is truly a daunting task. But a group of researchers and computer scientists at a number of UK universities are attempting to do just that under a project named SpiNNaker.

SpiNNaker, which stands for Spiking Neural Network architecture, aims to map the brain’s functions for the purpose of helping neuroscientists, psychologists and doctors understand brain injuries, diseases and other neurological conditions. The project is being run out of a group at University of Manchester, which designed the system architecture, and is being funded by a £5m grant from the Engineering and Physical Sciences Research Council (EPSRC). Other elements of the SpiNNaker system are being developed at the universities of Southampton, Cambridge and Sheffield.

For the casual observer, constructing a facsimile of the most complex organ in the human body from digital technology may seem like a natural fit for computers. The view of the brain as a biological processor (and the processor as a digital brain) is well entrenched in popular culture. But the designs are fundamentally different.

Operationally, computers are precise, extremely fast and deterministic; brains are imprecise, slow, and non-deterministic. And, of course, the underlying architectures are completely different. Computers rely on digital electronics, while the brain employs a complex mix of biomolecular structures and processes.

The SpiNNaker design meets the architecture of the brain halfway by going for lots of simple, low-power computing units, in this case, ARM968 processors. The initial Manchester-designed SpiNNaker multi-processor is a custom SoC with 18 of these processors integrated on-chip. (The original spec called for 20 processors per chip.) The multi-processor also incorporates a local bus, called Network-on-Chip or NoC, which links up the individual processors and off-chip memory. Each SpiNNaker node is reported to draw less than one watt of power, while delivering the computational throughput of a typical PC.

The design is purpose-built to simulate the action of spiking neurons. Spiking, in this context, refers to neurons stimulated above a certain threshold generating an event that can be propagated across a neural net. But instead of using neurotransmitters to do this, the computer is just passing data packets around.
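The packet-passing idea can be illustrated with a toy sketch. The connectivity table, weights, and threshold below are invented for illustration and do not reflect SpiNNaker’s actual routing protocol: a “spike” is simply a packet naming its source neuron, delivery follows the table, and a target that accumulates enough input emits a packet of its own.

```python
# Toy sketch of spike propagation as packet passing. The network, weights,
# and threshold here are made up for illustration only.
from collections import deque

def propagate(connections, initial_spikes, threshold=1.0):
    """Return neurons in the order they fired.

    connections: dict mapping source neuron -> list of (target, weight).
    initial_spikes: neurons assumed to fire at the start.
    """
    potential = {n: 0.0 for n in connections}
    fired_order = []
    queue = deque(initial_spikes)          # pending spike packets
    while queue:
        src = queue.popleft()
        fired_order.append(src)
        for target, weight in connections[src]:
            potential[target] += weight    # integrate the delivered spike
            if potential[target] >= threshold:
                potential[target] = 0.0    # reset, then fire in turn
                queue.append(target)
    return fired_order

net = {0: [(1, 0.6), (2, 0.6)], 1: [(2, 0.6)], 2: []}
print(propagate(net, [0, 1]))   # neuron 2 fires only after two spikes arrive
```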

To be truly useful, the spiking needs to happen in real-time. Fortunately, this is where computer technology shines. Electrical communication is actually more efficient than the biochemical version, so nothing exotic needs to be done in the hardware to make all this magical neural spiking a virtual reality.

And that may happen soon. The design phase of the project is coming to a close and the SpiNNaker team is starting to gather the pieces together. According to a news release this week, SpiNNaker chips were delivered in June (from Taiwan — presumably TSMC), and have passed their functionality tests. The plan is to build a 50,000-node machine with up to one million ARM processors.

While that seems like a lot, researchers estimate that it will only be enough to represent about one percent of the real deal. A human brain contains around 100 billion neurons along with on the order of 1,000 trillion connections, and a single ARM processor in the SpiNNaker chip can only handle 1,000 neurons. The good news is that one percent may be enough to answer a lot of questions about the functional operation of the brain.

Even at one percent, the scale of the machine is probably the trickiest part of the project. With so many processors in the mix, there are bound to be individual failures at fairly regular intervals. To deal with the inevitable, the designers made SpiNNaker fault tolerant at multiple levels. For example, each of the ARM processors can be disabled if they fail at start-up and a chip can remain functional even if “several processors fail.” If an entire chip goes south, data can be rerouted to neighboring chips thanks to redundant inter-chip links.

The other challenge to scaling out is power, but here is where the ARM architecture pays dividends. The initial system of 50,000 nodes is estimated to draw just 23 KW to 36 KW of power. By supercomputing standards, that’s just a pittance. Of course, judged against the 20 watt version in our heads, SpiNNaker has a ways to go.
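The quoted system power is consistent with the per-node figure cited earlier ("less than one watt"), as a quick division shows:

```python
# The quoted system power (23-36 kW for 50,000 nodes) implies a per-node
# draw comfortably under the one-watt-per-node figure cited earlier.
nodes = 50_000
system_watts_low, system_watts_high = 23_000.0, 36_000.0

per_node_low = system_watts_low / nodes    # 0.46 W
per_node_high = system_watts_high / nodes  # 0.72 W

print(f"{per_node_low:.2f} W to {per_node_high:.2f} W per node")
```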

The power profile suggests that if there are no inherent scaling limitations in the hardware or software, the design could conceivably be used to build a machine that would support a “complete” human brain simulation for just a few megawatts. With improved process technology, that could easily slip into the sub-megawatt level.

For all that, SpiNNaker isn’t designed to simulate higher level cognitive features — the most interesting function of the brain. Inevitably that will require more complex hardware and software. So even if someone builds a super-sized SpiNNaker, it won’t come close to the functionality of the 100 percent organic version anytime soon.