DARPA – HPCwire
https://www.hpcwire.com
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

SRC Spends $200M on University Research Centers
Tue, 16 Jan 2018
https://www.hpcwire.com/2018/01/16/src-spends-200m-university-research-centers/

The Semiconductor Research Corporation, as part of its JUMP initiative, has awarded $200 million to fund six research centers whose areas of focus span cognitive computing, memory-centric computing, high-speed communications, nanotechnology, and more. It’s not a bad way to begin 2018 for the winning institutions, which include Notre Dame University, University of Michigan, University of Virginia, Carnegie Mellon University, Purdue University, and UC Santa Barbara.

SRC’s JUMP (Joint University Microelectronics Program) is a collaborative network of research centers sponsored by U.S. industry participants and DARPA. As described on the SRC website, “[JUMP’s] mission is to enable the continued pace of growth of the microelectronics industry with discoveries which release the evolutionary constraints of traditional semiconductor technology development. JUMP research, guided by the university center directors, tackles fundamental physical problems and forges a nationwide effort to keep the United States and its technology firms at the forefront of the global microelectronics revolution.”

The six projects, funded over five years, were launched on January 1st and are listed below with short descriptions. Links to press releases from each center are at the end of the article:

ASCENT (Applications and Systems driven Center for Energy-Efficient Integrated NanoTechnologies at Notre Dame). “ASCENT focuses on demonstration of foundational material synthesis routes and device technologies, novel heterogeneous integration (package and monolithic) schemes to support the next era of functional hyper-scaling. The mission is to transcend the current limitations of high-performance transistors confined to a single planar layer of integrated circuit by pioneering vertical monolithic integration of multiple interleaved layers of logic and memory.”

ADA (Applications Driving Architectures Center at University of Michigan). “[ADA will drive] system design innovation by drawing on opportunities in application driven architecture and system-driven technology advances, with support from agile system design frameworks that encompass programming languages to implementation technologies. The center’s innovative solutions will be evaluated and quantified against a common set of benchmarks, which will also be expanded as part of the center efforts. These benchmarks will be initially derived from core computational aspects of two application domains: visual computing and natural language processing.”

Kevin Skadron, University of Virginia

CRISP (Center for Research on Intelligent Storage and Processing-in-memory at University of Virginia). “Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” says Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible.”

CONIX (Computing On Network Infrastructure for Pervasive Perception, Cognition, and Action at Carnegie Mellon University). “CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.”

CBRIC (Center for Brain-inspired Computing Enabling Autonomous Intelligence at Purdue University). Charged with delivering key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems, “CBRIC will address these challenges through synergistic exploration of Neuro-inspired Algorithms and Theory, Neuromorphic Hardware Fabrics, Distributed Intelligence, and Application Drivers.”

ComSenTer (Center for Converged TeraHertz Communications and Sensing at UCSB). “ComSenTer will develop the technologies for a future cellular infrastructure using hubs with massive spatial multiplexing, providing 1-100Gb/s to the end user, and, with 100-1000 simultaneous independently-modulated beams, aggregate hub capacities in the 10’s of Tb/s. Backhaul for this future cellular infrastructure will be a mix of optical links and Tb/s-capacity point-point massive MIMO links.”
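
The “10’s of Tb/s” figure follows directly from the quoted per-beam and beam-count ranges. A quick sketch of the arithmetic (illustrative only; the function name and unit conversion are ours, not ComSenTer’s):

```python
# Illustrative arithmetic: aggregate hub capacity is roughly the per-beam
# rate times the number of simultaneous, independently modulated beams.

def aggregate_capacity_tbps(per_beam_gbps: float, num_beams: int) -> float:
    """Aggregate hub capacity in Tb/s for independently modulated beams."""
    return per_beam_gbps * num_beams / 1000.0  # Gb/s -> Tb/s

# Lower and upper ends of the ranges quoted above:
low = aggregate_capacity_tbps(per_beam_gbps=100, num_beams=100)    # 10 Tb/s
high = aggregate_capacity_tbps(per_beam_gbps=100, num_beams=1000)  # 100 Tb/s
print(low, high)
```

At 100 Gb/s per beam, 100 to 1,000 beams lands squarely in the tens-of-Tb/s range the center cites.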

Linguists Use HPC to Develop Emergency-Response Translator
Sun, 03 Dec 2017
https://www.hpcwire.com/2017/12/03/linguists-use-hpc-develop-emergency-response-translator/

We live on a planet of more than seven billion people who speak more than 7,000 languages. Most of these are “low-resource” languages for which there is a dearth of human translators and no automated translation capability. This presents a big challenge in emergency situations where information must be collected and communicated rapidly across linguistic barriers.

To address this problem, linguists at Ohio State University are using the Ohio Supercomputer Center’s Owens cluster to develop a general grammar acquisition technology.

This graph displays an algorithm that explores the space of possible probabilistic grammars and maps out the regions of this space that have the highest probability of generating understandable sentences. (Source: OSC)

The research is part of an initiative called Low Resource Languages for Emergent Incidents (LORELEI) that is funded through the Defense Advanced Research Projects Agency (DARPA). LORELEI aims to support emergent missions, e.g., humanitarian assistance/disaster relief, peacekeeping or infectious disease response by “providing situational awareness by identifying elements of information in foreign language and English sources, such as topics, names, events, sentiment and relationships.”

The Ohio State group is using high-performance computing and Bayesian methods to develop a grammar acquisition algorithm that can discover the rules of lesser-known languages.

“We need to get resources to direct disaster relief and part of that is translating news text, knowing names of cities, what’s happening in those areas,” said William Schuler, Ph.D., a linguistics professor at The Ohio State University, who is leading the project. “It’s figuring out what has happened rapidly, and that can involve automatically processing incident language.”

Schuler’s team is using Bayesian methods to discover a given language’s grammar and build a model capable of generating grammatically valid output.
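
To make the idea concrete, here is a toy probabilistic context-free grammar (PCFG) sketch; the grammar, lexicon, and probabilities are invented for illustration and are not the Ohio State team’s model. A grammar-acquisition algorithm would infer rule probabilities like these from data; here we simply sample from them to generate output:

```python
import random

# A toy PCFG: each nonterminal maps to (expansion, probability) pairs
# whose probabilities sum to 1. All rules and words here are invented.
PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("det", "noun"), 0.7), (("noun",), 0.3)],
    "VP": [(("verb", "NP"), 0.6), (("verb",), 0.4)],
}
LEXICON = {"det": ["the"], "noun": ["city", "storm"], "verb": ["hits"]}

def generate(symbol="S", rng=random):
    """Sample a sentence (list of words) top-down from the PCFG."""
    if symbol in LEXICON:                        # terminal category
        return [rng.choice(LEXICON[symbol])]
    expansions, weights = zip(*PCFG[symbol])
    chosen = rng.choices(expansions, weights=weights)[0]
    words = []
    for child in chosen:
        words.extend(generate(child, rng))
    return words

print(" ".join(generate()))  # e.g. "the storm hits the city"
```

Learning amounts to searching the space of such rule sets and probabilities for the ones that best explain observed text, which is the computationally expensive part Schuler describes.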

“The computational requirements for learning grammar from statistics are tremendous, which is why we need a supercomputer,” Schuler said. “And it seems to be yielding positive results, which is exciting.”

The team originally used CPU-only servers but is now using the GPU computing capability of the Ohio Supercomputer Center’s Owens cluster to model a larger number of grammar categories. The goal is to have a model that can be trained on a target language in an emergency response situation, so speed is critical. In August, the team ran two disaster simulations in seven days using 60 GPU nodes (one Nvidia P100 GPU per node), but a real-world situation with more realistic configurations would demand even greater computational power, according to one of the researchers.

Purdue Researchers Hit DARPA Cooling Target of 1000W/cm^2
Tue, 24 Oct 2017
https://www.hpcwire.com/2017/10/24/purdue-researchers-hit-darpa-cooling-target-1000wcm2/

Cooling is an ongoing challenge in all of computing. Now, a group of researchers from Purdue University has devised an ‘intra-chip’ cooling technique that hits the 1,000-watt-per-square-centimeter target singled out by DARPA. The new approach relies on fabricating microchannels in chips and flowing a coolant through them.

Attaching heatsinks to chips has long been the common practice. However, new efforts to stack chips on top of each other to increase performance and capacity complicate that approach.

“This presents a cooling challenge because if you have layers of many chips, normally each one of these would have its own system attached on top of it to draw out heat. As soon as you have even two chips stacked on top of each other, the bottom one has to operate with significantly less power because it can’t be cooled directly,” said Justin Weibel, a research associate professor in Purdue’s School of Mechanical Engineering and co-investigator on the project.

The work has been funded with a four-year grant issued in 2013 totaling around $2 million from the U.S. Defense Advanced Research Projects Agency, and the new findings are detailed in a paper appearing on Oct. 12 in the International Journal of Heat and Mass Transfer. There’s also an article on the work posted yesterday on the Purdue web site.

Use of small microchannels is the key, but doing so also complicates the process. “It’s been known for a long time that the smaller the channel, the higher the heat-transfer performance,” said Kevin Drummond, a doctoral student and one of the paper’s lead authors. “We are going down to 15 or 10 microns in channel width, which is about 10 times smaller than what is typical for microchannel cooling technologies.”

Although using ultra-small channels increases the cooling performance, it is difficult to pump the required rates of liquid flow through the tiny microchannels. The Purdue team overcame this problem by designing a system of short, parallel channels instead of long channels stretching across the entire length of the chip. A special “hierarchical” manifold distributes the flow of coolant through these channels.

“So, instead of a channel being 5,000 microns in length, we shorten it to 250 microns long,” said Suresh Garimella, PI on the project. “The total length of the channel is the same, but it is now fed in discrete segments, and this prevents major pressure drops. So this represents a different paradigm.” The channels were etched in silicon with a width of about 15 microns but a depth of up to 300 microns.
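
The pressure-drop argument can be sketched with the laminar-flow (Hagen-Poiseuille) relation, under assumed coolant viscosity and flow velocity that are ours rather than the paper’s: pressure drop grows linearly with channel length, so feeding the flow in short segments cuts the drop each segment must overcome.

```python
# Illustrative sketch (assumed values, not the paper's data). For laminar
# flow in a channel, Hagen-Poiseuille gives dP = 32 * mu * L * v / D**2,
# i.e. pressure drop is proportional to channel length L.

def pressure_drop_kpa(length_um, velocity_mps=1.0, mu=1.0e-3, d_um=15.0):
    """Laminar pressure drop (kPa) along a channel of hydraulic diameter d_um."""
    length_m, d_m = length_um * 1e-6, d_um * 1e-6
    return 32 * mu * length_m * velocity_mps / d_m**2 / 1e3

long_channel = pressure_drop_kpa(5000)   # one full-length channel
segment = pressure_drop_kpa(250)         # one manifold-fed segment
print(long_channel / segment)            # 20x lower per-segment pressure drop
```

Each 250-micron segment sees one-twentieth the pressure drop of a single 5,000-micron channel at the same velocity, which is why the hierarchical manifold makes the tiny channels pumpable.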

“I think for the first time we have shown a proof of concept for embedded cooling for Department of Defense and potential commercial applications,” Garimella said. “This transformative approach has great promise for use in radar electronics, as well as in high-performance supercomputers. In this paper, we have demonstrated the technology and the unprecedented performance it provides.”

“This number of 1,000 watts per square centimeter is sort of a Holy Grail of microcooling, and we’ve demonstrated this capability in a functioning system with an electrically insulated liquid,” Garimella said.
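
As a back-of-envelope check on what 1,000 W/cm^2 implies, here is a sensible-heating estimate with assumed, water-like coolant properties (the cp value and allowed temperature rise are hypothetical, not from the paper, which used an electrically insulated liquid):

```python
# Back-of-envelope sketch (assumed coolant properties): how much coolant
# flow does 1,000 W/cm^2 imply if the liquid absorbs the heat sensibly?
# Q = flux * area, and m_dot = Q / (cp * dT).

def coolant_flow_g_per_s(flux_w_cm2, area_cm2, cp_j_kgk, dt_k):
    """Mass flow (g/s) needed to absorb flux*area with temperature rise dt_k."""
    q_watts = flux_w_cm2 * area_cm2
    return q_watts / (cp_j_kgk * dt_k) * 1000.0

# A 1 cm^2 hot spot at the DARPA target, water-like cp ~ 4,180 J/(kg K),
# and a 20 K allowed coolant temperature rise (all assumed values):
print(round(coolant_flow_g_per_s(1000, 1.0, 4180, 20), 1))  # ~12.0 g/s
```

Roughly 12 grams per second through a single square centimeter of microchannels: modest in absolute terms, but a demanding flow to drive through 15-micron passages, which is the pumping problem the segmented design addresses.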

Image caption: A new electronics-cooling technique relies on microchannels, just a few microns wide, embedded within the chip itself. The device was built at Purdue University’s Birck Nanotechnology Center. (Purdue University photo/ Kevin P. Drummond)

DARPA Pledges Another $300 Million for Post-Moore’s Readiness
Thu, 14 Sep 2017
https://www.hpcwire.com/2017/09/14/darpa-pledges-another-300-million-post-moores-readiness/

Yesterday, the Defense Advanced Research Projects Agency (DARPA) launched a giant funding effort to ensure the United States can sustain the pace of electronic innovation vital to both a flourishing economy and a secure military. Under the banner of the Electronics Resurgence Initiative (ERI), some $500-$800 million will be invested in post-Moore’s Law technologies that will benefit military and commercial users and contribute crucially to national security in the 2025 to 2030 time frame.

First made public in June (see HPCwire coverage here), ERI took shape over the summer as DARPA’s Microsystems Technology Office sought community involvement on the path forward for future progress in electronics. Based on that input, DARPA developed six new programs which are part of the overall larger vision of the Electronic Resurgence Initiative. The six programs are detailed in three Broad Agency Announcements (BAAs) published yesterday on FedBizOpps.gov. Each of the BAAs correlates to one of the ERI research pillars: materials and integration, circuit design, and systems architecture.

Planned investment is in the range of $200 million a year over four years. The name “ERI Page 3 Investments” refers to research areas that Gordon Moore predicted would become important for future microelectronics progress, cited on page 3 of his famous 1965 paper, “Cramming More Components onto Integrated Circuits.”

Also joining the ERI portfolio are several existing DARPA programs (including HIVE and CHIPS) as well as the Joint University Microelectronics Program (JUMP), a research effort in basic electronics education co-funded by DARPA and Semiconductor Research Corporation (SRC), an industry consortium based in Durham, N.C.

DARPA says that with the official roll out of the Electronics Resurgence Initiative, it “hopes to open new innovation pathways to address impending engineering and economics challenges that, if left unanswered, could challenge what has been a relentless half-century run of progress in microelectronics technology.”

DARPA is of course referring to the remarkable engine of innovation that is Moore’s Law. Gordon Moore’s 1965 observation that transistor densities were doubling at roughly 24-month intervals set the stage for five decades of faster and cheaper microelectronics. But as node feature sizes approach the fundamental limits of physics, design work and fabrication become ever more complex and expensive, jeopardizing the economic benefits of Moore’s dictum.
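
The doubling arithmetic behind that five-decade run is simple to state, assuming an idealized 24-month cadence:

```python
# Moore's observation as arithmetic: density doubles every ~24 months,
# so over y years density grows by a factor of 2**(y/2).

def density_growth(years: float, doubling_months: float = 24.0) -> float:
    """Multiplicative transistor-density growth after `years`."""
    return 2.0 ** (years * 12.0 / doubling_months)

print(density_growth(10))  # 32.0 -- five doublings in a decade
print(density_growth(50))  # ~33.6 million-fold over five decades
```

A roughly 33-million-fold density gain over fifty years is the compounding engine whose slowdown ERI is designed to outflank.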

It’s something of a grand experiment, explained Bill Chappell, director of the Agency’s Microsystems Technology Office (MTO), in a press call, referring to the scale and scope of the Electronics Resurgence Initiative. DARPA has packaged the six programs into one large announcement, released in three Broad Agency Announcements (BAAs) on FBO.gov. In sum, the six programs will receive $75 million in investment over the next year alone and on the order of $300 million over four years. Like all DARPA programs, their longevity and funding levels will be tied to performance.

“If we see that we’re getting broad resonance within the commercial industry and within the DoD industry, and unique partnerships are forming and/or unique capabilities are popping up for national defense, it will continue as expected or even grow,” said Chappell.

The DoD is finding it increasingly difficult to manufacture and design circuits, partly due to Moore’s law slowdowns and partly due to the scale of designs. “We are victim of our own success in that we have so many transistors available that we now have another problem which is complexity, complexity of manufacturing and complexity of design,” said Chappell. “So whether Moore’s law ends or not, at the DoD, from a niche development perspective we already have a problem on our hands. And we’re sharing that with the commercial world as well; you see a lot of mergers and acquisitions and tumult in the industry as they try to also grapple with some of the similar problems and the manpower required to get a design from concept into a physical product.”

Here’s a rundown on the six programs organized by their research thrust:

Foundations Required for Novel Compute (FRANC): Develop the foundations for assessing and establishing the proof of principle for beyond von Neumann compute topologies enabled by new materials and integration.

Domain-Specific System on Chip (DSSoC): Enable rapid development of multiapplication systems through a single programmable device.

Chappell gave additional context for the Software Defined Hardware program, noting that it will look at coarse-grained reprogrammability specifically for big data programs. “We have the TPU and the GPU for dense problems, for dense searches, and dense matrix manipulation. We have recently started the HIVE program, which does sparse graph search. But the big question that still exists is what if you have a dense and sparse dataset? We don’t have a chip under development or even concepts that are very good at doing both of those types of datasets.”

What DARPA is envisioning is a reprogrammable system, or chip, that is intelligent enough and has an intelligent enough just in time compiler to recognize the data and type of data it needs to operate on and reconfigure itself to the need of that moment. DARPA has done seedlings to demonstrate that it’s feasible but “it’s still a DARPA-hard concept to pull off,” said Chappell.
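
A software analogy of that reconfiguration idea, invented here purely for illustration (it is not DARPA’s design): inspect the data at run time and dispatch to a dense or sparse kernel accordingly, much as the envisioned chip would reconfigure itself per-dataset.

```python
# Toy illustration of data-driven dispatch: choose a dense or sparse
# strategy per call based on the measured sparsity of the input.

def dot(row, vec, density_threshold=0.25):
    """Row-vector dot product, picking a dense or sparse strategy per call."""
    nnz = sum(1 for x in row if x != 0)
    if nnz / len(row) >= density_threshold:
        # dense path: touch every element
        return sum(a * b for a, b in zip(row, vec))
    # sparse path: touch only the nonzeros
    return sum(row[i] * vec[i] for i in range(len(row)) if row[i] != 0)

print(dot([1, 2, 3, 4], [1, 1, 1, 1]))  # dense path
print(dot([0, 0, 0, 5], [1, 1, 1, 2]))  # sparse path
```

The hard part DARPA describes is doing this in hardware, with a just-in-time compiler reshaping coarse-grained fabric rather than a Python `if` choosing a loop.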

DARPA will hold a number of Proposers Days to meet with interested researchers. The Proposers Day for the FRANC program of the Materials and Integration thrust will be run as a webinar on Sept. 15, and that thrust’s other program, 3DSoC, will hold its event at DARPA headquarters in Arlington, Va., on Sept. 22. The Proposers Days for the Architectures thrust’s two programs, DSSoC and SDH, will take place near DARPA headquarters in Arlington, Va., on Sept. 18 and 19, respectively. The Proposers Days for both programs in the Design thrust, IDEA and POSH, will take place on Sept. 22 in Mountain View, Calif. Details about all of these Proposers Day events and how to register are included in Special Notice DARPA-SN-17-75, posted on FBO.gov.

Asked about the goals for ERI writ large, Chappell said, “Overall success will look like we’ve invented the ideas that will be part of that 2025 and 2030 electronics community in such a way that both our defense base has better access to technology, better access to IP, better design services and capabilities than they have today because of these relationships that we are trying to build while simultaneously US interests in electronics in regards to economic development, maintaining our dominant global position is secured because of the new ideas that we are creating through these investments.

“These $75 million next year and $300 million over the course of the next four years that we’re planning is for very far-out research which often times is not something that a commercial entity can do because of its speculative nature and/or not something the DoD can do because it isn’t necessarily solving a today problem, but a tomorrow problem.”

DARPA is known for funding high-risk, high-reward R&D with broad commercial impact, helping to invent both the Internet and GPS.

DARPA Continues Investment in Post-Moore’s Technologies
Mon, 24 Jul 2017
https://www.hpcwire.com/2017/07/24/darpa-continues-investment-post-moores-technologies/

The U.S. military long ago ceded dominance in electronics innovation to Silicon Valley, the DoD-backed powerhouse that has driven microelectronics development for decades. With Moore’s Law clearly running out of steam, the Defense Advanced Research Projects Agency (DARPA) is attempting to reinvigorate and leverage a vibrant domestic chip sector with a $200 million initiative designed, among other things, to push the boundaries of chip architectures like GPUs.

DARPA recently announced that its Electronics Resurgence Initiative seeks to move beyond Moore’s Law chip scaling. Among the new fronts to be opened by the defense agency are extending GPU frameworks that underlie machine-learning tools to develop “reconfigurable physical structures that adjust to the needs of the software they support.”

While it remains unclear how enterprises might benefit directly from the chip initiative overseen by DARPA’s Microsystems Technology Office, the agency does have a reputation dating back to the earliest days of the Internet for funding high-risk technology R&D that eventually makes its way into the commercial sector.

The DARPA effort also attempts to lay the groundwork for a post Moore’s Law era where, according to the agency, research will focus on “integrating different semiconductor materials on individual chips, ‘sticky logic’ devices that combine processing and memory functions and vertical rather than only planar integration of microsystem components.”

As the focus of chip technology zeroes in on data driven enterprise applications, DARPA said it would cast a wider net to harness semiconductor innovation that would lead to a post-Moore’s Law generation of microelectronic systems benefitting military and commercial users.

The effort runs in parallel with recent attempts by DoD to tap into the sustained burst of technology and development innovation in Silicon Valley. As the technology entrepreneur Steve Blank has documented, the 20th century electronics explosion was initially funded by the U.S military beginning as early as World War II, continuing throughout the Cold War confrontation with the former Soviet Union.

The DARPA effort primarily seeks to establish new development models that go beyond chip scaling. “We need to break away from tradition and embrace the kinds of innovations that the new initiative is all about,” emphasized William Chappell, director of DARPA’s Microsystems Technology Office. The program will “embrace progress through circuit specialization and to wrangle the complexity of the next phase of advances, which will have broad implications on both commercial and national defense interests,” Chappell added.

The post-Moore’s Law research effort will complement the recently created Joint University Microelectronics Program (JUMP), a research effort in basic electronics being co-funded by DARPA and Semiconductor Research Corporation (SRC), an industry consortium based in Durham, N.C. Among the chip makers contributing to JUMP are IBM, Intel Corp., Micron Technology and Taiwan Semiconductor Manufacturing Co.

SRC members and DARPA are expected to kick in more than $150 million for the five-year project. Focus areas include high-frequency sensor networks, distributed and cognitive computing along with “intelligent memory and storage.”

As DARPA continues to invest in device technology, it is also attempting to leverage what Chappell calls the “software-defined world.” The agency sees virtualization and other software technologies as one way of addressing skyrocketing weapons costs. Hence, the agency is also investing more research funding in areas such as algorithm development and circuit design for applications such as dynamic spectrum sharing, a capability that would allow the military to squeeze more capacity out of crowded electromagnetic spectrum.

DARPA Selects Five Teams for Neural Engineering Program
Mon, 10 Jul 2017
https://www.hpcwire.com/2017/07/10/darpa-selects-five-teams-neural-engineering-program/

Interfacing directly with the human neural system to promote health and expand human capacities is an ongoing goal in brain research. Today, the Defense Advanced Research Projects Agency (DARPA) announced that five contracts have been issued in support of its Neural Engineering System Design (NESD) program, announced last year. The list of winners is below.

NESD’s formal goal is development of an “implantable system able to provide precision communication between the brain and the digital world. Such an interface would convert the electrochemical signaling used by neurons in the brain into the ones and zeros that constitute the language of information technology, and do so at far greater scale than is currently possible.” The work has the potential to significantly advance scientists’ understanding of the neural underpinnings of vision, hearing, and speech and could eventually lead to new treatments for people living with sensory deficits.

“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” said Phillip Alvelda, the founding NESD Program Manager. “By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function.”

Not surprisingly, the project is necessarily cross-disciplinary. Among the many disciplines represented on the teams are neuroscience, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, mathematics, computer science, and wireless communications. In addition to overcoming engineering-oriented hardware, biocompatibility, and communication challenges, the teams must also develop advanced mathematical and neuro-computational techniques to decode and encode neural data and compress those troves of information so they are tractable within the available bandwidth and power constraints.
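
To see why compression is unavoidable, consider a rough data-rate estimate with assumed sampling parameters (the 30 kHz rate and 10-bit samples below are hypothetical illustrations, not NESD specifications):

```python
# Rough arithmetic (assumed sampling parameters): why raw data from a
# million neural channels must be compressed before leaving an implant.

def raw_rate_gbps(channels: int, sample_hz: int, bits_per_sample: int) -> float:
    """Uncompressed data rate in Gb/s."""
    return channels * sample_hz * bits_per_sample / 1e9

# One million channels at a hypothetical 30 kHz and 10 bits per sample:
rate = raw_rate_gbps(1_000_000, 30_000, 10)
print(rate)  # 300.0 Gb/s
```

Hundreds of gigabits per second of raw signal dwarfs what a low-power implanted wireless link can carry, which is why on-device encoding and compression are core NESD challenges.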

Brown University team led by Dr. Arto Nurmikko will seek to decode neural processing of speech, focusing on the tone and vocalization aspects of auditory perception. The team’s proposed interface would be composed of networks of up to 100,000 untethered, submillimeter-sized “neurograin” sensors implanted onto or into the cerebral cortex. A separate RF unit worn or implanted as a flexible electronic patch would passively power the neurograins and serve as the hub for relaying data to and from an external command center that transcodes and processes neural and digital signals.

Columbia University team led by Dr. Ken Shepard will study vision and aims to develop a non-penetrating bioelectric interface to the visual cortex. The team envisions layering over the cortex a single, flexible complementary metal-oxide semiconductor (CMOS) integrated circuit containing an integrated electrode array. A relay station transceiver worn on the head would wirelessly power and communicate with the implanted device.

Fondation Voir et Entendre team led by Drs. Jose-Alain Sahel and Serge Picaud will study vision. The team aims to apply techniques from the field of optogenetics to enable communication between neurons in the visual cortex and a camera-based, high-definition artificial retina worn over the eyes, facilitated by a system of implanted electronics and micro-LED optical technology.

John B. Pierce Laboratory team led by Dr. Vincent Pieribone will study vision. The team will pursue an interface system in which modified neurons capable of bioluminescence and responsive to optogenetic stimulation communicate with an all-optical prosthesis for the visual cortex.

Paradromics, Inc., team led by Dr. Matthew Angle aims to create a high-data-rate cortical interface using large arrays of penetrating microwire electrodes for high-resolution recording and stimulation of neurons. As part of the NESD program, the team will seek to build an implantable device to support speech restoration. Paradromics’ microwire array technology exploits the reliability of traditional wire electrodes, but by bonding these wires to specialized CMOS electronics the team seeks to overcome the scalability and bandwidth limitations of previous approaches using wire electrodes.

University of California, Berkeley, team led by Dr. Ehud Isacoff aims to develop a novel “light field” holographic microscope that can detect and modulate the activity of up to a million neurons in the cerebral cortex. The team will attempt to create quantitative encoding models to predict the responses of neurons to external visual and tactile stimuli, and then apply those predictions to structure photo-stimulation patterns that elicit sensory percepts in the visual or somatosensory cortices, where the device could replace lost vision or serve as a brain-machine interface for control of an artificial limb.

DARPA structured the NESD program to facilitate commercial transition of successful technologies. Key to ensuring a smooth path to practical applications, teams will have access to design assistance, rapid prototyping, and fabrication services provided by industry partners whose participation as facilitators was organized by DARPA and who will operate as sub-contractors to the teams.

DARPA Picks Intel, Qualcomm, PNNL, 2 Others to Tackle HIVE Project
https://www.hpcwire.com/2017/06/05/darpa-picks-intel-qualcomm-pnnl-2-others-tackle-hive-project/
Mon, 05 Jun 2017 15:56:33 +0000

Getting the most from big data is an ongoing challenge. The Defense Advanced Research Projects Agency (DARPA) last Friday selected five participants for its Hierarchical Identify Verify Exploit (HIVE) program, announced last summer, intended to develop a new high-performance data-handling platform.

“Today’s hardware is ill-suited to handle such data challenges, and these challenges are only going to get harder as the amount of data continues to grow exponentially,” according to Trung Tran, a program manager in DARPA’s Microsystems Technology Office (MTO) heading up HIVE. The goal is to develop a “powerful new data-handling and computing platform specialized for analyzing and interpreting huge amounts of data with unprecedented deftness.”

“The HIVE program is an exemplary prototype for how to engage the U.S. commercial industry, leverage their design expertise, and enhance U.S. competitiveness, while also enhancing national security,” said William Chappell, director of MTO, in the release announcing the selections. “By forming a team with members in both the commercial and defense sectors, we hope to forge new R&D pathways that can deliver unprecedented levels of hardware specialization.”

As described by DARPA, a core HIVE goal is creation of a “graph analytics processor which incorporates the power of graphical representations of relationships in a network more efficiently than traditional data formats and processing techniques. Examples of these relationships among data elements and categories include person-to-person interactions as well as seemingly disparate links between, say, geography and changes in doctor visit trends or social media and regional strife.”

“In combination with emerging machine learning and other artificial intelligence techniques that can categorize raw data elements, and by updating the elements in the graph as new data becomes available, a powerful graph analytics processor could discern otherwise hidden causal relationships and stories among the data elements in the graph representations.”

DARPA suggests such a graph analytics processor might achieve a ‘thousandfold improvement in processing efficiency’ over today’s best processors, enabling the real-time identification of strategically important relationships as they unfold in the field rather than relying on after-the-fact analyses in data centers.
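HIVE's specialized hardware is still on the drawing board, but the workload it targets can be sketched in ordinary software. The toy below (an illustration, not anything from DARPA or the HIVE teams) builds a graph of person-to-person interactions as new data arrives, then uses breadth-first search to surface an indirect chain of links of the kind described above:

```python
from collections import defaultdict, deque

# Toy graph of person-to-person interactions, stored as an adjacency list.
graph = defaultdict(set)

def add_interaction(a, b):
    """Update the graph as new data becomes available (one edge per observed interaction)."""
    graph[a].add(b)
    graph[b].add(a)

def connection_path(start, goal):
    """Breadth-first search: surface an otherwise non-obvious chain of links."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

for pair in [("alice", "bob"), ("bob", "carol"), ("carol", "dave")]:
    add_interaction(*pair)

print(connection_path("alice", "dave"))  # ['alice', 'bob', 'carol', 'dave']
```

A graph analytics processor would accelerate exactly this kind of pointer-chasing, irregular-memory-access traversal, which conventional cache-oriented CPUs handle poorly at scale.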

DARPA Ramps Up Spectrum Challenge with Information Days
https://www.hpcwire.com/2016/08/04/darpa-ramps-spectrum-challenge-information-days/
Thu, 04 Aug 2016 15:27:24 +0000

Next week, the DARPA-sponsored Spectrum Collaboration Challenge (SC2) to invent a better way to dynamically carve up and use the electromagnetic spectrum will hold its first information days. The information sessions are among the first concrete steps in what will be a long haul for teams – the winning designs aren’t expected until late 2019. The team whose radio design most reliably achieves successful communication in the presence of other competing radios could win as much as $3,500,000.

It’s no secret the spectrum scarcity problem has been growing for years: the current practice of licensing fixed chunks of spectrum is already short on available bandwidth and will soon be swamped by demand from the frenetic proliferation of wireless devices.

As described by DARPA, competitors will “develop a new wireless paradigm in which radio networks will autonomously collaborate and reason about how to share the RF spectrum, avoiding interference and jointly exploiting opportunities to achieve the most efficient use of the available spectrum. SC2 teams will develop these breakthrough capabilities by taking advantage of recent advances in artificial intelligence (AI) and machine learning, and the expanding capacities of software-defined radios.”

Each information day will be held both as a live town-hall-style meeting at DARPA and as a webinar. These sessions are a chance to find out more about SC2, ask questions, and explore potential teaming opportunities. Meeting specifics:

The SC2 Competitors Information Day will be held on August 10, 2016, at 8:00 a.m. EDT. It is directed toward individuals, teams, and proposers interested in participating in the competitive SC2 tournaments, either by signing up as an Open Track team or by applying to the Proposal Track in response to DARPA-BAA-16-47.

The SC2 Competition Architecture Day will be held on August 11, 2016, at 8:00 a.m. EDT and is directed toward proposers interested in responding to DARPA-BAA-16-48. Successful respondents to this BAA will provide research services in support of SC2 but will not participate in the competitive events.

The broad idea, says DARPA, is not just to challenge innovators in academia and business to produce breakthroughs in collaborative AI, but also to catalyze a new spectrum paradigm that can help usher in an era of spectrum abundance. That’s a tall order, but a necessary one in an ever-more interconnected world dominated by wireless devices.
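The actual SC2 entries will lean on AI and machine learning, but the coordination problem at the heart of the challenge can be illustrated with a deliberately simple policy (a hypothetical sketch, not a competition design): each radio senses current channel occupancy and claims the least-used channel, spreading load across the spectrum instead of colliding:

```python
from collections import Counter

CHANNELS = range(4)  # a toy band with four shareable channels

def pick_channel(observed_occupancy):
    """Greedy sharing policy: claim the channel currently sensed as least occupied."""
    return min(CHANNELS, key=lambda ch: observed_occupancy[ch])

# Five radios joining one at a time, each sensing what the others already chose.
occupancy = Counter()
assignments = []
for radio in range(5):
    ch = pick_channel(occupancy)
    occupancy[ch] += 1
    assignments.append(ch)

print(assignments)  # [0, 1, 2, 3, 0] -- load spreads before any channel doubles up
```

Real collaborative radios must do this continuously, with noisy sensing and no central coordinator, which is where the machine-learning aspect of SC2 comes in.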

DARPA Seeks New Computing Paradigms
https://www.hpcwire.com/2015/03/26/darpa-seeks-new-computing-paradigms/
Fri, 27 Mar 2015 00:59:20 +0000

May you live in interesting times, cautions the age-old proverb. As computing chips face the fundamental limitations of miniaturization, it is sure to be interesting times, indeed. One of the most pressing issues facing the scientific community is the inability of today’s best computers to process the large-scale simulations needed for understanding complex physical systems.

“Over the past half century, as supercomputers got faster and more powerful, such simulations became ever more accurate and useful,” states the Defense Advanced Research Projects Agency (DARPA). “But in recent years even the best computer architectures haven’t been able to keep up with demand for the kind of simulation processing power needed to handle exceedingly complex design optimization and related problems.”

To remedy this situation, DARPA is seeking ideas on how to speed up the computation of the complex mathematics that undergirds scientific computing. Specifically, the agency is looking for assistance with a class of equations known as partial differential equations. These equations, which describe fundamental physical principles of motion, diffusion, and equilibrium, involve continuous rates of change over a large range of physical parameters. Such problems are not easily broken into discrete parts to be solved by individual CPUs.
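To see why these simulations strain conventional hardware, consider the discretized, step-by-step way a digital computer must handle even the simplest such equation. The sketch below (illustrative only, not from DARPA's RFI) advances the one-dimensional heat equation u_t = α·u_xx with an explicit finite-difference stencil; production simulations repeat updates like this across billions of grid points in three dimensions, at every time step:

```python
# Explicit finite-difference stepping for the 1D heat (diffusion) equation,
# u_t = alpha * u_xx -- one of the simplest PDEs of the class described above.
alpha, dx, dt = 0.01, 0.1, 0.1   # chosen so alpha*dt/dx**2 <= 0.5 (stability limit)

def step(u):
    """Advance the temperature profile u by one time step (boundaries held fixed)."""
    r = alpha * dt / dx**2
    return [u[0]] + [
        u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0] * 50
u[25] = 1.0                      # a hot spot in the middle of the rod
for _ in range(1000):
    u = step(u)
# the spike gradually diffuses outward toward the cold boundaries
```

An analog substrate, by contrast, could in principle let a physical system evolve continuously toward the same answer without ever discretizing it.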

“The standard computer cluster equipped with multiple central processing units (CPUs), each programmed to tackle a particular piece of a problem, is just not designed to solve the kinds of equations at the core of large-scale simulations, such as those describing complex fluid dynamics and plasmas,” said Vincent Tang, program manager in DARPA’s Defense Sciences Office.

“A processor specially designed for such equations may enable revolutionary new simulation capabilities for design, prediction, and discovery. But what might that processor look like?” asks the DARPA invitation.

Before the digital era, equations were solved analog-style by manipulating continuously changing values instead of discrete measurements. The analog computer goes back more than 100 years but was displaced when transistor-based digital computers rose to prominence in the 1950s and 1960s based on their ability to solve a wide range of problems.

DARPA suggests that the time is right for taking another look at using analog substrates for the efficient simulation of “systems governed by complex, simultaneous, locally interacting, and non-linear phenomena,” especially given the advances that have been made in microelectromechanical systems, optical engineering, microfluidics, metamaterials and even DNA computing. If the performance advantage is significant enough, the analog coprocessor could be the next big thing in heterogeneous computing.

The RFI seeks new processing paradigms that have the potential to overcome current barriers in computing performance – analog, digital, or hybrid approaches are all welcome.

From the announcement:

The RFI invites short responses that address the following needs, either singly or in combination:

Scalable, controllable, and measurable processes that can be physically instantiated in co-processors for acceleration of computational tasks frequently encountered in scientific simulation.

Algorithms that use analog, non-linear, non-serial, or continuous-variable computational primitives to reduce the time, space, and communicative complexity relative to von Neumann/CPU/GPU processing architectures.

Technology development beyond these areas will be considered so long as it supports the RFI’s goals.

DARPA is particularly interested in engaging nontraditional contributors to help develop leap-ahead technologies in the focus areas above, as well as other technologies that could potentially improve the computational tractability of complex nonlinear systems.

DARPA’s Request for Information (RFI) – titled Analog and Continuous-variable Co-processors for Efficient Scientific Simulation (ACCESS) – is available at: http://go.usa.gov/3CV43. Responses are due by 4:00 p.m. Eastern on April 14, 2015.

DARPA Targets Autocomplete for Programmers
https://www.hpcwire.com/2014/11/06/darpa-targets-autocomplete-programmers/
Fri, 07 Nov 2014 00:36:36 +0000

If Rice University computer scientists have their way, writing computer software could become as easy as searching the Internet. Two dozen computer scientists from Rice, the University of Texas-Austin, the University of Wisconsin-Madison and the company GrammaTech have joined forces to turn this promise into a reality.

With $11 million in DARPA funding, the group will spend the next four years developing a tool called PLINY that will both “autocomplete” and “autocorrect” code.

The project is one of the so-called hard challenges that DARPA has identified.

As this article on the Rice University website explains, the tool will make predictive suggestions much like today’s Web browsers and smart phones offer to complete queries or fix misspellings.

“Imagine the power of having all the code that has ever been written in the past available to programmers at their fingertips as they write new code or fix old code,” said Vivek Sarkar, Rice’s E.D. Butcher Chair in Engineering, chair of the Department of Computer Science and the principal investigator (PI) on the PLINY project. “You can think of this as autocomplete for code, but in a far more sophisticated way.”

Adding further credence to the promise of this technology is who’s funding it: the agency credited with inventing the Internet, cloud computing, virtual reality, autonomous vehicles and lots more in its 50-year history. The Defense Advanced Research Projects Agency (DARPA) is ponying up the money for this four-year effort as part of its Mining and Understanding Software Enclaves (MUSE) program. Hundreds of billions of lines of publicly available open-source computer code will be used to create a searchable database of properties, behaviors and vulnerabilities.

PLINY gets its name from the Roman naturalist who compiled the very first encyclopedia. At its core, PLINY will have a data-mining engine with access to an enormous repository of open-source code. The engine will utilize a mix of deep program analyses and big-data analytics to stand up a database that can be called on when a programmer needs help. Answers will be formulated based on Bayesian statistics. PLINY’s suggestion engine will proffer a range of solutions from the most to least probable.
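PLINY's internals have not been published, but the ranking idea can be illustrated with a toy model (all names, contexts, and data below are hypothetical): estimate the probability of each candidate completion from corpus counts using Laplace smoothing, a simple Bayesian estimate, and present candidates from most to least probable:

```python
from collections import Counter

# Toy "corpus": code contexts paired with the token that followed each one.
corpus = [
    ("for i in", "range"), ("for i in", "range"), ("for i in", "items"),
    ("import", "os"), ("import", "sys"), ("import", "os"),
]

def rank_completions(context, candidates, alpha=1.0):
    """Rank candidates by Laplace-smoothed P(candidate | context),
    estimated from observed corpus counts."""
    counts = Counter(tok for ctx, tok in corpus if ctx == context)
    total = sum(counts.values())
    scored = {
        c: (counts[c] + alpha) / (total + alpha * len(candidates))
        for c in candidates
    }
    return sorted(scored, key=scored.get, reverse=True)

print(rank_completions("import", ["os", "sys", "json"]))
# most probable first: ['os', 'sys', 'json']
```

PLINY's actual engine combines deep program analysis with statistics over a vastly larger corpus, but the most-to-least-probable presentation of suggestions follows the same principle.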

The benefit to society from a tool like this is enormous. In the digital age, software undergirds nearly everything. With the impending death of Moore’s law, the performance onus shifts to software developers. Although this will affect HPCers first, it will eventually spread to the entire computing spectrum. Programmers of every stripe will need all the help they can get.