
aarondubrow writes: As supercomputing becomes central to the work and progress of researchers in all fields, new kinds of computing resources and more inclusive modes of interaction are required. The National Science Foundation announced $16M in awards to support two new supercomputing acquisitions for the open science community. The systems — "Bridges" at the Pittsburgh Supercomputing Center and "Jetstream," co-located at the Indiana University Pervasive Technology Institute and The University of Texas at Austin's Texas Advanced Computing Center — respond to the needs of the scientific computing community for more high-end, large-scale computing resources while helping to create a more inclusive computing environment for science and engineering.
Reader 1sockchuck adds this article about why funding for the development of supercomputers is more important than ever:
America's high-performance computing (HPC) community faces funding challenges and growing competition from China and other countries. At last week's SC14 conference, leading researchers focused on outlining the societal benefits of their work, and how it touches the daily lives of Americans. "When we talk at these conferences, we tend to talk to ourselves," said Wilf Pinfold, director of research and advanced technology development at Intel Federal. "We don't do a good job communicating the importance of what we do to a broader community." Why the focus on messaging? Funding for American supercomputing has been driven by the U.S. government, which is in a transition with implications for HPC funding. As ComputerWorld notes, climate change skeptic Ted Cruz is rumored to be in line to chair a Senate committee that oversees NASA and the NSF.

An anonymous reader writes "Alva Noe, writer and professor of philosophy at the University of California, Berkeley, isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce "machines that exhibit the agency and awareness of an amoeba." He writes at NPR: "One reason I'm not worried about the possibility that we will soon make machines that are smarter than us is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used 'it' the way we use clocks.""

dcblogs writes At the supercomputing conference, SC14, this week, a U.S. Dept. of Energy official said the government has set a goal of 2023 as its delivery date for an exascale system. With that much lead time, it may be taking a risky path in the face of increasing international competition. There was a time when the U.S. didn't settle for second place. President John F. Kennedy delivered his famous "we choose to go to the moon" speech in 1962, and seven years later a man walked on the moon. The U.S. exascale goal is nine years away. China, Europe and Japan all have major exascale efforts, and the U.S. has already fallen behind in some areas of supercomputing. The European forecast of Hurricane Sandy in 2012 was so far ahead of U.S. models in predicting the storm's path that the National Oceanic and Atmospheric Administration was called before Congress to explain how it happened. It was told by a U.S. official that NOAA wasn't keeping up in computational capability. It's still not keeping up. Cliff Mass, a professor of meteorology at the University of Washington, wrote on his blog last month that the U.S. is "rapidly falling behind leading weather prediction centers around the world" because it has yet to catch up with Europe in computational capability. That criticism followed the recent $128 million purchase of a Cray supercomputer by the U.K.'s Met Office, its meteorological agency.

dcblogs writes U.S. officials Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is an open question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.

The modern electronics industry relies on inputs and supply chains, both material and technological, and none of them are easy to bypass. These include, besides expertise and manufacturing facilities, the actual materials that go into electronic components. Some of them are as common as silicon; rare earth minerals, not so much. One story linked from Slashdot a few years back predicted that then-known supplies would be exhausted by 2017, though such predictions of scarcity are notoriously hard to get right, as people (and prices) adjust to changes in supply. There's no denying that there's been a crunch on rare earths, though, over the last several years. The minerals themselves aren't necessarily rare in an absolute sense, but they're expensive to extract.
The most economically viable deposits are found in China, and rising prices for exports to the U.S., the EU, and Japan have raised political hackles. At the same time, those rising prices have spurred exploration and the reexamination of known deposits off the coast of Japan, in the midwestern U.S., and elsewhere.

Alex King is director of the Critical Materials Institute, a part of the U.S. Department of Energy's Ames Laboratory. CMI is heavily involved in making rare earth minerals slightly less rare by means of supercomputer analysis; researchers there are approaching the ongoing crunch by looking both for substitute materials for things like gallium, indium, and tantalum, and easier ways of separating out the individual rare earths (a difficult process). One team there is working with "ligands – molecules that attach with a specific rare-earth – that allow metallurgists to extract elements with minimal contamination from surrounding minerals" to simplify the extraction process. We'll be talking with King soon; what questions would you like to see posed? (This 18-minute TED talk from King is worth watching first, as is this Q&A.)

dcblogs writes A supercomputer upgrade is paying off for the U.S. National Weather Service with new high-resolution models that will offer better insight into severe weather. The National Oceanic and Atmospheric Administration, which runs the weather service, put into production two new IBM supercomputers, each 213 teraflops, running Linux on Intel processors. These systems replaced four-year-old, 74-teraflop machines. More computing power means the systems can run more detailed models, increasing the resolution on the maps from 8 miles to 2 miles.
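The cost of that extra resolution compounds quickly: refining the grid multiplies the cell count in each horizontal dimension, and stability constraints usually shrink the timestep as well. A back-of-envelope sketch (the scaling rule is standard for explicit grid models; the specific NOAA model configurations are not given in the story):

```python
def refinement_cost_factor(coarse, fine, horizontal_dims=2, cfl_timestep=True):
    """Rough compute-cost multiplier for refining a weather model grid.

    Assumes uniform refinement in each horizontal dimension; with a
    CFL-limited explicit scheme, the timestep shrinks with the cell size,
    adding one more factor of the refinement ratio.
    """
    ratio = coarse / fine
    factor = ratio ** horizontal_dims
    if cfl_timestep:
        factor *= ratio
    return factor

# 8-mile -> 2-mile cells: 4x finer per dimension, 16x more cells,
# and roughly 64x more work per forecast once the timestep shrinks too.
print(refinement_cost_factor(8, 2))  # → 64.0
```

That is far more than the roughly 3x jump from 74 to 213 teraflops, which is one reason such high-resolution runs are typically confined to limited regions or shorter forecast windows.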

An anonymous reader writes IBM has announced the "Watson Discovery Advisor," a cloud-based tool that will let researchers comb through massive troves of data, looking for insights and connections. The company says it's a major expansion in capabilities for the Watson Group, which IBM seeded with a $1 billion investment. "Scientific discovery takes us to a different level as a learning system," said Steve Gold, vice president of the Watson Group. "Watson can provide insights into the information independent of the question. The ability to connect the dots opens up a new world of possibilities."

Bismillah (993337) writes The Pawsey Supercomputing Centre in Australia has started unboxing and installing its upgraded 'Magnus' supercomputer, which could become the largest such system in the southern hemisphere, with up to one petaflop of performance.

An anonymous reader writes: A while back we discussed Code Combat, a multiplayer game that lets players program their way to victory. They recently launched a tournament called Greed, where coders had to write algorithms for competitively collecting coins. 545 programmers participated, submitting over 126,000 lines of code, which resulted in 390 billion statements being executed on a 673-core supercomputer. The winner, going by the name of "Wizard Dude," won 363 matches, tied 14, and lost none! He explains his strategy: "My coin-collecting algorithm uses a novel forces-based mechanism to control movement. Each coin on the map applies an attractive force on collectors (peasants/peons) proportional to its value over distance squared. Allied collectors and the arena edges apply a repulsive force, pushing other collectors away. The sum of these forces produces a vector indicating the direction in which the collector should move this turn. The result is that: 1) collectors naturally move towards clusters of coins that give the greatest overall payoff, and 2) collectors spread out evenly to cover territory. Additionally, the value of each coin is scaled depending on its distance from the nearest enemy collector, weighting in favor of coins with an almost even distance. This encourages collectors not to chase lost coins, but to deprive the enemy of contested coins first and leave safer coins for later."
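The winning strategy maps onto a few lines of vector arithmetic. A minimal sketch of the idea, not the winner's actual code (the coordinates, coin values, and repulsion constant here are made up for illustration, and the edge forces and enemy-distance scaling are omitted):

```python
import math

def step_direction(me, coins, allies, repulse_strength=1.0):
    """Sum per-coin attraction (value / distance^2) and per-ally repulsion
    into a unit vector pointing where this collector should move next."""
    fx = fy = 0.0
    for x, y, value in coins:
        dx, dy = x - me[0], y - me[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += value * dx / d**3        # magnitude value/d^2, direction (dx,dy)/d
        fy += value * dy / d**3
    for x, y in allies:
        dx, dy = me[0] - x, me[1] - y  # points away from the teammate
        d = math.hypot(dx, dy) or 1e-9
        fx += repulse_strength * dx / d**3
        fy += repulse_strength * dy / d**3
    mag = math.hypot(fx, fy) or 1e-9
    return fx / mag, fy / mag

# One collector at the origin, a valuable coin to the east: move east.
print(step_direction((0, 0), coins=[(5, 0, 3)], allies=[]))  # → (1.0, 0.0)
```

Because every coin and teammate contributes independently, the same few lines produce both behaviors the winner describes: clusters of coins add up to a strong pull, and nearby allies push each other apart.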

aarondubrow writes: "Graphene, a one-atom-thick form of the carbon material graphite, is strong, light, nearly transparent and an excellent conductor of electricity and heat, but a number of practical challenges must be overcome before it can emerge as a replacement for silicon in electronics or energy devices. One particular challenge concerns the question of how graphene diffuses heat, in the form of phonons. Thermal conductivity is critical in electronics, especially as components shrink to the nanoscale. Using the Stampede supercomputer at the Texas Advanced Computing Center, Professor Li Shi simulated how phonons (heat-carrying vibrations in solids) scatter as a function of the thickness of the graphene layers. He also investigated how graphene interacts with substrate materials and how phonon scattering can be controlled. The results were published in the Proceedings of the National Academy of Sciences, Applied Physics Letters and Energy & Environmental Science."

itwbennett (1594911) writes "Intel and SGI have built a proof-of-concept supercomputer that's kept cool using a fluid developed by 3M called Novec that is already used in fire suppression systems. The technology, which could replace fans and eliminate the need to use tons of municipal water to cool data centers, has the potential to slash data-center energy bills by more than 90 percent, said Michael Patterson, senior power and thermal architect at Intel. But there are several challenges, including the need to design new motherboards and servers."
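A rough sense of where such savings come from, using assumed PUE (power usage effectiveness) figures that the story does not give: a typical air-cooled data center might spend around 0.8 W of overhead per watt of IT load (PUE ≈ 1.8), while immersion cooling can push that below 0.1 W (PUE ≈ 1.1):

```python
def cooling_overhead_kw(it_load_kw, pue):
    """Non-IT (mostly cooling and power-delivery) load implied by a PUE figure."""
    return it_load_kw * (pue - 1.0)

it_kw = 1000.0                                # hypothetical 1 MW of servers
air = cooling_overhead_kw(it_kw, 1.8)         # ~800 kW of overhead
immersion = cooling_overhead_kw(it_kw, 1.1)   # ~100 kW of overhead
print(f"overhead cut: {100 * (air - immersion) / air:.1f}%")  # → overhead cut: 87.5%
```

The exact percentage depends entirely on the assumed PUE values; Intel's "more than 90 percent" figure refers to its own measurements, not this arithmetic.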

First time accepted submitter jwpeterson writes "Like chess and go, pentago is a two player, deterministic, perfect knowledge, zero sum game: there is no random or hidden state, and the goal of the two players is to make the other player lose (or at least tie). Unlike chess and go, pentago is small enough for a computer to play perfectly: with symmetries removed, there are a mere 3,009,081,623,421,558 (3e15) possible positions. Thus, with the help of several hours on 98304 threads of Edison, a Cray supercomputer at NERSC, pentago is now strongly solved. 'Strongly' means that perfect play is efficiently computable for any position. For example, the first player wins."
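"Strongly solved" means the game-theoretic value is computable from any position, not just the opening. Pentago's 3e15 positions need a supercomputer, but the same idea fits in a few lines for a toy game like tic-tac-toe (an illustrative analogue, not the pentago solver):

```python
from functools import lru_cache

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
             (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

@lru_cache(maxsize=None)
def value(board):
    """Game value for the player to move: 1 win, 0 draw, -1 loss.
    board is a 9-tuple of 'X', 'O', or '.'; X moves when counts are equal."""
    me, opp = ('X', 'O') if board.count('X') == board.count('O') else ('O', 'X')
    if any(board[a] == board[b] == board[c] == opp for a, b, c in WIN_LINES):
        return -1                      # opponent's last move completed a line
    if '.' not in board:
        return 0                       # board full with no winner: draw
    # Negamax: my value is the best of the negated values of my children.
    return max(-value(board[:i] + (me,) + board[i+1:])
               for i, s in enumerate(board) if s == '.')

# Perfect play from the empty board is a draw:
print(value(('.',) * 9))  # → 0
```

The cache makes this a "strong solution" in miniature: once filled, the value of any reachable position is a lookup. The pentago effort did the same thing at vastly larger scale, with symmetry reduction standing in for the naive memoization here.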

Hugo Villeneuve writes "What piece of code, in a non-assembler format, has been run the most often, ever, on this planet? By 'most often,' I mean the highest number of executions, regardless of CPU type. For the code in question, let's set a lower limit of 3 consecutive lines. For example, is it:

A UNIX kernel context switch?

A SHA2 algorithm for Bitcoin mining on an ASIC?

A scientific calculation running on a supercomputer?

A 'for-loop' inside an obscure microcontroller that has run in every GE appliance since the '60s?"
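The Bitcoin candidate is easy to make concrete: each mining attempt is a double SHA-256 over an 80-byte block header, repeated quadrillions of times per second network-wide. A sketch of the inner loop (pure Python with a deliberately easy target; real miners do this in ASIC silicon, and the all-zero header here is a stand-in, not a real block):

```python
import hashlib
import struct

def double_sha256(data):
    """The hash at the heart of Bitcoin mining: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header76, target, max_nonce=2**20):
    """Try nonces until the double hash, read as an integer, beats the target."""
    for nonce in range(max_nonce):
        header = header76 + struct.pack('<I', nonce)   # 80-byte header
        if int.from_bytes(double_sha256(header), 'little') < target:
            return nonce
    return None

# An easy target so the toy search finishes instantly; real difficulty
# makes the expected number of attempts astronomically larger.
print(mine(b'\x00' * 76, target=2**255))
```

Counting executions of those three hashing lines across the whole network makes it a serious contender for the question above.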

Nerval's Lobster writes "IBM believes its Watson supercomputing platform is much more than a gameshow-winning gimmick: its executives are betting very big that the software will fundamentally change how people and industries compute. In the beginning, IBM assigned 27 core researchers to the then-nascent Watson. Working diligently, those scientists and developers built a tough 'Jeopardy!' competitor. Encouraged by that success on live television, Big Blue devoted a larger team to commercializing the technology—a group it made a point of hiding in Austin, Texas, so its members could better focus on hardcore research. After years of experimentation, IBM is now prepping Watson to go truly mainstream. As part of that upgraded effort (which includes lots of hype-generating), IBM will devote a billion dollars and thousands of researchers to a dedicated Watson Group, based in New York City at 51 Astor Place. The company plans on pouring another $100 million into an equity fund for Watson's growing app ecosystem. If everything goes according to IBM's plan, Watson will help kick off what CEO Ginni Rometty refers to as a third era in computing. The 19th century saw the rise of a "tabulating" era: the birth of machines designed to count. In the latter half of the 20th century, developers and scientists initiated the 'programmable' era—resulting in PCs, mobile devices, and the Internet. The third (potential) era is 'cognitive,' in which computers become adept at understanding and solving, in a very human way, some of society's largest problems. But no matter how well Watson can read, understand and analyze, the platform will need to earn its keep. Will IBM's clients pay lots of money for all that cognitive power? Or will Watson ultimately prove an overhyped sideshow?"

Nerval's Lobster writes "The comparatively recent addition of supercomputing to the toolbox of biomedical research may already have paid off in a big way: Researchers have used a bio-specialized supercomputer to identify a molecular 'switch' that might be used to turn off bad behavior by pathogens. They're now trying to figure out what to do with that discovery by running even bigger tests on the world's second-most-powerful supercomputer. The 'switch' is a pair of amino acids called Phe396 that helps control the ability of the E. coli bacteria to move under its own power. Phe396 sits on a chemoreceptor that extends through the cell wall, so it can pass information about changes in the local environment to proteins on the inside of the cell. Its role was discovered by a team of researchers from the University of Tennessee and the ORNL Joint Institute for Computational Sciences using a specialized supercomputer called Anton, which was built specifically to simulate biomolecular interactions among proteins and other molecules to give researchers a better way to study details of how molecules interact. 'For decades proteins have been viewed as static molecules, and almost everything we know about them comes from static images, such as those produced with X-ray crystallography,' according to Igor Zhulin, a researcher at ORNL and professor of microbiology at UT, in whose lab the discovery was made. 'But signaling is a dynamic process, which is difficult to fully understand using only snapshots.'"

alphadogg writes "With memory, as with real estate, location matters. A group of researchers from AMD and the Department of Energy's Los Alamos National Laboratory have found that the altitude at which SRAM resides can influence how many random errors the memory produces. In a field study of two high-performance computers, the researchers found that L2 and L3 caches had more transient errors on the supercomputer located at a higher altitude, compared with the one closer to sea level. They attributed the disparity largely to lower air pressure and a higher rate of cosmic-ray-induced neutron strikes. Strangely, higher elevation even led to more errors within a rack of servers, the researchers found. Their tests showed that memory modules at the top of a server rack had 20 percent more transient errors than those closer to the bottom of the rack. However, it's not clear what causes this smaller-scale effect."

dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'"
Carbon nanotube-based processors are showing promise, though (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).

jfruh writes "Have you ever wanted to write code for Watson, IBM's Jeopardy-winning supercomputer? Well, now you can, sort of. Big Blue has created a standardized server that runs Watson's unique learning and language-recognition software, and will be selling developers access to these boxes as a cloud-based service. No pricing has been announced yet."
