The trend of using the same language and tools on both the front end and the server back end continues. Microsoft is pushing its .NET and Azure cloud platform tools; Amazon, Google and IBM have their own tool sets. Java is in decline. JavaScript is going strong on both the web browser and the server with Node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, now used mainly to build iOS applications, to run on servers with the Perfect project.

Mary Jo Foley / ZDNet:
Microsoft releases Visual Studio for Mac, which is a rebranded version of Xamarin Studio, and Visual Studio 2017 for Windows in preview — Microsoft is releasing a first Visual Studio for Mac preview, as well as a near-final Release Candidate of Visual Studio 2017 for Windows.

Competitors Amazon Web Services (AWS), IBM SoftLayer, and Microsoft Azure have launched GPU-backed instances in the past. Google is looking to stand out by virtue of its per-minute billing, rather than per-hour, and its variety of GPUs available: the Nvidia Tesla P100 and Tesla K80 and the AMD FirePro S9300 x2.
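Per-minute billing matters most for short or bursty GPU jobs, where per-hour billing rounds the runtime up. A toy comparison (the hourly rate below is made up purely for illustration, not an actual cloud price):

```python
# Illustrative comparison of per-minute vs. per-hour GPU billing.
# The hourly rate here is a hypothetical example, not a real cloud price.

import math

def job_cost(minutes_used, hourly_rate, per_minute=True):
    """Cost of a job, billed either by the minute or rounded up to whole hours."""
    if per_minute:
        return (minutes_used / 60) * hourly_rate
    return math.ceil(minutes_used / 60) * hourly_rate

rate = 0.70  # hypothetical $/hour for one GPU

print(job_cost(75, rate, per_minute=True))   # 75 minutes billed exactly
print(job_cost(75, rate, per_minute=False))  # 75 minutes rounded up to 2 hours
```

For a 75-minute job the per-hour scheme charges for two full hours, roughly 60 per cent more than metering by the minute; for jobs that are exact multiples of an hour the two schemes agree.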

This cloud infrastructure can be used for a type of artificial intelligence (AI) called deep learning. It’s in addition to Google’s custom-made tensor processing units (TPUs), which will be powering Google’s Cloud Vision application programming interface (API). The joint availability of GPUs and TPUs should send a signal that Google doesn’t see TPUs as being a one-to-one alternative to GPUs.

Also today Google announced the formation of a new Cloud Machine Learning group. Google cloud chief Diane Greene named the two leaders of the group: Jia Li, the former head of research at Snapchat, and Fei-Fei Li, the former head of Stanford’s Artificial Intelligence Lab and also the person behind the ImageNet image recognition data set and competition. As Greene pointed out, both of the leaders are women, and also respected figures in the artificial intelligence field.

3D XPoint debuted with big claims in 2015. However, there has been plenty of wild guessing and speculation because details have not been shared publicly.

Earlier this year, details about 3D XPoint emerged from an EE Times interview with Guy Blalock, co-CEO of IM Flash: 3D XPoint is the well-known phase-change memory and switch (PCMS) technology.

According to Micron, 3D XPoint faces many technical and operational challenges: roughly 100 new materials raising supply-chain issues, a 15% cut in fab throughput, a 3x-5x increase in capital expenses compared to planar NAND, and a heavy dependence on lithography tools. As a result, 3D XPoint is expensive. The second-generation 3D XPoint with 4-layer stacking is expected to cost about 5 times more than planar NAND, so it seems unlikely to become an affordable storage device. Instead, 3D NAND will serve the storage market.

According to Intel and Micron, 3D XPoint is aimed at the high-end SSD and DDR4 NVDIMM markets. 3D XPoint-based SSDs, though, will serve a niche market because of the cost issue. The sweet spot for 3D XPoint should be the DDR4 NVDIMM, thanks to its low read latency (~100ns).
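The ~100ns figure is what makes the NVDIMM case: XPoint sits far closer to DRAM than NAND does. A rough comparison, where only the XPoint number comes from the article and the rest are commonly cited ballpark latencies, not vendor specifications:

```python
# Ballpark read latencies. Only the ~100 ns 3D XPoint figure comes from the
# article; the others are rough, commonly cited orders of magnitude.
latencies_ns = {
    "DRAM":      20,
    "3D XPoint": 100,
    "NAND SSD":  100_000,
    "HDD":       10_000_000,
}

# Express each medium as a slowdown factor relative to DRAM: XPoint is close
# enough to DRAM to make sense on the memory bus (NVDIMM), while NAND remains
# firmly a storage-bus device.
for name, ns in latencies_ns.items():
    print(f"{name:10s} {ns:>12,} ns  ({ns / latencies_ns['DRAM']:.0f}x DRAM)")
```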

Compared to Q2 2016, total GPU shipments, including discrete and integrated chips in the mobile and desktop markets, increased by 20%; good, but not enough to recover to the volume we saw in Q3 2015. Individually, total AMD sales increased by 15% and Intel saw an 18% boost, but NVIDIA was the most successful, with an impressive 39% increase.

Earlier this week, a post written by programmer and teacher Bill Sourour went viral. It’s called “Code I’m Still Ashamed Of.”

In it he recounts a horrible story of being a young programmer who landed a job building a website for a pharmaceutical company. The whole post is worth a read, but the upshot is he was duped into helping the company skirt drug advertising laws in order to persuade young women to take a particular drug.

He later found out the drug was known to worsen depression and at least one young woman committed suicide while taking it. He found out his sister was taking the drug and warned her off it.

Decades later, he still feels guilty about it, he told Business Insider.

Software developers ‘kill people’

Martin argues in that talk that software developers had better figure out how to self-regulate, and fast.

“Let’s decide what it means to be a programmer,” Martin says in the video. “Civilization depends on us. Civilization doesn’t understand this yet.”

His point is that in today’s world, everything we do (buying things, making a phone call, driving a car, flying in a plane) involves software. Dozens of people have already been killed by faulty software in cars, and hundreds have been killed by faulty software in air travel.

“We are killing people,” Martin says. “We did not get into this business to kill people. And this is only getting worse.”

Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.

Programmers confess

Sourour’s “ashamed” post went viral on Hacker News and Reddit and it unleashed a long list of confessions from programmers about the unethical and, sometimes, illegal things they’ve been asked to do.

Bootcamps without ethics

A common theme among these stories was that if the developer says no to such requests, the company will just find someone else to do it. That may be true for now, but it’s still a cop-out, Martin points out.

“We rule the world,” he said. “We don’t know it yet. Other people believe they rule the world but they write down the rules and they hand them to us. And then we write the rules that go into the machines that execute everything that happens.”

The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 to $300 million each — by 2019. The two systems will be built at the same time and be ready for use by 2023.

Researchers at Facebook have attempted to build a machine capable of reasoning from text – but their latest paper shows true machine intelligence still has a long way to go.

The idea that one day AI will dominate Earth and bring humans to their knees as it becomes super-intelligent is a genuine concern right now. Not only is it a popular topic in sci-fi TV shows such as HBO’s Westworld and UK Channel 4’s Humans – it features heavily in academic research too.

Research centers such as the University of Oxford’s Future of Humanity Institute and the recently opened Leverhulme Centre for the Future of Intelligence in Cambridge are dedicated to studying the long-term risks of developing AI.

The potential risks of AI stem mostly from its intelligence. The paper, which is currently under review for the 2017 International Conference on Learning Representations, defines intelligence as the ability to predict.

“An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past”

Although EntNet shows machines are far from developing automated reasoning, and can’t take over the world yet, it is a pretty nifty way of introducing memory into a neural network.
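EntNet itself is not reproduced here, but the core idea of introducing memory into a network (a bank of slots, each with a key, updated through a gate as text is read) can be sketched in a few lines of NumPy. The weight-free update below is a simplification for illustration, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, slots = 8, 4                        # embedding size, number of memory slots

keys   = rng.normal(size=(slots, d))   # one key per tracked "entity"
memory = np.zeros((slots, d))          # hidden state h_j for each slot

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def read_sentence(s, memory):
    """Gated update of every memory slot given a sentence encoding s."""
    for j in range(slots):
        # Gate: how relevant is this sentence to slot j's key and state?
        g = sigmoid(s @ keys[j] + s @ memory[j])
        # Candidate new content (the real model applies learned weights here).
        candidate = np.tanh(keys[j] + memory[j] + s)
        memory[j] = memory[j] + g * candidate
        memory[j] /= np.linalg.norm(memory[j]) + 1e-8  # keep states bounded
    return memory

s = rng.normal(size=d)                 # stand-in for an encoded sentence
memory = read_sentence(s, memory)
print(memory.shape)                    # one updated state vector per slot
```

The gate is what lets the network revise only the slots a sentence is about, which is the "memory" that plain feed-forward nets lack.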
Machines are still pretty dumb

Intel rolled out its intentions for a soup-to-nuts offering in artificial intelligence, but at least one of the key dishes is not yet cooked.

The PC giant will serve up the full range of planned products it acquired from Nervana Systems. They will take on mainly high-end jobs especially in training neural networks, an area now dominated by Nvidia’s graphics processors.

Intel’s acquisition of Movidius has not yet closed, leaving a wide opening in computer vision and edge networks. Separately, the company announced several AI software products, services and partnerships.

Movidius’ chief executive made a brief appearance in a break-out session at an Intel AI event here, but could not say when the acquisition will close or what hurdles lay ahead. “We look forward to joining the family,” he said, after sketching out his plans for low-power inference chips for cars, drones, security cameras and other products.

“AI will transform most industries we know today, so we want to be the trusted leader and developer of it,” said Intel chief executive Brian Krzanich in a keynote launching the half-day event.

Alternative HPC architectures will only happen if a strong supporting software ecosystem is in place.

Efforts were already underway from ARM to build up our HPC software ecosystem and we immediately saw that OpenHPC aligned well with those efforts. In June, we were officially announced as a founding member of OpenHPC and less than six months later, I’m pleased to announce that ARMv8-A will be the first alternative architecture with OpenHPC support. The initial baseline release of OpenHPC for ARMv8-A will be available as part of the forthcoming OpenHPC v1.2 release at SC16. This is yet another milestone that levels the playing field for the ARM server ecosystem and will accelerate choice within the HPC community.

Welcome to the OpenHPC site. OpenHPC is a collaborative community effort that grew from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters, including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind, with the goal of providing re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from software vendors, equipment manufacturers, research institutions, supercomputing sites, and others.

Using FPGAs to optimize high-performance computing, without specialized knowledge.

For most scientists, what is inside a high-performance computing platform is a mystery. All they usually want to know is that a platform will run an advanced algorithm thrown at it. What happens when a subject matter expert creates a powerful model for an algorithm that in turn automatically generates C code that runs too slowly? FPGA experts have created an answer.

A more promising approach for workload optimization using considerably less power is hardware acceleration using FPGAs. Much as in the early days of FPGAs where they found homes in reconfigurable compute engines for signal processing tasks, technology is coming full circle and the premise is again gaining favor. The challenge with FPGA technology in the HPC community has always been how the scientist with little to no hardware background translates their favorite algorithm into a reconfigurable platform.

Most subject-matter experts today are first working out system-level algorithms in a modeling tool such as MATLAB. It’s a wonderful thing to be able to grab high-level block diagrams, state diagrams, and code fragments, and piece together a data flow architecture that runs like a charm in simulation. Using MATLAB Coder, C code can be generated directly from the model, and even brought back into the model as MEX functions to speed up simulation in many cases. The folks at the MathWorks have been diligently working to optimize auto-generated code for particular processors, such as leveraging Intel Integrated Performance Primitives.

While some algorithms vectorize well, many simply don’t, and more processor cores may not help at all unless a careful multi-threading exercise is undertaken. Parallel GPU programming is also not for the faint of heart.
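The difference is easy to see even in a high-level language. A NumPy sketch: an elementwise computation vectorizes trivially, while a loop-carried dependency (here, a simple recursive filter) resists the naive treatment:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Embarrassingly parallel: each element is independent, so NumPy can hand
# the whole operation to optimized vector code.
y = np.sqrt(x) * 2.0 + 1.0

# Loop-carried dependency: each step needs the previous result, so the
# obvious formulation cannot be vectorized the same way.
def prefix_scan(a, alpha=0.5):
    """Recursive filter: out[i] = alpha * out[i-1] + a[i]."""
    out = np.empty_like(a)
    acc = 0.0
    for i, v in enumerate(a):
        acc = alpha * acc + v          # depends on the previous iteration
        out[i] = acc
    return out

z = prefix_scan(x[:10])
print(y[:3], z[:3])
```

Such scans can be parallelized, but only by restructuring the algorithm, which is exactly the "careful exercise" the paragraph above refers to.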

Moving from the familiar territory of MATLAB models and C code to the unfamiliar regions of LUTs, RTL, AXI, and PCI Express surrounding an FPGA is a lot to ask of most scientists. Fortunately, other experts have been solving the tool stack issues surrounding Xilinx technology, facilitating a move from unaided C code to FPGA-accelerated C code.

The Xilinx Virtex-7 FPGA offers an environment that addresses the challenges of integrating FPGA hardware with an HPC host platform.

A typical acceleration flow partitions code into a host application running on the HPC platform and a section of C code for acceleration in an FPGA. Partitioning is based on code profiling, identifying areas of code that deliver maximum benefit when dropped into executable FPGA hardware. The two platforms are connected via PCI Express, but a communication link is only part of the solution.
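The partitioning step starts with ordinary profiling: find where the time goes, then offload that. A language-neutral sketch of the idea (using Python's cProfile rather than any FPGA vendor tool, with hypothetical function names):

```python
import cProfile
import io
import pstats

def hot_kernel(n):
    # Stand-in for the compute-heavy inner loop that a partitioning pass
    # would flag as a candidate for FPGA offload.
    total = 0
    for i in range(n):
        total += (i * i) % 7
    return total

def host_code():
    # Lightweight control logic that would stay on the HPC host.
    return sum(hot_kernel(50_000) for _ in range(20))

profiler = cProfile.Profile()
profiler.enable()
host_code()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())   # hot_kernel dominates the cumulative time
```

The real flow then replaces the hot function with a call into the FPGA fabric; the profiling step just decides where that cut is worth making.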

To keep the two platforms synchronized, AXI messages can be used from end-to-end. Over a PCI Express x8 interface, AXI throughput between a host and an acceleration board exceeds 2GB/sec. Since AXI is a common protocol used in most Virtex-7 intellectual property (IP) blocks, it forms a natural high-bandwidth interconnect between the host and the Virtex-7 device including the C-accelerated Compute Device block. A pair of Virtex-7 devices are also easily interconnected using AXI as shown.
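The 2GB/sec figure is plausible against the raw link budget. Assuming a PCIe Gen3 x8 link (the article does not state the generation), the theoretical bandwidth works out as follows:

```python
# Sanity-check the >2 GB/s AXI-over-PCIe figure against the theoretical
# bandwidth of a PCIe Gen3 x8 link (assumed generation; the article doesn't say).
lanes = 8
gt_per_s = 8.0               # PCIe Gen3: 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line encoding overhead

bytes_per_s = lanes * gt_per_s * 1e9 * encoding / 8
print(f"theoretical: {bytes_per_s / 1e9:.2f} GB/s")

achieved = 2.0e9
print(f"reported 2 GB/s is {achieved / bytes_per_s:.0%} of theoretical")
```

So the quoted AXI throughput uses roughly a quarter of the raw link, leaving headroom for protocol overhead and bidirectional traffic.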

This is the same concept used in co-simulation, where event-driven simulation is divided between a software simulator and a hardware acceleration platform.

Parallelism used to be the domain of supercomputers working on weather simulations or plutonium decay. It is now part of the architecture of most SoCs.

But just how efficient, effective and widespread has parallelism really become? There is no simple answer to that question. Even for a dual-core implementation of a processor on a chip, results can vary greatly by software application, operating system, and use case. Tools have improved, and certain functions that can be done in parallel are better defined, but gaps remain in many areas with no simple way to close them.

That said, there is at least a better understanding of what issues remain and how to solve them, even if that isn’t always possible or cost-effective.

“To achieve parallelism it is necessary to represent at some level of granularity what needs to be done concurrently,”

Concurrency and parallelism used to be almost synonymous terms when parallel architectures were first introduced.

“If you look at how architectures have evolved, parallelism and concurrency have gone on to mean different things,”
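One way to make the modern distinction concrete: concurrency is about overlapping tasks (for example, waits on I/O), while parallelism is about computing simultaneously on multiple cores. A small Python sketch of both:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def io_task(i):
    time.sleep(0.1)            # waiting, not computing
    return i

def cpu_task(i):
    return sum(j * j for j in range(200_000)) + i

if __name__ == "__main__":
    # Concurrency: overlapping waits. Threads help here even though, in
    # CPython, only one thread runs Python bytecode at a time.
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as ex:
        list(ex.map(io_task, range(4)))
    print(f"4 sleeps, concurrent: {time.perf_counter() - t0:.2f}s")

    # Parallelism: simultaneous computation on multiple cores, which in
    # CPython requires separate processes.
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as ex:
        list(ex.map(cpu_task, range(4)))
    print(f"4 compute jobs, parallel: {time.perf_counter() - t0:.2f}s")
```

The four sleeps complete in roughly the time of one, while the compute jobs only speed up if genuinely independent cores execute them at once, which is the granularity question the quote raises.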

Part one in a series. Processing architectures continue to become more complex, but is the software industry getting left behind? Who will help developers utilize the hardware being created?

Eleven years ago processors stopped scaling due to diminishing returns and the breakdown of Dennard scaling. That set in motion a chain of events from which the industry has still not fully recovered.

The transition to homogeneous multi-core processing presented the software side with a problem it did not know how to solve: how to optimize use of the compute capabilities made available. It continues to struggle with that problem even today. At the same time, many systems required processing cores with more specialized functionality. Mixing these processing elements gave us heterogeneous computing, an approach that sidestepped many of those software problems.

LipNet, the lipreading network developed by researchers at the University of Oxford and DeepMind, can now lipread from TV shows better than professional lipreaders.

The first LipNet paper, which is currently under review for International Conference on Learning Representations – ICLR 2017, a machine learning conference, was criticised for using a limited dataset to test LipNet’s accuracy. The GRID corpus is made up of sentences that have a strict word order and make no sense on their own.

HPE’s latest results show a company emerging slimmer and fitter through diet (cost-cutting) and exercise (spin-merger deals) but facing tougher markets in servers and storage – the new normal, as CEO Meg Whitman says.

A look at the numbers and the earnings call from the servers and storage points of view shows a company with work to do.

The server business saw revenue of $3.5bn in the quarter, down 7 per cent year-on-year and up 5 per cent quarter-on-quarter. High-performance compute (Apollo) and SGI servers did well. Hyper-converged is growing and has more margin than the core ISS (Industry Standard Servers). Synergy and mission critical systems also did well.

But the servers business was affected by strong pressure on the core ISS ProLiant racks, a little in the blade server business, and also low or no profitability selling Cloudline servers, the ones for cloud service providers and hyperscale customers.

“Cloudline is a pretty big business for us. And when done correctly, we actually make money on Cloudline. But we just have to be sure every deal has to be looked at on a one-off basis, which is what’s the forward pricing going to look like? And I basically said to the team, listen, we do not want to be doing negative deals here for the most part. What’s the point in selling things at a loss?”

Although HPE’s CEO said hyper-converged was doing well, there is some way to go. Gartner ranks HPE as the leader in the hyper-converged and integrated systems magic quadrant, with EMC second and Nutanix third.

In the all-flash array (AFA) business, HPE grew 3PAR AFA revenues 100 per cent year-on-year to a $750m annual run rate, which compares with NetApp at $1bn and Pure at $631m. Our sense is that Dell-EMC leads this market, followed by NetApp, then HPE, with Pure in fourth place.

Comparing HPE to other AFA suppliers we see Dell EMC with five AFA products: XtremIO, DSSD, all-flash VMAX and Unity, and an all-flash Isilon product.

Rakers thinks “HPE has $5bn+ in excess cash” and is wondering about “the company’s next move given a healthy excess net operating cash position”. Its merger and acquisitions strategy is a key focus for him.

There are hundreds of applications for OS X that place information in the menu bar. Usually, I can find one that almost does what I want, but not quite. Thankfully I found BitBar, which is an open-source project that allows you to write scripts and have their output refreshed and put on the menu bar.

You can download the binary or the source code here. There is also a huge library of user-contributed scripts so you don’t have to start from scratch.
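A BitBar plugin is just an executable script: everything printed before a `---` line shows in the menu bar, and the rest becomes the dropdown. A minimal example (the refresh interval is encoded in the filename, e.g. `load.1m.py` for once a minute):

```python
#!/usr/bin/env python3
# Minimal BitBar plugin sketch. Save as e.g. "load.1m.py" in your BitBar
# plugin folder (the ".1m" in the filename sets a one-minute refresh) and
# make it executable. Lines printed before "---" appear in the menu bar;
# lines after it become the dropdown menu.

import os

load1, load5, load15 = os.getloadavg()

print(f"load {load1:.2f}")        # what shows in the menu bar itself
print("---")
print(f"5 min: {load5:.2f}")      # dropdown entries
print(f"15 min: {load15:.2f}")
print("Refresh | refresh=true")   # BitBar attribute: clicking re-runs script
```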

Artificial intelligence is getting its teeth into lip reading. A project by Google’s DeepMind and the University of Oxford applied deep learning to a huge data set of BBC programmes to create a lip-reading system that leaves professionals in the dust.

The AI system was trained using some 5000 hours from six different TV programmes, including Newsnight, BBC Breakfast and Question Time. In total, the videos contained 118,000 sentences.

By only looking at each speaker’s lips, the system accurately deciphered entire phrases, with examples including “We know there will be hundreds of journalists here as well” and “According to the latest figures from the Office of National Statistics”.

The AI vastly outperformed a professional lip-reader who attempted to decipher 200 randomly selected clips from the data set.

The professional annotated just 12.4 per cent of words without any error. But the AI annotated 46.8 per cent of all words in the March to September data set without any error.

“We believe that machine lip readers have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces (Siri will never have to hear your voice again) and speech recognition in noisy environments,” says Assael.

Neural networks ought to be very appealing to hackers. You can easily implement them in hardware or software and relatively simple networks can perform powerful functions. As the jobs we ask of neural networks get more complex, the networks require more artificial neurons. That’s why researchers are pursuing dense integrated neuron chips that could do for neural networks what integrated circuits did for conventional computers.

Researchers at Princeton have announced the first photonic neural network. We recently talked about how artificial neurons work in conventional hardware and software. The artificial neurons look for inputs to reach a threshold which causes them to “fire” and trigger inputs to other neurons.
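That threshold behaviour is simple enough to sketch directly. Here a single weighted-sum neuron is wired up, purely as an illustration, to fire like an AND gate:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Classic threshold neuron: fire (1) when the weighted sum reaches it."""
    activation = np.dot(inputs, weights)
    return 1 if activation >= threshold else 0

# With these weights the neuron only reaches the threshold when both
# inputs are active, i.e. it computes logical AND.
weights, threshold = np.array([0.6, 0.6]), 1.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron(np.array([a, b]), weights, threshold))
```

The power of a network comes from chaining many such units, with each one's firing becoming an input to others, which is exactly what makes dense neuron chips attractive.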

Someone reads a blog post, it’s trending on Twitter, and we just came back from a conference where there was a great talk about it. Soon after, the team starts using this new shiny technology (or software architecture design paradigm), but instead of going faster (as promised) and building a better product, they get into trouble. They slow down, get demotivated, have problems delivering the next working version to production.

Describing behind-schedule teams that “just need a few more days to sort it all out,” he blames all the hype surrounding React.js, microservices, NoSQL, and that “Test-Driven Development Is Dead” blog post by Ruby on Rails creator David Heinemeier Hansson.

Software development teams often make decisions about software architecture or technology stack based on inaccurate opinions, social media, and in general on what is considered to be “hot”, rather than on solid research and serious consideration of the expected impact on their projects. I call this trend Hype Driven Development, consider it harmful, and advocate for a more professional approach I call “Solid Software Engineering”. Learn more about how it works and find out what you can do instead.

MIT CSAIL researchers have developed a technique for training neural networks so they provide not only predictions and classifications but also rationales for their decisions.

In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

Neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

Brad Smith / The Official Microsoft Blog:
Microsoft’s acquisition of LinkedIn approved by EU, to close in the coming days as it has now received all the regulatory approvals needed — It was roughly six months ago, on June 13, that we announced that Microsoft would acquire LinkedIn. At that time, we said that we aimed to close the deal by the end of the year.

How could Dr Michael Lynch raise a $1 billion venture capital fund while being sued for $5 billion over alleged fraud in the $11 billion sale of his company Autonomy to HP? “The reality is, that doesn’t take much time” since he has a team of lawyers on the case, Lynch said on stage during TechCrunch Disrupt London.

HP originally paid Lynch $730 million for his stake in Autonomy. Now it’s trying to recover that money and what it thinks it overpaid for the big-data company. HP ended up having to write down nearly $9 billion of the $11 billion buyout after Autonomy fell apart in its arms. Lynch is countersuing for $160 million, claiming the fraud suit ruined his reputation.

Autonomy must have been working if clients were paying it hundreds of millions in cash, Lynch argued: “If something’s wrong with a business, people don’t pay you.”

China and Russia are populous, wealthy nations that the technology industry has long regarded as exceptional growth prospects.

And then along came Edward Snowden, whose suggestions that American vendors were complicit in the United States’ surveillance efforts gave governments everywhere a reason to re-think their relationship with big technology companies.

Russia and China both responded by citing a combination of national security concerns and a desire to grow their own technology industries as the twin motivations for policies that make it harder for foreign technology companies to access their markets.

Both nations now operate approved vendor lists that government agencies and even businesses must consult when shopping for technology. Russia is forcing web companies to store personal data on its soil. China demands to see vendors’ source code and has made the price of admission to its market a joint venture with a local firm, along with a technology transfer deal. Last month China also passed a security law requiring vendors to assist local authorities with investigations while further restricting internet freedoms.

“If China were a smaller market there is no way the government would get away with the controls of the internet, supporting domestic industry and requiring technology transfer,” he says. “You could not get away with it and still be part of the global supply chain.”

Storage startup Excelero supports NVMe drives and NVMe-over-fabrics-style networking. It has a unique way of using NVMe drives to create a virtual SAN accessed by RDMA. An upcoming NASA Ames case study will describe how its NVMesh technology works in more detail.

Broadcom is shutting down efforts to develop its own server-class 64-bit ARM system-on-chip, multiple sources within the semiconductor industry have told The Register.

It appears the secretive project, codenamed Vulcan, did not survive Broadcom’s acquisition by Avago and is gradually being wound down. Engineering resources have been quietly moved to other product areas and staff have left or been let go, we’re told. Vulcan’s designers have been applying for jobs at rival semiconductor giants, revealing that Broadcom’s server-grade processor dream is all but dead, it is claimed.

All traces of Vulcan have been scrubbed from Broadcom’s website

Meanwhile, AMD is keenly focused on its x86 Zen processor at the moment, leaving its ARM server chip plans on the shelf for now. So who’s left standing? Well, there’s Cavium and its ThunderX ARM data center silicon, and Qualcomm’s Centriq server system-on-chip that is due to start sampling this quarter and arrive on the market in the second half of 2017.

“ARM-based servers have been hyped in the market for 6-plus years, with little to show for it in terms of real customer adoption,” Gina Longoria, a senior analyst at Moor Insights and Strategy, told The Register on Tuesday.

This week, HP unveiled a working prototype of an ambitious new computer system, dubbed ‘the Machine’, which the company claims is the world’s first demonstration of what it calls memory-driven computing.

The idea is that the Machine – which was first announced back in 2014 – will massively outperform existing technology, by placing extra reliance on memory to perform calculations, with less dependence on computer processors.

And while the Machine prototype we have so far is only being shown as a proof-of-concept of what the technology could ultimately be, there appears to be some truth to the performance claims.

HP Enterprise – the business-focused side of the corporation – says its simulations show that memory-driven computing can achieve improved execution speeds up to 8,000 times faster than conventional computers.

But before we get too excited, the Machine is likely to be years away from a commercial release, and its primary market is high-end servers that companies use to bring you things like Facebook and YouTube, not consumer PCs.

But that doesn’t mean we shouldn’t get excited, because while the Machine is ultimately a business tool, HP says the architecture it runs – memory-driven computing – could one day find a home in consumer products, down to even the smart devices such as internet-connected cameras and lighting systems that make up the Internet of Things.

At its core, the Machine uses photonics – the transmission of information via light, rather than the electrons of conventional PCs – to help processors access data from a massive memory pool.

The prototype system currently uses 8 terabytes of memory in total – about 30 times the amount a conventional server might hold, and hundreds of times more memory than the amount of RAM a typical consumer computer would have.

HP plans to eventually develop systems with hundreds of terabytes of memory

Memory-driven computing looks like it will provide a huge performance boost when it hits – we’ll have to wait and see just when that will be.

China’s data center giants have become the next big hope to give traction to ARM’s server initiative.

When Macom bought Applied Micro last week and said it would sell off its X-Gene ARM server unit, the writing was on the wall. Applied has a solid business with big U.S. data centers and in 2017 and beyond they are buying bandwidth in the form of 100-400G Ethernet — not ARM servers.

In the wake of the news I heard multiple reports Broadcom was ending Vulcan, its plan for a beefy ARM server SoC made in a FinFET process with a custom core. The risky product was expected to be cancelled ever since penny-pinching Avago bought the company. (A former Broadcom engineer told me the company also canceled plans for a set-top processor using custom ARM cores.)

A representative of Cavium said he is evaluating whether to bid on the Applied X-Gene 3 and Broadcom Vulcan IP, both now up for sale. He already hired some engineers let go from both programs. Cavium’s ThunderX2 is riding high expectations in this space but may not be available in volume until 2018.

A representative from Qualcomm said he had seen some resumes from the Broadcom and Applied processor engineers. His company had already decided not to buy the ARM server IP from either company. Qualcomm is poised to soon launch its own chip, announced nearly four years ago.

Few other companies are left driving the initiative to put a dent in Intel’s Xeon processor, which commands the majority of server sockets these days.

Micron Technology is declaring spinning disk dead with the introduction of its first solid state drives (SSDs) using its 3D NAND for the enterprise market.

All-flash storage array vendors such as Violin Memory have been pushing the message that hard drives are dead for a number of years now. Micron sees spinning media winding down because its new 5100 line of enterprise SATA SSDs can offer a lower total cost of ownership (TCO), said Scott Shadley, the company’s principal technologist for its storage business.

In a telephone interview with EE Times, he said the launch of the 5100 series comes on the heels of the company’s success in the client segment with its 1100 series of SSDs using Micron’s 3D technology. Shadley acknowledged Micron isn’t the first to the enterprise market with 3D NAND SSDs, but said the company is looking to be strategic with its offerings.

Even though “NVMe is the trend of the day,” he added, there isn’t enough support for it yet. Micron will be looking at offering SAS and PCIe SSDs using its 3D NAND down the road, however.

A dozen key stakeholders announced an effort to define cross-vendor, royalty-free, open application programming interfaces for virtual reality. It aims to reduce fragmentation and make it easier to write applications that run well across a growing range of VR products.

The Khronos Group announced a call for participation in the effort it will host to specify application- and driver-level APIs. The full scope of the effort has yet to be defined. However, the work is expected to include APIs for tracking headsets, controllers, and other objects, as well as rendering content on a range of displays.

“As well as providing application portability, a goal of the standard is to enable new and innovative sensors and devices to implement up to a standard driver interface to be used easily by VR runtimes,” said Neil Trevett, president of Khronos, in an email exchange with EE Times.

So far, the group includes VR chip and system vendors such as AMD, ARM, Google, Intel, Nvidia, Oculus, and Valve. The group also includes the developers of the open-source VR products, software developer Epic Games, and eye-tracking specialist Tobii.

The startup has its own tech to push – NVDIMM-X – but, even so, is revealing about XPoint DIMMery.

Future 3D XPoint DIMMs may make it practical for main memory to hold terabytes – 6TB (6,000GB) is predicted. 3D XPoint DIMMs will probably have lower bandwidth than double data rate (DDR) DIMMs, perhaps with their contents cached in MCDRAM (multi-channel DRAM) or HBM memory to compensate. Such DDR DIMM caches could be about 10 per cent of the capacity of the main memory, so a cache could be 600GB in size – a far cry from the 4KB main memory of machines from the early 1970s.

If this pairing of XPoint DIMM and a DRAM cache DIMM is correct then several consequences follow:

1. For every XPoint DIMM two DIMM slots are needed, effectively halving the potential XPoint DIMM capacity on a host.
2. Memory bus capacity is needed to transfer data from XPoint DIMM to cache DIMM.
3. XPoint is a backing store to a cache DIMM, and effective caching algorithms can make alternative, less expensive backing stores more attractive.

Let’s further assume DRAM access speed equals 1 time unit, XPoint access speed equals 5 time units, and a 95 per cent DRAM cache hit rate (the rate implied by the figures below). Then the average access time can be calculated as 0.95 × 1 + 0.05 × 5 = 1.2 time units.

Now let’s employ flash DIMMs instead of XPoint ones, with an access time of 50 time units – 10 times slower – and use the same DRAM caching scheme and hit rate. What is the total access time for 1 million IOs?

The average time per access is 0.95 × 1 + 0.05 × 50 = 3.45 time units. The difference from XPoint’s 1.2 is significant.
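The arithmetic can be checked directly. A small sketch, assuming the 95 per cent hit rate implied by the 1.2 figure:

```python
# Average access time for a DRAM cache in front of a slower memory tier.
# The 95% hit rate is the rate implied by the article's 1.2 figure; the
# access times are the article's relative "time units".

def avg_access_time(hit_rate, cache_time, backing_time):
    """Expected time per access: hits from cache, misses from backing store."""
    return hit_rate * cache_time + (1 - hit_rate) * backing_time

xpoint = avg_access_time(0.95, cache_time=1, backing_time=5)    # XPoint DIMM
flash = avg_access_time(0.95, cache_time=1, backing_time=50)    # flash DIMM

print(xpoint)   # 1.2 time units per access
print(flash)    # 3.45 time units per access

# Over 1 million IOs the gap is 1.2M vs 3.45M time units.
print(f"{(flash - xpoint) * 1_000_000:,.0f} extra time units per million IOs")
```

Even with the same hit rate, a 10× slower backing store only degrades the average by about 3×, because the cache absorbs most accesses.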

Finke has this to say about XPoint cost: “It is touted to have a cost of about one-half that of DRAM, but still 5x that of NAND.”

Will you pay five times as much for a near-3X speed boost? We imagine that any accompanying DRAM cache DIMM would cost extra, effectively pushing up the XPoint DIMM cost. So you might have to pay more than 5X the NAND price.

Worldwide PC shipments are forecast to decline by 6.4% in 2016, according to IDC. This is an improvement over August’s projection for a decline of 7.2% in 2016. While IDC’s outlook for 2017 remains a 2.1% year-over-year decline, the absolute volumes are slightly higher based on stronger 2016 shipments.

Dave Gershgorn / Quartz:
Leaked slides from an invitation-only event on December 6 reveal Apple’s LiDAR and AI research efforts, including more efficient neural networks — Apple has long been secretive about the research done within its Cupertino, California, labs. It’s easy to understand why.

Conjecture can spin out of even the most benign of research papers or submissions to medical journals, especially when they’re tied to the most valuable company on the planet.

But it seems Apple is starting to open up about its work, at least in artificial intelligence.

Grey IT, shadow IT – the phenomenon has several names, but it is the same thing. It manifests itself in a variety of ways, and it arises when the business has a need for which the IT organization cannot provide a solution, or when obtaining one from the IT organization is made difficult.

When the business has received the answer “no” to too many requests, it sets out to find a solution on its own. The result may be an Excel spreadsheet distributed by e-mail to different parts of the organization, or a cloud service that satisfies the need but is introduced outside of IT’s data governance.

Customer experience optimization rests on knowing the company’s customers across all contact surfaces and channels; that is what makes a unified customer experience possible. Shadow IT leads to fragmentation of that knowledge and an increase in the number of platforms. When information is scattered as a result of shadow IT, time and energy go into integrating systems and cleaning up master data rather than into solutions that improve the customer experience.

Almost every larger organization uses a bureaucratic gate process that a new project must pass through before receiving approval. This process slows decision-making and increases the workload of the business.

When the business recognizes the need for a new application, the answer from IT management is often “no”, or else the business must follow the prescribed approval process, which causes extra work and delays solving the problem.

The more shadow IT is produced, the more data ends up in different locations. This has several disadvantages: data security is difficult to manage in such a complex environment, and information can be lost. Employees in different roles operate with partial data, which reduces efficiency and makes it harder to do the job.

Two things make the new server processor noteworthy: it is not based on the x86 architecture, and its manufacturer is not Intel. The product is Qualcomm’s Centriq 2400 processor, which is already sampling to customers. Commercial availability is expected in the second half of next year.

The Centriq 2400 family is based on Falkor, an ARMv8 core that Qualcomm has tailored for data centers. A single chip can contain up to 48 cores.

Qualcomm has already demonstrated the new processors in a typical server application combining Linux, Java, and Apache Spark.

Google, HTC, Oculus, Samsung, Sony and Acer have teamed up to form the Global Virtual Reality Association (GVRA) in an effort to reduce fragmentation and failure in the industry. GVRA aims to “unlock and maximize VR’s potential,” but there are few details as to what this may mean for consumers.

Speech synthesis is nothing new, but it has gotten better lately. It is about to get even better thanks to DeepMind’s WaveNet project. The Alphabet (or is it Google?) project uses neural networks to analyze audio data and it learns to speak by example. Unlike other text-to-speech systems, WaveNet creates sound one sample at a time and affords surprisingly human-sounding results.
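Sample-at-a-time generation can be illustrated with a toy sketch – this is not WaveNet’s neural network, just the autoregressive loop it uses, with a stand-in sine-wave “model”:

```python
import math

# Toy illustration of sample-at-a-time (autoregressive) generation, the
# scheme WaveNet uses: each new sample is predicted from the samples
# generated so far. The "model" here is a stand-in sine predictor, not a
# neural network.

SAMPLE_RATE = 16_000  # assumed sample rate, Hz

def predict_next(history):
    """Stand-in for a trained model: continue a 440 Hz tone."""
    t = len(history)
    return math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)

def generate(n_samples):
    samples = []
    for _ in range(n_samples):
        samples.append(predict_next(samples))  # output fed back as input
    return samples

audio = generate(160)  # 10 ms of "audio" at 16 kHz
print(len(audio))      # 160
```

The feedback loop is what makes the approach slow but expressive: every output sample becomes input for the next prediction.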

AI will soon help programmers improve development, says Diego Lo Giudice, VP and principal analyst at Forrester, in an article published on ZDNet today. He isn’t saying that programmers will be out of jobs soon and AIs will take over. But he is making a compelling argument for how AI has already begun disrupting how developers build applications.

Much has been written about how artificial intelligence (AI) will eventually put white-collar workers out of a job. Will robots soon be able to do what programmers do best – i.e., write software programs? Actually, if you are or were a developer, you’ve probably already written or used software programs that can generate other software programs. That’s called code generation; in the past, it was done through “next”-generation programming languages (second-, third-, fourth-, or even fifth-generation languages), while today it is done with low-code IDEs. Java, C, and C++ geeks have also been turning high-level graphical models like UML or BPML into code.

But that’s not what I am talking about: I am talking about a robot (or bot) or AI software system that, if given a business requirement in natural language, can write the code to implement it — or even come up with its own idea and write a program for it.

Don’t panic! This is still science fiction, but it won’t be too long before we can use AI to improve development.

We can see early signs of this: Microsoft’s Intellisense is integrated into Visual Studio and other IDEs to improve the developer experience. HPE is working on some interesting tech previews that leverage AI and machine learning to enable systems to predict key actions for participants in the application development and testing life cycle, such as managing/refining test coverage, the propensity of a code change to disrupt/break a build, or the optimal order of user story engagement.

Interestingly, our interviewees saw testing as the most popular phase of the software delivery life cycle in which to apply AI. This makes sense, as quality is of paramount importance in the age of digital acceleration; it’s hard to guarantee both quality and speed while keeping up with continuous delivery and the growing delivery cadence of modern development teams; and achieving high quality is expensive.

Cray, using Microsoft’s brain-like neural-network software on its XC50 supercomputer with 1,000 Nvidia Tesla P100 graphics processing units (GPUs), claimed a deep learning milestone this week, saying that through a collaboration with the Swiss National Supercomputing Centre it can cut deep learning tasks that once took days down to a matter of hours.

The Swiss National Supercomputing Centre, a high-performance computing (HPC) center located in the Swiss city of Lugano that goes by the acronym CSCS, is making the deep learning applications used available as open source so that anyone with an XC50 can use them.

Microsoft officials say they’ve sold Surface Hub conferencing systems to more than 2,000 customers since March, and that they’ve now got their supply situation under control.

Microsoft has not revealed any unit shipment numbers for any of its Surface-branded devices. But on December 12, the company did provide a few related data points around demand, especially around its Surface Hub conferencing systems.

In the years since Lean first revolutionized the manufacturing sector, the basic principles have also shown benefits in other industries and other departments, most notably within technology. But new research emphasizes the major impact Lean can have not just in your IT departments, but across your entire organization.

The ultimate goal and guiding principle of Lean is creating perfect value for customers through a perfect value creation process with zero waste. In the day-to-day implementations of Lean, that translates to creating more value with fewer resources and inefficiencies.

A funeral seems like the last place to find professional leadership lessons, but at the service celebrating her mother’s life, LaVerne Council found inspiration she brings every day in her role as assistant secretary for Information and Technology and CIO, Office of Information and Technology, U.S. Department of Veterans Affairs.

Today’s IT organizations need authentic and bold leadership to guide their digital transformation and drive innovation and growth. But it’s also key to solving another corporate puzzle: how to attract, hire and retain talent.

Inspiring loyalty

In today’s job market, companies can’t promise lifelong job security, and employees don’t expect it. But what organizations can offer, and what more and more workers are looking for, is purpose, mission and shared values. That starts with authentic leadership.

With the launch of their Polaris family of GPUs earlier this year, much of AMD’s public focus in this space has been on the consumer side of matters. However, now with the consumer launch behind them, AMD’s attention has been freed to focus on what comes next for their GPU families, both present and future: the high-performance computing market. To that end, today AMD is taking the wraps off of their latest combined hardware and software initiative for the server market: Radeon Instinct. Aimed directly at the young-but-quickly-growing deep learning/machine learning/neural networking market, AMD is looking to grab a significant piece of what is potentially a very large and profitable market for the GPU vendors.

Broadly speaking, while the market for HPC GPU products has been slower to evolve than first anticipated, it has at last started to arrive in the last couple of years.

The U.S. Environmental Protection Agency (EPA) is aiming to improve the energy efficiency of future computer servers. A few months ago, the agency published Draft 1, Version 3 of its ENERGY STAR Computer Server Specification.

In order to be eligible for the program, a server must meet all of the following criteria:

Marketed and sold as a computer server
Packaged and sold with at least one AC-DC or DC-DC power supply
Designed for and listed as supporting one or more computer server operating systems and/or hypervisors
Targeted to run user-installed enterprise applications
Provide support for ECC and/or buffered memory
Designed so all processors have access to shared system memory and are visible to a single OS or hypervisor

Today, the IEEE kicked off a broad initiative to make ethics a part of the design process for systems using artificial intelligence. The effort, in the works for more than a year, hopes to spark conversations that lead to consensus-driven actions and has already generated three standards efforts.

The society published a 138-page report today that outlines a smorgasbord of issues at the intersection of AI technology and values. They range from how to identify and handle privacy around personal information to how to define and audit human responsibilities for autonomous weapon systems.

The report raises a laundry list of provocative questions, such as whether mixed-reality systems could be used for mind control or therapy. It also provides some candidate recommendations and suggests a process for getting community feedback on them.

“The point is to empower people working on this technology to take ethics into account.”

Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems represents the collective input of over one hundred global thought leaders from academia, science, government and corporate sectors in the fields of Artificial Intelligence, ethics, philosophy, and policy.