Items tagged with deep learning

It's said that seeing is believing, but can you always believe what you see? That's a question that will come up more and more as companies like NVIDIA push the envelope with graphics rendering and, interestingly enough, artificial intelligence. On that note, in addition to launching a new Titan RTX graphics card today, NVIDIA also announced a new deep learning-based model for generating 3D environments based on real-world data. A team of NVIDIA researchers led by Bryan Catanzaro, Vice President of Applied Deep Learning at NVIDIA, used a conditional generative neural network as a starting...Read more...

The initial focus on NVIDIA's recently launched GeForce RTX 2080 Ti and GeForce RTX 2080 graphics cards has been on how well they perform in games, especially when cranking up the resolution to 4K (3840x2160). That will continue to be a point of interest, though it's not the only one. A fresh set of benchmarks making the rounds highlights how the new cards perform in deep learning workloads. Before we get to the numbers, let's talk about why this matters. As you might already know, the GeForce RTX series pushes consumer graphics cards into new territory. Typically with each new generation of graphics...Read more...

Do you need any more proof that your ball handling skills suck? Well, look no further than Carnegie Mellon University and DeepMotion, which together trained AI how to dribble a basketball and pull off increasingly advanced moves as it learned. It can often be hard to rationalize that professional basketball players who make dribbling, crossovers, and pump fakes look so effortless have honed those skills over many years of practice. However, researchers at Carnegie Mellon and DeepMotion were able to teach AI how to pull off similar feats in a matter of hours (via...Read more...

One of the most annoying things about Windows 10 is that it often decides to apply updates right when you are in the middle of working. This is typically halfway through a long document or during a meeting. A new Windows 10 build that Microsoft has released to some Insiders, from the branch dubbed 19H1 (the release following Redstone 5), has new tech to avoid this issue. Windows 10 now has changes to the update system that take advantage of new cloud-based logic. The goal of this cloud-based logic is to avoid unexpected updates when you are trying to work. The tech uses a predictive model that aims to improve over time...Read more...
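As a toy illustration of that idea (not Microsoft's actual model), a scheduler could simply pick the restart hour with the least historical user activity. The activity log and helper below are hypothetical:

```python
from collections import Counter

# Hypothetical log of hours (0-23) at which user activity was observed.
activity_hours = [9, 9, 10, 11, 13, 14, 14, 15, 16, 16, 16, 20]

def quietest_hour(hours, candidates=range(24)):
    """Return the candidate hour with the fewest logged activity events."""
    counts = Counter(hours)
    # Break ties by preferring the earliest hour.
    return min(candidates, key=lambda h: (counts[h], h))

print(quietest_hour(activity_hours))  # an hour with zero logged activity
```

A real predictive model would weigh far more signals (day of week, recency, session length), but the principle is the same: schedule the restart where activity is least likely.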

It’s happened to us all at some point in time — you capture an image in less-than-ideal light conditions and the end result is a grainy photo filled with digital noise. While you may still be able to make out many of the details in the photograph, wouldn’t it be nice if you could somehow magically restore it to near-perfect condition the way it was meant to be seen? That’s exactly what researchers from NVIDIA, MIT and Aalto University have been able to achieve using a deep learning technique called Noise2Noise. NVIDIA used a bank of Tesla P100 GPUs along with...Read more...
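The trick behind approaches like Noise2Noise is that a network trained with an L2 loss against noisy targets learns the same mapping as one trained against clean targets, as long as the noise is zero-mean. A minimal NumPy sketch of that statistical fact (not NVIDIA's actual code, and using a made-up flat image):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean image and many independently corrupted observations.
clean = np.full((64, 64), 0.5)
noisy_targets = clean + rng.normal(0, 0.2, size=(1000, 64, 64))

# The constant prediction that minimizes mean squared error against the
# noisy targets is their per-pixel mean -- which converges to the clean
# signal, even though no clean image was ever used as a target.
l2_optimal = noisy_targets.mean(axis=0)

print(np.abs(l2_optimal - clean).max())  # small residual noise
```

This is why the researchers could train a denoiser entirely on pairs of noisy photos: the noisy targets average out to the clean answer.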

It's not quite the Skynet that Tesla CEO Elon Musk has warned us about, but researchers from the Musk-backed OpenAI initiative have made a breakthrough in AI algorithms using Dota 2 as a testbed. OpenAI's achievement is remarkable due in part to its scope. Most AI versus human matches -- be it Go or a computer game -- involve a single computer against a single human (as was the case with OpenAI’s victory last year). But OpenAI has managed to train its AI to master competing against humans on a five-player team. The team of five neural networks that was developed is collectively...Read more...

Anyone who has lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, let alone accomplish the feat without peeling the stickers off and rearranging them. It's not just challenging for humans, either. Apparently the six-sided contraption presents a special kind of challenge to modern deep learning techniques that makes it more difficult than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they...Read more...
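One common way to generate training data for a self-taught puzzle solver is to scramble outward from the solved state and label each state with the number of moves used to reach it. A hedged sketch of that idea on a toy six-element permutation puzzle (the move set here is hypothetical, standing in for the cube's face turns):

```python
import random

SOLVED = (0, 1, 2, 3, 4, 5)

def rotate_left(s):
    """One 'face turn' analogue: cyclically shift all elements."""
    return s[1:] + s[:1]

def swap_front(s):
    """Another move: swap the first two elements."""
    return (s[1], s[0]) + s[2:]

MOVES = [rotate_left, swap_front]

def scramble_samples(n, max_depth, seed=0):
    """Generate (state, scramble_depth) pairs for supervised training."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        depth = rng.randint(1, max_depth)
        state = SOLVED
        for _ in range(depth):
            state = rng.choice(MOVES)(state)
        samples.append((state, depth))
    return samples

data = scramble_samples(5, 4)
```

A network trained on such pairs learns to estimate how far any state is from solved, which can then guide a search back toward the solution.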

Gigabyte today announced a couple of new 4U GPU servers for the datacenter, both packed with multiple NVIDIA Tesla GPUs to bring massive parallel computing capabilities to the sector. According to Gigabyte, its new G481-S80 and G481-HA0 offer some of the highest GPU density available in the 4U form factor—the former accommodates eight SXM2 form factor GPUs, such as NVIDIA's Volta-based Tesla V100 or Pascal-based Tesla P100, and the latter packs 10 GPUs. These new servers also feature NVIDIA's NVLink technology supporting bi-directional communication between GPUs, allowing for higher bandwidth...Read more...
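For context on those bandwidth claims, NVIDIA's published NVLink 2.0 figures for the Tesla V100 work out as follows; the code is simple arithmetic on the per-link numbers:

```python
# NVLink 2.0 on Tesla V100: 6 links per GPU, each link carrying
# 50 GB/s bi-directional (25 GB/s in each direction), per NVIDIA's
# published specifications.
links_per_gpu = 6
gb_per_link_bidir = 50

per_gpu_bandwidth = links_per_gpu * gb_per_link_bidir
print(per_gpu_bandwidth)  # 300 GB/s of aggregate GPU-to-GPU bandwidth
```

That aggregate figure is several times what a PCIe 3.0 x16 slot offers, which is why NVLink matters for multi-GPU training workloads like these servers target.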

Hot on the heels of the debut of its 8th gen Core series, and also its brand-new top-end Core X chips, Intel just announced Loihi. With this new chip, Intel is going all-in on artificial intelligence (AI) and self-learning. It also drops a term you may have heard recently: neuromorphic computing; in effect, neural systems simulation. In his blog post, Intel's Corporate VP and Managing Director of Intel Labs Dr. Michael Mayberry lays down a couple of great examples of how AI could benefit our lives in the future. Picture, for example, stoplight-mounted cameras being tied to an AI backend that adjusted...Read more...

Microsoft was on hand at the Hot Chips 2017 show and rolled out a new deep learning acceleration platform dubbed Project Brainwave. Microsoft's Doug Burger says that the platform is a "major leap forward in both performance and flexibility for cloud-based serving of deep learning models." Microsoft says that it designed the system for real-time AI, so the system is able to process requests as fast as they are received with ultra-low latency. The company adds that real-time AIs are becoming increasingly important to process live data streams in a cloud infrastructure. That sort of data includes things...Read more...
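To see why batching is at odds with real-time serving, consider this illustrative latency comparison; all of the numbers below are hypothetical:

```python
# Batching boosts throughput, but the first request in a batch must
# wait for the batch to fill before any work starts. A streaming
# (real-time) server begins work on each request immediately.
arrival_gap_ms = 5.0     # one request arrives every 5 ms (assumed)
batch_size = 32
batch_compute_ms = 4.0   # time to run the whole batch at once (assumed)
stream_compute_ms = 2.0  # time to run a single request (assumed)

# Worst case for batching: the first arrival waits for 31 more requests.
batched_latency_ms = (batch_size - 1) * arrival_gap_ms + batch_compute_ms
streaming_latency_ms = stream_compute_ms

print(batched_latency_ms, streaming_latency_ms)  # 159.0 vs 2.0
```

Processing each request as it arrives is exactly the property Microsoft is emphasizing with Brainwave's ultra-low-latency design.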

Intel is expanding its reach into the deep learning field today with the launch of the Neural Compute Stick (NCS), which was developed by its Movidius subsidiary. The Movidius NCS is aimed at democratizing deep learning and artificial intelligence, with Intel billing it as “the world’s first self-contained AI accelerator in a USB format.” The Movidius NCS is powered by the Myriad 2 vision processing unit (VPU), which promises 100 gigaflops of performance all while operating within a 1-watt power envelope. Given its low-power requirements, Intel is aiming the Movidius NCS at developers, research...Read more...
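Those stated figures make for easy efficiency math; the example inference cost below is hypothetical:

```python
# Intel's stated Myriad 2 figures: 100 gigaflops within a 1-watt envelope.
gflops = 100.0
watts = 1.0

efficiency = gflops / watts  # 100 GFLOPS per watt

# For a hypothetical inference costing 10 GFLOP of compute:
inference_gflop = 10.0
# time = work / throughput; energy = time * power
energy_joules = (inference_gflop / gflops) * watts  # 0.1 J per inference
print(efficiency, energy_joules)
```

At a tenth of a joule per (assumed) inference, it's clear why Intel is pitching the stick at always-on, battery-powered edge devices.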

We talked yesterday about an example of how deep learning and artificial intelligence can be used to put words in people's mouths, creating video proof of something someone said, even if they didn't really say it. Prospects like that are downright scary, but so too are the realities of the jobs AI will be able to take away from humans. Case in point: professional photography editing. This is a bit of an odd one, as most photographers will edit their own photos, so maybe we should consider this an example of how AI could help someone get through their workflow more efficiently. And perhaps even...Read more...

Many of us have had to heed the warning of, "Don't put words in my mouth" at some point over the course of our lives, but generally speaking, no one actually means it in a literal sense. In time, though, thanks to deep learning and artificial intelligence, putting words in someone's mouth could become a legitimate reality. Complementing the upcoming SIGGRAPH conference in Los Angeles, researchers from the University of Washington have worked their magic to put words into former President Barack Obama's mouth. At least in this case, the words said actually did come from Obama's mouth, but the footage...Read more...

A board game like "Go" might not look complicated on the surface to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen many examples in the past where that hasn't been the case (IBM's Watson is a good start), and it's because, despite their simple appearance, the number of possible moves at any given point can be astronomical. Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and...Read more...
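A quick back-of-the-envelope comparison shows the scale problem, using the commonly cited average branching factors of roughly 35 moves per position for chess and 250 for Go:

```python
import math

def tree_size_log10(branching, depth):
    """Return log10 of branching**depth without overflowing."""
    return depth * math.log10(branching)

# Commonly cited averages: chess ~35 moves/position over ~80 plies,
# Go ~250 moves/position over ~150 plies. These are rough estimates.
chess_log10 = tree_size_log10(35, 80)   # roughly 10^124 lines of play
go_log10 = tree_size_log10(250, 150)    # roughly 10^360 lines of play
print(chess_log10, go_log10)
```

The gap of over 200 orders of magnitude is why Go programs lean on learned evaluation and guided search rather than the brute-force tactics that worked for chess.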