Dental schools, especially ones that provide services to the public, spend a lot of effort easing pain. Managing the IT infrastructure for their hundreds of students, instructors and administrators should be equally painless.

Digital technology is fast becoming a key part of diagnosis and treatment planning in dental practices. Recent advances in 3D scanning, modeling and printing technologies save valuable time for dentists and patients, while offering more personalized approaches to dental care.

New York’s Touro College of Dental Medicine (TCDM) is one of the first schools in the nation with a curriculum dedicated to digital dentistry. It also provides affordable dental services to the public through its dental clinic, which runs more than 160 Linux-based thin clients, with another 200 serving the school itself.

The college’s faculty and students rely on virtualized desktops to view and interact with complex 3D models as they treat patients in the clinic or work in the simulation lab, X-ray rooms or imaging facilities. The IT staff is able to centrally manage, monitor and secure the operation remotely, while maintaining HIPAA compliance.

That’s because the college uses a virtual desktop infrastructure (VDI) powered by NVIDIA Quadro Virtual Data Center Workstation (Quadro vDWS). With a limited IT budget and dentists and students who need to work from anywhere, TCDM chose Quadro vDWS instead of physical 3D workstations. The graphics acceleration of Quadro vDWS delivers “workstation-like” performance on low-cost thin clients, from any location.

The setup brings ongoing cost savings, ensures TCDM can manage and secure its environment from a central location, and easily scales to handle the school’s yearly influx of students, faculty and patients.

To design and implement this high-performing, cost-effective IT system, the school turned to Hudson River CIO Advisors, a New York-based IT managed service provider specializing in health tech.

The college has numerous facilities with 3D imaging requirements, including a simulation lab, where students practice 3D modeling and computer-aided design and computer-aided manufacturing; a clinic, where they digitally X-ray and scan patients’ teeth; and a research lab, where they work on models, milling crowns and 3D printing of dentures.

“TCDM has complex IT infrastructure requirements,” said Behan Venter, co-founder at Hudson River CIO Advisors. “Our team needed to engineer a solution that would support faculty, staff and large groups of students all concurrently using popular, graphics-intensive dental applications as well as running the full Microsoft Office suite and streaming instructional videos.”

The NVIDIA VDI system is incredibly easy to use with seamless session migration. Faculty and students can go to TCDM’s clinic to treat patients, or they can work in the simulation lab, X-ray rooms or imaging facilities. No matter where they are, they can work uninterrupted — as soon as they log in, work done at another station displays almost instantly.

“VDI on this scale needs high-availability IT, and it needs to deliver high-performance 3D graphics,” said Venter. “Thanks to the NVIDIA virtual GPU, the environment is fully capable of meeting these requirements.”

Working with 8K video will be easier and more accessible than ever thanks to collaboration between RED Digital Cinema and NVIDIA revealed last night during an industry event at the historic Linwood Dunn Theater in Hollywood.

In front of leaders from Adobe, Colorfront, HP and others, RED and NVIDIA announced an NVIDIA CUDA-accelerated REDCODE RAW decode SDK that gives software developers and studios a powerful new way to work with 8K video.

“Our mission is to bring cinema-grade images and performance to content creators everywhere,” said Jarred Land, president of RED Digital Cinema. “RED, NVIDIA and our industry partners are leveling the playing field, making the technology for high-resolution processing and image quality accessible to everyone.”

While consumers are snapping up millions of 4K TVs each month, and Netflix, Hulu, Amazon and others are streaming a growing tsunami of content in the format, 8K is becoming the new frontier in video, with 100,000 8K TVs hitting the market this year.

But 8K’s real importance to video editors today is making 4K post-production more flexible. “Overshooting” resolution lets creators do more with their footage such as stabilize, pan, crop and zoom in on the best parts of a shot. Compositors can benefit from more precise masks for keying and image tracking.

Just downsampling from 8K to 4K reduces artifacts, such as noise, and produces higher quality visuals. The fact that 8K has exactly four times the pixel count of 4K (twice the width and twice the height) makes the operation much simpler.
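The 2:1 relationship in each dimension is what makes the downsample clean: every 4K pixel maps to exactly one 2x2 block of 8K pixels, and averaging four samples also suppresses per-pixel noise. A minimal pure-Python sketch (with hypothetical tiny frames, not real footage) illustrates both points:

```python
# Sketch of 8K -> 4K box-filter downsampling: each output pixel is the
# average of a 2x2 block of input pixels, which also averages out noise.

def downsample_2x2(frame):
    """Box-filter a 2D grayscale frame (list of lists) by averaging 2x2 blocks."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[y][x] + frame[y][x + 1] +
             frame[y + 1][x] + frame[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# 8K UHD is 7680x4320; 4K UHD is 3840x2160 -- exactly half in each
# dimension, so 8K carries four times the pixels of 4K.
assert 7680 * 4320 == 4 * (3840 * 2160)

# A tiny 4x4 "frame" with +/-4 salt-and-pepper noise around a flat 100.
noisy = [
    [100, 104, 100, 96],
    [96, 100, 100, 104],
    [104, 100, 96, 100],
    [100, 96, 104, 100],
]
clean = downsample_2x2(noisy)
print(clean)  # every 2x2 block averages to 100.0
```

The same averaging is why the downsampled 4K frame looks cleaner than natively captured 4K: sensor noise that varies pixel to pixel largely cancels within each block.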

Yet as 8K production has increased, the need for massive CPU processing power or single-purpose hardware like the RED ROCKET-X has kept it beyond the reach of most content creators. That’s changing.

Opening the Door to 8K Editing

Earlier this year, NVIDIA and RED announced an initiative to accelerate 8K video processing by offloading the compute-intensive decoding and debayering of REDCODE RAW footage onto a single NVIDIA GPU.

At last night’s event, these real-time, 24+ frames per second capabilities were demonstrated on an NVIDIA Quadro RTX 6000 GPU, playing back, editing and color-grading RAW 8K footage on a single-CPU HP Z4 Workstation, eliminating the need for either a $6,750 RED ROCKET-X or a $20,000 dual-processor workstation.

NVIDIA GPUs are the only solutions capable of playing RED MONSTRO’s 8192×4320 frames at 24 FPS with no pre-caching or proxy generation. The GPU is processing every frame as it needs it, so jumping around the timeline is quick and responsive, and scrubbing is smooth.

And the acceleration isn’t limited to 8K — the new SDK runs across a variety of legacy GeForce, TITAN and Quadro desktop and notebook GPUs, benefiting 4K, 5K and 6K workflows as well.

Colorfront, a pioneer in 8K workflows, was also on-hand to demonstrate faster-than-realtime RAW processing and 8K playback in HDR.

“Colorfront has been shipping 8K-capable systems for several years now and we are delighted to join with RED and NVIDIA and other industry leaders to celebrate a faster, more streamlined future for 8K,” said Colorfront managing director Aron Jaszberenyi. “With the new RED SDK allowing wavelet decompression on NVIDIA GPUs, Colorfront can do all the RAW processing in GPU and output 8K video (up to 60p) using AJA Kona5 video cards. With this latest advance – faster-than-realtime Debayer and decompression of 8K RAW footage, with simultaneous display of the 8K image on an 8K HDR monitor – Colorfront and our partners have achieved a significant milestone.”

Filmmakers and content creators are excited about the advances taking place.

“A few years ago, things like real-time playback or real-time encoding when exporting footage was not even possible,” said Director and Cinematographer Phil Holland. “As GPUs advanced further, this has empowered content creators to receive these performance gains in much more modest systems. Working in native raw formats, real-time effects, significantly faster exports, as well as much faster in application playback have all been huge time savers as digital cinema cameras advanced from 4K to 5K, to 6K, to 8K and likely beyond.”

Availability

The released RED R3D SDK and REDCINE-X PRO software are planned to be available at the end of Q1 2019. Beta versions of the SDK have been made available to major third parties to support integration. Stay tuned for more details on the REDCINE-X PRO beta.

Unveiled at the annual Autodesk University Conference in Las Vegas, the Quadro RTX 4000 puts real-time ray tracing within reach of a wider range of developers, designers and artists worldwide.

Professionals from the manufacturing, architecture, engineering and media creation industries witnessed a seismic shift in computer graphics with the launch of Turing in August. The field’s greatest leap since the invention of the CUDA GPU in 2006, Turing features new RT Cores to accelerate ray tracing and next-gen Tensor Cores for AI inferencing, which together make real-time ray tracing possible for the first time.

The Quadro RTX 4000 features a power-efficient, single-slot design that fits in a variety of workstation chassis. Other benefits include:

Video encode and decode engines — accelerate video creation and playback for multiple video streams with resolutions up to 8K.

Among early users of the Quadro RTX 4000 is the global architectural firm CannonDesign. Ernesto Pacheco, director of visualization at the company, had this to say:

“Our designers need tools that unleash their creative freedom to design amazing buildings. Real-time rendering with the new Quadro RTX 4000 is unbelievably fast and smooth right out of the gate — no latency and the quality and accuracy of the lighting is outstanding. It will enable us to accelerate our workflow and let our designers focus on the design process without the technology slowing them down.”

See the Quadro RTX 4000 at Autodesk University

This week at Autodesk University, NVIDIA is in booth C1201 demonstrating the powerful new capabilities of Quadro RTX GPUs. Designers, engineers and artists can interact in real time with their complex designs and visual effects in ray-traced photo-realistic detail and realize increased throughput with their rendering workloads for significant time and cost savings.

Visit our booth to experience a real-time, immersive walkthrough powered by the Quadro RTX 4000. By deploying the Enscape3D plugin and strapping on an HMD, you’ll step inside a full-scale Autodesk Revit model and make changes in real time.

“We’re working with NVIDIA at Autodesk University to showcase how Autodesk Revit, powered by NVIDIA Turing’s RTX platform, can bring the power of real-time photorealistic rendering to enable millions of designers and architects to create and visualize content in a new way,” said Nicolas Mangon, vice president of AEC strategy and marketing at Autodesk. “By reviewing models in an immersive context, teams can collaborate and interact with their data for real-time problem solving.”

OEM Support

Leading OEMs have voiced their support for the new Turing-based Quadro RTX 4000 GPU:

“We are excited to offer the NVIDIA Quadro RTX 4000, 5000 and 6000 GPUs on select Dell Precision rack and tower workstation platforms from next quarter. These new solutions will help customers to work smarter and faster, and the NVIDIA Quadro RTX 4000 will equip AI/ML ecosystems for associated workflows such as model inferencing. Dell Precision customers have already been reaping the benefits of integrated AI / ML technology on their devices. Launched earlier this year, the Dell Precision Optimizer 5.0 tool employs a trained machine learning model to automatically adjust the system, optimize settings and deliver up to 552 percent improvement in application performance.”

— Tom Tobul, vice president and general manager of Commercial Specialty Products at Dell.

“The ability for real-time ray tracing is driving the greatest advancement in computer graphics in almost two decades. The amazing horsepower of Z by HP Workstations combined with the new capabilities of one or more Quadro RTX 4000 GPUs means millions of creatives, engineers and other professionals can create their best work ever.”

— Xavier Garcia, vice president and general manager of Z by HP at HP Inc.

“The power and possibilities of the new NVIDIA Quadro RTX 4000 will change the way many of our customers will create and design the world around them. Lenovo is proud to support this latest addition to the Quadro RTX family across our ThinkStation P Series portfolio. Together, creative and technical professionals will now be able to unlock new levels of performance and AI-based capabilities in order to make more informed decisions faster and tackle demanding design and visualization workloads with ease.”

— Rob Herman, general manager of the Lenovo Workstation & Client AI Group at Lenovo.

Availability and Pricing

The Quadro RTX 4000 will be available starting in December on nvidia.com and from leading workstation manufacturers, including Dell, HPI and Lenovo, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan, and Leadtek and Ingram in Asia Pacific.

Developers can access the powerful new capabilities of NVIDIA RTX through industry-leading OptiX, DXR and Vulkan APIs. Estimated street price for the Quadro RTX 4000 is $900.

In preparation for the emerging VirtualLink standard, Turing GPUs have implemented hardware support according to the “VirtualLink Advance Overview.” To learn more about VirtualLink, please see http://www.virtuallink.org.

Image courtesy of CannonDesign.

VirtualLink is a trademark of the VirtualLink Consortium. USB Type-C and USB-C are trademarks of USB Implementers Forum.

Call it a virtuous circle. GPUs are accelerating increasing numbers of data science and HPC workloads. This has enabled a wide range of scientific breakthroughs, including five of this year’s six Gordon Bell Prize finalists. These advances boost mindshare — GPUs are featuring prominently in sessions, demos and new product offerings throughout SC18, taking place this week in Dallas.

And we’re completing the loop by making it easier to deploy software from our NGC container registry. Its pre-integrated and optimized containers bring the latest enhancements and performance improvements for industry-standard software to NVIDIA GPUs. As the registry grows — the number of containers has doubled in the last year — users have even more ways to take advantage of GPU computing.

More Applications, New Multi-Node Containers and Singularity

The NGC container registry now offers a total of 41 frameworks and applications (up from 18 last year) for deep learning, HPC and HPC visualization. Recent additions include CHROMA, Matlab, MILC, ParaView, RAPIDS and VMD. We’ve also increased their capabilities and made them easier to deploy.

At SC18, we announced new multi-node HPC and visualization containers, which allow supercomputing users to run workloads on large-scale clusters.

Large deployments often use a technology called message passing interface (MPI) to execute jobs across multiple servers. But building an application container that leverages MPI is challenging because there are so many variables that define an HPC system (scheduler, networking stack, MPI and driver versions).
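The core idea behind an MPI job is simple even though the deployment details are not: every process (rank) independently computes which slice of the work it owns, so the same binary scales across any number of nodes. The sketch below is an illustrative pure-Python version of that rank-based decomposition, not NGC or MPI library code:

```python
# Illustrative sketch of MPI-style data decomposition: each of `size`
# ranks (processes) claims a contiguous slice of the work, so the same
# container image can run unchanged on clusters of any size.

def slice_for_rank(n_items, rank, size):
    """Return the (start, stop) range of work items owned by `rank`."""
    base, extra = divmod(n_items, size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# 10 work items split across 4 ranks: slice sizes 3, 3, 2, 2,
# covering everything with no gaps or overlap.
ranges = [slice_for_rank(10, r, 4) for r in range(4)]
print(ranges)  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

In a real MPI program the `rank` and `size` values would come from the MPI runtime rather than being passed in by hand; the container's job is to ship a build of that runtime compatible with the host's scheduler and interconnect.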

The NGC container registry simplifies this with an initial rollout of five containers supporting multi-node deployment. This makes it significantly easier to run massive computational workloads on multiple nodes with multiple GPUs per node.

And to make deployment even easier, NGC containers can now be used natively in Singularity, a container technology that is widely adopted at supercomputing sites.

New NGC-Ready Program

To expand the places where people can run HPC applications, we’ve announced the new NGC-Ready program. This lets users of powerful systems with NVIDIA GPUs deploy with confidence. Initial NGC-Ready systems from server companies include:

NGC Containers Deployed by Premier Supercomputing Centers

NGC container registry users represent a variety of industries and disciplines, from large corporations to individual researchers. Among these are two of the top education and research facilities in the country: Clemson University and the University of Arizona.

Research facilitators for Clemson’s Palmetto cluster continually received requests to support multiple versions of the same applications. Installing, upgrading and maintaining all of these versions was time consuming and resource intensive, bogging down the support staff and hampering user productivity.

The Clemson team successfully tested HPC and deep learning containers such as GROMACS and TensorFlow from the NGC container registry on their Palmetto system. Now they recommend users leverage NGC containers for their projects. Additionally, the containers run in their Singularity deployment, making it easier to support across their systems. With NGC containers, Clemson’s Palmetto users can now run their preferred application versions without disrupting other researchers or relying on the system admins for deployment.

At the University of Arizona, system admins for the Ocelote cluster would be inundated with update requests whenever new versions of the TensorFlow deep learning framework came out. Due to the complexity of installing TensorFlow on HPC systems — which can take as long as a couple of days — this became a resource issue for their modest-sized team and often led to unhappy users.

“Our cluster environment by necessity does not get updated fast enough to keep up with the requirements of the deep learning workflows,” says Chris Reidy, principal HPC systems administrator at the University of Arizona. “We made a significant investment in NVIDIA GPUs, and the NGC containers leverage that investment. We have significant interest in various fields ranging from traditional molecular dynamics codes like NAMD to machine learning and deep learning, and the NGC containers are built with an optimized and fully tested software stack to provide a quick start to getting research done.”

Reidy tested various HPC, HPC visualization and deep learning containers from NGC in Singularity on their cluster. Following instructions available in the NGC documentation, he was able to easily get the NGC containers up and running. They’re now the preferred way of running these applications.

Nefertari’s tomb is hailed as one of the finest in all of Egypt. And now visitors can explore it in exquisite detail without hopping on a transoceanic flight.

Nefertari was known as the most beautiful of five wives of Ramses II, a pharaoh renowned for his colossal monuments. The tomb he built for his favorite queen is a shrine to her beauty — every centimeter of the walls in the tomb’s three chambers and connecting corridors is adorned with colorful scenes.

Like most of the tombs in the Valley of the Queens, this one had been plundered by the time it was discovered by archaeologists in 1904. And while preservation efforts have been made, the site remains extremely fragile, not to mention remote to most of the world’s population.

Simon Che de Boer and his New Zealand-based VFX R&D company, realityvirtual.co, have found a way to digitally preserve Nefertari’s tomb and give countless individuals the chance to see inside it.

Nefertari: A Journey to Eternity is a VR experience that uses high-end photogrammetry, visual effects techniques and AI to create an amazingly detailed experience that returns Queen Nefertari’s tomb to its original glory. Visitors can digitally walk around, view the scene from different angles and zoom in for a closer look.

It’s an amazingly realistic substitute for those who might otherwise have to travel to the other side of the Earth to experience it.

Powerful Data Crunching with NVIDIA Quadro GPUs

To replicate the tomb’s elaborate details, Che de Boer captured nearly 4,000 42-megapixel photographs of the site, then combined photogrammetry (the science of making measurements from photographs) with deep learning methods for processing and visualization.

NVIDIA GPUs played a critical role in processing the many hours of photogrammetric data collected onsite, crunching it many times faster than would be possible on CPUs.

GPUs were also integral to performing 3D reconstruction and presenting detailed textures. Working on powerful HP workstations equipped with high-end NVIDIA Quadro GPUs, realityvirtual.co converted the data to a dense 24-billion-point 3D point cloud, using CapturingReality for the initial creation. Autodesk MeshMixer and Maya were used for initial clean-up. They then used an in-house, proprietary pipeline for refinements and efficiencies: filling holes, extrapolating material characteristics, removing noise and cleaning up artifacts.
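One standard step in cleaning a dense photogrammetric point cloud is voxel-grid filtering, which snaps points to a coarse 3D grid and merges near-duplicates. The sketch below shows that generic technique in pure Python; it is an assumption for illustration, not realityvirtual.co's proprietary pipeline:

```python
# Voxel-grid filtering: bucket points into cubic cells of edge
# `voxel_size`, then keep one averaged representative per occupied cell.
# This thins redundant scan points and suppresses per-point noise.
from collections import defaultdict

def voxel_filter(points, voxel_size):
    """Average all (x, y, z) points that fall into the same voxel."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells[key].append((x, y, z))
    return [
        tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))
        for pts in cells.values()
    ]

# Four near-duplicate scan points collapse into one averaged point.
dense = [(0.10, 0.10, 0.10), (0.12, 0.11, 0.09),
         (0.11, 0.10, 0.10), (0.09, 0.12, 0.11)]
print(voxel_filter(dense, 0.5))  # a single point near (0.105, 0.1075, 0.10)
```

At production scale the same bucketing runs on the GPU, which is why this kind of cleanup benefits so directly from CUDA acceleration.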

Che de Boer capturing imagery inside the tomb.

These very large datasets were then optimized for real-time rendering in Unreal Engine at a stable 90 frames per second, retaining all 24 billion points of detail through texture streaming from Granite, with full dynamic lighting, volumetric fog, reflections, effects and 3D spatial audio.

“With these large datasets, speed of processing and playback is key,” said Che de Boer. “NVIDIA’s new architecture combined with Unreal Engine adds a level of speed and power that’s unbeatable with this enormous amount of data.”

AI: Creating More Realistic VR

No visit to an ancient tomb would be believable without removing the signs of recent modernization. To accomplish this, realityvirtual.co collected all the data that was captured at the location and used the programmable Tensor Cores and 24GB of VRAM on a single high-end NVIDIA Quadro GPU to train their super-sampling set.

By teaching the computer to understand what it was looking at, it could then modify the image to how it would have appeared with the modern artifacts removed. For instance, exit signs, plaques, handrails, floorboards and halogen lighting were painted out via in-painting methods and replaced with contextually aware content from the spaces around them.
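NVIDIA's approach uses a deep network with learned context, but the underlying idea of in-painting can be illustrated with a drastically simplified classical version: masked pixels are filled iteratively from the average of their known neighbors. This is an illustrative sketch only, not the method used in the project:

```python
# Naive diffusion-based in-painting: repeatedly fill each masked pixel
# with the average of its already-known 4-neighbors, so removed objects
# (exit signs, handrails) get replaced by the surrounding surface.

def inpaint(image, mask, iterations=50):
    """Fill pixels where mask is True using neighbor averages (grayscale)."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    filled = [[not m for m in row] for row in mask]
    for _ in range(iterations):
        for y in range(h):
            for x in range(w):
                if filled[y][x]:
                    continue
                nbrs = [img[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and filled[ny][nx]]
                if nbrs:
                    img[y][x] = sum(nbrs) / len(nbrs)
                    filled[y][x] = True
    return img

# A 3x3 patch of painted wall (value 80) with a bright modern
# artifact (255) masked out in the center.
wall = [[80, 80, 80], [80, 255, 80], [80, 80, 80]]
mask = [[False, False, False], [False, True, False], [False, False, False]]
print(inpaint(wall, mask))  # the center pixel becomes 80.0
```

A learned in-painter improves on this by hallucinating plausible texture and structure rather than just smooth averages, which is what makes the AI-filled regions blend convincingly into painted tomb walls.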

To cover gaps in images, remove unwanted elements or fix overlap areas in the source photogrammetry images, realityvirtual.co infilled these areas using elements from the surrounding environment by leveraging a new AI-based method for Image InPainting developed by NVIDIA Research and available to software developers soon through the NVIDIA NGX technology stack. (Learn more about AI InPainting.)

“Without the kind of memory the high-end NVIDIA Quadro provides, processing the data from our 42-megapixel images would not have been possible,” said Che de Boer. “We use NVIDIA CUDA cuDNN extensively in both our photogrammetry and AI processes and throughout all aspects of our creation pipeline to achieve the most realism. It looks absolutely amazing. You get a real sense of being there and it’s only going to get better once we integrate NVIDIA RTX real time raytracing into our future releases.”

More recent in-house releases of the “Tomb” have been run through realityvirtual.co’s own super-sampling methods. This essentially trains their super-sampling on their own datasets, adding another level of detail to the final texture maps.

At that point, a viewer can’t distinguish individual pixels no matter how close they get to the tomb’s artifacts. In addition, more recent projects are now using realityvirtual.co’s deepPBR methods to extrapolate contextually aware normals, de-lit diffuse, roughness and displacement maps. They’re invaluable for working with physically based rendering engines such as Unreal Engine.

All this data was trained on itself, a great example of AI using its own data to improve itself. The result is an educational simulation that’s available on the STEAM gaming platform for free, but requires a Vive, Rift or Windows VR headset.

To continue documenting heritage sites and digitally preserving them for years to come, Che de Boer recently formed a strategic relationship with Professor Sarah Kenderdine at EPFL, a prestigious research university in Lausanne, Switzerland. Together they’re looking to virtually re-create New Zealand’s ChristChurch Cathedral as it existed before it was damaged by a 2011 earthquake, as well as other highly prestigious locations that cannot yet be disclosed.

“These are locations that everyone knows about but only a few get to access,” said Che de Boer. “Our goal is to make these sites accessible to people around the world who wouldn’t otherwise get an opportunity to experience them in their lifetime.”

After centuries of steadily decimating the world’s whale populations, humans have taken significant steps to reverse their catastrophic impact.

One of the latest: an AI-powered project spearheaded by the Canadian government that aims to minimize collisions between ships and North Atlantic right whales, 50-foot-long creatures that got their name from early whalers for being the “right” ones to kill.

The problem is relatively new. North Atlantic right whales are moving into new waters, possibly following plankton that are being driven north by climate change.

Forced to follow their prey, the whales have adopted a new migration path through an established shipping route. With the global population of right whales now estimated to be just 500, protecting these endangered creatures from ship strikes has quickly become an urgent issue.

Over the past few years, Transport Canada (the national transportation department) has been working to prevent commercial shipping vessels from striking the whales. The program flies aircraft staffed with marine biologists over the shipping lanes during the migration season.

When the biologists spot whales in a shipping lane, vessels are alerted to slow down. The resulting disruption to shipping operations is significant – resulting in higher costs, longer shipping times and lower revenue.

In a proof of concept, Transport Canada ran a new variation of the program in August, using an unmanned aerial system (UAS) and AI software to detect whales in shipping lanes.

A right whale mother and her calf.

Planck Aerosystems, which makes autonomous drones for monitoring ships, ports and borders, was brought in to develop AI software that could help identify North Atlantic right whales within the data (still images and video) being collected by the UAS. The new solution has the potential of simultaneously helping to protect the whales and Canada’s shipping economy.

“The kind of decisions that the government needs to make can have huge financial impact,” said Gaemus Collins, chief technology officer at Planck. “Our software was made to assist biologists in searching through thousands of images to find the few that may contain whales. Ultimately the biologists make the final determination on whether ‘yes, this is a whale, and yes, it’s a right whale,’ and provide feedback to Transport Canada regarding their presence.”

Mission: Dataset

The company has discussed with the U.S. National Oceanic and Atmospheric Administration a similar project off the coast of Southern California. In that effort, computer vision tools would automatically detect whales from aerial drone imagery.

NOAA officials knew of Transport Canada’s efforts, suspected that Planck’s technology would support that mission, and made the introductions.

Planck’s first task was to create a dataset to use in training a deep learning model.

“They had the mission pretty well scoped out, but they didn’t have data from previous missions to train the algorithm; this was a big challenge,” said Collins. “We had to create a system to re-train the detection algorithm in the field, every day, after the UAS flights were already underway.”

Initially, Transport Canada officials provided Planck with 1080p imagery, which Planck was able to process on NVIDIA’s compact Jetson TX2 embedded AI computing device. But it turned out that 1080p images didn’t provide enough resolution to identify the whale species, or any additional markings on whales. (One of the project objectives was to collect sufficient image data to track the whales individually in the future.)

Transport Canada ended up switching to a 24-megapixel imager. Planck paired this with an NVIDIA Quadro P5000 GPU, which provides sufficient RAM to do inference on such large image files. CUDA 9 and cuDNN 7.1 were featured in all systems to allow for GPU-accelerated training and inference.
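The jump from 1080p to 24 megapixels explains the need for a bigger GPU: memory per frame scales linearly with pixel count. The arithmetic below is a back-of-the-envelope sketch with assumed formats (float32 RGB tensors and a hypothetical 6000x4000 sensor layout), not Planck's exact numbers:

```python
# Rough memory footprint of one frame held as a float32 RGB tensor,
# the form an image typically takes during neural-network inference.

def frame_bytes(width, height, channels=3, bytes_per_value=4):
    """Bytes needed to hold one frame as float32 values."""
    return width * height * channels * bytes_per_value

bytes_1080p = frame_bytes(1920, 1080)  # ~25 MB per frame
bytes_24mp = frame_bytes(6000, 4000)   # 24 MP -> ~288 MB per frame
print(bytes_24mp / bytes_1080p)        # roughly 11.6x more memory
```

Add the network's activations on top of the input tensor and it becomes clear why the embedded Jetson TX2 handled 1080p comfortably while the 24-megapixel imagery called for a Quadro P5000 with much more RAM.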

Training occurred in the cloud-based Google Compute Engine, where Planck could allocate as many NVIDIA Tesla V100 GPUs as it needed.

Planck leveraged a deep learning library called darknet. This was used for training and deploying Planck’s object detection algorithm. TensorRT was brought into the fold later to speed up the inference time of the algorithm.
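Detectors like the darknet-based model Planck trained emit many overlapping candidate boxes per whale, so a standard post-processing step, non-maximum suppression (NMS), keeps only the highest-confidence box in each cluster. The sketch below shows that generic step in pure Python; it is not Planck's code, and the example boxes and scores are invented:

```python
# Non-maximum suppression: sort detections by confidence, then keep a
# detection only if it doesn't heavily overlap one already kept.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (score, box) tuples. Returns survivors."""
    kept = []
    for det in sorted(detections, reverse=True):
        if all(iou(det[1], k[1]) < iou_threshold for k in kept):
            kept.append(det)
    return kept

dets = [(0.9, (0, 0, 10, 10)),    # strong whale detection
        (0.6, (1, 1, 10, 10)),    # overlapping duplicate -> suppressed
        (0.8, (50, 50, 60, 60))]  # a second whale elsewhere -> kept
print(nms(dets))  # keeps the 0.9 and 0.8 detections
```

This is also the kind of fixed-function post-processing that benefits from the TensorRT optimization mentioned above, since it runs once per frame on every batch of raw network outputs.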

Takeaways and Next Steps

Collins said one of the primary lessons Planck learned during the project, which has been handed off to Transport Canada, was just how computationally expensive such ventures are. Simply put, it’s just not possible to have too much speed, and eventually computing needs will stress the hardware.

“Even when you throw a very high-power GPU at this problem, it’s still worth it to optimize the software as well,” said Collins.

Collins said he hopes there will be further opportunities for Planck to work with other government agencies on similar environmental programs. The company already has had discussions with Fisheries and Oceans Canada about using the technology to detect and track seals and sea lions.

For now, however, Collins expressed pride at what Planck has accomplished and looks forward to using it to protect whales around the globe.

“We’ve proven with this project that deep learning tools can be used to automate the detection and classification of marine mammal species,” said Collins. “We’d like to see this system deployed on a larger scale in more locations where whale strikes are a regular occurrence.”

Now used in more than 150 U.S. residency programs, the app has a reference library of surgical maps and a virtual human patient that trains surgeons to make the right decisions at the right time during a procedure.

Digital Surgery, a member of NVIDIA’s Inception virtual accelerator program, is also developing an operating room tool called GoSurgery. It improves coordination between surgeons and their teams to manage workflows and aid in real-time operating room decisions.

“We know humans aren’t perfect, so we use digital tools to help them improve their capacity,” said Andre Chow, co-founder of Digital Surgery.

No One Asks Who’s the Best Pilot on a Route

When a person needs surgery, the first question they ask is, “Who’s the best surgeon?”

But this mentality doesn’t apply to every industry. Airplane pilots are responsible for the lives of everyone on board — but it’s not typical to think “Who’s the best pilot?” before buying a ticket and stepping onto a flight.

That’s because the airline industry has worked hard to provide a standard level of safety for pilots using tools like autopilot and radars, says Chow. “We believe that should be the case with surgery as well.”

Yet, when Chow and co-founder Jean Nehme were training to become surgeons, they noticed that every surgeon likes to do things slightly differently.

Digital Surgery aims to close the disparities in surgery quality around the world by bolstering the doctors’ skills with powerful software. This technology gives surgeons an interactive way to rehearse operations digitally and learn best practices across different surgical specialties.

Brain Training for Surgeons

The company’s first product, the Touch Surgery app, has a library of surgical videos and simulations with a virtual human patient. The app hosts more than 200 simulations across 15 surgical specialties including orthopedics, neurosurgery and oral surgery.

Surgical residents and healthcare professionals can use the free app to learn, review or rehearse a procedure. Rendered on NVIDIA Quadro GPUs, the simulations test users’ knowledge about correct operating technique. Chow calls it “brain training for surgeons.”

It’s been validated in more than 15 different research publications as an effective mobile training tool. The Digital Surgery team sees the potential for its applications to serve as training aids in areas of the world lacking safe surgical services.

Using simulation footage from the app’s virtual surgeries and hundreds of thousands of surgical videos as a training database, the company developed its second product, the GoSurgery AI platform.

GoSurgery uses operating room camera streams that are fed into its neural network. The algorithms determine which instruments are being used and what stage of the operation the surgeon is in. Each team member has a screen displaying guidance based on the neural network’s real-time inferences.
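The pipeline described above — per-frame inference feeding role-specific on-screen guidance — can be sketched in a few lines. This is a hypothetical illustration, not Digital Surgery's actual model or taxonomy: the phase labels, guidance strings, and the majority-vote smoothing window are all assumptions for the sake of the example.

```python
from collections import Counter, deque

# Hypothetical phase labels and per-role guidance; GoSurgery's real
# taxonomy and UI text are not public.
GUIDANCE = {
    "access":     {"surgeon": "Confirm port placement", "nurse": "Prepare trocar"},
    "dissection": {"surgeon": "Identify critical structures", "nurse": "Have clips ready"},
    "closure":    {"surgeon": "Inspect for bleeding", "nurse": "Count instruments"},
}

def smooth_phase(frame_predictions, window=5):
    """Majority-vote the last `window` per-frame classifier outputs so a
    single misclassified frame doesn't flip the on-screen guidance."""
    recent = deque(maxlen=window)
    for phase in frame_predictions:
        recent.append(phase)
        yield Counter(recent).most_common(1)[0][0]

# Simulated per-frame classifier output with one noisy frame.
frames = ["access", "access", "dissection", "access", "access"]
for stable in smooth_phase(frames):
    print(stable, "->", GUIDANCE[stable]["surgeon"])
```

The temporal smoothing step matters in practice: raw per-frame predictions flicker, and guidance shown to an operating team needs to change only when the phase has genuinely changed.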

This operating room platform is powered by NVIDIA embedded technology. It’s currently being used at several sites in the U.K., with plans to expand to the United States as well.

So far, the Digital Surgery team has deployed this solution for eye surgeries and bariatric procedures. They’re also starting to work on colonic surgery and orthopedic operations, among others.

Oil and canvas. Thunder and lightning. Salt and pepper. Some things just go together — like Adobe Dimension CC and NVIDIA RTX ray tracing, which are poised to revolutionize the work of graphic designers and artists.

Adobe Dimension CC makes it easy for graphic designers to create high-quality, photorealistic 3D images, whether for package design, scene visualizations or abstract art. And Adobe Research is constantly innovating to make it even easier for designers to create faster and better.

The latest find: NVIDIA RTX ray-tracing technology, which promises to make photo-real 3D design interactive and intuitive, with over 10x faster performance for Adobe Dimension on NVIDIA RTX GPUs.

What used to cost tens of thousands of dollars on ultra-high-end systems will be able to run on a desktop with an NVIDIA RTX GPU at a price within reach of millions of graphic designers. Check out a tech preview of this technology at Adobe MAX, in Los Angeles, in NVIDIA booth 717.

Adobe Dimension enables designers to incorporate 3D into their workflows, from packaging design to brand visualization to synthetic photography. Adobe makes 3D accessible by handling the heavy lifting of lighting and compositing with Adobe Sensei machine learning-based features, then producing a final, photorealistic output using the Dimension ray tracer.

“We’re partnering with NVIDIA on RTX because of its significant potential to accelerate our two core pillars – ray tracing and machine learning,” said Ross McKegney, director of Engineering for Adobe Dimension CC. “Early results are very promising. Our prototype Dimension builds running on RTX are able to produce photorealistic renders in near real time.”

Ray tracing is the technique modern movies rely on to produce images that are indistinguishable from those captured by a camera. Think realistic reflections, refractions and shadows.

The easiest way to understand ray tracing is to look around you. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.
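The core of that backwards-following idea is an intersection test: for each pixel, cast a ray from the eye into the scene and find the first object it hits. Below is a minimal sketch of the classic ray-sphere test, the simplest such primitive; it is an illustrative toy, not Dimension's or RTX's actual renderer.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.
    Solves |o + t*d - c|^2 = r^2 for t (direction must be unit length)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2       # nearer of the two intersections
    return t if t > 0 else None          # ignore hits behind the eye

# One ray, traced backwards from the eye through a pixel, toward a sphere.
hit = ray_sphere(origin=(0, 0, 0), direction=(0, 0, -1),
                 center=(0, 0, -5), radius=1)
print(hit)  # distance to the sphere's surface: 4.0
```

A production ray tracer runs billions of these tests per frame — which is exactly the workload Turing's RT Cores accelerate in hardware.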

Historically, though, computer hardware hasn’t been fast enough to use these 3D rendering techniques in real time. Artists have been limited to working with low-resolution proxies, slow design interaction and long waits to render the final production. But NVIDIA RTX changes the game.

NVIDIA has built ray-tracing acceleration into its Turing GPU architecture with accelerators called RT Cores. These accelerators enable artists to smoothly interact with a full-screen view of their final image. Lighting changes appear in real time, so artists can quickly get just the look they need. Camera changes, including depth of field, happen in real time, so artists can frame shots perfectly just like they would in a real camera.

Experience NVIDIA RTX Technology and Adobe Dimension Now

See the future of 3D design with NVIDIA RTX ray tracing and Adobe Dimension throughout the Adobe MAX show floor.

You’re a creator with an imagination running at the speed of light. Nothing can stop you. And nothing should — especially not the system that powers your favorite creative apps.

Thousands of the world’s top creators will descend on Adobe MAX in Los Angeles next week to get inspired, learn and network. We’ll be there demonstrating how NVIDIA RTX GPUs revolutionize creativity with the power of real-time photorealistic design, AI-enhanced graphics, and video and image processing. Millions of designers and artists will be able to create amazing content in a completely new way.

Learn Something New at NVIDIA’s Creative Experts Bar

To see some amazing creative designs and productions, and learn how they were done, step up to the NVIDIA Creative Experts Bar. Engage in casual, one-hour sessions with some of the brightest creators at MAX.

Whether you’re a visual effects pro, graphics artist, professional photographer or 3D designer, there’s something for you in these creative discussions going on all day in NVIDIA booth 717. Plan your visit to the Creative Experts Bar here.

NVIDIA Holodeck VR Experience with Adobe

For something completely different, step into VR with NVIDIA Holodeck for a virtual design review of photorealistic 3D assets created in Adobe Dimension CC. First come, first served, at booth 217.

Don’t Go Home Empty-Handed

For a chance to win a new NVIDIA GeForce RTX 2080 Ti GPU, visit the NVIDIA booth for a creative experience with the AI-augmented Vincent sketch system. Then share a photo of your work of art on Twitter or Instagram with #NVIDIARTX and #AdobeMAX.

Get another chance to win when you tell us what you would create with more time by using #NVIDIARTX and #AdobeMAX.

Over the last few decades, VR experiences have gone from science fiction to research labs to inside homes and offices. But even today’s best VR experiences have yet to achieve full immersion.

NVIDIA’s new Turing GPUs are poised to take VR a big step closer to that level. Announced at SIGGRAPH last week and Gamescom today, Turing’s combination of real-time ray tracing, AI and new rendering technologies will propel VR to a new level of immersion and realism.

Real-Time Ray Tracing

Turing enables true-to-life visual fidelity through the introduction of RT Cores. These processors are dedicated to accelerating the computation of where rays of light intersect objects in the environment, enabling — for the first time — real-time ray tracing in games and applications.

At SIGGRAPH, we demonstrated the integration of VRWorks Audio into NVIDIA Holodeck showing how the technology can create more realistic audio and speed up audio workflows when developing complex virtual environments.

AI for More Realistic VR Environments

Deep learning, a method of GPU-accelerated AI, has the potential to address some of VR’s biggest visual and perceptual challenges. Graphics can be further enhanced, positional and eye tracking can be improved and character animations can be more true to life.

The Turing architecture’s Tensor Cores deliver up to 500 trillion tensor operations per second, accelerating inferencing and enabling the use of AI in advanced rendering techniques to make virtual environments more realistic.

Advanced VR Rendering Technologies

Turing also boasts a range of new rendering techniques that increase performance and visual quality in VR.

Variable Rate Shading (VRS) optimizes rendering by applying full shading horsepower to detailed areas of the scene and throttling back in areas with less perceptible detail. It also enables foveated rendering: reducing the shading rate at the periphery of the scene, where users are less likely to focus — a technique that becomes even more effective when combined with eye-tracking.
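The foveated case can be sketched as a simple lookup from screen position to shading rate. The distance thresholds and rate tiers below are illustrative assumptions, not NVIDIA's actual VRS heuristics; the tiers mirror common VRS tile options like 1x1 (full rate) through 4x4 (one shade per 16 pixels).

```python
def shading_rate(px, py, gaze_x, gaze_y, width, height):
    """Pick a coarser shading rate the farther a pixel is from the gaze
    point. Thresholds here are illustrative, not NVIDIA's."""
    # Normalized distance from the gaze point (0 = center of attention).
    dx = (px - gaze_x) / width
    dy = (py - gaze_y) / height
    d = (dx * dx + dy * dy) ** 0.5
    if d < 0.15:
        return (1, 1)   # fovea: shade every pixel
    if d < 0.35:
        return (2, 2)   # mid-periphery: one shade per 2x2 block
    return (4, 4)       # far periphery: one shade per 4x4 block

print(shading_rate(960, 540, 960, 540, 1920, 1080))  # (1, 1) at the gaze point
print(shading_rate(0, 0, 960, 540, 1920, 1080))      # (4, 4) in the corner
```

In a real renderer this decision is made per screen tile by the GPU, with the application supplying a shading-rate surface rather than calling a function per pixel.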

Multi-View Rendering enables next-gen headsets that offer ultra-wide fields of view and canted displays, so users see only the virtual world without a bezel. A next-generation version of Single Pass Stereo, it doubles the number of projection views in a single rendering pass from two to four, and all four are now position-independent and able to shift along any axis. This lets it accelerate canted (non-coplanar) head-mounted displays with extremely wide fields of view.

Turing’s Multi-View Rendering can accelerate geometry processing for up to four views.

VR Connectivity Made Easy

Turing is NVIDIA’s first GPU designed with hardware support for USB Type-C and VirtualLink*, a new open industry standard that powers next-generation headsets through a single, lightweight USB-C cable.

Today’s VR headsets can be complex to set up, with multiple, bulky cables. VirtualLink simplifies the VR setup process by providing power, display and data via one cable, while packing plenty of bandwidth to meet the demands of future headsets. A single connector also brings VR to smaller devices, such as thin-and-light notebooks, that provide only a single, small footprint USB-C connector.

Availability

VRWorks Variable Rate Shading, Multi-View Rendering and Audio SDKs will be available to developers through an update to the VRWorks SDK in September.

* In preparation for the emerging VirtualLink standard, Turing GPUs have implemented hardware support according to the “VirtualLink Advance Overview”. To learn more about VirtualLink, see www.virtuallink.org.