The excitement about artificial intelligence is everywhere, but what's real? Everyone gets pumped up about smart drones and self-driving cars, but what does it take to truly harness the potential of deep learning in real products? The academic benchmarks are impressive, but how does research translate into breakout businesses? And why is computer vision embracing deep neural networks so passionately?

This talk looks closely at artificial intelligence (aka deep learning, aka neural networks, aka cognitive computing) technologies, maps out the affected applications and industries, and dives into the profound impact it is having on one example segment: computer vision. It explores the relationship among vision research, cloud and embedded AI product opportunities, and the global explosion in the number of deep-learning startups, illustrated with profiles of some particularly interesting entrepreneurial examples.

9:45am-10:30am

Session 6: SoC Design

As SoCs become more complex, designers struggle to implement a complete set of features within shrinking time-to-market windows. This session, led by The Linley Group senior analyst Tom Halfhill, will present new IP cores and tools that simplify the design and validation of complex SoCs with heterogeneous processor cores.

With SoC designs featuring multiple or even heterogeneous processor cores, programming these systems efficiently and optimally is more challenging than ever before. To simplify this process, Silexica provides a programming tool that helps software professionals meet the most difficult performance and power requirements with its state-of-the-art compiler technology and full heterogeneity awareness. This talk will present multicore challenges from different industries and highlight how this technology can solve the growing challenges in this sector.

We call them "systems on chip," yet the tools that designers and architects use to create them are often anything but system-level. This presentation shows how monitoring and analytics capabilities, when embedded in silicon, can foster a genuine system-level approach to SoC design. These tools can help to answer tricky questions like:

• Where have my MIPS gone?
• Is this really a software problem?
• Why does the system hang or deadlock intermittently?

As geometries shrink to 16nm and below, the cost of SoC designs becomes prohibitive. In this rarefied environment, the approach of individually optimized IP subsystems no longer works; instead, system architects must weave a myriad of IP components into a single cohesive system that can adapt dynamically to changing application workloads. When there is only one shot at a successful tapeout, new levels of automation are needed. This presentation will introduce a new machine-learning-based design environment and interconnect IP that streamlines SoC design.

There will be a Q&A and panel discussion featuring the above speakers.

12:15pm-1:30pm

LUNCH – Sponsored by Synopsys

1:30pm-2:40pm

Session 7: Data-Center Processors

Intel processors dominate the data center today, but competition is emerging from processors that can compete head-to-head with Xeon on many workloads. This session, led by The Linley Group principal analyst Linley Gwennap, will highlight three new high-performance server processors as well as an innovative method of using FPGAs to accelerate data-center applications.

Open Coherent Accelerator Processor Interface (OpenCAPI) is a new industry-standard device interface that enables the development of host-agnostic devices that can coherently connect to any host platform supporting the new standard. It gives the device the capability to coherently cache host memory to facilitate accelerator execution, perform DMA and atomics to host memory, send messages and interrupts to the host, and act as a host memory home agent. The standard utilizes high-frequency differential signaling to provide the high bandwidth and low latency needed by advanced accelerators.

As the market continues to demand higher-performance cores at greater core counts, a modular core design with a scalable fabric is fundamental to meeting this demand. This presentation will discuss the factors that drove our scalable multi-chip module approach and how we delivered a sizable performance uplift with the new Zen processor and Infinity Fabric.

The shift from enterprise data-center infrastructure to cloud-computing services is an ongoing trend that continues to accelerate. Large-scale cloud-service providers running today's cloud-native workloads have fundamentally different server-architecture requirements than traditional enterprise customers. This talk will disclose new information about the design of the 10nm Qualcomm Centriq 2400 SoC, which was purpose-built for throughput-oriented cloud workloads. Topics will include the company's 64-bit ARMv8 Falkor CPU core, the on-die fabric, cache and memory architecture enhancements, and I/O interfaces.

FPGA-based acceleration can customize networking, storage, and compute applications to accelerate specific and changing data-center workloads, optimizing performance and lowering power consumption. Unlocking this flexibility raises new questions: How are accelerators defined and programmed? How can accelerators be reused and deployed in a data center? This presentation discusses an acceleration stack for enabling applications running on Intel Xeon processors to leverage FPGA-based acceleration platforms.

This presentation will discuss how reprogrammable logic such as an embedded FPGA (eFPGA) can accelerate computationally intense applications in a processor-based SoC. Examples include embedding the FPGA between the system interconnect and peripheral interfaces such as PCIe, Ethernet, and SATA; using it as a custom coprocessor implementing new instructions; or using it as an application-specific accelerator. The presentation will highlight the eFPGA's performance advantages, including latency, throughput, and power, and discuss the steps to integrate an eFPGA into an SoC.