This course is for anyone passionate about learning how to develop FPGA-accelerated applications with SDAccel!
The more general purpose your hardware is, the more flexible it is, and the more kinds of programs and algorithms you can execute on your underlying computing infrastructure. All of this is terrific, but there is no free lunch: quite often, this flexibility comes at the cost of efficiency.
This course will present several scenarios in which workloads require more performance than even the fastest CPUs can deliver, a situation that is turning cloud and data center architectures toward accelerated computing. Within this course, we are going to show you how to benefit from using Xilinx SDAccel to program Amazon EC2 F1 instances. We are going to do this through a working example of an algorithm used in computational biology.
The huge amount of data these algorithms need to process, together with their complexity, has raised the demand for computational power. In this scenario, hardware accelerators have proven effective in speeding up the computation while, at the same time, reducing power consumption. Among the algorithms used in computational biology, the Smith-Waterman algorithm is a dynamic programming algorithm guaranteed to find the optimal local alignment between two strings, which can be nucleotide or protein sequences. In the following classes, we present an analysis and subsequent FPGA-based hardware acceleration of the Smith-Waterman algorithm applied to pairwise alignment of DNA sequences.
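To make the algorithm concrete before we discuss accelerating it, here is a minimal C++ sketch of the Smith-Waterman scoring recurrence. The scoring values (match = +2, mismatch = -1, gap = -1) are illustrative choices, not values prescribed by the course, and this version returns only the best local alignment score, omitting the traceback that recovers the alignment itself.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Smith-Waterman local alignment score with a linear gap penalty.
// Scoring parameters (match=+2, mismatch=-1, gap=-1) are illustrative.
int smith_waterman(const std::string& a, const std::string& b,
                   int match = 2, int mismatch = -1, int gap = -1) {
    // H[i][j] = best score of a local alignment ending at a[i-1], b[j-1];
    // the extra row/column of zeros lets any alignment start fresh.
    std::vector<std::vector<int>> H(a.size() + 1,
                                    std::vector<int>(b.size() + 1, 0));
    int best = 0;
    for (size_t i = 1; i <= a.size(); ++i) {
        for (size_t j = 1; j <= b.size(); ++j) {
            int diag = H[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? match : mismatch);
            int up   = H[i - 1][j] + gap;   // gap in b
            int left = H[i][j - 1] + gap;   // gap in a
            // The 0 is what makes the alignment local: bad prefixes are dropped.
            H[i][j] = std::max({0, diag, up, left});
            best = std::max(best, H[i][j]);
        }
    }
    return best;
}
```

The regular structure of this double loop, where each cell depends only on its three neighbors, is exactly what makes the algorithm a good fit for the deep pipelining and parallelism of an FPGA, as the later classes will show.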
Within this context, this course focuses on distributed, heterogeneous cloud infrastructures, providing details on how to use Xilinx SDAccel, through working examples, to bring your solutions to life on Amazon EC2 F1 instances.


From the lesson

On how to accelerate the cloud with SDAccel

Within this module we are going to get a first taste of how to get the best out of combining the F1 instances with SDAccel, providing a few practical instructions on how to develop accelerated applications on Amazon F1 using the Xilinx SDAccel development environment. Then, we are going to present what is necessary to create FPGA kernels, assemble the FPGA program, and compile the Amazon FPGA Image, or AFI. Finally, we will describe the steps and tasks involved in developing a host application accelerated on the F1 FPGA.

Taught by:

Marco Domenico Santambrogio

Transcript

Hi! And welcome to this new lesson on AWS F1 and the SDAccel development environment. This class should not be considered a standalone class, but rather an introduction providing the rationale behind the choice to develop accelerated applications on Amazon F1 using the Xilinx SDAccel development tool flow. We will then provide a description of the F1 hardware and software stacks and explain how hardware acceleration works on Amazon F1 instances. Having done this, we will provide an overview of the steps involved in creating acceleration kernels, compiling the FPGA design, and creating the Amazon FPGA Image, or AFI. Finally, we will describe the steps and tasks involved in developing a host application accelerated on the F1 FPGA. But first things first! Amazon F1 is an elastic cloud compute offering combining x86 CPUs and Xilinx FPGAs to create and run accelerated applications; therefore, let us first explore a few application domains for cloud FPGA acceleration. As we know, many applications are ideally suited for FPGA-based acceleration. Examples are: genomics, big data and financial analytics, security, video and image processing, and machine learning. Those are all important and prominent examples, but any compute-intensive application benefiting from massive parallelism and deep pipelining is a prime candidate for F1. Within this context, there is one domain among the others which is emerging as a prominent one: genomics. The possibility to exploit large “-omics” data, such as genomics, transcriptomics and proteomics, is fostering research around personalized medicine. Thanks to the availability of these data, important efforts are dedicated to better understanding their relationship to individual health, the origin of diseases, and personal responsiveness to medical treatments. But let us see a true story example: Victor’s story, a case of personalized medicine which brings future hope to lung cancer patients. 
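To give a first flavor of what "deep pipelining" means in an F1 acceleration kernel, here is a minimal sketch of an SDAccel/HLS-style C++ kernel. The function name `vadd` and the loop structure are my own illustrative choices; the `PIPELINE` pragma is the standard Vivado HLS directive asking the tool to start a new loop iteration every clock cycle, while the interface and memory pragmas a real F1 build would also need are omitted here.

```cpp
#include <cstddef>

// Illustrative SDAccel/HLS-style kernel sketch: element-wise vector add.
// On the FPGA, the PIPELINE pragma below lets a new iteration begin each
// clock cycle (initiation interval II=1); on a plain CPU compiler the
// unknown pragma is simply ignored, so this file also runs as ordinary C++.
extern "C" void vadd(const int* a, const int* b, int* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
#pragma HLS PIPELINE II = 1
        out[i] = a[i] + b[i];  // independent iterations, ideal for pipelining
    }
}
```

A later class covers the real flow: such a kernel is compiled into the FPGA binary, packaged as an AFI, and invoked from the host application running on the instance's x86 CPU.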
As we can read on the University of Chicago Medicine and Biological Sciences website: after feeling a tickle in his throat for about a month, Victor visited the University of Chicago Medicine campus in June 2010 for a check-up. It had only been a very quick tickle, which caused him to clear his throat a half dozen or so times a day, but he wanted to make sure his health remained stable. “That’s how it all started,” said Victor, a plastic surgeon in the suburbs of Chicago. Medical center physicians took X-rays and diagnosed him with non-small cell lung cancer, one of the two main types of lung cancer; his cancer was too advanced for radiation. The cause of Victor's lung cancer was a mystery to him, until experts from the University of Chicago Medicine traced it to a specific gene mutation. An increasing number of genetic tests are enabling doctors at the University of Chicago Medicine to see far beyond patients’ physical symptoms and into the root cause of their diseases. Such vision has already resulted in promising, personalized treatments. After enrolling in a clinical trial for genetically based drug therapy, Victor’s last CAT scan showed his tumor had shrunk and was not producing any new growth spots. His medical team considers this good news for Victor and, obviously, for the future of personalized, genetically based medicine. In the coming years, human genome research will likely transform medical practices. The unique genetic profile of an individual and the knowledge of the molecular basis of diseases are leading to the development of personalized medicines and therapies, but the exponential growth of available genomic data requires a computational effort that may limit the progress of personalized medicine. Despite technological progress, the computational resources needed for these tasks are still expensive, and the lack of general analysis tools further limits the development of personalized medicine. 
Hence, the development of personalized therapies faces at least two main challenges. First, we need methods to integrate data from multiple sources that maintain result accuracy and that can scale to large, integrated, high-dimensional datasets; to this aim, it is necessary to develop procedures to reduce the number of variables by means of feature reduction techniques. The second challenge concerns the need to process large-scale genomic data. From a technological point of view, today's sequencing technologies are replacing genotyping methods based on microarrays, which are generally limited to querying only regions of known variation. A great example of a successful company within this application domain is Edico Genome. Edico Genome, with their DRAGEN (Dynamic Read Analysis for GENomics) Bio-IT Platform, provides ultra-rapid secondary analysis of next-generation sequencing, or NGS, data. DRAGEN is implemented on a reconfigurable FPGA to provide hardware-accelerated implementations of secondary analysis pipeline algorithms, such as BCL conversion, compression, mapping, and alignment, just to name a few. There are several reasons why Edico Genome decided to invest in an FPGA design: the FPGA's ability to be reconfigured allows DRAGEN to be updated with new pipelines and performance upgrades; hardware-accelerated proprietary IP allows the secondary analysis to be computed in a fraction of the time of CPU-based software while also achieving industry-leading base-calling accuracy; and one DRAGEN FPGA can replace around 80-100 traditional compute instances, reducing hardware and maintenance costs. Edico Genome is definitely a successful company, and the proof can be found on May 15, 2018: on that day, Illumina announced that it had acquired Edico Genome. 
“Our acquisition of Edico Genome is a big step toward realizing the vision of reducing sequencing data acquisition and analysis to a push-button, standardized process,” said Susan Tousi, Senior Vice President of Product Development at Illumina, and she continued by adding: “We expect to build on the solid foundation of DRAGEN to deliver a more streamlined and integrated sample to answer experience for our customers.” The DRAGEN platform is a terrific example of how a complex heterogeneous solution can provide great benefits to a customer who is not interested in the low-level technical details, but is looking for performance, speed, and security! Customers can use DRAGEN on site to keep their data local; this is possible because the DRAGEN platform is integrated into a Dell server that combines NGS compute and storage in one low-footprint system. Sometimes, however, this may not be enough, and that is why DRAGEN can also be leveraged as a hybrid of the on-site and cloud solutions. With a hybrid solution, DRAGEN enables customers to scale out during busy periods and return on site when throughput decreases again. The hybrid cloud solution allows customers to move their analysis and data securely and seamlessly from their on-site solution to the cloud, and this has definitely been a great idea, because DRAGEN's on-site and cloud offerings enable customers to take advantage of DRAGEN’s ultra-rapid speeds on their preferred platform. Considering all the benefits of the AWS F1 Cloud Compute Platform, it is quite simple to understand why DRAGEN has also been deployed as a cloud solution. Let us just recall a few of them, as the AWS F1 Cloud Compute Platform: - Makes FPGA acceleration available to a large community of developers, and to millions of potential AWS users, who will be able to tap into a broad marketplace of acceleration-ready apps. - Provides dedicated and large amounts of FPGA logic, with elasticity to scale to multiple FPGAs. 
- Comes with one or more high-end FPGAs, each providing a vast amount of programmable logic; this allows creating massively parallel FPGA accelerators delivering orders-of-magnitude increases in application performance and throughput. - Simplifies the development process by providing cloud-based FPGA development tools. - Provides a Marketplace for FPGA applications, giving more choice and secure, easy access to millions of AWS users. And those are also some of the reasons why, in the following, we are going to focus on how the EC2 F1 instances work and are built, understanding the AWS F1 hardware and software stacks.