Artificial Intelligence (AI) Training Courses

Local, instructor-led live Artificial Intelligence (AI) training courses demonstrate through hands-on practice how to implement AI solutions for solving real-world problems.
AI training is available as "onsite live training" or "remote live training". Hong Kong onsite live Artificial Intelligence (AI) trainings can be carried out locally on customer premises or in NobleProg corporate training centers. Remote live training is carried out by way of an interactive, remote desktop.
NobleProg -- Your Local Training Provider.

Artificial Intelligence (AI) Course Outlines

This course has been created for managers, solutions architects, innovation officers, CTOs, software architects and anyone who is interested in an overview of applied artificial intelligence and the near-term forecast for its development.

In Python Machine Learning, the Text Summarization feature is able to read the input text and produce a text summary. This capability is available from the command-line or as a Python API/Library. One exciting application is the rapid creation of executive summaries; this is particularly useful for organizations that need to review large bodies of text data before generating reports and presentations.

In this instructor-led, live training, participants will learn to use Python to create a simple application that auto-generates a summary of input text.
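As a rough illustration of the idea behind such an application, the sketch below implements a naive extractive summarizer using word-frequency sentence scoring with only the standard library; it is not any particular library's API, just the core concept.

```python
import re
from collections import Counter

def summarize(text, max_sentences=1):
    """Return the highest-scoring sentences as a naive extractive summary."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    # Score each sentence by the total corpus frequency of its words.
    def score(s):
        return sum(freq[w] for w in re.findall(r'[a-z]+', s.lower()))
    chosen = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Preserve the original order of the selected sentences.
    return ' '.join(s for s in sentences if s in chosen)

text = ("Machine learning lets computers learn from data. "
        "Data is everywhere. "
        "Learning from data powers modern applications.")
print(summarize(text))
```

Real summarization systems refine this with stopword removal, TF-IDF weighting, or abstractive neural models, but the select-the-most-representative-sentences loop is the same.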

Part 3 (40%) of the training is based extensively on TensorFlow, the second-generation API of Google's open source software library for Deep Learning. All examples and hands-on exercises are done in TensorFlow.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects

After completing this course, delegates will:

- have a good understanding of deep neural networks (DNN), CNN and RNN
- understand TensorFlow's structure and deployment mechanisms
- be able to carry out installation, production environment and architecture tasks and configuration
- be able to assess code quality, perform debugging and monitoring
- be able to implement advanced production practices such as training models, building graphs and logging

Due to the vastness of the subject, not all topics can be covered in a 35-hour public classroom course.

The complete course would take around 70 hours rather than 35.

The Apache OpenNLP library is a machine learning based toolkit for processing natural language text. It supports the most common NLP tasks, such as language detection, tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing and coreference resolution.

In this instructor-led, live training, participants will learn how to create models for processing text-based data using OpenNLP. Sample training data as well as customized data sets will be used as the basis for the lab exercises.

By the end of this training, participants will be able to:

- Install and configure OpenNLP
- Download existing models as well as create their own
- Train the models on various sets of sample data
- Integrate OpenNLP with existing Java applications

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Machine Learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Python is a programming language famous for its clear syntax and readability. It offers an excellent collection of well-tested libraries and techniques for developing machine learning applications.

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of live projects.

By the end of the training, delegates are expected to be sufficiently equipped with the essential Python concepts and should be able to use NLTK to implement most NLP and ML based operations. The training aims to give not just executional knowledge but also the logical and operational knowledge of the technology.
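To give a flavor of the kind of pipeline such a course covers, the stdlib-only sketch below mirrors a basic NLP workflow of tokenization, stopword filtering and frequency counting; NLTK provides the same steps through its own APIs (e.g. `nltk.word_tokenize` and `nltk.FreqDist`), which this sketch deliberately avoids so it stays dependency-free.

```python
import re
from collections import Counter

# A tiny hand-rolled stopword list; NLTK ships curated lists per language.
STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "over"}

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-zA-Z']+", text.lower())

def freq_dist(tokens):
    """Count token frequencies, ignoring stopwords."""
    return Counter(t for t in tokens if t not in STOPWORDS)

tokens = tokenize("The quick brown fox jumps over the lazy dog in the sun.")
print(freq_dist(tokens).most_common(3))
```

In the actual training, these steps would be done with NLTK's tokenizers, corpora and classifiers rather than regular expressions.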

Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.

By the end of this training, participants will be able to:

- Create predictive models to analyze patterns in historical and transactional data
- Use predictive modeling to identify risks and opportunities
- Build mathematical models that capture important trends
- Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

- Developers
- Engineers
- Domain experts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

TensorFlow Serving is a system for serving machine learning (ML) models in production.

In this instructor-led, live training, participants will learn how to configure and use TensorFlow Serving to deploy and manage ML models in a production environment.

By the end of this training, participants will be able to:

- Train, export and serve various TensorFlow models
- Test and deploy algorithms using a single architecture and set of APIs
- Extend TensorFlow Serving to serve other types of models beyond TensorFlow models
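For context, TensorFlow Serving exposes a REST API whose predict endpoint takes a JSON body of the form `{"instances": [...]}`. The sketch below only constructs such a request (model name, host and input shape are hypothetical placeholders) without sending it, so it runs standalone.

```python
import json

def predict_request(model_name, instances, host="localhost", port=8501):
    """Build the URL and JSON body for a TensorFlow Serving predict call."""
    # TF Serving's REST predict endpoint: /v1/models/<name>:predict
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Hypothetical model taking a batch of one 3-feature vector.
url, body = predict_request("my_model", [[1.0, 2.0, 3.0]])
print(url)
print(body)
```

In practice the body would be POSTed with an HTTP client, and the response JSON carries a matching `"predictions"` list.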

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Natural language generation (NLG) refers to the production of natural language text or speech by a computer.

In this instructor-led, live training, participants will learn how to use Python to produce high-quality natural language text by building their own NLG system from scratch. Case studies will also be examined and the relevant concepts will be applied to live lab projects for generating content.

By the end of this training, participants will be able to:

- Use NLG to automatically generate content for various industries, from journalism to real estate to weather and sports reporting
- Select and organize source content, plan sentences, and prepare a system for automatic generation of original content
- Understand the NLG pipeline and apply the right techniques at each stage
- Understand the architecture of a Natural Language Generation (NLG) system
- Implement the most suitable algorithms and models for analysis and ordering
- Pull data from publicly available data sources as well as curated databases to use as material for generated text
- Replace manual and laborious writing processes with computer-generated, automated content creation
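To make the idea concrete, here is a minimal template-based NLG sketch, the simplest point on the spectrum the course covers; real pipelines add content selection, sentence planning and surface realization stages on top of this.

```python
def weather_report(city, temp_c, condition):
    """Render structured weather data as a natural-language sentence."""
    # A trivial "content determination" rule: characterize the temperature.
    trend = "warm" if temp_c >= 20 else "cool"
    return (f"In {city} it is currently {condition} "
            f"with a {trend} temperature of {temp_c}\u00b0C.")

print(weather_report("Hong Kong", 28, "sunny"))
```

Weather and sports reporting are classic NLG applications precisely because the underlying data is structured and the output follows predictable patterns.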

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

In this instructor-led, live training, participants will learn how to create various neural network components using ENCOG. Real-world case studies will be discussed and machine language based solutions to these problems will be explored.

Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

- Law Enforcement specialists with a technical background

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

In this instructor-led, live training, participants will learn the most relevant and cutting-edge machine learning techniques in Python as they build a series of demo applications involving image, music, text, and financial data.

Fiji is an open-source image processing package that bundles ImageJ (an image processing program for scientific multidimensional images) and a number of plugins for scientific image analysis.

In this instructor-led, live training, participants will learn how to use the Fiji distribution and its underlying ImageJ program to create an image analysis application.

By the end of this training, participants will be able to:

- Use Fiji's advanced programming features and software components to extend ImageJ
- Stitch large 3D images from overlapping tiles
- Automatically update a Fiji installation on startup using the integrated update system
- Select from a broad selection of scripting languages to build custom image analysis solutions
- Use Fiji's powerful libraries, such as ImgLib, on large bioimage datasets
- Deploy their application and collaborate with other scientists on similar projects

Audience

- Scientists
- Researchers
- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

This instructor-led, live training introduces the software, hardware, and step-by-step process needed to build a facial recognition (also known as face recognition) system from scratch.

The hardware used in this lab includes a Raspberry Pi, a camera module, servos (optional), etc. Participants are responsible for purchasing these components themselves. The software used includes OpenCV, Linux, Python, etc.

By the end of this training, participants will be able to:

- Install Linux, OpenCV and other software utilities and libraries on a Raspberry Pi
- Configure OpenCV to capture and detect facial images
- Understand the various options for packaging a Raspberry Pi system for use in real-world environments
- Adapt the system for a variety of use cases, including surveillance, identity verification, etc.

Embedding Projector is an open-source web application for visualizing the data used to train machine learning systems. Created by Google, it is part of TensorFlow.

This instructor-led, live training introduces the concepts behind Embedding Projector and walks participants through the setup of a demo project.

By the end of this training, participants will be able to:

- Explore how data is being interpreted by machine learning models
- Navigate through 3D and 2D views of data to understand how a machine learning algorithm interprets it
- Understand the concepts behind embeddings and their role in representing mathematical vectors for images, words and numerals
- Explore the properties of a specific embedding to understand the behavior of a model
- Apply Embedding Projector to real-world use cases such as building a song recommendation system for music lovers
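As background, the standalone Embedding Projector can load an embedding from a tab-separated vectors file plus an optional metadata file of labels, one row per point. The sketch below serializes a tiny hypothetical two-point embedding into that shape; the vectors and labels are invented for illustration.

```python
def to_tsv(vectors, labels):
    """Serialize vectors and labels into projector-style TSV strings."""
    # One embedding vector per line, dimensions separated by tabs.
    vec_tsv = "\n".join("\t".join(str(x) for x in v) for v in vectors)
    # One label per line, aligned with the vector rows.
    meta_tsv = "\n".join(labels)
    return vec_tsv, meta_tsv

vectors = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
labels = ["king", "queen"]
vec_tsv, meta_tsv = to_tsv(vectors, labels)
print(vec_tsv)
print(meta_tsv)
```

In a real workflow these files would be produced from a trained model's embedding layer and uploaded for interactive 2D/3D exploration.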

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Tensor2Tensor (T2T) is a modular, extensible library for training AI models in different tasks, using different types of training data, for example: image recognition, translation, parsing, image captioning, and speech recognition. It is maintained by the Google Brain team.

In this instructor-led, live training, participants will learn how to prepare a deep-learning model to resolve multiple tasks.

By the end of this training, participants will be able to:

- Install tensor2tensor, select a data set, and train and evaluate an AI model
- Customize a development environment using the tools and components included in Tensor2Tensor
- Create and use a single model to concurrently learn a number of tasks from multiple domains
- Use the model to learn from tasks with a large amount of training data and apply that knowledge to tasks where data is limited
- Obtain satisfactory processing results using a single GPU

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Cognitive computing refers to systems that encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, dialog and narrative generation, to name a few. A cognitive computing system often comprises multiple technologies working together to process in-memory 'hot' contextual data as well as large sets of 'cold' historical data in batch. Examples of such technologies include Kafka, Spark, Elasticsearch, Cassandra, and Hadoop.

In this instructor-led, live training, participants will learn how Cognitive Computing complements AI and Big Data and how purpose-built systems can be used to realize human-like behaviors that improve the performance of human-machine interactions in business.

By the end of this training, participants will understand:

- The relationship between cognitive computing and artificial intelligence (AI)
- The inherently probabilistic nature of cognitive computing and how to use it as a business advantage
- How to manage cognitive computing systems that behave in unexpected ways
- Which companies and software systems offer the most compelling cognitive computing solutions

Snorkel is a system for rapidly creating, modeling, and managing training data. It focuses on accelerating the development of structured or "dark" data extraction applications for domains in which large labeled training sets are not available or easy to obtain.

In this instructor-led, live training, participants will learn techniques for extracting value from unstructured data such as text, tables, figures, and images through modeling of training data with Snorkel.
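Snorkel's core idea is that many noisy heuristic labeling functions vote on each example, and a label model combines the votes into training labels. The sketch below illustrates the concept with a simple majority vote in place of Snorkel's learned generative model; the labeling functions and label values are invented for illustration.

```python
# Sentinel label values: -1 means a labeling function abstains.
ABSTAIN, SPAM, HAM = -1, 1, 0

def lf_contains_link(text):
    """Heuristic: messages with URLs tend to be spam."""
    return SPAM if "http" in text else ABSTAIN

def lf_short(text):
    """Heuristic: very short messages tend to be benign."""
    return HAM if len(text.split()) < 4 else ABSTAIN

def majority_label(text, lfs):
    """Combine labeling-function votes by majority, ignoring abstentions."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_contains_link, lf_short]
print(majority_label("check this out http://x.y for a great deal", lfs))
```

Snorkel replaces the majority vote with a model that learns each function's accuracy and correlations, which is what makes weak supervision competitive with hand labeling.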

Deep Learning for NLP allows a machine to learn simple to complex language processing. Among the tasks currently possible are language translation and caption generation for photos. DL (Deep Learning) is a subset of ML (Machine Learning). Python is a popular programming language that contains libraries for Deep Learning for NLP.

In this instructor-led, live training, participants will learn to use Python libraries for NLP (Natural Language Processing) as they create an application that processes a set of pictures and generates captions.

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Python is a programming language famous for its clear syntax and readability. It offers an excellent collection of well-tested libraries and techniques for developing machine learning applications.

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the finance industry.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

By the end of this training, participants will be able to:

- Understand the fundamental concepts in machine learning
- Learn the applications and uses of machine learning in finance
- Develop their own algorithmic trading strategy using machine learning with Python

Audience

- Developers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

The Tensor Processing Unit (TPU) is the architecture which Google has used internally for several years and is only now becoming available to the general public. It includes several optimizations designed specifically for neural networks, including streamlined matrix multiplication and the use of 8-bit integers instead of 16-bit while still returning appropriate levels of precision.

In this instructor-led, live training, participants will learn how to take advantage of the innovations in TPU processors to maximize the performance of their own AI applications.

By the end of the training, participants will be able to:

- Train various types of neural networks on large amounts of data
- Use TPUs to speed up the inference process by up to two orders of magnitude
- Utilize TPUs to process intensive applications such as image search, cloud vision and photos
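To illustrate the 8-bit integer trick mentioned above, the sketch below maps floats in a known range onto int8 codes and back, trading a small amount of precision for throughput and memory; this is a generic linear quantization scheme, not the TPU's actual hardware implementation.

```python
def quantize(values, lo, hi):
    """Map floats in [lo, hi] onto signed 8-bit integer codes."""
    return [round((v - lo) * 255.0 / (hi - lo)) - 128 for v in values]

def dequantize(codes, lo, hi):
    """Recover approximate floats from the int8 codes."""
    return [(q + 128) * (hi - lo) / 255.0 + lo for q in codes]

vals = [0.0, 0.5, 1.0]
q = quantize(vals, 0.0, 1.0)
print(q)                         # signed 8-bit codes
print(dequantize(q, 0.0, 1.0))   # approximate reconstruction
```

The round trip loses at most half a quantization step, which is why inference on well-calibrated networks tolerates it while gaining large speedups on integer hardware.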

Audience

- Developers
- Researchers
- Engineers
- Data scientists

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice