Intel® Edge AI Fundamentals with OpenVINO™
About this Course
Stay at the cutting-edge of AI technology by gaining practical skills for deploying edge AI. Learn how to use the Intel® Distribution of the OpenVINO™ toolkit to deploy computer vision capabilities inside a range of edge applications. Leverage the potential of edge computing and use the Intel® Distribution of the OpenVINO™ toolkit to fast-track development of high-performance computer vision and deep learning inference applications.
Computer vision and AI at the edge are becoming instrumental in powering everything from factory assembly lines and retail inventory management to hospital urgent-care medical imaging equipment such as X-ray and CT scanners. This program teaches fluency in some of the most cutting-edge of these technologies. The course introduces students to the Intel® Distribution of OpenVINO™ Toolkit, which lets developers deploy pre-trained deep learning models through a high-level C++ or Python Inference Engine API integrated with application logic. The toolkit supports workloads based on convolutional neural networks (CNNs), extends them across Intel® hardware (including accelerators), and maximizes performance.
What is Edge AI? In Edge AI, the AI algorithms are processed locally on a hardware device, without requiring a network connection. The device uses the data it generates itself and processes it to deliver real-time insights within a few milliseconds. Edge AI processing today is focused on moving the inference part of the AI workflow onto the device, keeping data constrained to the device.
Lesson 1
Leveraging Pre-Trained Models
Leverage a pre-trained model for computer vision inferencing
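As a taste of what this lesson covers, here is a minimal preprocessing sketch. The input size, layout, and file name are illustrative assumptions; each pre-trained model in the Open Model Zoo documents its own expected input shape, so check the model's documentation before reusing this.

```python
import cv2
import numpy as np

def preprocess(image_path, width, height):
    """Resize and reshape an image to the NCHW, BGR layout that many
    Open Model Zoo pre-trained models expect (verify per model)."""
    image = cv2.imread(image_path)              # OpenCV loads images as BGR
    image = cv2.resize(image, (width, height))  # match the model's H x W
    image = image.transpose((2, 0, 1))          # HWC -> CHW
    return image.reshape(1, 3, height, width)   # add the batch dimension

# Example: a hypothetical model that takes a 256x256 input
input_batch = preprocess("sample.jpg", 256, 256)
```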
Lesson 2
The Model Optimizer
Convert pre-trained models into the framework-agnostic intermediate representation with the Model Optimizer
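A rough sketch of the conversion step follows. The Model Optimizer is invoked from the command line; the script path and the exact flags below are assumptions that vary by OpenVINO release and by the source framework of the model being converted.

```python
import subprocess

# Assumed install path for the Model Optimizer script; adjust to your release.
MO_SCRIPT = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

# Convert a source model into the Intermediate Representation (.xml + .bin).
subprocess.run([
    "python", MO_SCRIPT,
    "--input_model", "model.onnx",  # source model; framework-specific flags may be needed
    "--data_type", "FP16",          # optional precision for the IR weights
    "--output_dir", "ir/",
], check=True)
```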
Lesson 3
The Inference Engine
Perform efficient inference on deep learning models through the hardware-agnostic Inference Engine
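A minimal inference sketch using the legacy openvino.inference_engine Python API is shown below. Attribute names such as input_info differ slightly across OpenVINO releases (older releases expose net.inputs), and the IR file names are placeholders.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # IR from the Model Optimizer
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Load the network onto a device; the same code targets CPU, GPU, VPU, etc.
exec_net = ie.load_network(network=net, device_name="CPU")

# Run a synchronous inference request on a dummy, correctly shaped input.
shape = net.input_info[input_blob].input_data.shape
result = exec_net.infer({input_blob: np.zeros(shape, dtype=np.float32)})
print(result[output_blob].shape)
```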
Lesson 4
Deploying an Edge App
Deploy an app on the edge, including sending information through MQTT, and analyze model performance and use cases
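Results from the edge device are typically forwarded to a server over MQTT. Below is a minimal publishing sketch with the paho-mqtt client; the broker address, topic names, and payload fields are illustrative assumptions.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883, keepalive=60)  # assumed MQTT broker host and port

# Publish inference statistics as JSON on hypothetical topics.
client.publish("person", json.dumps({"count": 3, "total": 17}))
client.publish("person/duration", json.dumps({"duration": 12.5}))

client.disconnect()
```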