Learn to deploy AI models on edge devices like smartphones



Enroll in the full course 👉 https://bit.ly/4bzZD7L

We’re excited to announce that Introduction to On-Device AI, a new short course made in collaboration with Qualcomm and taught by Krishna Sridhar, Senior Director of Engineering at Qualcomm, is live!

As AI moves beyond the cloud, on-device inference is rapidly expanding to smartphones, IoT devices, robots, AR/VR headsets, and more. Over 6 billion mobile devices and billions of other edge devices are ready to run optimized AI models.

In this course, you’ll learn how to deploy AI models on edge devices, using their local compute power for faster and more secure inference:

– Explore how deploying models on device reduces latency, enhances efficiency, and preserves privacy.
– Go through key concepts of on-device deployment such as neural network graph capture, on-device compilation, and hardware acceleration.
– Convert pretrained models from PyTorch and TensorFlow for on-device compatibility.
– Deploy a real-time image segmentation model on device with just a few lines of code.
– Test your model’s performance and validate its numerical accuracy when deploying to on-device environments.
– Quantize your model to make it up to 4x faster and 4x smaller for better on-device performance.
– See a demonstration of the steps for integrating the model into a functioning Android app.
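Two of the concepts listed above, graph capture and quantization, can be sketched in a few lines of generic PyTorch. This is only an illustrative toy (a small stand-in network, not a real segmentation model), and it does not use the Qualcomm tooling taught in the course:

```python
# Minimal sketch of graph capture and post-training quantization.
# Assumptions: a toy 2-layer network stands in for a real pretrained
# model; the course's own deployment tooling is not shown here.
import io

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
model.eval()

# Graph capture: trace the model into a static graph, the form that
# on-device compilers consume.
example_input = torch.randn(1, 128)
traced = torch.jit.trace(model, example_input)

# Post-training dynamic quantization: store Linear weights as int8
# instead of float32, shrinking the weights roughly 4x.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m: nn.Module) -> int:
    """Size in bytes of the model's serialized parameters."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print("float32 size:", serialized_size(model))
print("int8 size:   ", serialized_size(quantized))
```

Validating numerical accuracy after capture, as the course also covers, amounts to comparing the traced graph's outputs against the original model's on the same inputs.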

Introduction to On-Device AI is live now. Be among the first to enroll!

Enroll in the full course 👉 https://bit.ly/4bzZD7L
