Process This: Demystifying Embedded Deep Learning Deployment
Description
July 16, 2021
In this webinar, learn how to automatically accelerate Edge AI inference using deep learning hardware accelerators, without having to program the accelerator cores yourself.
Webinar agenda:
- How deep learning models operate on an embedded processor
- How to optimize models for maximum performance
- How to compile and deploy a model and accelerate inference using industry-standard APIs such as TensorFlow Lite, ONNX Runtime, and TVM
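The runtimes named above all share the same basic flow: load a compiled model into a session or interpreter, feed it named input tensors, run, and read back output tensors. The sketch below illustrates that pattern in pure Python; `StubSession` is a hypothetical stand-in for a real session object (such as `onnxruntime.InferenceSession`), so no runtime or TI library is assumed.

```python
# Illustrative sketch of the load -> feed inputs -> run -> read outputs
# flow shared by TensorFlow Lite, ONNX Runtime, and TVM. StubSession is
# a hypothetical stand-in so the example runs without any runtime installed.

class StubSession:
    """Mimics the shape of an inference session: named inputs, run(), outputs."""

    def __init__(self, model_path):
        self.model_path = model_path      # a real session would parse the model here
        self.input_names = ["input_0"]

    def run(self, output_names, feed):
        # A real session would execute the compiled graph on the accelerator;
        # here we just scale each input element to fake a score tensor.
        data = feed["input_0"]
        return [[x * 0.1 for x in data]]  # list of output tensors

# 1. Load the compiled model into a session.
session = StubSession("model.onnx")

# 2. Prepare input tensors keyed by the session's declared input names.
feed = {session.input_names[0]: [1.0, 2.0, 3.0]}

# 3. Run inference and read back the output tensors.
outputs = session.run(None, feed)
print(len(outputs[0]))  # one score per input element
```

The same three steps map onto each API's concrete calls (e.g. `InferenceSession.run` in ONNX Runtime, or `allocate_tensors`/`invoke` on a TFLite interpreter), which is what lets a model compiled for the accelerator be driven through whichever of these interfaces a project already uses.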
About TI Edge AI Cloud
TI Edge AI Cloud is a free online service that lets you evaluate accelerated deep learning inference on TDA4x processors without purchasing an evaluation board. The service is Python-based, and it takes only a few minutes to log in, deploy a model, and collect a variety of performance benchmarks.
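The kind of latency benchmarking such a service reports can be sketched with nothing but the Python standard library. The helper below is a minimal, illustrative harness (not TI's actual benchmarking code); the lambda at the bottom is a hypothetical stand-in for a real accelerated inference call.

```python
import time
import statistics

def benchmark(run_inference, warmup=3, iters=20):
    """Time repeated calls to run_inference; return latency stats in milliseconds."""
    for _ in range(warmup):                   # warm-up runs exclude one-time setup cost
        run_inference()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "p90_ms": samples[int(0.9 * len(samples)) - 1],  # 90th-percentile latency
    }

# Hypothetical stand-in workload for a real model's inference call.
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(sorted(stats))  # → ['mean_ms', 'p90_ms']
```

Warm-up iterations matter in practice: the first few runs of an accelerated model often include graph compilation or memory allocation that would otherwise skew the averages.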
Additional information
This course is also part of the following series
Date: June 4, 2021