TensorFlow Lite

Low-latency inference of on-device ML models

About

TensorFlow’s lightweight solution for mobile and embedded devices. TensorFlow has always run on many platforms, but as adoption of ML models has grown rapidly in recent years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.
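The usual workflow is to convert a trained TensorFlow model to the compact `.tflite` format, then run it on-device through the TensorFlow Lite interpreter. A minimal sketch of that round trip, using the public `tf.lite` API (the toy `double` function here is only an assumption to keep the example self-contained; in practice you would convert a real trained model, often from a saved model or Keras model):

```python
import numpy as np
import tensorflow as tf

# A trivial stand-in for a trained model: doubles a [1, 4] float input.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return 2.0 * x

# Convert to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()], double)
tflite_model = converter.convert()

# On-device side: load the flatbuffer into the TF Lite interpreter.
# (On a real device you would typically pass model_path="model.tflite".)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Run one inference: set input, invoke, read output.
interpreter.set_tensor(input_details["index"], np.ones((1, 4), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(output_details["index"])
```

The interpreter avoids the full TensorFlow runtime, which is what keeps per-inference latency and binary size low on mobile and embedded targets.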
