TensorFlow Lite

Low-latency inference of on-device ML models

TensorFlow’s lightweight solution for mobile and embedded devices. TensorFlow has always run on many platforms, but as the adoption of ML models has grown rapidly over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.
