TensorFlow Lite

Low-latency inference of on-device ML models

TensorFlow’s lightweight solution for mobile and embedded devices. TensorFlow has always run on many platforms, but as the adoption of ML models has grown rapidly over the last few years, so has the need to deploy them on mobile and embedded devices. TensorFlow Lite enables low-latency inference of on-device machine learning models.
