AI That Works Anywhere
Developing AI models that run efficiently on low-resource devices and work in disconnected environments, essential for real-world deployment across Africa.
Most AI research assumes abundant compute resources and reliable internet connectivity. But in much of Africa, these assumptions don't hold. Rural areas may have intermittent connectivity. Users may rely on entry-level smartphones. Power supply may be unreliable.
Our Edge AI research addresses these constraints head-on. We develop techniques for compressing large models to run on mobile devices, designing systems that work offline-first, and optimizing inference for low-power hardware.
This work is foundational to deploying AI at scale across Africa, where the next billion users will come online via mobile devices in challenging infrastructure environments.
Understanding the unique obstacles we're working to overcome.
Much of rural Africa has intermittent or no internet connectivity, requiring offline-capable systems.
Entry-level smartphones with limited RAM, storage, and processing power are the norm, not the exception.
Unreliable power supply means devices must be power-efficient and able to resume after interruption.
State-of-the-art AI models are often too large to fit on mobile devices or run efficiently at the edge.
The methods and techniques we've developed to address these challenges.
Quantization, pruning, and knowledge distillation to reduce model size while maintaining accuracy.
System designs that work fully offline and sync when connectivity is available.
Optimized inference engines that run efficiently on ARM processors and low-end GPUs.
Systems that provide basic functionality offline and enhanced features when online.
Measurable outcomes from our research and deployments.
10x
Model Compression
Our compression techniques reduce model size by 10x while retaining 95%+ of baseline accuracy.
<50ms
Inference Latency
Our optimized models run in under 50ms on entry-level smartphones.
100%
Offline Capable
All our production deployments work fully offline with sync when connected.
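The sync-when-connected pattern behind this stat can be sketched as a local write queue that never blocks on the network and retries pending records when connectivity returns. This is a simplified illustration with hypothetical names, not our deployed code (which also handles persistence and conflict resolution).

```python
class OfflineQueue:
    """Buffer records locally; flush to the server when a connection is available."""

    def __init__(self):
        self._pending = []  # in production this would be durable local storage

    def record(self, event):
        # Writes always succeed offline; nothing waits on the network.
        self._pending.append(event)

    def sync(self, send):
        """Attempt to push every pending event; keep the ones that fail.

        `send` is any callable that raises ConnectionError on failure.
        Returns True when the queue is fully drained.
        """
        remaining = []
        for event in self._pending:
            try:
                send(event)
            except ConnectionError:
                remaining.append(event)
        self._pending = remaining
        return not self._pending

# Usage: record while offline, sync opportunistically when connected.
sent = []
outbox = OfflineQueue()
outbox.record({"patient_id": 1, "note": "follow-up"})
outbox.record({"patient_id": 2, "note": "referral"})
drained = outbox.sync(sent.append)
```

Because failed sends stay queued, an interrupted sync simply resumes on the next attempt, which is what lets devices recover cleanly from power or connectivity loss.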
50%
Battery Savings
Optimized inference reduces battery consumption by 50% compared to naive implementations.
Kimani, D., Mutua, L., et al.
Banda, M., Kimani, D., et al.
Fully offline health symptom checker and information system for rural health workers.
Partners:
Ministry of Health Kenya
Tools and libraries for deploying ML models on low-resource devices.
Partners:
TensorFlow Lite team
David Kimani
Research Lead
Linda Mutua
Mobile Engineering Lead
We're always looking for collaborators, partners, and talented researchers to advance this work.