Roadmap v0.7.0 -> v1.0.0 #736
Announced by mkopcins in Announcements
We're excited to share our plans for the path to stable version v1.0.0!
This roadmap outlines the key features and improvements we're working on. Features slated for v1.0.0 are being developed in parallel and may ship sooner. Alongside code development, we are exporting additional models, which we publish on our Hugging Face page.
As always, this is a living document—priorities may shift based on community feedback, upstream ExecuTorch developments, and what we learn along the way. This means that additional versions can appear before v1.0.0. We'd love to hear your thoughts in the comments.
Release v0.7.0
This release focuses on expanding language support and introducing vision-language capabilities.
Support for Text-To-Speech
OCR improvements
Tokenizers migration
Improved error handling
Release v0.8.0
Vision camera support
Extended language support for Text-to-Speech
Liquid Foundation Models support
Computer Vision models quantization
Extended Computer Vision model support
VLM support
Release v0.9.0
Release v1.0.0
Modularization
Currently, all features are bundled together, which leads to large bundle sizes and makes production deployment difficult.
To solve this, we are planning to split the library into submodules based on usage scenarios, such as LLM or Computer Vision. This would let us reduce bundle size by excluding OpenCV from the LLM bundle and skipping tokenizers in the Computer Vision bundle.
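As a hypothetical illustration of what scenario-based entry points could look like from the consumer side (the subpath names below are our invention, not a published API):

```typescript
// Today: a single entry point pulls in every feature (LLM, vision, OCR, ...):
//   import { useLLM, useOCR } from 'react-native-executorch';

// After modularization (hypothetical subpath), an LLM-only app could import
// just the LLM submodule, keeping OpenCV out of its bundle:
import { useLLM } from 'react-native-executorch/llm';

// ...while a vision-only app could skip the tokenizers dependency entirely:
import { useOCR } from 'react-native-executorch/computer-vision';
```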
Comprehensive testing setup
Model usage telemetry
Currently, the only way to monitor library usage is to check model downloads on our Hugging Face page and npm stats. We are planning to add usage reports covering which models are used, how many LLM tokens are generated, and inference counts. This would let us properly gauge community interest and better prioritize future releases.
We are NOT planning on sharing any kind of personal data, such as images, videos or LLM conversations!
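As a concrete sketch of what such a report could contain, here is a hypothetical event shape (all names below are our assumptions, not a shipped API). Note that it carries only a model identifier and counters, never prompts, images, or generated text:

```typescript
// Hypothetical anonymous usage event: identifiers and counters only.
interface UsageEvent {
  model: string;           // e.g. a model identifier like "llama-3.2-1B"
  inferenceCount: number;  // number of inference runs in the session
  tokensGenerated: number; // total LLM tokens produced (0 for vision models)
}

// Builds a telemetry payload from raw session data, deliberately dropping
// any field that could contain user content (e.g. the prompt).
function toUsageEvent(session: {
  model: string;
  inferenceCount: number;
  tokensGenerated: number;
  prompt?: string; // present at runtime, intentionally never reported
}): UsageEvent {
  return {
    model: session.model,
    inferenceCount: session.inferenceCount,
    tokensGenerated: session.tokensGenerated,
  };
}
```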
Web support
Removing Expo dependency
Improved logging
Fix crashes resulting from running out of device memory
Benchmarking framework
Develop a benchmarking framework for the library that allows us to run benchmarks regularly, for example when new devices or different models appear.
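A minimal sketch of the kind of measurement such a framework needs, assuming simple wall-clock timing of a workload (the `benchmark` helper below is illustrative, not part of the library):

```typescript
// Runs a workload repeatedly and reports latency statistics -- the raw
// material for per-device / per-model regression tracking.
function benchmark(
  run: () => void,
  iterations: number
): { meanMs: number; p50Ms: number; maxMs: number } {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = Date.now();
    run(); // in practice: a model inference call
    samples.push(Date.now() - start);
  }
  samples.sort((a, b) => a - b);
  const meanMs = samples.reduce((sum, x) => sum + x, 0) / samples.length;
  return {
    meanMs,
    p50Ms: samples[Math.floor(samples.length / 2)],
    maxMs: samples[samples.length - 1],
  };
}
```

In a real harness the workload would be an actual model inference, and results would be tagged with device and model metadata so runs are comparable over time.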
GenAI SDK-compatible provider
Provide a GenAI SDK-compatible provider so users can run on-device ExecuTorch models through the GenAI SDK API they already know.
More plans for the future that aren't assigned to any release yet
How You Can Help
Our mission with React Native ExecuTorch is to empower developers to seamlessly integrate their preferred ML models into React Native applications. Starting with targeted solutions, our strategy is to gradually provide a suite of tools addressing more generalized use cases.
This roadmap is a living document. We encourage the community to engage actively—your contributions and insights are invaluable as we strive to meet and exceed developer needs in the evolving landscape of mobile development.
Feel free to join the discussion, contribute to the projects, and let us know how we can better serve you in future updates.
Thanks for being part of this journey. On-device AI is moving fast, and we're glad to have you along for the ride. 🚀
— The React Native ExecuTorch Team