feat: unify AI providers with shared abstractions in expo-example #179
Conversation
- Consolidated Llama and MLC providers under Apple screen with provider selector
- Removed standalone Llama and MLC screens
- Created shared configuration for models (LLAMA_MODELS, SPEECH_LLAMA_MODELS)
- Implemented reusable components:
  - ChatUI: Provider-agnostic chat interface
  - ProviderSetup: Language model download and initialization
  - SpeechProviderSetup: Speech model with vocoder setup
  - ProviderSelector: Provider switching UI component
- Upgraded to AI SDK v6 with LanguageModelV3 and SpeechModelV3
- Removed unnecessary dynamic imports (Llama available on all platforms)
- Added Llama speech support with OuteTTS model
- Fixed code duplication and inconsistencies
- Platform-specific implementations for iOS and Android
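A rough sketch of how these reusable pieces could fit together on one screen. The import paths follow the files touched in this PR, but the exported names, props, and the `LanguageModel` type location are assumptions, not the actual implementation:

```tsx
import React, { useState } from 'react';
import { View } from 'react-native';
import type { LanguageModel } from 'ai'; // type name/location assumed for AI SDK v6

// Paths follow this PR's file list; prop shapes are assumed for illustration.
import { ProviderSelector } from '../components/ProviderSelector';
import { ProviderSetup } from '../components/ProviderSetup';
import { ChatUI } from '../components/ChatUI';

type ProviderId = 'apple' | 'llama'; // assumed identifiers for the two providers

export function UnifiedChatScreen() {
  const [provider, setProvider] = useState<ProviderId>('apple');
  const [model, setModel] = useState<LanguageModel | null>(null);

  return (
    <View style={{ flex: 1 }}>
      {/* Switch between on-device providers without leaving the screen */}
      <ProviderSelector value={provider} onChange={setProvider} />
      {model ? (
        // Provider-agnostic chat: any AI SDK language model can be passed in
        <ChatUI model={model} />
      ) : (
        // Handles model download and initialization for the selected provider
        <ProviderSetup provider={provider} onReady={setModel} />
      )}
    </View>
  );
}
```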
- Replace separate repo/filename fields with single modelId format
- Replace nested vocoder object with vocoderId field
- Remove redundant mmproj and vocoder.size fields
- Update SpeechProviderSetup to use new interface with getFilename helper
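A sketch of what the simplified shape could look like, inferred from the bullets above. Only `modelId`, `vocoderId`, and the `getFilename` helper are named in the commit; everything else is an assumption:

```ts
// Hypothetical simplified speech model config; illustrative only.
export interface SpeechModelOption {
  name: string;       // display label (assumed)
  modelId: string;    // single identifier replacing the separate repo + filename fields
  vocoderId?: string; // flat reference replacing the nested vocoder object
}

// Derives the on-disk filename from a slash-separated model identifier.
export function getFilename(modelId: string): string {
  const parts = modelId.split('/');
  return parts[parts.length - 1];
}
```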
Pull request overview
This PR refactors the expo-example app to consolidate multiple AI providers (Apple, Llama, and formerly MLC) under a unified architecture with shared abstractions and reusable components.
Changes:
- Created a provider abstraction layer with SetupAdapter interface for unified model management (download, initialization, deletion); see the sketch after this list
- Upgraded dependencies: llama.rn from 0.10.0-rc.0 to 0.10.1 and the ai package from 5.0.56 to 6.0.0
- Removed MLC provider and consolidated screens into provider-agnostic implementations
- Added llama.rn plugin to Expo configuration (previously missing)
- Exported additional storage utilities (ModelInfo, getDownloadedModels, removeModel) from llama package
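A rough sketch of what such a SetupAdapter interface could look like. The method names and signatures are assumed from the "download, initialization, deletion" description above rather than taken from the PR:

```ts
import type { LanguageModel } from 'ai'; // type name/location assumed for AI SDK v6

// Hypothetical shape of the provider abstraction; only the responsibilities
// (download, initialization, deletion) come from the PR description.
export interface SetupAdapter {
  /** Identifier surfaced in the ProviderSelector, e.g. 'apple' or 'llama'. */
  id: string;
  /** True when the model files are already on the device. */
  isDownloaded(modelId: string): Promise<boolean>;
  /** Downloads the model, reporting progress as a 0..1 fraction. */
  download(modelId: string, onProgress?: (fraction: number) => void): Promise<void>;
  /** Loads the model into memory and returns an AI SDK language model instance. */
  initialize(modelId: string): Promise<LanguageModel>;
  /** Deletes the downloaded files to free disk space. */
  remove(modelId: string): Promise<void>;
}
```

Keeping download, initialization, and deletion behind one interface is what lets ProviderSetup and ChatUI stay provider-agnostic.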
Reviewed changes
Copilot reviewed 24 out of 25 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| packages/llama/src/index.ts | Exports additional storage utilities for model management |
| packages/llama/src/ai-sdk.ts | Updates speech model generation with guide tokens and improved audio handling |
| packages/llama/package.json | Upgrades llama.rn to 0.10.1 |
| apps/expo-example/src/config/providers.ts | Defines provider abstraction layer with SetupAdapter interface |
| apps/expo-example/src/components/adapters/*.ts | Implements adapters for Apple and Llama providers |
| apps/expo-example/src/components/ProviderSelector.tsx | New component for provider selection |
| apps/expo-example/src/components/ProviderSetup.tsx | New component for model download/setup |
| apps/expo-example/src/components/ChatUI.tsx | Refactored to accept any language model and tools |
| apps/expo-example/src/screens/apple/ChatScreen/index.tsx | New unified chat screen supporting multiple providers |
| apps/expo-example/src/screens/apple/PlaygroundScreen/index.tsx | Converted from iOS-specific to cross-platform with provider support |
| apps/expo-example/src/screens/apple/SpeechScreen/index.tsx | Converted from iOS-specific to cross-platform with provider support |
| apps/expo-example/src/screens/apple/TranscribeScreen/index.tsx | Converted from iOS-specific to cross-platform |
| apps/expo-example/src/App.tsx | Updated navigation to use unified screens and removed MLC/Llama tabs |
| apps/expo-example/polyfills.ts | Added DOMException polyfill (see the sketch after this table) |
| apps/expo-example/package.json | Removed MLC dependency, upgraded ai and llama.rn versions |
| apps/expo-example/app.json | Changed plugin from @react-native-ai/mlc to llama.rn |
| bun.lock | Updated lockfile with new dependency versions |
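On the polyfills.ts row: Hermes does not ship DOMException, which web-oriented abort and stream handling commonly expects to exist on the global object. A minimal sketch of what such a polyfill typically looks like, not the PR's actual implementation:

```ts
// polyfills.ts (sketch): provide a global DOMException when the JS engine lacks one.
// This minimal stand-in only mirrors name/message; the real polyfill may differ.
if (typeof (globalThis as any).DOMException === 'undefined') {
  class DOMExceptionPolyfill extends Error {
    constructor(message = '', name = 'Error') {
      super(message);
      this.name = name;
    }
  }
  (globalThis as any).DOMException = DOMExceptionPolyfill;
}
```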
Comments suppressed due to low confidence (2)
- apps/expo-example/src/components/ChatUI.tsx:67 - The removed line 'setMessages(updatedMessages)' means the user's message is no longer added to the UI immediately after sending. The message will only appear after the assistant's placeholder message is added on lines 61-67. This creates a delay in showing the user's own message in the chat interface. Consider adding back 'setMessages(updatedMessages)' or updating line 61 to show the user message immediately.
- apps/expo-example/src/components/ChatUI.tsx:75 - The tools prop defaults to an empty object but is then checked with 'tools ?? undefined'. When tools is an empty object (the default), it will be passed to streamText as an empty object rather than undefined. This could cause unexpected behavior. Consider changing the default to 'undefined' in the props interface and removing the '?? undefined' check, or checking if the object is empty before passing it.
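Both comments point at the same component. A minimal sketch of how the two suggestions could look in practice, assuming a hook-style send handler inside ChatUI; the surrounding code, prop shapes, and the AI SDK v6 type names are assumptions, not the PR's actual implementation:

```tsx
import { useState } from 'react';
// `streamText` is the AI SDK call named in the comments; the type exports are
// assumed to still exist under these names in AI SDK v6.
import { streamText, type LanguageModel, type ToolSet } from 'ai';

type Msg = { role: 'user' | 'assistant'; content: string };

interface SendOptions {
  model: LanguageModel;
  // Comment 2: default to undefined rather than `{}` so an empty object is never
  // forwarded to streamText.
  tools?: ToolSet;
}

export function useSendMessage({ model, tools }: SendOptions) {
  const [messages, setMessages] = useState<Msg[]>([]);

  const send = async (text: string) => {
    const updatedMessages: Msg[] = [...messages, { role: 'user', content: text }];
    // Comment 1: render the user's message immediately instead of waiting for the
    // assistant placeholder to be appended.
    setMessages(updatedMessages);

    const result = streamText({
      model,
      prompt: text,
      tools, // stays undefined when the caller omits it, so no `?? undefined` needed
    });

    let assistantText = '';
    for await (const chunk of result.textStream) {
      assistantText += chunk;
      setMessages([...updatedMessages, { role: 'assistant', content: assistantText }]);
    }
  };

  return { messages, send };
}
```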
* refactor(expo-example): unify AI providers with shared abstractions
  - Consolidated Llama and MLC providers under Apple screen with provider selector
  - Removed standalone Llama and MLC screens
  - Created shared configuration for models (LLAMA_MODELS, SPEECH_LLAMA_MODELS)
  - Implemented reusable components:
    - ChatUI: Provider-agnostic chat interface
    - ProviderSetup: Language model download and initialization
    - SpeechProviderSetup: Speech model with vocoder setup
    - ProviderSelector: Provider switching UI component
  - Upgraded to AI SDK v6 with LanguageModelV3 and SpeechModelV3
  - Removed unnecessary dynamic imports (Llama available on all platforms)
  - Added Llama speech support with OuteTTS model
  - Fixed code duplication and inconsistencies
  - Platform-specific implementations for iOS and Android
* refactor: simplify SpeechModelOption to match ModelOption shape
  - Replace separate repo/filename fields with single modelId format
  - Replace nested vocoder object with vocoderId field
  - Remove redundant mmproj and vocoder.size fields
  - Update SpeechProviderSetup to use new interface with getFilename helper
* chore: some other tweaks to get proper version
* feat: updates to the playground app
* chore: complete updates to the UI
* tweaks
* Update apps/expo-example/src/screens/apple/SpeechScreen/index.tsx
  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update apps/expo-example/src/components/adapters/llamaSpeechSetupAdapter.ts
  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update apps/expo-example/src/components/ProviderSetup.tsx
  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* chore: get rid

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

This PR refactors the expo-example app to consolidate multiple AI providers (Apple, Llama, MLC) under a unified architecture with shared abstractions and reusable components. It also upgrades Llama to the latest version and updates the Expo configuration (previously, it was missing a plugin). The goal is to expose all Llama features from within the same UI and have a runway to test other on-device providers as well. The UI needs a lot of love for sure, but let's focus on functionality at this point.
UX is definitely not good, and this is meant for internal testing mostly. Will improve in follow-up PRs.
Testing Checklist