Integrate IntroScreen and Improve AI Generation Cleanup #7
Godzilla675 wants to merge 25 commits into re-write
Conversation
Co-authored-by: Godzilla675 <131464726+Godzilla675@users.noreply.github.com>
…ovements
Add remote MCP server support with settings UI
Enable BuildConfig generation for memory-vault debug build
[WIP] Check AI models access to MCP servers and fix any issues
…ns field
- Fix parseResponse() to always try SSE parsing regardless of transport type
- Rename test class from McpServerIntegrationTest to McpServerTest
- Use exact assertions instead of permissive contains() checks
- Add helper function documentation and UUID format validation
- Reduce UUID test iterations from 100 to 10 for efficiency
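For context on the UUID format validation mentioned above, a minimal check might look like the sketch below (`isValidUuid` is a hypothetical helper, not the test suite's actual code):

```kotlin
import java.util.UUID

// Hypothetical helper: UUID.fromString alone accepts some non-canonical
// strings (e.g. "1-2-3-4-5"), so the canonical 36-char length is also checked.
fun isValidUuid(s: String): Boolean = try {
    UUID.fromString(s)
    s.length == 36
} catch (e: IllegalArgumentException) {
    false
}

fun main() {
    println(isValidUuid(UUID.randomUUID().toString())) // true
    println(isValidUuid("1-2-3-4-5"))                  // false: parses, but not canonical
    println(isValidUuid("not-a-uuid"))                 // false: does not parse
}
```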
Add MCP server integration tests
- Added an onFinished callback to IntroScreen to allow navigation after initialization.
- Integrated IntroScreen as the initial splash screen in MainActivity, using rememberSaveable to maintain state across configuration changes.
- Enhanced LlmModelWorker by adding proper stop-generation calls in awaitClose blocks for both text (GGUF) and image (Diffusion) generation flows, ensuring resources are freed when flows are cancelled.
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me.

New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Summary of Changes

Hello @Godzilla675, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
Code Review
This pull request successfully integrates an intro screen and improves resource management for AI generation tasks. The use of `rememberSaveable` for the intro screen state is appropriate for handling configuration changes. The updates to `LlmModelWorker` to cancel generation tasks when flows are closed are a significant improvement for resource efficiency. I've identified a potential medium-severity issue in `MainActivity.kt` where the new navigation logic could lead to UI flicker due to a race condition with state loading. My review includes a specific comment on how to resolve this by handling the loading state more explicitly.
```kotlin
if (showIntro) {
    IntroScreen(onFinished = { showIntro = false })
} else {
    if (!hasAcceptedTerms) {
        TermsAndConditionsScreen(
            onAccept = {
                scope.launch {
                    termsDataStore.acceptTerms()
                }
            }
        )
    } else {
        val chatViewModel: ChatViewModel = hiltViewModel()
        val llmModelViewModel: LLMModelViewModel = hiltViewModel()

        AppNavigation(
            chatViewModel = chatViewModel,
            llmModelViewModel = llmModelViewModel
        )
    }
}
```
This new navigation logic can cause a UI flicker due to a race condition with loading the `hasAcceptedTerms` state. On line 80, `hasAcceptedTerms` is initialized to `true`. If the DataStore is slow to emit the actual value (e.g., `false`), the app may briefly show `AppNavigation` after the intro screen before jumping to `TermsAndConditionsScreen` once the state updates.
To prevent this, you should handle the loading state explicitly. A robust approach is:

- Initialize `hasAcceptedTerms` to `null` on line 80: `collectAsState(initial = null)`.
- Use a `when` statement here to handle the `null` (loading), `false` (not accepted), and `true` (accepted) states.

This will prevent the flicker and provide a smoother user experience.
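A minimal sketch of that three-state handling, with the Compose wiring stripped out (the `Screen` enum and `screenFor` helper are illustrative names, not part of the PR):

```kotlin
// Illustrative model of the suggested when-based gate; in the real
// MainActivity this would dispatch to composables, not an enum.
enum class Screen { LOADING, TERMS, APP }

fun screenFor(hasAcceptedTerms: Boolean?): Screen = when (hasAcceptedTerms) {
    null -> Screen.LOADING // DataStore has not emitted yet: show a placeholder
    false -> Screen.TERMS  // not accepted: show TermsAndConditionsScreen
    true -> Screen.APP     // accepted: show AppNavigation
}

fun main() {
    // Starting from a null initial value avoids the brief flash of APP
    // that an optimistic initial = true can cause.
    println(screenFor(null))  // LOADING
    println(screenFor(false)) // TERMS
    println(screenFor(true))  // APP
}
```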
This PR integrates the IntroScreen as a startup splash screen and improves the resource management of the AI engines by ensuring that generation tasks are correctly cancelled when their associated flows are closed.
Key changes:
- `IntroScreen` now accepts an `onFinished` callback which is invoked after its internal progress animation completes. `MainActivity` was updated to show this screen first and manage the transition to the rest of the application.
- In `MainActivity`, `rememberSaveable` is used for the `showIntro` state to prevent the splash screen from re-triggering on device rotation or other configuration changes.
- `LlmModelWorker` now properly calls `ggufStopGeneration()` and `stopDiffusionGeneration()` within the `awaitClose` block of their respective streaming flows. This prevents the underlying engines from continuing to process tokens or image steps after the UI has cancelled the request.

PR created automatically by Jules for task 12681402008920270218 started by @Godzilla675
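The `awaitClose` cleanup pattern described above can be sketched with a stand-in engine (`FakeEngine`, `generate`, and `stopGeneration` are illustrative names; the real worker calls `ggufStopGeneration()` / `stopDiffusionGeneration()`):

```kotlin
import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.callbackFlow
import kotlinx.coroutines.flow.take
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.runBlocking

// Stand-in for the native engine; illustrative, not the PR's actual API.
class FakeEngine {
    var stopped = false
    fun generate(onToken: (String) -> Unit) =
        listOf("Hello", ",", " ", "world").forEach { if (!stopped) onToken(it) }
    fun stopGeneration() { stopped = true }
}

// Tokens are bridged into a Flow; the awaitClose block runs when the
// collector completes or cancels, which is where the engine must be stopped.
fun tokenFlow(engine: FakeEngine) = callbackFlow {
    engine.generate { token -> trySend(token) }
    awaitClose { engine.stopGeneration() }
}

fun main() = runBlocking {
    val engine = FakeEngine()
    val firstTwo = tokenFlow(engine).take(2).toList() // cancels after 2 tokens
    println(firstTwo)        // [Hello, ,]
    println(engine.stopped)  // true: awaitClose fired on cancellation
}
```

Without the `awaitClose` cleanup, `stopped` would stay `false` after cancellation and the engine would keep producing tokens nobody collects.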