Conversation
📝 Walkthrough (Summary by CodeRabbit)

Adds multi-provider RAG to `VectorStoreObject` (OpenAI and Google Vertex AI), new provider-aware public APIs (provider selection, bucket configuration, bulk upload/attach), Google Vertex AI and GCS helpers, Gemini/OpenAI grounding/payload updates, and an end-to-end Gemini RAG test.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant App as GenAIApp
    participant VStore as VectorStoreObject
    participant Adapter as Provider Adapter
    participant RAG as RAG API / GCS
    User->>App: newVectorStore(provider)
    App->>VStore: constructor(provider)
    VStore->>VStore: store provider
    User->>VStore: setBucketName(bucketAddress)
    VStore->>VStore: store bucket
    User->>VStore: uploadAndAttachFiles(blobs, attrs)
    VStore->>Adapter: resolve provider
    Adapter->>RAG: upload/import files (batch or single)
    RAG-->>Adapter: fileIds / operation
    Adapter->>RAG: attachFile(s)
    RAG-->>Adapter: success
    User->>VStore: listFiles()
    VStore->>Adapter: listFiles()
    Adapter->>RAG: query files
    RAG-->>Adapter: files list
    Adapter-->>VStore: files
    VStore-->>User: files
```
```mermaid
sequenceDiagram
    actor User
    participant Chat as Chat Interface
    participant VStore as VectorStoreObject
    participant GenAI as GenAI API
    User->>Chat: send message + vectorStore reference
    Chat->>VStore: getProvider()
    VStore-->>Chat: provider
    Chat->>Chat: sanitize payload for provider
    Chat->>GenAI: send message with retrieval tooling + attachments
    GenAI-->>Chat: response (+ grounding metadata)
    Chat->>Chat: attach/propagate grounding metadata
    Chat-->>User: response with sources
```
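Read together, the two diagrams suggest the following end-to-end usage. This is a hypothetical sketch assembled from the method names mentioned in this review (`newVectorStore`, `setBucketName`, `uploadAndAttachFiles`, `addVectorStores`); exact signatures, return values, and the sample blob are assumptions, not the library's confirmed API.

```js
// Hypothetical end-to-end flow, assuming the provider-aware API described in the walkthrough.
// GenAIApp.setGeminiAuth(projectId, region) is assumed to have been called for the Google provider.
function exampleGeminiRagFlow() {
  const store = GenAIApp.newVectorStore("google"); // provider selection (assumed signature)
  store.setName("my-rag-corpus");
  store.createVectorStore();
  store.setBucketName("my-gcs-bucket");            // staging bucket for uploads

  // Upload blobs to GCS and attach them to the RAG corpus in one call.
  const blob = Utilities.newBlob("Paris is the capital of France.", "text/plain", "facts.txt");
  store.uploadAndAttachFiles([blob], { category: "docs" });

  // Query with retrieval tooling; grounding metadata is propagated on the response.
  const response = GenAIApp.newChat()
    .addMessage("Using the provided documents, what is the capital of France?")
    .addVectorStores(store.getId())
    .run({ model: "gemini-2.5-flash" });
  Logger.log(response);
}
```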
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (1)
525-525: ⚠️ Potential issue | 🟡 Minor: Reference section header still says "OpenAI vector store" despite multi-provider support.

Line 525 reads "A VectorStoreObject represents an OpenAI vector store." but the class now supports both OpenAI and Google Vertex AI RAG. This should be updated to reflect the provider-agnostic nature.

```diff
-A `VectorStoreObject` represents an OpenAI vector store.
+A `VectorStoreObject` represents a vector store (OpenAI or Google Vertex AI RAG).
```

src/code.gs (1)
911-920: ⚠️ Potential issue | 🟠 Major: No validation that GCP auth is configured when using the Google provider.

If a user calls `GenAIApp.newVectorStore("google").setName("test").createVectorStore()` without first calling `setGeminiAuth()`, `gcpProjectId` is `""` and `_createRagCorpus` will construct an invalid URL, producing a confusing API error instead of a clear message.

Suggested guard in `createVectorStore`:

```diff
 this.createVectorStore = function () {
   if (!name) throw new Error("[GenAIApp] - Please specify your Vector Store name using the GenAiApp.newVectorStore().setName() method before creating it.");
+  if (providerType === "google" && !gcpProjectId) {
+    throw new Error("[GenAIApp] - Please set your GCP project auth using GenAIApp.setGeminiAuth(projectId, region) before creating a Google RAG vector store.");
+  }
   try {
     id = rag.createVectorStore(name);
```
🤖 Fix all issues with AI agents
In `@README.md`:
- Line 554: Add a single trailing newline at the end of the README so the file
ends with one newline character; locate the final line containing "Happy coding
and enjoy building with the **GenAIApp** library!" and ensure you append one
newline character after that line (do not add additional blank lines).
In `@src/code.gs`:
- Line 25: The hardcoded ragRegion ("europe-west4") creates a data-residency
mismatch with setGeminiAuth's region; change ragRegion to default to the same
region set by setGeminiAuth (or add a public setter like setRagRegion) and
ensure all RAG corpus operations (create/import/list/delete) read that variable
instead of the hardcoded string; update the variable initialization and
setGeminiAuth to assign the region to ragRegion when provided, or implement and
document a setRagRegion function and use that variable in the RAG-related calls.
- Around line 854-863: The constructor currently accepts a provider string and
immediately calls _resolveRagProvider(provider) which silently falls back to
OpenAI for unknown values; add explicit validation of the incoming provider
parameter before or immediately after calling _resolveRagProvider: check
provider against the supported provider list (or verify that _resolveRagProvider
returned a value matching the input), and if it doesn't match, either throw a
descriptive error or emit a clear warning (using the existing logging mechanism)
that the provided provider is unrecognized and which provider will be used
instead; reference the constructor, provider/providerType variables, and
_resolveRagProvider when implementing this guard.
- Around line 753-766: The code uses addedVectorStores directly to build
ragCorpusIds and may mix providers (e.g., OpenAI IDs) into a Google RAG resource
path; update the logic so only GCP-backed vector stores are used for Vertex RAG.
Either change the shape of addedVectorStores to store provider metadata (e.g., {
id, provider }) and filter ragCorpusIds =
Object.values(addedVectorStores).filter(v => v.provider === 'gcp').map(v =>
v.id), or validate each id before pushing into payload.tools (check provider
field or a GCP-specific prefix) and skip/log incompatible IDs; keep the existing
variables ragCorpusIds, payload.tools, numberOfAPICalls, gcpProjectId,
ragRegion, and maxNumOfChunks when implementing the filter/validation.
- Around line 2456-2473: The code repeatedly calls _listFilesInRagCorpus(ragId)
per batch and rebuilds uriToId, causing O(corpus × batches) work; instead either
(A) move the listing/map creation out of the per-batch loop and build uriToId
once after all imports complete (then resolve normalizedUris → ragFileId and
push into allRagFileIds), or (B) before each import capture the existing file
count (or set of URIs) and after the batch only iterate the new slice/uris to
update uriToId and allRagFileIds; update the logic around _listFilesInRagCorpus,
uriToId, normalizedUris and allRagFileIds accordingly so you no longer re-list
the entire corpus every batch.
- Around line 2347-2359: The JSDoc for _importFileFromBucketToRagCorpus includes
a stale `@param` for "attributes" that is not present in the function signature;
remove the line "@param {Object} attributes - JSON object with the attributes of
the file..." from the JSDoc (or add the parameter to the function if the
intention was to accept it), and ensure the remaining `@param` entries (gcsPath,
vectorStoreID, maxChunkSize, chunkOverlap) match the actual function parameters
and ordering in _importFileFromBucketToRagCorpus.
- Around line 2790-2791: The attachFile adapter currently drops the attributes
parameter—update the attachFile implementation so it forwards the attributes
into _importFileFromBucketToRagCorpus (i.e., call
_importFileFromBucketToRagCorpus(fileId, vectorStoreId, attributes, maxChunk,
overlap)) or, if the Google RAG provider cannot accept attributes, emit a clear
warning when attributes are provided; reference the attachFile adapter and the
helper _importFileFromBucketToRagCorpus (and mirror behavior of
_attachFileToVectorStore) so callers like uploadAndAttachFile(blob, { category:
"docs" }) either have their attributes preserved or see a logged warning.
- Around line 2134-2145: Before calling JSON.parse on the operation response,
check the HTTP status via response.getResponseCode() after
UrlFetchApp.fetch(operationUrl, options); if the status is not in the 2xx range,
throw a descriptive error containing the status code and the raw response body
(response.getContentText()) so non-JSON error pages don’t get parsed; then
proceed to JSON.parse only for successful responses and continue the existing
logic that checks result.done and result.error. Use the existing variables
operationUrl, options, response, and result to locate and update the code.
- Around line 2095-2106: The code assumes
_waitForGoogleOperation(operationResult.name) returns a string but that function
can return undefined; update the block using _waitForGoogleOperation,
operationResult and result so you guard against undefined before calling .split:
call _waitForGoogleOperation and check that the returned result is a non-empty
string (e.g., if (!result) { throw new Error(...) }) before using
result.split('ragCorpora/'), adjust the Logger.log to include a fallback id or
error context when result is missing, and return only after verifying
result.split(...) produces a value; ensure you reference the existing symbols
_waitForGoogleOperation, operationResult, result, Logger.log and the return path
so no .split is called on undefined.
- Around line 976-1004: uploadAndAttachFiles lacks error handling in the Google
batch path: if rag.uploadFile throws part-way through the for-loop the
already-uploaded blobs remain in GCS but are not attached; wrap the upload loop
in a try-catch and collect only successfully uploaded gcsUris so that on error
you either continue uploading remaining files or at minimum call
rag.attachFilesBatch with the successful gcsUris and id to attach what
succeeded, and surface/log the per-file upload error (use uploadAndAttachFile's
try-catch pattern as a guide). Also fix the minor argument formatting when
calling rag.attachFilesBatch to use "id, max_chunk_size" instead of "id
,max_chunk_size". Ensure you reference and update the upload loop around
rag.uploadFile and the final rag.attachFilesBatch call inside
uploadAndAttachFiles.
- Around line 2267-2269: The upload URL is built with whatever bucketName is
provided (e.g., "gs://my-bucket/path"), producing an invalid URL; update
_uploadFileToBucket to normalize the bucket name by stripping a leading "gs://"
if present and removing any path portion (take the segment before the first
"/"), then use that cleaned bucket name (encoded via encodeURIComponent) when
constructing the GCS upload URL; ensure any callers like setBucketName still
accept "gs://..." but _uploadFileToBucket always derives the plain bucket name
for the URL.
- Around line 757-765: Replace the deprecated similarityTopK usage inside the
retrieval.vertex_rag_store block by constructing a ragRetrievalConfig object
with topK set to maxNumOfChunks || 5; specifically, in the same block where
retrieval -> vertex_rag_store -> rag_resources (which uses ragCorpusIds,
gcpProjectId, ragRegion) is defined, remove similarityTopK and add
ragRetrievalConfig: { topK: maxNumOfChunks || 5 } so the VertexRagStore uses
ragRetrievalConfig.topK instead of similarityTopK.
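For that last item, a minimal sketch of the rewritten tool entry, reusing the variable names already present in this block (`payload.tools`, `ragCorpusIds`, `gcpProjectId`, `ragRegion`, `maxNumOfChunks`); field casing follows the review's wording and is an assumption about what the API accepts:

```js
// Sketch only: same retrieval tool entry, with ragRetrievalConfig.topK
// in place of the deprecated similarityTopK field.
payload.tools.push({
  retrieval: {
    vertex_rag_store: {
      rag_resources: ragCorpusIds.map(ragId => ({
        rag_corpus: `projects/${gcpProjectId}/locations/${ragRegion}/ragCorpora/${ragId}`
      })),
      ragRetrievalConfig: { topK: maxNumOfChunks || 5 }
    }
  }
});
```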
```
const ragCorpusIds = Object.keys(addedVectorStores);

if (ragCorpusIds?.length > 0 && numberOfAPICalls < 1 && !!gcpProjectId) {
  payload.tools.push({
    google_search: {}
    retrieval: {
      vertex_rag_store: {
        rag_resources: ragCorpusIds.map(ragId => ({
          rag_corpus: `projects/${gcpProjectId}/locations/${ragRegion}/ragCorpora/${ragId}`
        })),
        similarityTopK: maxNumOfChunks || 5
      }
    }
  });
}
```
Cross-provider vector store IDs can cause silent failures.
`addedVectorStores` is a shared global dictionary that doesn't track which provider each ID belongs to. If a user creates an OpenAI vector store (getting an OpenAI ID like `vs_abc123`) and then calls `chat.run({ model: "gemini-2.5-flash" })`, this code will attempt to use that OpenAI ID as a Google RAG corpus ID in the resource path `projects/.../ragCorpora/vs_abc123`, which will silently fail or error.
Consider storing the provider alongside each vector store ID, or validating compatibility between the vector store provider and the model being used.
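One way to do this, sketched under the assumption that registration is changed to store `{ id, provider }` per entry (the actual shape of `addedVectorStores` in code.gs may differ):

```js
// Assumed registration shape: addedVectorStores[id] = { id: "...", provider: "google" | "openai" }.
const ragCorpusIds = Object.values(addedVectorStores)
  .filter(store => store.provider === "google") // skip OpenAI IDs such as "vs_abc123"
  .map(store => store.id);
```

The rest of the quoted block can stay as-is, since only Google-backed IDs would then reach the `rag_corpus` resource path.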
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
477-488: ⚠️ Potential issue | 🟡 Minor: `setRagRegion` and `newVectorStore(provider)` are missing from the Reference section.

`GenAIApp.setRagRegion(region)` was added to the public API (line 2944 in code.gs) but isn't listed in the Reference table. Also, `newVectorStore()` at line 482 should document the optional `provider` parameter.

📝 Suggested additions:

```diff
-- `newVectorStore()`: Create a new `VectorStoreObject`.
+- `newVectorStore(provider)`: Create a new `VectorStoreObject`. Optional `provider` parameter: `"openai"` (default) or `"google"`.
+- `setRagRegion(region)`: Override the region used for Google RAG operations (default: `europe-west4`).
```
🤖 Fix all issues with AI agents
In `@README.md`:
- Around line 286-295: Update the README Google Vertex AI RAG example to
explicitly call setRagRegion so RAG operations use the intended region (it’s
separate from setGeminiAuth); modify the example that uses
GenAIApp.setGeminiAuth(...) and
GenAIApp.newVectorStore("google")...createVectorStore() to also call
setRagRegion("europe-west4") (or demonstrate matching the same region passed to
setGeminiAuth) and add a short note explaining that setGeminiAuth configures
Gemini auth region while setRagRegion controls RAG endpoints so both should be
set when they must match.
In `@src/code.gs`:
- Around line 2809-2826: The _googleRagAdapter object mixes arrow-function
properties (createVectorStore, retrieveVectorStoreInformation, attachFilesBatch,
listFiles, deleteVectorStore) with shorthand method definitions (uploadFile,
deleteFile); pick a single style and make them consistent—either convert
uploadFile and deleteFile to arrow properties (e.g., uploadFile: (blob, bucket)
=> { ... }) or convert the arrow properties to shorthand methods (e.g.,
createVectorStore(name) { ... }) so all functions in _googleRagAdapter use the
same syntax, preserving the existing internal logic and names.
- Around line 2932-2947: The returned object literal is missing a comma between
the setPrivateInstanceBaseUrl and setRagRegion properties, causing a parse
error; open the object that contains setPrivateInstanceBaseUrl and add a
trailing comma after its closing brace so the next property setRagRegion is
comma-separated, ensuring the object literal syntax is valid (refer to the
functions setPrivateInstanceBaseUrl and setRagRegion to locate the insertion
point).
- Around line 2828-2833: The attachFile property currently uses invalid syntax;
change it to a valid function value (either a method shorthand or a function
expression) so the object has a callable attachFile, e.g. attachFile(fileId,
vectorStoreId, attributes, maxChunk, overlap) { ... } or attachFile:
function(fileId, vectorStoreId, attributes, maxChunk, overlap) { ... }, and
inside ensure you return the _importFileFromBucketToRagCorpus(...) call so the
caller receives its result; reference attachFile and
_importFileFromBucketToRagCorpus to locate and update the code.
- Around line 2288-2295: In _uploadFileToBucket the computed
cleanBucketName/encodedBucket are not used so a gs:// prefix can break the
upload; replace the raw bucketName interpolation in the url construction with
the cleaned and encoded bucket (use encodedBucket, which should be computed from
cleanBucketName) and keep encodedName for the object name (or remove
encodedBucket if you prefer decoding elsewhere) so the URL becomes built from
the cleaned/encoded bucket and encodedName rather than the original bucketName.
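For that last item, a minimal sketch of the normalization inside `_uploadFileToBucket`, assuming `bucketName` may arrive as `gs://my-bucket/path` and that the standard GCS JSON media-upload endpoint is used (the exact URL construction in code.gs is an assumption):

```js
// Strip an optional "gs://" prefix and any path portion, keeping only the bucket name.
const cleanBucketName = bucketName.replace(/^gs:\/\//, "").split("/")[0];
const encodedBucket = encodeURIComponent(cleanBucketName);
const encodedName = encodeURIComponent(blob.getName());
const url = `https://storage.googleapis.com/upload/storage/v1/b/${encodedBucket}/o?uploadType=media&name=${encodedName}`;
```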
…r-gemini-vector-store-rag Add Gemini RAG vector store test pipeline
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (2)
229-234: ⚠️ Potential issue | 🟡 Minor: Duplicated warning about `reasoning_effort`.

Lines 233 and 234 both warn about the `reasoning_effort` parameter being supported only by reasoning-capable OpenAI models. The second warning (line 234) appears to be a duplicate of the sentence at the end of line 233.

📝 Proposed fix:

```diff
 The "reasoning_effort" parameter is supported only by reasoning-capable OpenAI models and ignored by all others.
-⚠️ **Warning:** The "reasoning_effort" parameter is supported only by reasoning-capable OpenAI models and ignored by all others.
```
491-491: ⚠️ Potential issue | 🟡 Minor: `newVectorStore()` reference entry should mention the `provider` parameter.

The reference section documents `newVectorStore()` without the optional `provider` argument, but the code and usage examples clearly show `newVectorStore("google")`.

📝 Proposed fix:

```diff
-- `newVectorStore()`: Create a new `VectorStoreObject`.
+- `newVectorStore([provider])`: Create a new `VectorStoreObject`. `provider` defaults to `"openai"`; pass `"google"` for Vertex AI RAG.
```
🤖 Fix all issues with AI agents
In `@src/code.gs`:
- Around line 2943-2945: The parameter name `region` in setRagRegion(region)
shadows the module-level variable `region` (used for Gemini auth); rename the
parameter (e.g., to ragRegionValue) and update the function body to assign
ragRegion = ragRegionValue so the module-level `region` name is not reused and
accidental confusion is avoided when maintaining setRagRegion and the
module-level region variable.
- Around line 2410-2411: The code parses Google API responses (e.g., in
_importFileFromBucketToRagCorpus, _deleteFileInRagCorpus, _deleteRagCorpus)
immediately after UrlFetchApp.fetch into operationResult without checking HTTP
status; add a small shared helper (e.g., validateFetchResponse or
ensureSuccessfulResponse) that accepts the UrlFetchApp.fetch result, checks
response.getResponseCode() for 2xx and throws or returns a clear error
(including response.getContentText() and the code) when not successful, then
replace direct JSON.parse(operationResponse.getContentText()) calls in those
functions with a call to the helper and parse only after the helper confirms
success so all three functions use the same validated path.
- Around line 2010-2037: The function _getRagFileIdFromGcsUri currently scans
_listFilesInRagCorpus(ragId) which is expensive for large corpora; instead
change the approach to either (A) call the Vertex AI/ManagedIndex/IndexEndpoint
file-listing API with a filter for the specific gcsUri (or gcsSource.uri) so you
only retrieve the matching file, or (B) have the import path that uploads the
GCS URI return and persist the created ragFile ID (e.g., update the import
method to return fileId and store it in your corpus index) and then use that
stored mapping here. Update _getRagFileIdFromGcsUri to use the new filtered API
call or lookup the persisted mapping (reference function name
_getRagFileIdFromGcsUri and the import/upload routine that creates rag files)
and remove the full-corpus scan.
- Line 1017: The call to rag.attachFilesBatch(id, gcsUris, ...) passes
vectorStoreId first but _googleRagAdapter.attachFilesBatch currently expects
(gcsUris, ragId, ...), causing the ID and URIs to swap; fix by aligning the
adapter signature to accept (vectorStoreId, gcsUris, max_chunk_size,
chunk_overlap) or alternatively change the call sites to pass (gcsUris, id,
...). Update the adapter function (_googleRagAdapter.attachFilesBatch) and any
internal call to _importFilesFromBucketToRagCorpusBatch so the parameter
ordering matches (gcsUris, ragId, ...) or vice versa consistently across
rag.attachFilesBatch and its adapter; ensure the same fix is applied to the
other occurrence noted (the second attachFilesBatch usage).
- Around line 2478-2483: The code calls UrlFetchApp.fetch(url, options) with
muteHttpExceptions:true and immediately JSON.parse(response.getContentText()),
which can throw on non-JSON error bodies; before calling JSON.parse or checking
data?.name, inspect response.getResponseCode() and if it is not a 2xx status
throw or return a descriptive error that includes the response code and
response.getContentText(); only attempt JSON.parse when the status is 200–299
and keep the existing data?.name validation afterwards so failures include both
HTTP status and body for easier debugging.
- Around line 2107-2108: The response from UrlFetchApp.fetch is being parsed
without checking HTTP status; update the code around the fetch call that assigns
operationResponse and operationResult to first inspect
operationResponse.getResponseCode() (or equivalent) and handle non-2xx codes
before calling JSON.parse — e.g., log or throw a descriptive error including
operationResponse.getContentText() when the status is not successful, mirroring
the guard used in _importFilesFromBucketToRagCorpusBatch; reference the
variables operationResponse, options, and operationResult so you modify the same
fetch/parse block.
In `@src/testFunctions.gs`:
- Around line 130-132: The guard in testGeminiVectorStoreRagPipeline only checks
truthiness of GCP_PROJECT_ID so the placeholder "YOUR_GCP_PROJECT_ID" slips
through; update the check in testGeminiVectorStoreRagPipeline to also detect and
reject placeholder values (e.g., compare against "YOUR_GCP_PROJECT_ID" or a
pattern like starting with "YOUR_" or not matching expected project-id regexp)
and throw a clear Error instructing to set a real GCP_PROJECT_ID before running
the test.
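For that last item, a minimal guard sketch for `testGeminiVectorStoreRagPipeline`; the placeholder string and error wording are assumptions:

```js
// Reject both an unset value and an obvious placeholder before hitting any GCP API.
if (!GCP_PROJECT_ID || GCP_PROJECT_ID === "YOUR_GCP_PROJECT_ID" || GCP_PROJECT_ID.startsWith("YOUR_")) {
  throw new Error("[testGeminiVectorStoreRagPipeline] Set a real GCP_PROJECT_ID before running this test.");
}
```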
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@src/code.gs`:
- Around line 1458-1460: The code currently mutates the responseMessage object
by assigning responseMessage.groundingMetadata =
firstCandidate?.groundingMetadata, which can leak into contents later (see
_handleGeminiToolCalls and cleanContents). Fix by not mutating responseMessage:
instead extract groundingMetadata from firstCandidate and return or attach it as
a separate value (e.g., alongside the response payload) or clone responseMessage
before adding the property; update callers in _handleGeminiToolCalls to consume
the separate groundingMetadata return value (or use the cloned object) and leave
the original responseMessage/content objects unmodified so cleanContents remains
the single source of truth.
- Around line 986-998: The fallback in uploadAndAttachFiles (when
rag.attachFilesBatch is not a function) currently pushes every
uploadAndAttachFile result into results, but uploadAndAttachFile returns
undefined on failure, so filter out failed uploads before returning: after
calling this.uploadAndAttachFile(blobs[i], attrs) only push non-undefined values
(or filter results for truthy entries) so callers don't receive undefined
entries; update the loop in uploadAndAttachFiles (and keep the existing id and
rag.attachFilesBatch checks) to collect and return only successful upload
objects.
In `@src/testFunctions.gs`:
- Around line 1-6: GCP_PROJECT_ID is hardcoded in src/testFunctions.gs (const
GCP_PROJECT_ID = "support-add-on"), which exposes a real project id; change this
to a neutral placeholder or load it from runtime configuration (e.g.,
Script/Document/User Properties via PropertiesService) and fall back to a clear
placeholder if missing, and update any code that references GCP_PROJECT_ID to
use the property lookup (or throw a helpful error) so users are guided to set
their own project id; also update README/docs to explain where to set the
GCP_PROJECT_ID property.
- Around line 234-245: The current items.forEach callback throws inside the loop
which aborts deletion of remaining objects; change the loop over items (the
block creating deleteUrl and calling UrlFetchApp.fetch) to iterate with a
for...of or standard for loop and wrap each delete call in a try/catch so
failures are logged (e.g., using console.error or Logger.log) and the loop
continues; optionally collect failed item names into an array for diagnostics
and ensure the subsequent bucket delete logic checks/handles those failures
before attempting to delete the bucket.
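For that last item, a sketch of the delete loop rewritten with `for...of` and per-item error handling; the URL shape follows the standard GCS JSON API, and the `bucketName` and `token` variables are assumptions about the surrounding test code:

```js
// Keep deleting remaining objects even if one delete fails; collect failures for diagnostics.
const failedDeletes = [];
for (const item of items) {
  const deleteUrl = `https://storage.googleapis.com/storage/v1/b/${bucketName}/o/${encodeURIComponent(item.name)}`;
  try {
    UrlFetchApp.fetch(deleteUrl, { method: "delete", headers: { Authorization: `Bearer ${token}` } });
  } catch (e) {
    Logger.log(`Failed to delete ${item.name}: ${e}`);
    failedDeletes.push(item.name);
  }
}
// Only attempt the bucket delete afterwards if failedDeletes is empty.
```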
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/testFunctions.gs (1)
9-19: 🧹 Nitpick | 🔵 Trivial: Note: `testGeminiVectorStoreRagPipeline` will halt `testAll` if `GCP_PROJECT_ID` is unset.

Since it's the last call in the list this is fine today, but if more tests are appended later, the thrown error will skip them. Consider wrapping this call in a try/catch or gating it, consistent with how a missing config should degrade gracefully in a test suite.
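A minimal sketch of that gating, assuming `testAll` simply calls the test functions in sequence:

```js
// Keep the rest of testAll runnable even when the Gemini RAG test is not configured.
try {
  testGeminiVectorStoreRagPipeline();
} catch (e) {
  Logger.log(`Skipping testGeminiVectorStoreRagPipeline: ${e.message}`);
}
```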
🤖 Fix all issues with AI agents
In `@src/testFunctions.gs`:
- Around line 152-157: The test currently calls
vectorStore.uploadAndAttachFile(...) then immediately creates a chat
(GenAIApp.newChat()...addVectorStores(vectorStore.getId())) and calls
chat.run(...), which can be flaky because indexing is asynchronous; add a short
wait or polling loop after ragFileId = vectorStore.uploadAndAttachFile(blob) —
e.g., call Utilities.sleep(...) for a small interval or poll
vectorStore.isIndexed/getIndexStatus until the uploaded document is available —
before constructing the chat and calling chat.run to ensure the new document is
indexed before assertion.
```js
ragFileId = vectorStore.uploadAndAttachFile(blob);

const chat = GenAIApp.newChat()
  .addMessage("Using the provided documents, what is the capital of France?")
  .addVectorStores(vectorStore.getId());
const response = chat.run({ model: GEMINI_MODEL, max_tokens: 10000 });
```
Potential flakiness: no delay between upload/indexing and query.
RAG indexing is typically asynchronous. Querying immediately after uploadAndAttachFile may return results that don't yet include the newly uploaded document, causing the "paris" assertion to fail intermittently. Consider adding a short polling loop or a Utilities.sleep() delay before the chat query to allow the index to propagate.
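A minimal sketch of such a wait, assuming `listFiles()` returns entries whose serialized form contains the rag file ID; the polling interval and attempt count are arbitrary choices, not measured indexing times:

```js
// Bounded wait for the uploaded file to become visible before querying.
ragFileId = vectorStore.uploadAndAttachFile(blob);
let indexed = false;
for (let attempt = 0; attempt < 6 && !indexed; attempt++) {
  Utilities.sleep(10 * 1000); // 10 seconds between checks
  const files = vectorStore.listFiles() || [];
  indexed = files.some(f => JSON.stringify(f).indexOf(ragFileId) !== -1);
}

const chat = GenAIApp.newChat()
  .addMessage("Using the provided documents, what is the capital of France?")
  .addVectorStores(vectorStore.getId());
const response = chat.run({ model: GEMINI_MODEL, max_tokens: 10000 });
```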
No description provided.