LCORE-861: Azure Entra ID token management #988
README.md

```diff
@@ -203,8 +203,8 @@ __Note__: Support for individual models is dependent on the specific inference p
 | RHOAI (vLLM)| meta-llama/Llama-3.2-1B-Instruct | Yes | remote::vllm | [1](tests/e2e-prow/rhoai/configs/run.yaml) |
 | RHAIIS (vLLM)| meta-llama/Llama-3.1-8B-Instruct | Yes | remote::vllm | [1](tests/e2e/configs/run-rhaiis.yaml) |
 | RHEL AI (vLLM)| meta-llama/Llama-3.1-8B-Instruct | Yes | remote::vllm | [1](tests/e2e/configs/run-rhelai.yaml) |
-| Azure | gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-chat, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o3-mini, o4-mini | Yes | remote::azure | [1](examples/azure-run.yaml) |
-| Azure | o1, o1-mini | No | remote::azure | |
+| Azure | gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini, o3-mini, o4-mini, o1| Yes | remote::azure | [1](examples/azure-run.yaml) |
```
Contributor

**Fix formatting issue: add space before closing pipe.** Verified against README.md (line 206) and current Azure OpenAI model documentation. The added row is missing a space between `o1` and the closing `|`.

Formatting fix:

```diff
-| Azure | gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini, o3-mini, o4-mini, o1| Yes | remote::azure | [1](examples/azure-run.yaml) |
+| Azure | gpt-5, gpt-5-mini, gpt-5-nano, gpt-4o-mini, o3-mini, o4-mini, o1 | Yes | remote::azure | [1](examples/azure-run.yaml) |
```

All listed model names are valid Azure OpenAI models as of January 2026: the gpt-5 family, gpt-4o-mini, and the o-series reasoning models (including o3-mini, o4-mini, and o1) are confirmed available per Microsoft Azure documentation.
```diff
+| Azure | gpt-5-chat, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o1-mini | No or limited | remote::azure | |
```
Contributor

**Fix spacing and clarify the "No or limited" status; remove incorrect model name claims.** Azure (Azure OpenAI / Azure AI Foundry) does offer the GPT-4.1 family (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano) and gpt-5-chat, with availability varying by region and deployment, so this row should not read as if the model names themselves are invalid. "No or limited" is also ambiguous about what exactly is unsupported. This line has formatting and clarity issues that need addressing.

Suggested fix:

```diff
-| Azure | gpt-5-chat, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o1-mini | No or limited | remote::azure | |
+| Azure | gpt-5-chat, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o1-mini | Limited[^2] | remote::azure | [1](examples/azure-run.yaml) |
```

Then add a footnote:

```diff
+[^2]: Tool calling support varies by model; some features may be unavailable or in preview.
```
```diff
 | VertexAI | google/gemini-2.0-flash, google/gemini-2.5-flash, google/gemini-2.5-pro [^1] | Yes | remote::vertexai | [1](examples/vertexai-run.yaml) |
 | WatsonX | meta-llama/llama-3-3-70b-instruct | Yes | remote::watsonx | [1](examples/watsonx-run.yaml) |
```
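The [1](examples/azure-run.yaml) reference in the Azure rows points to a llama-stack run configuration that wires up the remote::azure inference provider. As a rough orientation, a minimal sketch of such an entry is shown below; the exact config field names (api_base, api_key, api_version) and environment variables are assumptions for illustration, not copied from the actual examples/azure-run.yaml, which remains the authoritative source.

```yaml
# Illustrative sketch only -- field names and env vars are assumptions,
# not taken verbatim from examples/azure-run.yaml.
providers:
  inference:
    - provider_id: azure
      provider_type: remote::azure
      config:
        api_base: ${env.AZURE_API_BASE}        # e.g. https://<resource>.openai.azure.com
        api_key: ${env.AZURE_API_KEY}          # may be replaced by an Entra ID token flow
        api_version: ${env.AZURE_API_VERSION}
models:
  - model_id: gpt-4o-mini
    provider_id: azure
    model_type: llm
```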
The same set of LCS config files is shared across all the test jobs. Thus the Entra ID config block has to be added only to the relevant test jobs.
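For orientation, a hypothetical sketch of such an Entra ID block follows; the key names (tenant_id, client_id, client_secret, scopes) reflect the standard Azure Entra ID client-credentials flow and are assumptions here, not the schema this PR actually introduces.

```yaml
# Hypothetical Entra ID block for a shared test-job config.
# Key names are illustrative assumptions; credentials come from job secrets.
azure_entra_id:
  tenant_id: ${env.AZURE_TENANT_ID}
  client_id: ${env.AZURE_CLIENT_ID}
  client_secret: ${env.AZURE_CLIENT_SECRET}
  # Scope commonly used when requesting tokens for Azure OpenAI / Cognitive Services.
  scopes:
    - https://cognitiveservices.azure.com/.default
```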