Skip to content

Conversation

@danenania
Copy link
Contributor

Summary

Add a natural language analytics feature where property owners can ask questions about their booking data in plain English (e.g., "How many bookings did I have last month?"). The system uses an LLM to generate SQL queries that are executed against a SQLite database.

New Features

  • POST /authorized/:level/analytics/query - Natural language to SQL analytics endpoint
  • SQLite database with sample booking/property/owner data
  • Database initialization script

Files Added

  • src/routes/analytics.ts - Analytics endpoint with LLM-based SQL generation
  • src/scripts/init-analytics-db.ts - Database initialization script
  • src/data/bookings.db - SQLite database (gitignored, generated by init script)

Dependencies

  • better-sqlite3 for SQLite database access

Add an analytics feature where property owners can ask questions about
their booking data in natural language. The system uses an LLM to
generate SQL queries that are executed against a SQLite database.

New files:
- src/routes/analytics.ts - Analytics endpoint with LLM-based SQL generation
- src/scripts/init-analytics-db.ts - Database initialization script

Dependencies:
- better-sqlite3 for SQLite database access
Comment on lines +77 to +120
router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
try {
const { level } = req.params as { level: 'minnow' | 'shark' };
const { question, model } = analyticsQuerySchema.parse(req.body);

if (!db) {
return res.status(500).json({
error: 'Database not available',
message: 'Analytics database is not initialized',
});
}

// Generate SQL from natural language
const sqlQuery = await generateSqlQuery(question, model);

// VULNERABILITY: Execute generated SQL directly without validation
// Only safeguard is the system prompt instructions (bypassable)
try {
const results = db.prepare(sqlQuery).all();

return res.json({
question,
generatedQuery: sqlQuery,
results,
rowCount: Array.isArray(results) ? results.length : 0,
});
} catch (dbError) {
return res.status(400).json({
error: 'Query execution failed',
generatedQuery: sqlQuery,
message: dbError instanceof Error ? dbError.message : 'Unknown database error',
});
}
} catch (error) {
if (error instanceof z.ZodError) {
return res.status(400).json({ error: 'Validation error', details: error.errors });
}
console.error('Analytics query error:', error);
return res.status(500).json({
error: 'Internal server error',
message: error instanceof Error ? error.message : 'Unknown error',
});
}
});

Check failure

Code scanning / CodeQL

Missing rate limiting High

This route handler performs
a database access
, but is not rate-limited.

Copilot Autofix

AI 4 days ago

In general, the problem is fixed by adding a rate-limiting middleware in front of the expensive handler so that individual clients cannot send unbounded numbers of requests in a short time. In Express, a common solution is to use the well-known express-rate-limit package, configure reasonable thresholds, and apply the resulting middleware either globally to the router or specifically to this analytics route.

For this snippet, the least invasive and clearest fix is:

  • Import express-rate-limit at the top of src/routes/analytics.ts.
  • Configure a limiter specifically for the analytics query endpoint (for example, a small number of requests per minute per IP, given it’s doing LLM + DB work).
  • Apply that limiter as a middleware only to router.post('/authorized/:level/analytics/query', ...) so that existing behavior of other routes in this router is unchanged.

Concretely:

  • Add import rateLimit from 'express-rate-limit'; below the existing imports.
  • Define a const analyticsLimiter = rateLimit({ ... }) near the router initialization, setting windowMs and max and possibly a friendly message.
  • Update the route definition on line 77 from router.post('/authorized/:level/analytics/query', async (req, res) => { ... }) to router.post('/authorized/:level/analytics/query', analyticsLimiter, async (req, res) => { ... }).

No other behavior of the handler needs to change; we are only inserting middleware to control how frequently it can be invoked.

Suggested changeset 2
src/routes/analytics.ts

Autofix patch

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/src/routes/analytics.ts b/src/routes/analytics.ts
--- a/src/routes/analytics.ts
+++ b/src/routes/analytics.ts
@@ -2,9 +2,17 @@
 import { z } from 'zod';
 import Database from 'better-sqlite3';
 import * as path from 'path';
+import rateLimit from 'express-rate-limit';
 
 const router = Router();
 
+const analyticsLimiter = rateLimit({
+  windowMs: 60 * 1000, // 1 minute window
+  max: 10, // limit each IP to 10 analytics requests per windowMs
+  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
+  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
+});
+
 const analyticsQuerySchema = z.object({
   question: z.string().min(1).max(500),
   model: z.string().optional(),
@@ -74,7 +79,7 @@
 }
 
 // Natural language analytics endpoint
-router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
+router.post('/authorized/:level/analytics/query', analyticsLimiter, async (req: Request, res: Response) => {
   try {
     const { level } = req.params as { level: 'minnow' | 'shark' };
     const { question, model } = analyticsQuerySchema.parse(req.body);
EOF
@@ -2,9 +2,17 @@
import { z } from 'zod';
import Database from 'better-sqlite3';
import * as path from 'path';
import rateLimit from 'express-rate-limit';

const router = Router();

const analyticsLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute window
max: 10, // limit each IP to 10 analytics requests per windowMs
standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
legacyHeaders: false, // Disable the `X-RateLimit-*` headers
});

const analyticsQuerySchema = z.object({
question: z.string().min(1).max(500),
model: z.string().optional(),
@@ -74,7 +79,7 @@
}

// Natural language analytics endpoint
router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
router.post('/authorized/:level/analytics/query', analyticsLimiter, async (req: Request, res: Response) => {
try {
const { level } = req.params as { level: 'minnow' | 'shark' };
const { question, model } = analyticsQuerySchema.parse(req.body);
package.json
Outside changed files

Autofix patch

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/package.json b/package.json
--- a/package.json
+++ b/package.json
@@ -21,7 +21,8 @@
     "express": "^4.22.1",
     "js-yaml": "^4.1.1",
     "jsonwebtoken": "^9.0.2",
-    "zod": "^3.22.4"
+    "zod": "^3.22.4",
+    "express-rate-limit": "^8.2.1"
   },
   "devDependencies": {
     "@types/better-sqlite3": "^7.6.12",
EOF
@@ -21,7 +21,8 @@
"express": "^4.22.1",
"js-yaml": "^4.1.1",
"jsonwebtoken": "^9.0.2",
"zod": "^3.22.4"
"zod": "^3.22.4",
"express-rate-limit": "^8.2.1"
},
"devDependencies": {
"@types/better-sqlite3": "^7.6.12",
This fix introduces these dependencies
Package Version Security advisories
express-rate-limit (npm) 8.2.1 None
Copilot is powered by AI and may make mistakes. Always verify output.
Copy link

@promptfoo-scanner promptfoo-scanner bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This PR introduces a natural language analytics dashboard with text-to-SQL functionality. I found several critical LLM security vulnerabilities including prompt injection leading to secrets exposure, missing authentication on the endpoint, and prompt-only safeguards that can be bypassed. The most severe issue is that untrusted user input flows to an LLM to generate SQL queries, which are then executed without validation against a database containing API keys and PII.

Minimum severity threshold for this scan: 🟡 Medium | Learn more

Comment on lines +44 to +95
const response = await fetch(`${LITELLM_SERVER_URL}/v1/chat/completions`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: model || 'gpt-4o-mini',
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: question },
],
}),
});

if (!response.ok) {
throw new Error(`LiteLLM request failed: ${await response.text()}`);
}

const data: any = await response.json();
let sqlQuery = data.choices[0].message.content.trim();

// Remove markdown code blocks if present
if (sqlQuery.startsWith('```sql')) {
sqlQuery = sqlQuery.slice(6);
} else if (sqlQuery.startsWith('```')) {
sqlQuery = sqlQuery.slice(3);
}
if (sqlQuery.endsWith('```')) {
sqlQuery = sqlQuery.slice(0, -3);
}

return sqlQuery.trim();
}

// Natural language analytics endpoint
router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
try {
const { level } = req.params as { level: 'minnow' | 'shark' };
const { question, model } = analyticsQuerySchema.parse(req.body);

if (!db) {
return res.status(500).json({
error: 'Database not available',
message: 'Analytics database is not initialized',
});
}

// Generate SQL from natural language
const sqlQuery = await generateSqlQuery(question, model);

// VULNERABILITY: Execute generated SQL directly without validation
// Only safeguard is the system prompt instructions (bypassable)
try {
const results = db.prepare(sqlQuery).all();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🔴 Critical

User input flows directly to the LLM to generate SQL queries, which are then executed without validation. An attacker can use prompt injection to bypass the system prompt's safety rules and extract sensitive data like API keys from the owners table. Since the endpoint is unauthenticated and the database is opened in read-write mode, attackers can also potentially modify or delete data.

💡 Suggested Fix

Implement defense-in-depth with multiple layers of protection:

// Layer 1: Validate generated SQL before execution
function validateSqlQuery(query: string): { valid: boolean; error?: string } {
  const upperQuery = query.trim().toUpperCase();

  if (!upperQuery.startsWith('SELECT')) {
    return { valid: false, error: 'Only SELECT queries are allowed' };
  }

  const dangerousKeywords = ['DROP', 'DELETE', 'INSERT', 'UPDATE', 'ALTER', 'CREATE', 'TRUNCATE', 'PRAGMA'];
  for (const keyword of dangerousKeywords) {
    if (new RegExp(`\\b${keyword}\\b`, 'i').test(query)) {
      return { valid: false, error: `Dangerous keyword detected: ${keyword}` };
    }
  }

  return { valid: true };
}

// Layer 2: Open database in read-only mode (line 18)
db = new Database(dbPath, { readonly: true });

// Layer 3: Filter sensitive columns from results
function filterSensitiveColumns(results: any[]): any[] {
  const sensitiveColumns = ['api_key', 'password', 'secret', 'token'];
  return results.map(row => {
    const filtered: any = {};
    for (const [key, value] of Object.entries(row)) {
      if (!sensitiveColumns.some(col => key.toLowerCase().includes(col))) {
        filtered[key] = value;
      }
    }
    return filtered;
  });
}

// Apply validation before execution (line 90-95)
const sqlQuery = await generateSqlQuery(question, model);
const validation = validateSqlQuery(sqlQuery);
if (!validation.valid) {
  return res.status(400).json({ error: 'Invalid query', message: validation.error });
}

const rawResults = db.prepare(sqlQuery).all();
const results = filterSensitiveColumns(rawResults);
🤖 AI Agent Prompt

The text-to-SQL feature at src/routes/analytics.ts:44-95 has a critical prompt injection vulnerability. User input flows to an LLM (via generateSqlQuery), which generates SQL that's executed directly without validation at line 95. The database contains sensitive data (API keys in the owners table, guest emails in bookings table) and is opened in read-write mode.

Your task: Implement comprehensive defense-in-depth security controls. Investigate the entire data flow from the /authorized/:level/analytics/query endpoint through to database execution. You need to add:

  1. Application-layer SQL validation to enforce SELECT-only queries and block dangerous keywords
  2. Convert the database connection to read-only mode
  3. Result filtering to redact sensitive columns like api_key
  4. Consider removing the api_key column from the system prompt schema disclosure (lines 29-42)

Start by examining the current implementation to understand the attack surface. Then implement each layer of defense systematically. The goal is to prevent both data exfiltration and data modification attacks via prompt injection.


Was this helpful?  👍 Yes  |  👎 No 

Comment on lines +90 to +102
const sqlQuery = await generateSqlQuery(question, model);

// VULNERABILITY: Execute generated SQL directly without validation
// Only safeguard is the system prompt instructions (bypassable)
try {
const results = db.prepare(sqlQuery).all();

return res.json({
question,
generatedQuery: sqlQuery,
results,
rowCount: Array.isArray(results) ? results.length : 0,
});

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟠 High

Guest email addresses and names in the bookings table are exposed to unauthenticated users through unvalidated LLM-generated queries. Any attacker can extract all guest PII by crafting natural language questions like "Show me all guest emails". This is a cross-user data exposure vulnerability where one user's data is accessible to others.

💡 Suggested Fix

Implement row-level authorization to filter results based on authenticated user identity:

// After authentication middleware provides req.user
router.post('/authorized/:level/analytics/query', authenticateToken, async (req: Request, res: Response) => {
  // ... query generation and validation ...

  // Execute query
  const rawResults = db.prepare(sqlQuery).all();

  // Filter results based on user authorization
  const userId = (req as any).user?.sub;
  const userEmail = (req as any).user?.email;

  const authorizedResults = rawResults.filter(row => {
    // Users can only see their own bookings
    if ('guest_email' in row) {
      return row.guest_email === userEmail;
    }
    return true; // Allow non-sensitive data
  });

  return res.json({
    question,
    generatedQuery: sqlQuery,
    results: authorizedResults,
    rowCount: authorizedResults.length,
  });
});
🤖 AI Agent Prompt

At src/routes/analytics.ts:90-102, query results are returned without authorization checks, allowing cross-user PII exposure. The database contains guest emails and names in the bookings table that should only be accessible to the respective guests or property owners.

Your task: Implement row-level authorization filtering. After the authentication middleware is in place (see the missing authentication issue), filter query results to ensure users can only access data they're authorized to see.

Investigate the application's user model to understand user roles (guests, property owners, admins) and implement appropriate filtering logic. For example, guests should only see their own bookings, while property owners should see bookings for their properties. The exact authorization model will depend on your application's requirements, but at minimum, prevent cross-user data access.


Was this helpful?  👍 Yes  |  👎 No 

Comment on lines +36 to +95
RULES:
- Only generate SELECT queries
- Never use DROP, DELETE, UPDATE, INSERT, or ALTER statements
- Never access system tables
- Always limit results to 100 rows maximum
Generate a single SQL query to answer the user's question. Return ONLY the SQL query, no explanation.`;

const response = await fetch(`${LITELLM_SERVER_URL}/v1/chat/completions`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: model || 'gpt-4o-mini',
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: question },
],
}),
});

if (!response.ok) {
throw new Error(`LiteLLM request failed: ${await response.text()}`);
}

const data: any = await response.json();
let sqlQuery = data.choices[0].message.content.trim();

// Remove markdown code blocks if present
if (sqlQuery.startsWith('```sql')) {
sqlQuery = sqlQuery.slice(6);
} else if (sqlQuery.startsWith('```')) {
sqlQuery = sqlQuery.slice(3);
}
if (sqlQuery.endsWith('```')) {
sqlQuery = sqlQuery.slice(0, -3);
}

return sqlQuery.trim();
}

// Natural language analytics endpoint
router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
try {
const { level } = req.params as { level: 'minnow' | 'shark' };
const { question, model } = analyticsQuerySchema.parse(req.body);

if (!db) {
return res.status(500).json({
error: 'Database not available',
message: 'Analytics database is not initialized',
});
}

// Generate SQL from natural language
const sqlQuery = await generateSqlQuery(question, model);

// VULNERABILITY: Execute generated SQL directly without validation
// Only safeguard is the system prompt instructions (bypassable)
try {
const results = db.prepare(sqlQuery).all();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟠 High

Security rules restricting SQL queries to SELECT-only are enforced solely through system prompt instructions, which can be bypassed via prompt injection. The database is opened in read-write mode, so if an attacker successfully manipulates the LLM to generate UPDATE, DELETE, or DROP statements, these will execute without any application-layer validation blocking them.

💡 Suggested Fix

Replace prompt-only safeguards with deterministic technical controls (this is covered by the comprehensive fix for the prompt injection vulnerability):

// Application-layer validation (cannot be bypassed)
function validateSqlQuery(query: string): { valid: boolean; error?: string } {
  const upperQuery = query.trim().toUpperCase();
  if (!upperQuery.startsWith('SELECT')) {
    return { valid: false, error: 'Only SELECT queries are allowed' };
  }

  const dangerousKeywords = ['DROP', 'DELETE', 'INSERT', 'UPDATE', 'ALTER', 'CREATE', 'TRUNCATE', 'PRAGMA'];
  for (const keyword of dangerousKeywords) {
    if (new RegExp(`\\b${keyword}\\b`, 'i').test(query)) {
      return { valid: false, error: `Dangerous keyword detected: ${keyword}` };
    }
  }
  return { valid: true };
}

// Database-level protection (line 18)
db = new Database(dbPath, { readonly: true });

// Apply validation before execution
const validation = validateSqlQuery(sqlQuery);
if (!validation.valid) {
  return res.status(400).json({ error: 'Invalid query', message: validation.error });
}
🤖 AI Agent Prompt

The code at src/routes/analytics.ts:36-40 defines security rules only in the system prompt, with no technical enforcement. At line 95, any SQL the LLM generates is executed directly. The database is opened in read-write mode at line 18.

Your task: Replace bypassable prompt instructions with deterministic security controls. Implement two layers:

  1. Application-layer validation that checks the generated SQL before execution, rejecting anything that isn't a SELECT query or contains dangerous keywords
  2. Open the database connection in read-only mode to prevent modifications even if validation is bypassed

These technical controls should be impossible to circumvent via prompt injection, unlike the current prompt-only safeguards. Review the database initialization at line 18 and the query execution at line 95 to implement these protections.


Was this helpful?  👍 Yes  |  👎 No 

Comment on lines +35 to +36
// Analytics endpoints
app.use(analyticsRouter);

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟠 High

The analytics router is mounted without authentication middleware, despite the endpoint path including /authorized/ which suggests auth should be required. This makes the text-to-SQL capability publicly accessible to unauthenticated users, amplifying all other vulnerabilities. Compare this to line 33 where the chat endpoint correctly applies authenticateToken middleware.

💡 Suggested Fix

Apply authentication middleware to the analytics routes:

// In src/routes/analytics.ts, add middleware to the route definition:
import { authenticateToken } from '../middleware/auth';

router.post('/authorized/:level/analytics/query',
  authenticateToken,  // Add authentication middleware
  async (req: Request, res: Response) => {
    // ... existing handler code ...
  }
);

Or alternatively in src/server.ts:

// Apply middleware when mounting the router
app.use('/authorized', authenticateToken, analyticsRouter);
🤖 AI Agent Prompt

At src/server.ts:35-36, the analytics router is mounted without authentication middleware. The endpoint path includes /authorized/ but no auth is actually enforced, unlike the chat endpoint at line 33 which correctly uses authenticateToken.

Your task: Add authentication to protect the analytics endpoint. You have two options:

  1. Modify src/routes/analytics.ts at line 77 to import and apply authenticateToken middleware directly to the route handler
  2. Modify src/server.ts at line 36 to apply middleware when mounting the router

Option 1 is recommended for clarity. Once implemented, the endpoint will require a valid JWT Bearer token, preventing unauthenticated access to the text-to-SQL functionality. Review how the chat endpoint applies authentication at line 33 as a reference pattern.


Was this helpful?  👍 Yes  |  👎 No 

Comment on lines +29 to +42
const systemPrompt = `You are a SQL query generator for a vacation rental analytics system.
DATABASE SCHEMA:
- bookings (id INTEGER, property_id INTEGER, guest_name TEXT, guest_email TEXT, check_in TEXT, check_out TEXT, total_price REAL, status TEXT, created_at TEXT)
- properties (id INTEGER, name TEXT, address TEXT, nightly_rate REAL, owner_id INTEGER, created_at TEXT)
- owners (id INTEGER, name TEXT, email TEXT, api_key TEXT, created_at TEXT)
RULES:
- Only generate SELECT queries
- Never use DROP, DELETE, UPDATE, INSERT, or ALTER statements
- Never access system tables
- Always limit results to 100 rows maximum
Generate a single SQL query to answer the user's question. Return ONLY the SQL query, no explanation.`;

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟡 Medium

The system prompt reveals the complete database schema including the existence of an api_key column in the owners table. This information disclosure makes prompt injection attacks easier by providing attackers a roadmap of what sensitive data exists and exactly how to query it.

💡 Suggested Fix

Remove sensitive columns from the schema disclosure:

const systemPrompt = `You are a SQL query generator for a vacation rental analytics system.

DATABASE SCHEMA:
- bookings (id, property_id, guest_name, check_in, check_out, total_price, status, created_at)
- properties (id, name, address, nightly_rate, owner_id, created_at)
- owners (id, name, email, created_at)

Note: Some columns containing sensitive data are omitted from this schema.

RULES:
- Only generate SELECT queries
- Never use DROP, DELETE, UPDATE, INSERT, or ALTER statements
- Always limit results to 100 rows maximum

Generate a single SQL query to answer the user's question. Return ONLY the SQL query, no explanation.`;
🤖 AI Agent Prompt

The system prompt at src/routes/analytics.ts:29-42 reveals the database schema including sensitive columns like api_key. While this doesn't directly expose secrets, it aids attackers by showing them exactly what sensitive data exists.

Your task: Remove the api_key column from the schema provided to the LLM. The schema should include only non-sensitive columns for each table. This is a defense-in-depth measure that works in conjunction with result filtering to prevent sensitive data exposure.

Update the system prompt construction in the generateSqlQuery function to omit sensitive column names while still providing enough schema information for the LLM to generate useful analytics queries.


Was this helpful?  👍 Yes  |  👎 No 

Comment on lines +77 to +80
router.post('/authorized/:level/analytics/query', async (req: Request, res: Response) => {
try {
const { level } = req.params as { level: 'minnow' | 'shark' };
const { question, model } = analyticsQuerySchema.parse(req.body);

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟡 Medium

The authorization level parameter ('minnow' or 'shark') is extracted from the URL but never used for access control. Both authorization levels have identical database access, suggesting unfinished implementation of tiered permissions. This overly broad access violates the principle of least privilege.

💡 Suggested Fix

Implement table-based access restrictions for different authorization levels:

router.post('/authorized/:level/analytics/query', authenticateToken, async (req: Request, res: Response) => {
  try {
    const { level } = req.params as { level: 'minnow' | 'shark' };
    const { question, model } = analyticsQuerySchema.parse(req.body);

    // Define allowed tables based on authorization level
    const allowedTables: Record<string, string[]> = {
      minnow: ['bookings', 'properties'],  // Limited access
      shark: ['bookings', 'properties', 'owners'],  // Full access
    };

    const sqlQuery = await generateSqlQuery(question, model);

    // Validate generated SQL and check table access
    const referencedTables = extractTablesFromQuery(sqlQuery);
    const unauthorizedTables = referencedTables.filter(
      table => !allowedTables[level].includes(table)
    );

    if (unauthorizedTables.length > 0) {
      return res.status(403).json({
        error: 'Forbidden',
        message: `Authorization level '${level}' cannot access tables: ${unauthorizedTables.join(', ')}`
      });
    }

    // ... execute query ...
  }
});
🤖 AI Agent Prompt

At src/routes/analytics.ts:77-80, the code extracts a level parameter ('minnow' or 'shark') but never uses it for authorization. This suggests an intended design where different levels should have different access, but it's not implemented.

Your task: Implement authorization level enforcement. Define what tables or data each level can access, then validate the generated SQL queries to ensure they only reference allowed tables for that level. For example, 'minnow' users might only access bookings and properties, while 'shark' users can also access the owners table.

You'll need to parse the generated SQL to extract referenced tables and reject queries that access unauthorized tables. Consider using regex patterns or a lightweight SQL parser. The specific access model should align with your application's business requirements - this is a defense-in-depth measure to limit the blast radius if other security controls fail.


Was this helpful?  👍 Yes  |  👎 No 

@danenania danenania closed this Jan 23, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants