46 changes: 26 additions & 20 deletions generator/src/llm_utils.py
@@ -1,14 +1,19 @@
import requests
import logging
import asyncio
from functools import partial
from generator.src.config import OLLAMA_MODEL

logger = logging.getLogger(__name__)

# 🔥 Limit concurrency
semaphore = asyncio.Semaphore(5)


def generate_text(prompt, model=None, url_override=None):
"""Generate text from prompt using local Ollama."""
target_model = model or OLLAMA_MODEL

# Use centralized config unless overridden

if url_override:
url = url_override
else:
@@ -24,27 +29,28 @@ def generate_text(prompt, model=None, url_override=None):
"num_ctx": 8192
}
}

logger.info(f"Sending request to Ollama (model: {target_model}, prompt length: {len(prompt)} chars)")


try:
resp = requests.post(url, json=payload, timeout=600)
resp.raise_for_status()
except Exception as e:
logger.error(f"Error calling Ollama at {url}: {e}", exc_info=True)
return ""

try:
data = resp.json()
response_text = data.get("response", "")

logger.info(f"Received response from Ollama (length: {len(response_text)} chars)")
logger.info(f"Response done: {data.get('done', False)}")

if not response_text:
logger.warning(f"Empty response from Ollama. Full data: {data}")

return response_text
return data.get("response", "")
except Exception as e:
logger.error(f"Error parsing Ollama response: {e}", exc_info=True)
logger.error(f"LLM error: {e}", exc_info=True)
return ""
Comment on lines 38 to 40
🧹 Nitpick | 🔵 Trivial

Use logger.exception() instead of logger.error() with exc_info=True.

Per static analysis hint, logger.exception() is the idiomatic way to log exceptions with traceback. Also consider using %s formatting for lazy evaluation.

♻️ Proposed fix
     except Exception as e:
-        logger.error(f"LLM error: {e}", exc_info=True)
+        logger.exception("LLM error: %s", e)
         return ""
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
except Exception as e:
logger.error(f"Error parsing Ollama response: {e}", exc_info=True)
logger.error(f"LLM error: {e}", exc_info=True)
return ""
except Exception as e:
logger.exception("LLM error: %s", e)
return ""
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 39-39: Logging `.exception(...)` should be used instead of `.error(..., exc_info=True)` (G201)

[warning] 39-39: Logging statement uses f-string (G004)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@generator/src/llm_utils.py` around lines 38 - 40, in the except block that
currently calls `logger.error(f"LLM error: {e}", exc_info=True)` (inside the LLM
call/error handling in generator/src/llm_utils.py), replace that call with
`logger.exception("LLM error: %s", e)` to use the idiomatic logger.exception(),
which automatically includes the traceback, and switch to %s formatting for
lazy evaluation; keep the `return ""` behavior unchanged.

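To see what the suggested change does in practice, here is a minimal, self-contained sketch (the `parse` helper and logger name are illustrative, not from the PR): `logger.exception(...)` logs at ERROR level and appends the current traceback automatically, making it equivalent to `logger.error(..., exc_info=True)`, while `%s` formatting defers string interpolation until the record is actually emitted.

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("demo")

def parse(raw):
    """Return int(raw), logging any failure with a full traceback."""
    try:
        return int(raw)
    except Exception as e:
        # Equivalent to logger.error("...", exc_info=True), but idiomatic:
        # logs at ERROR level and appends the current traceback.
        # %s formatting is lazy: the message is built only if emitted.
        logger.exception("parse error: %s", e)
        return None

parse("42")    # returns 42, logs nothing
parse("oops")  # logs "parse error: ..." plus a traceback, returns None
```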


async def generate_text_async(prompt, model=None, url_override=None):
loop = asyncio.get_event_loop()
func = partial(generate_text, prompt, model, url_override)
return await loop.run_in_executor(None, func)
Comment on lines +43 to +46
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

asyncio.get_event_loop deprecated Python 3.10 get_running_loop

💡 Result:

Yes, asyncio.get_event_loop is deprecated in Python 3.10 when called from coroutines or callbacks without a running event loop, or in certain contexts where no current event loop exists. Instead, use asyncio.get_running_loop inside async functions (coroutines or callbacks), as it raises RuntimeError if no loop is running, ensuring safer usage. asyncio.get_running_loop (added in Python 3.7) returns the running event loop in the current OS thread and can only be called from a coroutine or callback. For top-level entry points, prefer asyncio.run(main()), which handles loop creation and cleanup automatically, avoiding manual loop management altogether.

Example migration:
- Old (deprecated in async context): loop = asyncio.get_event_loop()
- New: loop = asyncio.get_running_loop()

Deprecation warnings appear in Python 3.10+ when get_event_loop is used improperly, and in later versions (e.g., 3.14+) it may raise RuntimeError if no current loop exists. Multiple projects (Langchain, Kingpin) have updated their codebases to replace it with get_running_loop.

🏁 Script executed:

# Find project configuration files to determine Python version requirement
git ls-files | grep -E "(setup\.py|pyproject\.toml|requirements|tox\.ini|\.python-version)" | head -20

Repository: AOSSIE-Org/LibrEd

Length of output: 87


🏁 Script executed:

# Search for usages of generate_text_async to understand the calling context
rg "generate_text_async" --type py -B 2 -A 2

Repository: AOSSIE-Org/LibrEd

Length of output: 625


🏁 Script executed:

# Check the context around the function to see if there are any comments or tests
cat -n generator/src/llm_utils.py | head -60

Repository: AOSSIE-Org/LibrEd

Length of output: 1956


🏁 Script executed:

cat generator/requirements.txt

Repository: AOSSIE-Org/LibrEd

Length of output: 221


🏁 Script executed:

git ls-files | grep -E "setup\.(py|cfg)|pyproject\.toml|\.python-version"

Repository: AOSSIE-Org/LibrEd

Length of output: 43


🏁 Script executed:

git ls-files | grep -E "workflow|github|\.yml|\.yaml" | head -10

Repository: AOSSIE-Org/LibrEd

Length of output: 397


🏁 Script executed:

cat .github/workflows/ci.yml | head -50

Repository: AOSSIE-Org/LibrEd

Length of output: 1162


Use asyncio.get_running_loop() instead of asyncio.get_event_loop().

asyncio.get_event_loop() is deprecated in Python 3.10+ when called from a coroutine and may return an unexpected loop. Since this is an async function always called from an async context (controlled_generate_text → process_batch), use asyncio.get_running_loop() which correctly returns the running loop or raises RuntimeError if none is running.

Proposed fix
 async def generate_text_async(prompt, model=None, url_override=None):
-    loop = asyncio.get_event_loop()
+    loop = asyncio.get_running_loop()
     func = partial(generate_text, prompt, model, url_override)
     return await loop.run_in_executor(None, func)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@generator/src/llm_utils.py` around lines 43 - 46, replace the deprecated
asyncio.get_event_loop() call in generate_text_async with
asyncio.get_running_loop() so the coroutine uses the current running loop; in
generate_text_async (which is invoked from controlled_generate_text →
process_batch) call loop = asyncio.get_running_loop(), keep the rest
(partial(generate_text, ...), loop.run_in_executor) unchanged, and ensure this
surfaces a RuntimeError if no loop is running.
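The pattern the fix lands on can be sketched in a runnable form as follows; `blocking_work` stands in for the synchronous, requests-based `generate_text`, and the names are illustrative. `get_running_loop()` must be called from a coroutine and returns the loop actually driving it, whereas `get_event_loop()` could silently create or return a different loop outside async contexts.

```python
import asyncio
from functools import partial

def blocking_work(x):
    # Stand-in for the synchronous, requests-based generate_text().
    return x * 2

async def run_blocking(x):
    # Must be called from a coroutine: returns the loop driving this
    # coroutine, or raises RuntimeError if no loop is running.
    loop = asyncio.get_running_loop()
    # Off-load the blocking call to the default thread-pool executor.
    return await loop.run_in_executor(None, partial(blocking_work, x))

print(asyncio.run(run_blocking(21)))  # prints 42
```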



async def controlled_generate_text(prompt):
async with semaphore:
return await generate_text_async(prompt)


async def process_batch(prompts):
tasks = [controlled_generate_text(p) for p in prompts]
return await asyncio.gather(*tasks, return_exceptions=True)
Comment on lines +54 to +56
⚠️ Potential issue | 🟡 Minor

Add docstring clarifying that exceptions are returned as values.

With return_exceptions=True, failed requests return Exception objects mixed with successful str results. Callers must check isinstance(result, Exception) to distinguish errors from valid responses.

πŸ“ Proposed fix with docstring
 async def process_batch(prompts):
+    """Process multiple prompts concurrently with limited concurrency.
+    
+    Args:
+        prompts: List of prompt strings to process.
+        
+    Returns:
+        List of results where each element is either:
+        - str: The generated text response
+        - Exception: If the request failed
+        
+    Callers should check isinstance(result, Exception) to handle errors.
+    """
     tasks = [controlled_generate_text(p) for p in prompts]
     return await asyncio.gather(*tasks, return_exceptions=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@generator/src/llm_utils.py` around lines 54 - 56, the process_batch
coroutine uses asyncio.gather(..., return_exceptions=True), which mixes
Exception objects with successful string results; add a docstring to
process_batch explaining that results may be Exception instances (from
controlled_generate_text), that callers must check each result with
isinstance(result, Exception) before treating it as a valid response, and show
an example pattern for handling errors vs. successful strings so consumers know
to inspect and handle exceptions returned as values.
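The caller-side pattern the docstring asks for can be sketched as below; `flaky` stands in for `controlled_generate_text`, and the odd-input failure rule is purely illustrative.

```python
import asyncio

async def flaky(n):
    # Stand-in for controlled_generate_text: fails on odd inputs.
    if n % 2:
        raise ValueError(f"bad input: {n}")
    return f"ok:{n}"

async def process_batch(items):
    tasks = [flaky(i) for i in items]
    # return_exceptions=True converts failures into Exception *values*
    # instead of cancelling the whole gather on the first error.
    return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(process_batch([0, 1, 2]))
# Callers must separate successes from failures themselves:
texts = [r for r in results if not isinstance(r, Exception)]
errors = [r for r in results if isinstance(r, Exception)]
# texts == ["ok:0", "ok:2"]; errors holds one ValueError
```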
