Commit 7dedfed

feat: introduce gemini calls

Parent: e6c9bf1
5 files changed: 28 additions & 14 deletions

.github/workflows/dev.yml (2 additions, 0 deletions)

```diff
@@ -16,6 +16,8 @@ jobs:
     env:
       OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
       OPENAI_MODEL: ${{ vars.OPENAI_MODEL }}
+      GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
+      GOOGLE_AI_MODEL: ${{ vars.GOOGLE_AI_MODEL }}
     steps:
       - name: Checkout Code
         uses: actions/checkout@v4
```

.github/workflows/main.yml (2 additions, 0 deletions)

```diff
@@ -16,6 +16,8 @@ jobs:
     env:
       OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
       OPENAI_MODEL: ${{ vars.OPENAI_MODEL }}
+      GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
+      GOOGLE_AI_MODEL: ${{ vars.GOOGLE_AI_MODEL }}
     steps:
       - name: Checkout Code
         uses: actions/checkout@v4
```

README.md (14 additions, 6 deletions)

````diff
@@ -9,7 +9,19 @@ This chapter helps you to quickly set up a new Python chat module function using
 > [!NOTE]
 > To develop this function further, you will require the following environment variables in your `.env` file:
 ```bash
-> If you use azure-openai:
+> If you use OpenAI:
+OPENAI_API_KEY
+OPENAI_MODEL
+
+> If you use GoogleAI:
+GOOGLE_AI_API_KEY
+GOOGLE_AI_MODEL
+```
+
+> [!Note]
+> If you decide to use another endpoint such as Azure or Ollama or any other, please update the github workflow files to use the right secrets and variables for testing.
+```bash
+> If you use Azure-OpenAI:
 AZURE_OPENAI_API_KEY
 AZURE_OPENAI_ENDPOINT
 AZURE_OPENAI_API_VERSION
@@ -19,11 +31,7 @@ AZURE_OPENAI_EMBEDDING_1536_DEPLOYMENT
 AZURE_OPENAI_EMBEDDING_3072_MODEL
 AZURE_OPENAI_EMBEDDING_1536_MODEL
 
-> If you use openai:
-OPENAI_API_KEY
-OPENAI_MODEL
-
-> For monitoring of the LLM calls (follow instructions on how to set up on langsmith):
+> For monitoring of the LLM calls (follow instructions on how to set up on langsmith online):
 LANGCHAIN_TRACING_V2
 LANGCHAIN_ENDPOINT
 LANGCHAIN_API_KEY
````
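For quick local setup, a minimal `.env` matching the variables this commit documents might look like the following sketch. All values are placeholders and the model names are only examples, not values mandated by the repo; only one provider section is needed:

```bash
# Option A: OpenAI
OPENAI_API_KEY=<your-openai-key>
OPENAI_MODEL=gpt-4o  # example model name

# Option B: GoogleAI (Gemini)
GOOGLE_AI_API_KEY=<your-google-ai-key>
GOOGLE_AI_MODEL=gemini-1.5-pro  # example model name

# Optional: LangSmith monitoring of LLM calls
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
LANGCHAIN_API_KEY=<your-langsmith-key>
```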

src/agents/base_agent/base_agent.py (6 additions, 6 deletions)

```diff
@@ -1,10 +1,10 @@
 try:
-    from ..llm_factory import OpenAILLMs
+    from ..llm_factory import OpenAILLMs, GoogleAILLMs
     from .base_prompts import \
         role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
     from ..utils.types import InvokeAgentResponseType
 except ImportError:
-    from src.agents.llm_factory import OpenAILLMs
+    from src.agents.llm_factory import OpenAILLMs, GoogleAILLMs
     from src.agents.base_agent.base_prompts import \
         role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt, summary_system_prompt
     from src.agents.utils.types import InvokeAgentResponseType
@@ -35,9 +35,9 @@ class State(TypedDict):
 
 class BaseAgent:
     def __init__(self):
-        llm = OpenAILLMs()
+        llm = OpenAILLMs()  # OpenAILLMs() or GoogleAILLMs()
         self.llm = llm.get_llm()
-        summarisation_llm = OpenAILLMs()
+        summarisation_llm = OpenAILLMs()  # OpenAILLMs() or GoogleAILLMs()
         self.summarisation_llm = summarisation_llm.get_llm()
         self.summary = ""
         self.conversationalStyle = ""
@@ -120,12 +120,12 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:
         conversationalStyle_message = self.conversation_preference_prompt
 
         # STEP 1: Summarize the conversation
-        messages = state["messages"][:-1] + [SystemMessage(content=summary_message)]
+        messages = state["messages"][:-1] + [HumanMessage(content=summary_message)]
         valid_messages = self.check_for_valid_messages(messages)
         summary_response = self.summarisation_llm.invoke(valid_messages)
 
         # STEP 2: Analyze the conversational style
-        messages = state["messages"][:-1] + [SystemMessage(content=conversationalStyle_message)]
+        messages = state["messages"][:-1] + [HumanMessage(content=conversationalStyle_message)]
         valid_messages = self.check_for_valid_messages(messages)
         conversationalStyle_response = self.summarisation_llm.invoke(valid_messages)
 
```
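The commit keeps the provider choice hard-coded in `BaseAgent.__init__`, with a comment noting the alternative. One way to make that switch configurable would be to key it off the environment variables the commit introduces. The sketch below is hypothetical: `make_llm_factory` does not exist in the repo, and the two stub classes only stand in for the real factories in `llm_factory` so the selection logic is runnable on its own:

```python
import os

# Stubs standing in for the real factory classes in src/agents/llm_factory.
class OpenAILLMs:
    def get_llm(self):
        return "openai-llm"

class GoogleAILLMs:
    def get_llm(self):
        return "google-llm"

def make_llm_factory():
    """Hypothetical helper: pick a factory based on which API key is configured."""
    if os.environ.get("GOOGLE_AI_API_KEY"):
        return GoogleAILLMs()
    return OpenAILLMs()

# With GOOGLE_AI_API_KEY set, the Google factory is chosen.
os.environ["GOOGLE_AI_API_KEY"] = "dummy-key-for-demo"
llm = make_llm_factory().get_llm()
print(llm)  # prints: google-llm
```

This keeps `BaseAgent.__init__` free of provider-specific edits; swapping providers becomes a deployment-time decision rather than a code change.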

src/agents/base_agent/base_prompts.py (4 additions, 2 deletions)

```diff
@@ -64,11 +64,13 @@
 Structured: Organize the summary into sections such as 'Topics Discussed' and 'Top 3 Key Detailed Ideas'.
 Neutral and Accurate: Avoid adding interpretations or opinions; focus only on the content shared.
 When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.
-Last messages: Include the most recent 4 messages to provide context for the summary.
+Last messages: Include the most recent 5 messages to provide context for the summary.
 
 Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion."""
 
-summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion."""
+summary_prompt = f"""Summarize the conversation between a student and a tutor. Your summary should highlight the major topics discussed during the session, followed by a detailed recollection of the last five significant points or ideas. Ensure the summary flows smoothly to maintain the continuity of the discussion.
+
+{summary_guidelines}"""
 
 update_summary_prompt = f"""Update the summary by taking into account the new messages above.
 
```

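The change above embeds `{summary_guidelines}` inside an f-string, so the guidelines text is interpolated once, when the module is imported, not when the prompt is later sent to the LLM. A minimal self-contained illustration, with the guideline text shortened to a stand-in:

```python
# Shortened stand-in for the full guidelines string defined earlier
# in base_prompts.py.
summary_guidelines = "Last messages: Include the most recent 5 messages to provide context for the summary."

# Because this is an f-string, {summary_guidelines} is expanded immediately
# at module import time; the resulting summary_prompt is a plain string.
summary_prompt = f"""Summarize the conversation between a student and a tutor. Ensure the summary flows smoothly to maintain the continuity of the discussion.

{summary_guidelines}"""

print("most recent 5 messages" in summary_prompt)  # prints: True
```

One consequence of this design: later reassigning `summary_guidelines` would not update `summary_prompt`, since the interpolation already happened.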