Only short explanations, only short videos, only short readings
Basically:
- you connect to a stateless LLM, the thing that takes text, throws back text, and forgets everything immediately after.
- these models are trained and fine-tuned to use “tools”, and they’re very eager to do so.

This is where an Agent comes in:
- you add a process that keeps the state = the memory of the conversation
- you give it access to tools to read/write/fetch/find/grep...

And that's it?
The problem: the context window is small
Remember what RAG does? You have your data (documentation, ...) vectorized, and you run a k-nearest-neighbours search (kNN finds the k vectors closest to the vectorized query), retokenize the results, and inject them alongside your query into the LLM. This gives the LLM much better context to formulate an answer from a reliable source of information. RAG makes answers more precise when your scope is narrow but deep.
Agents keep the state and provide tools, and MCP is a unified way to access these tools.
Agents build their context by using tools, e.g. by digging into your code, as they are trained to do.
Now, building an agent:
Literature
I was looking at https://ampcode.com/ when I found this (in Go).
The best(?) article