Today’s AI coding tools solve one-shot code generation. But the real cost of software lies in long-term evolution: changing requirements, architectural drift, and endless maintenance — turning top engineers into permanent system caretakers.
EVOLT exists to end this.
EVOLT is a framework for self-evolving agents that enables AI to own the full software lifecycle — from initial construction, to long-term maintenance, to continuous evolution.
We are not building another code assistant. We are solving the core problem of long-horizon software engineering.
EVOLT is built around a self-evolving improvement engine with three layers:
- 🛠️ Tool-using agents — Agents that can read, write, test, and refactor real codebases like software engineers.
- 💾 Persistent experience — Successful actions and patterns are encoded as reusable, composable knowledge assets. (Planned)
- 🧠 Self-evolution — Agent strategies and workflows continuously improve through experience. (Planned)
EVOLT's core foundation, tool-using agents, is already open-sourced and stable: today you can build AI agents that understand and operate on real code repositories.
We are actively designing and developing self-evolution and persistent experience. The full roadmap is public — contributions and discussion are welcome.
We believe that powerful, programmable tool-use is the foundation of long-term autonomy. Before an agent can learn how to improve itself, it must first be able to act reliably in the real world — just like a human engineer working on a live codebase.
For local development, you can use `npm link` to test the package locally, e.g. after making code changes:

```bash
# In the project root directory
npm run build
npm link

# In the project where you want to use evoltagent
npm link evoltagent
```

To use the published package instead, install it with `npm install evoltagent`.
⚠️ Note: The configuration file `config.yaml` uses camelCase naming (e.g., `apiKey`, `baseUrl`), which differs from the snake_case used in the Python version. For details, refer to LLM_CONFIG.md.
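For illustration, a minimal `config.yaml` sketch using the camelCase keys above. Only `apiKey` and `baseUrl` are confirmed by the note; the other field names follow the `ModelConfig` fields shown in the example below, so verify the exact schema against LLM_CONFIG.md:

```yaml
# Hypothetical config.yaml sketch — check LLM_CONFIG.md for the real schema
deepseek:
  provider: deepseek
  model: deepseek-chat
  apiKey: ${DEEPSEEK_API_KEY}   # camelCase, not api_key
  baseUrl: https://api.deepseek.com/v1
  temperature: 0.2
```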
```typescript
import { Agent, ModelConfig } from "evoltagent";

// Configure the model using ModelConfig
const modelConfig: ModelConfig = {
  provider: "deepseek",
  model: "deepseek-chat",
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseUrl: process.env.DEEPSEEK_BASE_URL || "https://api.deepseek.com/v1",
  contextWindowTokens: 128000,
  maxOutputTokens: 8092,
  temperature: 0.2,
  topP: 0.9,
  stream: true,
};

async function main() {
  const agent = new Agent({
    name: "posy",
    profile: "You are a helpful assistant.",
    tools: ["ThinkTool.execute", "Reply2HumanTool.reply"],
    modelConfig: modelConfig,
  });

  const result = await agent.run("Please introduce yourself");
  console.log("Agent response:", result);
}

main().catch(console.error);
```

First, set up the configuration file; see LLM_CONFIG.md for details on configuring the MCP Server. If you have configured model parameters there, you can pass the model name from the configuration file in place of the `ModelConfig` object above.
```typescript
import { Agent, ModelResponse } from "evoltagent";

async function runAgent(userInput: string): Promise<ModelResponse[]> {
  const agent = new Agent({
    name: "posy",
    profile: "You are an assistant capable of operating a browser.",
    mcpServerNames: ["playwright"],
    modelConfig: "deepseek", // model name from the configuration file
    verbose: 2,
  });

  return await agent.run(userInput);
}
```

Before running the examples, build the project:

```bash
npm run build
```
⚠️ Note: Files in `examples/agent/` require `BOYUE_API_KEY` and `BOYUE_API_URL` to be set in the `.env` file.
⚠️ Note: To use another provider, simply adjust the configuration in `examples/agent/modelConfig.ts`.
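For example, a minimal `.env` sketch with placeholder values (the variable names come from the note above; the values here are purely illustrative):

```
# .env — replace the placeholders with your own credentials
BOYUE_API_KEY=your-api-key-here
BOYUE_API_URL=https://your-provider.example.com/v1
```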
```bash
# Basic examples
npx tsx examples/agent/agentDemoWithModelConfig.ts
npx tsx examples/agent/agentDemo.ts

# MCP tools example
npx tsx examples/agent/agentWithMCPTools.ts
```

- `Agent` - Main Agent class
- `Model` - Model class
- `Message`, `MessageHistory` - Message and message history management
- `SystemToolStore`, `FunctionCallingStore` - Tool storage
- `ThinkTool` - Thinking tool
- `CommandLineTool` - Command line tool
- `FileEditor` - File editor
- `Reply2HumanTool` - Reply-to-human tool
- `TodoListTool` - Todo list tool
- `GitTool` - Git tool

- `tools` - Register tools
- `registerAgentAsTool` - Register an Agent as a tool

- `BaseEnvironment` - Base environment
- `CodingEnvironment` - Coding environment

- `loadModelConfig` - Load model configuration
- Other configuration constants and paths
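To illustrate the agent-as-tool idea behind `registerAgentAsTool`, here is a self-contained sketch. `MiniAgent`, `Tool`, and `registerAgentAsToolSketch` are hypothetical minimal types invented for this example, not the actual evoltagent API:

```typescript
// Self-contained sketch of the "agent as tool" pattern.
// MiniAgent and Tool are hypothetical minimal types, NOT the evoltagent API.
interface MiniAgent {
  name: string;
  run(input: string): Promise<string>;
}

interface Tool {
  name: string;
  description: string;
  execute(input: string): Promise<string>;
}

// Wrap an agent so another agent can invoke it like any other tool.
function registerAgentAsToolSketch(agent: MiniAgent, description: string): Tool {
  return {
    name: agent.name,
    description,
    execute: (input) => agent.run(input),
  };
}

// Usage with a stub agent:
const echoAgent: MiniAgent = {
  name: "echo",
  run: async (input) => `echo: ${input}`,
};

const echoTool = registerAgentAsToolSketch(echoAgent, "Echoes its input back");
echoTool.execute("hello").then((out) => console.log(out)); // prints "echo: hello"
```

Exposing a whole agent behind the same interface as an ordinary tool is what lets agents delegate subtasks to one another without special-casing the caller.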
Sincere thanks to the open-source project claude-quickstarts, which provided significant assistance during the development of this project.