From idea to production in just a few lines
The first neuro-symbolic Language Model (LM) framework leveraging the simplicity of Keras and the rigor of Deep Learning best practices.
Build RAGs, autonomous agents, multi-agent systems, self-evolving systems, and more in just a few lines
Documentation · FAQ · Discord · Code Examples
⭐ If you find Synalinks useful, please star the repo! Help us reach more AI/ML engineers and grow the community. ⭐
Too busy to read the documentation? Give the llms.txt or llms-full.txt to your favorite LMs or AI coding tools.
Synalinks is an open-source neuro-symbolic framework that makes it simple to create, train, evaluate, and deploy advanced LM-based applications, including graph RAGs, autonomous agents, and self-evolving reasoning systems.
Think Keras for Language Model applications: a clean, declarative API where:
- 🧩 You compose modules like you would with layers (see the sketch just after these lists).
- ⚙️ You train & optimize with in-context reinforcement learning.
- 🚀 You deploy instantly as REST APIs or MCP servers.
- Progressive complexity: Start simple and grow advanced naturally.
- Neuro-symbolic learning: Combine logic, structure, and deep models.
- In-context optimization: Improve model reasoning without retraining weights.
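As a first taste of this compose-like-layers style, here is a minimal sketch. It assumes a simple `Generator`-style module that maps one structured input to one structured output; the module name and model string are illustrative, and a full, runnable agent example follows further below.

```python
import asyncio

import synalinks


class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query")


class Answer(synalinks.DataModel):
    answer: str = synalinks.Field(description="The answer to the user query")


async def main():
    language_model = synalinks.LanguageModel(model="gemini/gemini-2.5-pro")

    # Compose modules like layers: declare an input, pass it through a module.
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.Generator(
        data_model=Answer,
        language_model=language_model,
    )(inputs)

    # Wrap the graph into a trainable, serializable Program.
    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="simple_qa",
        description="A minimal question answering program",
    )
    program.summary()


if __name__ == "__main__":
    asyncio.run(main())
```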
| Role | Why Synalinks Helps |
|---|---|
| 🧑‍💻 Developers | Build complex LM apps without boilerplate. |
| 🧠 Researchers | Prototype neuro-symbolic and RL-in-context systems fast. |
| 🏢 Data Scientists | Integrate LM workflows with APIs & databases. |
| 🎓 Students/Hobbyists | Learn AI composition in a clean, intuitive framework. |
Building robust LM apps is hard. Synalinks simplifies it with:
- Prompt/Anything optimization per module via In-Context RL
- Versionable, JSON-serializable pipelines
- Constrained structured outputs (JSON) for correctness
- Automatic async & parallel execution
- Metrics, rewards & evaluations built-in
- Native integrations: OpenAI, Ollama, Anthropic, Mistral, Azure, Groq, Gemini, XAI
- Graph DB support: Neo4J, MemGraph
- API-ready: Deploy with FastAPI or FastMCP (a minimal sketch follows this list)
- KerasTuner compatibility for hyperparameter search
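To illustrate the API-ready bullet above, a saved program can be wrapped in a standard FastAPI app. This is a hedged sketch, not the official serving recipe: the route, file name, request shape, and the `get_json()` accessor on the result are assumptions to adapt to your own program.

```python
# Hypothetical deployment sketch: serve a saved Synalinks program over HTTP.
import synalinks
from fastapi import FastAPI


class Query(synalinks.DataModel):
    query: str = synalinks.Field(description="The user query")


app = FastAPI()
# Load a program previously saved with program.save(...) (see the saving section below).
program = synalinks.Program.load("my_program.json")


@app.post("/answer")
async def answer(query: str):
    # Programs are async, so they can be awaited directly inside the endpoint.
    result = await program(Query(query=query))
    # Assumption: the returned data model exposes its JSON payload;
    # adapt this accessor to the actual Synalinks API if it differs.
    return result.get_json()
```

Run it with any ASGI server, for example `uvicorn main:app` if the file is named `main.py`.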
| Framework | MCP | Graph DB | Logical Flow | Robust Branching | Parallel Function Calling | Hyperparameter Tuning | Ease of Use |
|---|---|---|---|---|---|---|---|
| Synalinks | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | 😀 |
| DSPy | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| AdalFlow | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| TextGrad | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
| Trace | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
Install Synalinks:

```bash
uv pip install synalinks
```

Then define your first program, for example a simple math agent:

```python
import synalinks
import asyncio


class Query(synalinks.DataModel):
    query: str = synalinks.Field(
        description="The user query",
    )


class NumericalAnswer(synalinks.DataModel):
    answer: float = synalinks.Field(
        description="The final numerical answer",
    )


language_model = synalinks.LanguageModel(
    model="gemini/gemini-2.5-pro",
)


@synalinks.saving.register_synalinks_serializable()
async def calculate(expression: str):
    """Calculate the result of a mathematical expression.

    Args:
        expression (str): The mathematical expression to calculate, such as
            '2 + 2'. The expression can contain numbers, operators (+, -, *, /),
            parentheses, and spaces.
    """
    if not all(char in "0123456789+-*/(). " for char in expression):
        return {
            "result": None,
            "log": "Error: invalid characters in expression",
        }
    try:
        # Evaluate the mathematical expression (characters validated above)
        result = round(float(eval(expression, {"__builtins__": None}, {})), 2)
        return {
            "result": result,
            "log": "Successfully executed",
        }
    except Exception as e:
        return {
            "result": None,
            "log": f"Error: {e}",
        }


async def main():
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.FunctionCallingAgent(
        data_model=NumericalAnswer,
        tools=[
            synalinks.Tool(calculate),
        ],
        language_model=language_model,
    )(inputs)

    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="math_agent",
        description="A math agent",
    )
```

To print a tabular summary of your program:
```python
program.summary()
```

Or a plot (useful to document your system):
```python
synalinks.utils.plot_program(
    program,
    show_module_names=True,
    show_trainable=True,
    show_schemas=True,
)
```

To run your program, use the following:
```python
result = await program(
    Query(
        query=(
            "A bookstore receives a shipment of 135 new books. "
            "They place the books evenly onto 9 shelves. "
            "Later, they decide to move 3 books from each shelf to a display table"
            " at the front of the store. "
            "How many books are left on the shelves after the books are moved?"
        )
    ),
)
```

To train your program, compile it with a reward and an optimizer, then call fit() on a dataset such as GSM8K:

```python
async def main():
    # ... your program definition (the language_model, an embedding_model,
    # and the program itself, as in the example above)
    (x_train, y_train), (x_test, y_test) = synalinks.datasets.gsm8k.load_data()

    program.compile(
        reward=synalinks.rewards.ExactMatch(
            in_mask=["answer"],
        ),
        optimizer=synalinks.optimizers.OMEGA(
            language_model=language_model,
            embedding_model=embedding_model,
        ),
    )

    batch_size = 1
    epochs = 10

    history = await program.fit(
        x_train,
        y_train,
        validation_split=0.2,
        batch_size=batch_size,
        epochs=epochs,
    )


if __name__ == "__main__":
    asyncio.run(main())
```
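The GSM8K loader above also returns a held-out test split. Below is a hedged sketch of evaluating on it, assuming `Program.evaluate()` mirrors the usual Keras-style signature; check the documentation for the exact API.

```python
# Inside the same async main(), after fit(): hypothetical Keras-style evaluation
# of the optimized program on the held-out split.
metrics = await program.evaluate(
    x_test,
    y_test,
    batch_size=batch_size,
)
print(metrics)
```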
To save the entire architecture and variables (the program's state) into a JSON file, do:

```python
program.save("my_program.json")
```

In order to load it, do:
```python
loaded_program = synalinks.Program.load("my_program.json")
```

To save only the state of your program (the variables) into JSON:
```python
program.save_variables("my_program.variables.json")
```

To load its variables (needs a program with the same architecture), do:
```python
program.load_variables("my_program.variables.json")
```

To enable logging, use the following at the beginning of your script:
```python
synalinks.enable_logging()
```

You can learn more by reading our documentation. If you have questions, the FAQ might help you.
Contributions are welcome, whether for additional modules, metrics, or optimizers. For more information, or for help implementing your ideas (or ones from a paper), please join our Discord.
Note that every additional metric, module, or optimizer must be approved by the core team: we want to keep the library as minimal and clean as possible and avoid the uncontrolled growth that leads to bad software practices in many of today's leading LM frameworks.
If you have specific feedback or feature requests, we invite you to open an issue.
Your contributions, feedback, and support are what make this project thrive.
From small bug fixes to major features, thank you for believing in open collaboration and the future of neuro-symbolic AI.
Join our community to learn more about neuro-symbolic systems and the future of AI. We welcome participation from people of all backgrounds and education levels.
This work has been done under the supervision of François Chollet, the author of Keras. If this work is useful for your research, please use the following BibTeX entry:
```bibtex
@misc{sallami2025synalinks,
  title={Synalinks},
  author={Sallami, Yoan and Chollet, Fran\c{c}ois},
  year={2025},
  howpublished={\url{https://github.com/SynaLinks/Synalinks}},
}
```

Synalinks would not be possible without the great work of the following open-source projects:
