This is a LlamaIndex and Together.ai RAG chatbot built with Next.js and bootstrapped with create-llama.
It's powered by LlamaIndex, Mixtral (through Together AI Inference), and Together Embeddings. It embeds the PDF file in the `data` folder, stores the generated embeddings locally, and then serves a RAG chatbot that answers questions about the document.
Copy the `.example.env` file to `.env` and set `TOGETHER_API_KEY` to your API key from together.ai.
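For example (a minimal sketch; the key value below is a placeholder):

```bash
cp .example.env .env
# then open .env and set:
# TOGETHER_API_KEY=your-together-api-key
```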
- Install the dependencies:

  ```bash
  npm install
  ```
- Generate the embeddings and store them locally in the `cache` folder. You can also provide a PDF in the `data` folder instead of the default one (see the example after this list):

  ```bash
  npm run generate
  ```
- Run the app and send messages to your chatbot; it will use context from the embeddings to answer questions. By default, the Next.js dev server is available at http://localhost:3000.

  ```bash
  npm run dev
  ```
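For instance, to index your own PDF instead of the bundled default, drop it into the `data` folder before generating (the filename below is purely illustrative):

```bash
cp ~/Documents/my-report.pdf data/
npm run generate
```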
- Ensure your environment file is called `.env`.
- Specify a dummy `OPENAI_API_KEY` value in this `.env` to make sure it works (a temporary hack; LlamaIndex is patching this). See the example `.env` after this list.
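For reference, a working `.env` might look like the sketch below (both values are placeholders; per the note above, any dummy value works for `OPENAI_API_KEY`):

```bash
# .env
TOGETHER_API_KEY=your-together-api-key  # real key from together.ai
OPENAI_API_KEY=sk-dummy                 # placeholder to satisfy LlamaIndex
```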
To learn more about LlamaIndex and Together AI, take a look at the following resources:
- Together AI Documentation - learn about Together.ai (inference, fine-tuning, embeddings).
- LlamaIndex Documentation - learn about LlamaIndex (Python features).
- LlamaIndexTS Documentation - learn about LlamaIndex (TypeScript features).