- Docker installed on your system
- Basic knowledge of Docker commands

## Build the Streamlit Chatbot Docker Image

### Build the Docker Image

Run the following command in the same directory as the `Dockerfile`:

``` sh
docker build -t ai-chatbot .
```

## Run Ollama on Docker

To run Ollama with the LLaMA 3.2:1B model, execute:
2222
2323``` sh
2424docker run -d --name ollama -p 11434:11434 ollama/ollama:latest
2525```

If your machine has a GPU, use:

``` sh
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
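
Before pulling the model, you can confirm the server is reachable; Ollama's root endpoint answers with a plain-text status message (this assumes the port mapping `11434:11434` used above):

``` sh
# Query the Ollama server published on localhost:11434.
# A healthy server replies with "Ollama is running".
curl http://localhost:11434
```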

Pull and prepare the LLaMA 3.2:1B model:

``` sh
docker exec -it ollama ollama pull llama3.2:1b
```
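
To verify the download completed, you can list the models stored inside the container (assuming the container name `ollama` used above):

``` sh
# Show models available to the Ollama server in the container;
# llama3.2:1b should appear in the output
docker exec ollama ollama list
```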

## Run the Streamlit Chatbot Container

Once Ollama is running, start the chatbot container:
3641
3742``` sh
38- docker run -d --name ai-chatbot -p 8501:8501 streamlit -chatbot
43+ docker run -d --name ai-chatbot -p 8501:8501 ai -chatbot
3944```
4045
Get the container IP of the Ollama container.
On Linux/Mac:

``` sh
docker inspect 2cf4d51a43c9 | grep IPAddress
```
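
Inspecting IPs works, but container IPs can change between restarts. As an alternative sketch, a user-defined Docker network lets the chatbot reach Ollama by container name instead of by IP (the network name `chatnet` here is an arbitrary choice, not from the original guide):

``` sh
# Create a shared network and attach both containers to it
docker network create chatnet
docker network connect chatnet ollama
docker network connect chatnet ai-chatbot
# The chatbot can now reach the backend at http://ollama:11434
```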

## Access the Chatbot

Open your browser and visit:

```
http://localhost:8501
```
Update the backend URL in the chatbot to point to the Ollama container's IP address.

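Once the backend URL is set, one quick way to confirm the backend is reachable is to call Ollama's generate endpoint directly; `172.17.0.2` below is a placeholder for the address you found with `docker inspect`:

``` sh
# Send a one-off prompt to the Ollama REST API
# (replace 172.17.0.2 with your Ollama container's IP)
curl http://172.17.0.2:11434/api/generate \
  -d '{"model": "llama3.2:1b", "prompt": "Hello", "stream": false}'
```
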
### Check if Ollama is serving your model

For CPU only:

```
docker exec ollama ollama ps
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.2 GB    100% CPU     4 minutes from now
```

With GPU support:

```
docker exec ollama ollama ps
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.7 GB    100% GPU     4 minutes from now
```

## Stop and Remove Containers

To stop and remove all running containers:

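A minimal cleanup sketch, assuming the two container names used in this guide:

``` sh
# Stop both containers, then remove them
docker stop ai-chatbot ollama
docker rm ai-chatbot ollama
```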