Ollama on Docker

Ollama can be downloaded and installed natively on all major OS platforms: Linux, macOS, and Windows. The official download page covers each installer. For this tutorial, we will run Ollama on Docker, which works the same way on every platform.

The latest Ollama Docker image can be found on the official Ollama Docker Hub page. The image provides a runtime for Large Language Models (LLMs). Ollama maintains a library of models suited to different tasks; the full list is available on the official Ollama Model Library page. In this tutorial, we will use Llama 3.1.

To run Llama 3.1, use the Docker Compose file below.

version: '3.8'
services:
  ollama-model:
    image: ollama/ollama:latest
    container_name: ollama_container
    ports:
      - 11434:11434/tcp
    healthcheck:
      test: ollama --version || exit 1
    command: serve
    volumes:
      - ./ollama/ollama:/root/.ollama
      - ./entrypoint.sh:/entrypoint.sh
    pull_policy: missing
    tty: true
    restart: "no"
    entrypoint: [ "/usr/bin/bash", "/entrypoint.sh" ]
Next, create the entrypoint.sh script below in the same directory. It starts the Ollama server in the background, pulls the Llama 3.1 model, then waits on the server process so the container stays running.

#!/bin/bash
# Start Ollama in the background.
/bin/ollama serve &
# Record Process ID.
pid=$!
# Pause for Ollama to start.
sleep 5
echo "🔴 Retrieving llama3.1 model..."
ollama pull llama3.1
echo "🟢 Done!"
# Wait for Ollama process to finish.
wait $pid
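The fixed sleep 5 in the script is fragile: on a slow machine the server may not be ready yet when the pull starts. A sketch of a bounded readiness poll that could replace the sleep line, assuming the server listens on the default port 11434 (Ollama exposes a GET /api/version endpoint):

```shell
#!/bin/bash
# Sketch: poll the Ollama API until it responds, instead of a fixed sleep.
# Gives up after 30 attempts (~30 seconds) so the script cannot hang forever.
for attempt in $(seq 1 30); do
  if curl -sf http://localhost:11434/api/version > /dev/null; then
    echo "Ollama is up after ${attempt} attempt(s)."
    break
  fi
  sleep 1
done
```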

Then start the stack with docker-compose up (add -d to run it in the background).

You can validate that the model is running by executing docker exec -it ollama_container ollama list; llama3.1 should appear in the output.
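Once the model is listed, you can send it a prompt through Ollama's REST API on the published port. The sketch below uses only the Python standard library and Ollama's /api/generate endpoint; the build_generate_request and generate helpers are illustrative names, not part of any library:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # port published by the compose file

def build_generate_request(model: str, prompt: str) -> bytes:
    # stream=False asks the server for one complete JSON reply instead of
    # newline-delimited streaming chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    # POST the prompt to the running container and return the reply text.
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the container up, print(generate("llama3.1", "Why is the sky blue?")) returns the model's answer as a string.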
