Detailed Commands of Ollama
Ollama offers various commands to manage and run models effectively across different operating systems. Here is a detailed look at each command with examples:
- serve
  - Usage: `ollama serve`
  - Description: Starts the Ollama server.
  - Example: `ollama serve`
  - Flags:
    - `-h, --help`: Help for serve.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port to bind to (default is `127.0.0.1:11434`).
    - `OLLAMA_ORIGINS`: Allowed origins (comma-separated).
    - `OLLAMA_MODELS`: Path to the models directory (default is `~/.ollama/models`).
    - `OLLAMA_KEEP_ALIVE`: Duration models stay loaded in memory (default is `5m`).
    - `OLLAMA_DEBUG`: Set to `1` to enable debug logging.
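The environment variables above can be combined when launching the server. A minimal sketch (the `0.0.0.0:8080` bind address and `10m` keep-alive are arbitrary example values, and the script only prints the launch command, since `ollama serve` blocks the terminal):

```shell
#!/bin/sh
# Example server settings (values are illustrative, not recommendations).
export OLLAMA_HOST="0.0.0.0:8080"   # bind to all interfaces on port 8080
export OLLAMA_KEEP_ALIVE="10m"      # keep models in memory for 10 minutes
export OLLAMA_DEBUG=1               # enable debug logging

# `ollama serve` runs in the foreground, so this sketch only echoes the
# command; in practice run it directly, or under a service manager.
echo "would run: ollama serve (bound to $OLLAMA_HOST)"
```

In practice you would run `ollama serve` directly in that environment, or set the variables in your service manager's unit file.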
- create
  - Usage: `ollama create MODEL`
  - Description: Creates a model from a Modelfile.
  - Example: `ollama create custom-model -f myModelfile`
  - Flags:
    - `-f, --file string`: Name of the Modelfile (default is `Modelfile`).
    - `-q, --quantize string`: Quantize model to this level (e.g., `q4_0`).
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
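A Modelfile is a plain-text recipe: a base model, optional parameters, and an optional system prompt. A minimal sketch (the `llama3` base, temperature value, and system prompt are illustrative; the `ollama` invocation is guarded in case the binary is not installed):

```shell
#!/bin/sh
# Write a minimal Modelfile: base model, one sampling parameter, and a
# system prompt (all values here are illustrative).
cat > myModelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.7
SYSTEM You are a concise technical assistant.
EOF

# Build the model from it, quantizing to q4_0; guarded so the sketch
# still runs where ollama is not installed.
if command -v ollama >/dev/null 2>&1; then
  ollama create custom-model -f myModelfile -q q4_0 ||
    echo "create failed: is the Ollama server running?"
else
  echo "ollama not found; would build custom-model from myModelfile"
fi
```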
- show
  - Usage: `ollama show MODEL`
  - Description: Shows information for a model.
  - Example: `ollama show llama3 --parameters`
  - Flags:
    - `--license`: Show license of a model.
    - `--modelfile`: Show Modelfile of a model.
    - `--parameters`: Show parameters of a model.
    - `--system`: Show system message of a model.
    - `--template`: Show template of a model.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
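The flags can be looped over to dump every facet of a model at once. A sketch (`llama3` is just an example model name, and the call is guarded in case `ollama` is not installed):

```shell
#!/bin/sh
# Walk through every `show` flag for one model.
count=0
for flag in --license --modelfile --parameters --system --template; do
  count=$((count + 1))
  if command -v ollama >/dev/null 2>&1; then
    ollama show llama3 "$flag"
  else
    echo "would run: ollama show llama3 $flag"
  fi
done
echo "queried $count facets"
```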
- run
  - Usage: `ollama run MODEL [PROMPT]`
  - Description: Runs a model.
  - Example: `ollama run llama3 "Explain quantum mechanics."`
  - Flags:
    - `--format string`: Response format (e.g., `json`).
    - `--keepalive string`: Duration to keep a model loaded (e.g., `5m`).
    - `--nowordwrap`: Don't wrap words to the next line automatically.
    - `--verbose`: Show timings for response.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
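The flags combine naturally: a JSON-formatted response plus a longer keep-alive is a common pairing for scripted use. A sketch (the prompt and `10m` value are illustrative, and the call is guarded in case `ollama` is not installed):

```shell
#!/bin/sh
# Request a JSON reply and keep the model loaded for 10 minutes afterwards.
MODEL="llama3"
PROMPT="Reply with a JSON object listing three primary colors."

if command -v ollama >/dev/null 2>&1; then
  ollama run "$MODEL" --format json --keepalive 10m "$PROMPT" ||
    echo "run failed: is the Ollama server running?"
else
  echo "would run: ollama run $MODEL --format json --keepalive 10m"
fi
```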
- pull
  - Usage: `ollama pull MODEL`
  - Description: Pulls a model from a registry.
  - Example: `ollama pull llama3`
  - Flags:
    - `--insecure`: Use an insecure registry.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
- push
  - Usage: `ollama push MODEL`
  - Description: Pushes a model to a registry.
  - Example: `ollama push custom-model`
  - Flags:
    - `--insecure`: Use an insecure registry.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
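pull and push form a round trip: fetch a base model, rename it under your own namespace, and publish it. Pushing to the public registry generally requires a signed-in account and a namespaced model name; `myuser` below is a hypothetical account, and the commands are guarded in case `ollama` is not installed:

```shell
#!/bin/sh
# Pull a base model, copy it under a registry namespace, then push.
# `myuser` is a hypothetical account name; push requires prior sign-in.
NAMESPACE="myuser"

if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3
  ollama cp llama3 "$NAMESPACE/custom-model"
  ollama push "$NAMESPACE/custom-model" ||
    echo "push failed: check that you are signed in to the registry"
else
  echo "would push $NAMESPACE/custom-model"
fi
```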
- list
  - Usage: `ollama list`
  - Aliases: `list`, `ls`
  - Description: Lists all models.
  - Example: `ollama list`
  - Flags:
    - `-h, --help`: Help for list.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
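`ollama list` prints a table with NAME, ID, SIZE, and MODIFIED columns, which is easy to post-process in scripts. A sketch using a hard-coded sample of that table (the ID and dates are made up) so it runs without Ollama installed:

```shell
#!/bin/sh
# Illustrative `ollama list` output; the ID and dates are invented.
sample='NAME            ID              SIZE    MODIFIED
llama3:latest   a1b2c3d4e5f6    4.7 GB  2 days ago'

# Extract just the model names, skipping the header row.
names=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
echo "$names"
```

Against a live installation, the same idea is `ollama list | awk 'NR > 1 { print $1 }'`.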
- ps
  - Usage: `ollama ps`
  - Description: Lists running models.
  - Example: `ollama ps`
  - Flags:
    - `-h, --help`: Help for ps.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
- cp
  - Usage: `ollama cp SOURCE DESTINATION`
  - Description: Copies a model.
  - Example: `ollama cp llama3 backup-llama3`
  - Flags:
    - `-h, --help`: Help for cp.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
- rm
  - Usage: `ollama rm MODEL [MODEL...]`
  - Description: Removes a model.
  - Example: `ollama rm llama3`
  - Flags:
    - `-h, --help`: Help for rm.
  - Environment Variables:
    - `OLLAMA_HOST`: The host and port or base URL of the Ollama server.
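cp and rm pair well: copying under a new name before removing the original makes the deletion reversible. A sketch (the model names are illustrative, and the commands are guarded in case `ollama` is not installed):

```shell
#!/bin/sh
# Back up a model under a new name before removing the original.
SRC="llama3"
BACKUP="backup-llama3"

if command -v ollama >/dev/null 2>&1; then
  ollama cp "$SRC" "$BACKUP" &&
    ollama rm "$SRC" ||
    echo "backup or removal failed; $SRC was left untouched if cp failed"
else
  echo "would back up $SRC as $BACKUP before removing it"
fi
```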
- help
  - Usage: `ollama help [command]`
  - Description: Provides help for any command in the application.
  - Example: `ollama help run`
  - Flags:
    - `-h, --help`: Help for help.