Provider Setup Guide

Recommended setup for Ollama (primary), plus the LM Studio and llama.cpp local endpoints. This guide reflects the multi-provider behavior as of v0.6.0.

1. Install the Extension

Install from the Chrome Web Store: Ollama Client

2. Choose a Provider

Ollama (recommended baseline)

Default endpoint:

http://localhost:11434

LM Studio

Default profile endpoint:

http://localhost:1234/v1

llama.cpp server

Default profile endpoint:

http://localhost:8000/v1
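
Not required for the Ollama path, but if you want the LM Studio or llama.cpp endpoints above to answer, the commands below are a rough sketch of how those servers are typically started. The lms CLI invocation and the --port 8000 value for llama-server are assumptions about your local install, not defaults this guide guarantees.

# LM Studio: start the bundled local server (or use the app's server tab)
lms server start

# llama.cpp: serve a GGUF model over the OpenAI-compatible HTTP API
llama-server -m /path/to/model.gguf --port 8000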

3. Start Ollama (primary path)

Install Ollama from ollama.com, then start it:

ollama serve

Pull at least one chat model:

ollama pull qwen2.5:3b

Optional helper script in this repo: tools/ollama-env.sh (helps with LAN and Firefox origin setup).
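
If you would rather set things up by hand than run the script, here is a minimal sketch of the LAN part (the bind address is an example; adjust it for your network, and see section 7 for the Firefox origin piece):

# Bind Ollama to all interfaces instead of loopback only
export OLLAMA_HOST=0.0.0.0:11434

# Restart the server so the new environment takes effect
ollama serve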

4. Configure Providers in Extension

  1. Open the extension settings.
  2. Go to the Providers tab.
  3. Enable the provider(s) you want to use.
  4. Set the base URL and run the connection test.
  5. Select a model from the model menu in chat.

5. Verify Endpoints

Ollama check

curl http://localhost:11434/api/tags

LM Studio check

curl http://localhost:1234/v1/models

llama.cpp check

curl http://localhost:8000/v1/models
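
If you check these often, the same probes can be wrapped in a small shell script. This is a sketch assuming the default ports listed above; the optional last line assumes jq is installed.

#!/usr/bin/env sh
# Probe each local provider endpoint and report whether it answers.
for url in \
  "http://localhost:11434/api/tags" \
  "http://localhost:1234/v1/models" \
  "http://localhost:8000/v1/models"; do
  if curl -fsS --max-time 3 "$url" > /dev/null; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done

# Optional: list installed Ollama models by name (requires jq)
curl -fsS http://localhost:11434/api/tags | jq -r '.models[].name'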

6. Important Reality Checks

7. CORS and Browser Notes

Chromium-based browsers handle the extension's request flow through declarativeNetRequest (DNR) rules. Firefox's extension APIs behave differently.

For Firefox and stricter environments, you may need to set OLLAMA_ORIGINS explicitly when using Ollama.
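
A minimal sketch of that setup, assuming Ollama's default port; the origin patterns below are examples and may need to be narrowed to your extension's actual origin:

# Allow browser-extension origins to call the Ollama API
export OLLAMA_ORIGINS="moz-extension://*,chrome-extension://*"

# Restart Ollama so the setting takes effect
ollama serve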

8. Troubleshooting

9. Related Docs