Running Smart Composer in Obsidian with Local LLMs Using Ollama: A Private, Powerful Note-Taking Upgrade

Why I Wanted This Setup

I wanted to bring AI-powered note-taking into Obsidian, but without relying on cloud models. Smart Composer seemed like a great fit, especially since Ollama lets local LLMs run efficiently on my machine. The goal: fast, private, and offline.


Step 1: Install and Run Ollama

I installed Ollama on my Mac, then ran the mistral:instruct model as a starting point:

ollama run mistral:instruct
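
One simple way to confirm the server is up (Smart Composer doesn’t require this step) is to list the installed models over the local API:

# should return a JSON list of the models installed locally
curl http://localhost:11434/api/tags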

Once I confirmed it was accessible at http://localhost:11434, I moved to Obsidian.


Step 2: Configure Smart Composer

In Obsidian’s Smart Composer plugin settings, I:

• Added Ollama as a provider

• Set Host to http://sagan.local:11434 (my machine’s hostname; reachability check below)

• Set Streaming to Off
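
Because the Host points at my machine’s hostname rather than localhost, it’s worth confirming that Ollama actually answers at that address before testing anything in the plugin:

# should return the same model list as the localhost check in Step 1
curl http://sagan.local:11434/api/tags

By default Ollama listens only on 127.0.0.1, and a .local hostname usually resolves to the machine’s LAN address rather than loopback, so this check can fail even when localhost works. If it does, the server has to be told to listen on all interfaces; a sketch, assuming the standalone server rather than the menu-bar app:

# expose the Ollama API on every interface, not just loopback
OLLAMA_HOST=0.0.0.0 ollama serve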

I selected mistral:instruct as the Chat model and tested it with a few notes.


Step 3: Improving Output

The first results were underwhelming. To improve response quality, I:

• Switched to a stronger model (llama3)

• Added a system prompt:

“You are a helpful, concise expert assistant…”

• Disabled streaming for stability (these settings are easy to test outside Obsidian; see the sketch below)
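
A rough way to exercise the new model and system prompt directly against Ollama’s chat endpoint, outside of Obsidian, looks like this. It is only a stand-in for whatever Smart Composer sends under the hood, not the plugin’s exact payload, and it assumes llama3 has already been pulled:

# assumes the model is present: ollama pull llama3
# the system prompt is abbreviated; the user message is just a placeholder
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "system", "content": "You are a helpful, concise expert assistant."},
    {"role": "user", "content": "Summarize the trade-offs of running LLMs locally."}
  ],
  "stream": false
}'

With "stream": false, the whole answer comes back as a single JSON object, which mirrors the Streaming: Off setting in the plugin.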


Step 4: Embeddings with mxbai-embed-large

To enable context-aware responses through retrieval-augmented generation (RAG), I first pulled an embedding model:

ollama pull mxbai-embed-large

Then I set it as the embedding model in Smart Composer’s RAG settings and rebuilt the vault index.
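
To confirm the model actually produces vectors (independent of Smart Composer), Ollama’s embeddings endpoint can be called directly; the prompt text here is arbitrary:

# should return a JSON object whose "embedding" field is a long array of floats
curl http://localhost:11434/api/embeddings -d '{
  "model": "mxbai-embed-large",
  "prompt": "A short test sentence, standing in for a note from the vault."
}'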


Results & Reflections

This setup now gives me ChatGPT-style assistance entirely offline, working inside my notes, with context awareness. It’s not perfect, but it’s local, fast, and evolving.


Credits

Big thanks to Hunter Zhang for the original walkthrough that helped me get Smart Composer and Ollama working together. Their guide laid the foundation for this local-first AI workflow.