Obsidian + Smart Connection + Ollama: Make a Local LLM Your Intelligent Note-Taking Assistant


Due to restrictions imposed by schools, companies, or other environments, many people cannot use external, closed-source LLM services such as OpenRouter and OpenAI. Recently, many readers have asked how to use a locally deployed Ollama model with Obsidian's Smart Connection plugin. This tutorial walks through how to integrate an Ollama model into Smart Connection step by step. We hope it helps make your note-taking system more intelligent and efficient!

Download and Run Ollama Locally

Install Ollama

macOS

macOS download link for Ollama — https://ollama.com/download/Ollama-darwin.zip

Windows

Windows download link for Ollama — https://ollama.com/download/OllamaSetup.exe

Linux

Download and install Ollama on Linux with the official install script:

curl -fsSL https://ollama.com/install.sh | sh

Start the Model

Below is an example of starting the llama3 model with Ollama on macOS. The process is similar on other operating systems.

ollama run llama3

Once it starts, you can type a prompt at the interactive console to confirm the model responds properly.

Finally, use curl to check that Ollama's HTTP endpoint is reachable, as shown below.
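
For example, on a default install Ollama listens on http://localhost:11434. A minimal check looks like this (adjust the host and port if you changed them):

curl http://localhost:11434

This should return "Ollama is running". You can also send a one-off request to the model you just started to confirm it generates text:

curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello", "stream": false}'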

Configure the Installed Model in the Smart Connection Plugin

On the plugin's configuration page, fill in the settings as shown below. Pay special attention that the Model Name exactly matches the name of the model you installed: when you use the Smart Chat dialog, this model name is passed as a parameter to Ollama. The hostname, port, and path can stay at their defaults if you have not customized your Ollama setup.
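
As a rough sketch, a default local Ollama setup usually maps to values like the following (the exact field labels depend on your plugin version, and /api/chat is Ollama's standard chat endpoint — adjust if yours differs):

Model Name: llama3
Protocol: http
Hostname: localhost
Port: 11434
Path: /api/chat

If you are unsure of the exact model name, running ollama list in a terminal prints every installed model and its tag.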

Then open the Smart Chat pane and send a message to verify that the configuration works.

If you run into issues, you can open Obsidian's developer console to inspect the HTTP request that Obsidian sends. There you can see that the model name in the request matches the one in your configuration; if it is filled in incorrectly, the request will not work.
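
For reference, the request that reaches Ollama's chat endpoint looks roughly like the following (a sketch assuming the default /api/chat path and the llama3 model). You can reproduce it from a terminal with curl to rule out plugin configuration problems:

curl http://localhost:11434/api/chat -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'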

Summary

The entire process we just walked through is relatively simple. Of course, there are some challenges with local deployment, such as high hardware resource requirements, increased maintenance complexity, and the need for specialized technical support. However, overall, running a large model locally remains a very competitive option in scenarios that emphasize data security and privacy protection. If you have any questions, feel free to leave a comment.
