Improve the performance of the Logseq RAG chatbot project.
Transform your Logseq experience with conversational search powered by locally running Ollama. Retrieve the most relevant information with ease while keeping your data private and secure!
A guide to running your own offline AI writing assistant using Open Llama and Raycast on Mac.
This post explores running Function Calling locally with NexusRaven, a model trained specifically for this task. Using a local LLM is preferable when you want to keep your data on-premise rather than sending it to the cloud.