Posts Tagged with “llm”

Logseq RAG Further Exploration with Qdrant and Chat History

Improving the performance of the Logseq RAG chatbot project with Qdrant and chat history. Read More →

Unleash Your Logseq Knowledge: Conversational Search with Local LLMs

Transform your Logseq experience with conversational search powered by a locally running Ollama instance. Retrieve the most relevant information with ease while keeping your data private and secure! Read More →

Free Offline AI Writing Assistance For Mac With Local LLM

A guide to running your own offline AI writing assistant using Open Llama and Raycast on a Mac. Read More →

Local LLM for Function Calling with Open Llama, NexusRaven and JS/TS

This post explores running function calling locally with NexusRaven, a model trained specifically for this task. Using a local LLM is preferable when you want to keep your data on-premises rather than sending it to the cloud. Read More →