Chat With Your Logseq Journal with GPT4All

Calvin, 2 min read

In previous Logseq RAG experiments, I used a locally running instance of Ollama to “chat” with my Logseq journal, trying to generate insights and answers from my own notes as the knowledge base. The results were interesting, but the setup process was a bit cumbersome.

Recently, I discovered GPT4All, an all-in-one desktop app that can run local GPT models, as well as connect to cloud-based models if an API key is provided. The app is easy to use and has a nice chat interface with chat history. Most importantly, it can take a folder of files as a knowledge base and generate responses based on the content of the files. This is perfect for Logseq journals! First, I assigned the Logseq MD document folder to the app:

Assigning Logseq folder in GPT4All app

Once the folder is assigned, GPT4All automatically indexes the files for quick querying. I can then ask questions or start a conversation with the GPT model of my choice.
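To get an intuition for what "index the files, then answer from them" means, here is a rough, stdlib-only sketch of the idea: split each Markdown file into chunks, build a simple word index, and retrieve the chunks most relevant to a question. This is only an illustration; GPT4All's actual LocalDocs feature uses embeddings, and the file names and helper functions below are hypothetical.

```python
import os
import re
import tempfile
from collections import Counter

def index_folder(folder):
    """Split each .md file into paragraph chunks and record word counts per chunk."""
    index = []
    for name in os.listdir(folder):
        if not name.endswith(".md"):
            continue
        with open(os.path.join(folder, name), encoding="utf-8") as f:
            text = f.read()
        for chunk in filter(None, (c.strip() for c in text.split("\n\n"))):
            words = Counter(re.findall(r"\w+", chunk.lower()))
            index.append((name, chunk, words))
    return index

def retrieve(index, question, k=2):
    """Return the k chunks sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(index, key=lambda entry: -sum(entry[2][w] for w in q_words))
    return [(name, chunk) for name, chunk, _ in scored[:k]]

# Demo with a throwaway journal folder (hypothetical file name and content)
with tempfile.TemporaryDirectory() as folder:
    with open(os.path.join(folder, "2024_05_01.md"), "w", encoding="utf-8") as f:
        f.write("- Tried GPT4All with my Logseq notes.\n\n- Indexing was fast.")
    idx = index_folder(folder)
    hits = retrieve(idx, "How fast was indexing?")
    print(hits[0][1])  # the chunk about indexing scores highest
```

In a real RAG pipeline, the retrieved chunks would then be pasted into the model's prompt as context, which is essentially what the app does behind the scenes.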

Using GPT4All with Logseq

I am using an M1 MacBook Pro, and the response speed is quite fast. Considering that everything runs locally, I am impressed. It got me thinking that this is what Siri should be like, and I am fairly sure Apple has the technology to do this and will eventually implement it.