Lukas Köhler
Hello,
I would like to propose some technical details on this:
MVP:
- Use an embedding model to index all objects into a RAG vector storage. Depending on the average size and complexity of notes, 1,536 dimensions or fewer may be enough to produce good results.
- While the user is writing notes in a new object, the text (and possibly properties) is computed into another embedding vector. Typing pauses can serve as the trigger for computing a new embedding to compare against the object vector storage, or the embedding can be recomputed on every typed character (live, but expensive).
- Then create a sidepanel that shows suggested objects to link, as a list or card list ordered by similarity score. A threshold on the similarity score can be used to check whether genuinely relevant objects were caught. If relevant objects are found, a visual indicator notifies the user that they might want to link something and can open the sidepanel (see the sketch after this list).
- NOT MVP: Clicking a suggested object tries to link it automatically at the right places in the object text you are currently editing. Doing this reliably requires either an algorithm that finds the actual places to link or a small transformer model that can give you the right position.
- MVP workaround for the above: adding a suggested object simply appends the link to the bottom of the note. This is not as clean and beautiful, but it is reliable, and at least the object is linked.
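To make the list above concrete, here is a minimal sketch of the MVP loop, assuming sentence-transformers with the e5 model mentioned further below and a small in-memory index. The function names, the 0.85 threshold, and the debounce delay are illustrative assumptions, not a definitive implementation; a real deployment would use a proper vector store:

```python
"""Minimal sketch of the MVP loop: index objects, debounce typing,
suggest links above a similarity threshold. All names are illustrative."""
import threading
import numpy as np
from sentence_transformers import SentenceTransformer

# e5 models expect "passage: " / "query: " prefixes on their inputs.
model = SentenceTransformer("intfloat/multilingual-e5-large")

def index_objects(objects: dict[str, str]) -> tuple[list[str], np.ndarray]:
    """Embed every object once; returns ids plus L2-normalized vectors."""
    ids = list(objects)
    vecs = model.encode(["passage: " + objects[i] for i in ids],
                        normalize_embeddings=True)
    return ids, vecs

def suggest_links(draft, ids, vecs, threshold=0.85, k=5):
    """Rank indexed objects by cosine similarity to the current draft."""
    q = model.encode(["query: " + draft], normalize_embeddings=True)[0]
    scores = vecs @ q                       # cosine similarity (normalized)
    top = np.argsort(-scores)[:k]
    # Only suggestions above the threshold drive the sidepanel indicator.
    return [(ids[i], float(scores[i])) for i in top if scores[i] >= threshold]

class TypingDebouncer:
    """Re-embed the draft only after typing paused for `delay` seconds."""
    def __init__(self, delay, callback):
        self.delay, self.callback, self._timer = delay, callback, None

    def on_keystroke(self, draft):
        if self._timer:
            self._timer.cancel()            # typing continued: reset timer
        self._timer = threading.Timer(self.delay, self.callback, args=(draft,))
        self._timer.start()
```

The threshold would need tuning in practice, since e5-style models tend to return similarity scores clustered in a fairly narrow, high band.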
Usage of the embedding models could be done in two ways:
- If enough computing power is available on the client (CUDA?), an open-source embedding model could optionally run on-device.
- The usual way would be running the embedding model in the cloud, in the background, counting it against the user's AI budget (Pro). A sketch of the switch between the two follows below.
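As a rough illustration of that choice, the client could pick the backend at startup. This is only a sketch, assuming a PyTorch-based model; the function name is hypothetical:

```python
# Sketch of the deployment switch: prefer on-device embedding when a GPU
# is available and the user has opted in; otherwise fall back to the cloud.
import torch

def pick_embedding_backend(user_opted_in_local: bool) -> str:
    if user_opted_in_local and torch.cuda.is_available():
        return "local"   # run the open-source model on the user's GPU
    return "cloud"       # default: backend service, billed to the AI budget
```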
I think this feature would be insanely practical and would fit very well into the "just works" vision of Capacities.
Using models like https://huggingface.co/intfloat/multilingual-e5-large, which perform well on MTEB and can run on-premise, the cost may actually be very manageable within the AI budget that Pro users have anyway. Training or fine-tuning is not needed, in my opinion.
However, if the embedding model runs on-device, the application will eat up a lot of RAM or VRAM, respectively. That is why I would suggest offering this only as an option and handling everything in the backend by default.
Additionally: I would combine this with https://capacities.io/feedback/p/ai-interaction-with-all-notes into one combined RAG setup / vector storage.
Beth
Merged in a post:
AI auto linking in blocks
Latte macchiato
Use the AI assistant to automatically rewrite text to link to existing objects in the space.
My current system has me writing down freely before going back and turning things into links, which is manual effort for something that could be automated.
Beth
Merged in a post:
Smart AI-Paste for Object Integration
Sula Douglas
I’d like to suggest a feature inspired by Tana’s “Paste-GPT” that could improve how users import and structure content in Capacities. The idea is to introduce an AI-powered “Smart Paste” function that automatically converts pasted text into structured, linkable objects.
This could be powered either in-house or via a custom GPT integration.
What it does:
When users paste unstructured content (e.g. from articles, meeting notes, research summaries), Capacities would intelligently parse the text and propose object creation or linking directly within the note.
How it works:
- Paste detection prompts an optional "AI Paste" overlay.
- The AI identifies terms that could become pages, tags, tasks, people, or dates (sketched after this list).
- Users see in-context suggestions for creating or linking objects.
- Users can customise, accept, or ignore suggestions.
- Option to apply a custom template (e.g. Meeting Note, Research Note).
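A hypothetical sketch of the parsing step behind such an overlay, assuming an OpenAI-style chat completions endpoint; the prompt, schema, and model choice are illustrative, not Capacities' actual stack:

```python
# Hypothetical "AI Paste" parsing step: ask a chat model to propose
# object/link suggestions for pasted text, returned as structured JSON.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "From the pasted text, list terms that could become objects. Reply as "
    'JSON: {"suggestions": [{"text": "...", '
    '"kind": "page|tag|task|person|date"}]}'
)

def parse_paste(pasted_text: str) -> list[dict]:
    """Ask the model for object/link suggestions found in pasted text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # forces valid JSON back
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": pasted_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)["suggestions"]

# parse_paste("Spoke with Alex Chen re: Q3 roadmap.") might return e.g.
# [{"text": "Alex Chen", "kind": "person"},
#  {"text": "Q3 roadmap", "kind": "task"}]
```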
Example use cases:
– Pasted brainstorm: “Explore ideas around onboarding and behavioural psychology.”
→ Suggests tags and a new concept page.
– Meeting note: “Spoke with Alex Chen re: Q3 roadmap.”
→ Suggests linking ‘Alex Chen’ to a person object and adding a task.
– Research note: “Memory is performative.”
→ Suggests creating objects for ‘performative memory’ and ‘embodied knowledge.’
Why it matters:
This feature would streamline the move from unstructured input to structured knowledge, helping users capture insights with less friction and maintain a richer, more interconnected graph.
Luca Joos
Thanks for the idea, we'll discuss this internally.
Jonathan McQ
Having this feature would be game-changing, I believe more so than people would realize at first. I use Capacities for all my notes, school work, etc. I am a believer, but I do use another application for this. Just make sure that it not only finds links as new notes are entered, but also re-scans old notes and makes links there.
Koen Vanhooff
This would indeed be a nice feature to have! It would upgrade my notes immensely. It would be nice to be able to just confirm the suggestions for linking to existing notes.
Steffen Bleher
planned
Guilherme Costa
Having a PKM proactively help surface potential connections through semantic similarity is currently such a rare feature in this space, although so powerful!
It has the potential to elevate your thinking, to generate a-ha moments, and to reduce time spent searching...
It would be awesome to see a piece of software nail this. Mem is the best one so far, but it lacks so much on other fronts that it's difficult to cope with. I currently do not use Capacities, but a differentiator like this would make a difference for me.
Samir Lavingia
"suggesting in-text links if title matches your text" would would fix my workflow! I often just send messages from WhatsApp and then have to spend time doing the linking later, and if there was just a "check the box" for linking names, places, etc. it would be great. I would pay for that feature!