AI memory that actually understands what it remembers.
Nastalgic runs a 4-stage extraction pipeline — Resolver → Facts → Inference → Graph — turning raw conversations into a knowledge graph your agents can reason over. Not compressed chat logs. Structured intelligence.
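In code, the shape of that pipeline looks roughly like this. A sketch only: the stage functions and data fields below are illustrative stand-ins, not the actual Nastalgic internals.

```python
# Illustrative sketch of the four-stage shape, not the actual Nastalgic
# internals. Each stage here is a stub annotated with what it would contribute.
from dataclasses import dataclass, field

@dataclass
class Extraction:
    message: str
    entities: list = field(default_factory=list)    # filled by Resolver
    facts: list = field(default_factory=list)       # filled by Facts
    inferences: list = field(default_factory=list)  # filled by Inference
    graph_ops: list = field(default_factory=list)   # filled by Graph

def resolver(x: Extraction) -> Extraction:
    # stub: would resolve pronouns and aliases to canonical entities
    return x

def facts(x: Extraction) -> Extraction:
    # stub: would extract (subject, predicate, object) claims with evidence
    return x

def inference(x: Extraction) -> Extraction:
    # stub: would derive causal and temporal links between the claims
    return x

def graph(x: Extraction) -> Extraction:
    # stub: would merge nodes and edges into the vault's knowledge graph
    return x

def run_pipeline(message: str) -> Extraction:
    x = Extraction(message=message)
    for stage in (resolver, facts, inference, graph):
        x = stage(x)
    return x
```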
AI agents have amnesia. You already know this — you've shipped around it.
Existing tools either compress chat logs into embeddings, or paywall the actually useful parts behind enterprise plans. Nastalgic does neither.
Extraction, not compression.
Four stages turn unstructured conversation into a queryable knowledge graph. Same pipeline runs on every message, in every vault.
Structured Extraction
Four stages. We don't compress chat logs. We extract entities, facts, relationships, and causal chains — the units your agent can actually reason with.
Knowledge Graph
13 entity types. Directed edges with confidence scores. Causal chain detection. Not a vector store with a marketing budget.
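A sketch of that data model using networkx. Entity IDs, relation names, and field names here are illustrative assumptions, not the real schema.

```python
# Illustrative data model only: typed nodes, plus directed edges that carry a
# confidence score and a pointer back to evidence. Names are assumptions.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("person:sarah_chen", type="person", name="Sarah Chen")
g.add_node("org:acme", type="organization", name="Acme Corp")
g.add_edge(
    "person:sarah_chen", "org:acme",
    relation="works_at",
    confidence=0.92,
    evidence="msg_0142",  # the message this claim was extracted from
)

# Query: everything believed about Sarah above a confidence threshold.
for _, target, attrs in g.out_edges("person:sarah_chen", data=True):
    if attrs["confidence"] >= 0.8:
        print(attrs["relation"], "->", target, attrs["confidence"])
```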
Vault Isolation
Per-tenant from day one. Free = logically isolated. Paid = dedicated DB and vector store. Enterprise = dedicated infrastructure. Architecture, not upsell.
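For the logically isolated tier, the standard pattern is scoping every read to a single vault. A sketch of what that can look like with a Qdrant payload filter; the collection and payload field names are assumptions, not the actual setup.

```python
# Sketch of logical isolation on a shared tier: every read is scoped to one
# vault via a payload filter. Collection and field names are assumptions.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

hits = client.search(
    collection_name="memories",
    query_vector=[0.1] * 768,  # placeholder; embed the real query here
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="vault_id",  # every point is tagged with its vault
                match=models.MatchValue(value="vault_7f3a"),
            )
        ]
    ),
    limit=10,
)
```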
Two-Stage RAG
Qdrant vector similarity, then CrossEncoder reranking. Good retrieval isn't just embeddings — it's embeddings plus a model that reads.
Built on research, not buzzwords.
Nastalgic implements well-published techniques from NLP, information retrieval, and knowledge representation. No magic. Just the field's vocabulary, applied carefully.
Neural Coreference Resolution
Pronouns, ellipses, and "that thing we talked about" get resolved back to the canonical entity. When memory recalls a conversation about Sarah from three sessions ago, it knows which Sarah.
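What the resolver's output might look like, illustratively. Field names and entity IDs are assumptions, not the real schema.

```python
# Illustrative output of the resolver stage. Field names and IDs are
# assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class Mention:
    text: str        # surface form as it appeared in the message
    entity_id: str   # the canonical entity it resolves to

message = ("She said the migration plan is ready, "
           "and that thing we talked about is handled.")

resolved = [
    Mention("She", "person:sarah_chen"),
    Mention("the migration plan", "artifact:db_migration_plan"),
    Mention("that thing we talked about", "event:q3_incident_review"),
]

# Later stages attach facts to person:sarah_chen, never to "she".
for m in resolved:
    print(f"{m.text!r} -> {m.entity_id}")
```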
// output: every reference linked to a single entity ID
Named Entity Recognition + Linking
Thirteen entity types — people, organizations, locations, events, artifacts, and more — extracted from raw conversation and linked across the entire vault. Same entity, same node, regardless of how it was referenced.
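A sketch of the linking step: three differently phrased mentions from three sessions, one node. The type taxonomy and ID scheme shown are assumptions.

```python
# Illustrative sketch of entity linking: three surface forms, one node.
# The type taxonomy and ID scheme are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class EntityNode:
    entity_id: str    # stable, vault-scoped identifier
    entity_type: str  # one typed category (person, organization, location, ...)
    canonical: str    # canonical display name

node = EntityNode("person:sarah_chen", "person", "Sarah Chen")

# Mentions extracted from three sessions, phrased three different ways.
mentions = [
    ("msg_0012", "Sarah"),
    ("msg_0587", "Sarah Chen from platform"),
    ("msg_1203", "she"),  # already resolved by the coreference stage
]

# Linking puts them all on the same node: one entity with three pieces of
# evidence, instead of three duplicate entities.
for msg_id, surface in mentions:
    print(f"{msg_id}: {surface!r} -> {node.entity_id}")
```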
// output: 13 typed entities, vault-scoped
Open Information Extraction
Free-form messages get parsed into structured (subject, predicate, object) claims with attached evidence. The pipeline isn't compressing the conversation — it's extracting the facts inside it.
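The shape of one such claim, illustratively. Field names and IDs are assumptions, not the real API.

```python
# Illustrative shape of a structured claim with its evidence attached.
# Field names are assumptions, not the real API.
from dataclasses import dataclass

@dataclass
class Claim:
    subject: str
    predicate: str
    obj: str
    source_message: str  # provenance: the message it was extracted from
    confidence: float

# "We moved the checkout service to Postgres last sprint."
claim = Claim(
    subject="artifact:checkout_service",
    predicate="migrated_to",
    obj="artifact:postgres",
    source_message="msg_0871",
    confidence=0.94,
)
```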
// output: structured claims with provenance
Causal Chain Detection
Beyond what happened, the inference stage tracks why. Cause-and-effect relationships, temporal ordering, and dependency chains get surfaced as first-class graph structure.
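A sketch of causal edges as graph structure you can walk, here with networkx. Node IDs and relation names are illustrative assumptions.

```python
# Illustrative causal edges as first-class graph structure, traversed with
# networkx. Node IDs and relation names are assumptions.
import networkx as nx

g = nx.DiGraph()
g.add_edge("event:q3_outage", "decision:migrate_to_postgres", relation="caused")
g.add_edge("decision:migrate_to_postgres", "event:checkout_migration", relation="led_to")
g.add_edge("event:checkout_migration", "fact:latency_dropped", relation="resulted_in")

# "Why did latency drop?" Walk the chain from root cause to outcome.
chain = nx.shortest_path(g, "event:q3_outage", "fact:latency_dropped")
print(" -> ".join(chain))
```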
// output: directed causal edges, traversable
Provenance Tracking
Every fact, every node, every edge links back to its source message via typed evidence edges. No floating claims, no "the model said so." Every assertion has a receipt.
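One such receipt, sketched. Field names are assumptions, not the real schema.

```python
# Illustrative evidence edge: a typed link from an assertion back to its
# source message. Field names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceEdge:
    claim_id: str               # the fact, node, or edge being supported
    message_id: str             # the raw message it came from
    edge_type: str              # e.g. "extracted_from", "corroborated_by"
    char_span: tuple[int, int]  # where in the message the claim appears

receipt = EvidenceEdge(
    claim_id="fact:checkout_on_postgres",
    message_id="msg_0871",
    edge_type="extracted_from",
    char_span=(3, 41),
)
print(f"{receipt.claim_id} <-{receipt.edge_type}- {receipt.message_id}")
```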
// output: typed evidence edges, fully auditable
Two-Stage Hybrid Retrieval
Dense vector similarity for recall (Qdrant), then cross-encoder reranking for precision. The top candidates from stage one get reordered by a model that actually reads — not one that just matches embeddings.
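The two-stage pattern, sketched with off-the-shelf parts: qdrant-client for the recall pass, a sentence-transformers CrossEncoder for the rerank. Collection name, model choice, and payload fields are assumptions, not the production stack.

```python
# Sketch of two-stage retrieval: Qdrant for dense recall, a cross-encoder for
# precision. Collection name, model, and payload fields are assumptions.
from qdrant_client import QdrantClient
from sentence_transformers import CrossEncoder

query = "what did we decide about the checkout database?"
query_vector = [0.1] * 768  # placeholder; embed the query with your encoder

# Stage 1: dense vector similarity for recall. Cast a wide net.
client = QdrantClient(url="http://localhost:6333")
candidates = client.search(
    collection_name="memories",
    query_vector=query_vector,
    limit=50,
)

# Stage 2: a cross-encoder reads each (query, passage) pair and rescores it.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
passages = [hit.payload["text"] for hit in candidates]
scores = reranker.predict([(query, p) for p in passages])

# Precision comes from the rerank: keep the handful that actually answer.
reranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
top_context = [p for _, p in reranked[:5]]
```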
// output: ranked, attributed context
// none of these are new. that's the point.
Production launches July 2026.
Python and TypeScript SDKs. REST APIs. Docker self-hosting. Built for builders, shipped when it's ready.