
Mengram

LLM Memory

AI memory API with 3 types: facts, events, and workflows

💡 An AI memory API featuring three distinct types: semantic (facts), episodic (events), and procedural (learned workflows). With a single API call, it automatically extracts all three. Its standout feature? Once your agent completes a task, Mengram saves the steps so it already knows the optimal path next time, complete with success and failure tracking. Compatible with Claude (MCP), LangChain, CrewAI, and OpenClaw. Completely free and open-source under the Apache 2.0 license.

"Mengram is like giving your AI a personal journal and a 'lessons learned' log, turning a forgetful intern into a seasoned pro who never makes the same mistake twice."

30-Second Verdict
What is it: An open-source API that gives AI Agents 'memory,' supporting facts, events, and workflow memory.
Worth attention: Worth watching. Its 'three-tier memory' architecture is leading-edge, making it a strong free alternative to Mem0, though it's still in the early stages.
Hype: 4/10 | Utility: 7/10 | Votes: 103

Full Analysis Report

Mengram: The Open-Source API Giving AI 'Human-Like Memory,' a Free Mem0 Alternative

2026-02-20 | Product Hunt | Official Site | GitHub


30-Second Quick Judgment

What is it?: It adds "memory" to AI Agents—not just remembering facts, but also events and workflows, so the next time it faces the same task, it takes the optimal path immediately.

Is it worth watching?: Yes, but manage your expectations. This is a project by a 22-year-old indie developer from Kazakhstan. The concept is excellent (the three-tier memory architecture is backed by academic theory), but the product is very early. If you're looking for a memory solution for your Agent, it's worth putting on your evaluation list; if you need production-grade stability, Mem0 is more mature.


Three Questions That Matter

Does it matter to me?

Who is the target user?:

  • Developers building AI Agents/Chatbots
  • Teams that need AI to remember context and learn workflows
  • People using frameworks like LangChain, CrewAI, or Claude Desktop

Is that me?: If you are doing any of the following, you are the target user:

  • Building AI apps that need to "remember the user" (customer service, assistants, copilots)
  • Frustrated that your Agent starts from scratch in every conversation
  • Wanting your Agent to automatically learn the best path instead of restarting every time

When would I use it?:

  • Scenario 1: You built an AI customer service bot, and when the user returns the next day, it has to ask for their name again --> Use Mengram's Semantic Memory to remember the user profile.
  • Scenario 2: Your Agent has to figure out the steps every time it deploys --> Use Procedural Memory to remember successful workflows.
  • Scenario 3: You just need a simple chat history --> You don't need this; just use the LLM's native context window.

Is it useful to me?

| Dimension | Benefit | Cost |
| --- | --- | --- |
| Time | The Agent doesn't have to re-learn, saving time on workflow debugging | Learning the 3 memory concepts + API integration, ~1-2 hours |
| Money | Completely free, saving $19-249/month compared to Mem0 | Self-hosting requires server costs (PostgreSQL + pgvector) |
| Effort | One API call replaces a complex RAG pipeline | Early product; documentation and community are still sparse |

ROI Judgment: If you don't have a memory solution yet, spending 2 hours to try it is totally worth it—it's free, after all. If you're already using Mem0 and it's stable, there's no immediate need to switch.

Is it exciting?

What's the 'Wow' factor?:

  • Procedural Memory is the real highlight: After an Agent completes a task, Mengram automatically saves the steps. Next time it hits a similar task, it goes straight to the optimal path. Did it fail? It automatically evolves to a new version. 3+ successes? It creates a new workflow. This is literally "AI learning."
  • Cognitive Profile: One API call returns a system prompt fused with all three memory types. Just drop it into any LLM's system prompt for zero-cost personalization.
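The self-improvement loop described above (fail, then evolve to a new version; track successes) can be sketched as a toy model. All class and method names here are illustrative assumptions for exposition, not Mengram's actual API:

```python
from dataclasses import dataclass


@dataclass
class Workflow:
    """Toy model of a procedural memory entry (illustrative, not Mengram's schema)."""
    name: str
    steps: list[str]
    version: int = 1
    successes: int = 0
    failures: int = 0


class ProceduralStore:
    """Minimal store that remembers workflows and evolves them on failure."""

    def __init__(self) -> None:
        self.workflows: dict[str, Workflow] = {}

    def record_success(self, name: str, steps: list[str]) -> Workflow:
        wf = self.workflows.get(name)
        if wf is None:
            # First sighting of this task: remember the path that worked.
            wf = self.workflows[name] = Workflow(name, steps)
        wf.successes += 1
        return wf

    def record_failure(self, name: str, revised_steps: list[str]) -> Workflow:
        # On failure, evolve the workflow to a new version with revised steps.
        wf = self.workflows[name]
        wf.steps = revised_steps
        wf.version += 1
        wf.failures += 1
        return wf


mem = ProceduralStore()
mem.record_success("deploy", ["test", "build", "push", "verify"])
mem.record_failure("deploy", ["test", "build", "migrate db", "push", "verify"])
wf = mem.record_success("deploy", [])
print(wf.version, wf.successes, wf.failures)  # prints "2 2 1"
```

The real product presumably layers LLM-driven step extraction on top; the point of the sketch is only the bookkeeping: versions bump on failure, and success/failure counters persist across runs.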

Real User Feedback:

"Mengram's positioning is more important than it looks. In 2024-25, AI was about 'generating good output.' In 2026, it's about 'consistently generating the right output for this company, this user, and this scenario'—and that requires memory." — UIComet Industry Analysis

To be honest, the product is so new that there are almost no real user reviews on Twitter or Reddit. This is both a risk (unverified) and an opportunity (early adopters have more influence).


For Developers

Tech Stack

  • Backend: Python + PostgreSQL + pgvector
  • Deployment: Railway
  • Search Engine: Vector + BM25 + graph expansion + LLM re-ranking (4-layer search)
  • SDK: Python (pip install mengram-ai) + JS/TS (npm install mengram-ai)
  • Protocol: MCP Server (connects directly to Claude Desktop, Cursor, Windsurf)

Core Implementation

Mengram's core logic stems from three memory classifications in cognitive science:

  1. Semantic Memory—Stores facts. "User is named Xiao Ming, uses Python, lives in Hangzhou." Similar to a traditional RAG knowledge base.
  2. Episodic Memory—Stores events. "Last Wednesday, Xiao Ming discussed the new feature with the PM and decided on Option B." Includes timelines, participants, and outcomes.
  3. Procedural Memory—Stores workflows. "Steps to deploy to production: 1. Run tests 2. Build 3. Push 4. Verify." Includes success/failure counts and version evolution.
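As a mental model, the three classifications above could be represented like this. This is a schematic sketch only; the field names are assumptions, not Mengram's actual data model:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class SemanticMemory:
    """Facts about an entity (hypothetical schema)."""
    subject: str
    facts: dict[str, str]


@dataclass
class EpisodicMemory:
    """A dated event with participants and an outcome (hypothetical schema)."""
    when: date
    participants: list[str]
    summary: str
    outcome: str


@dataclass
class ProceduralMemory:
    """An ordered workflow with success tracking (hypothetical schema)."""
    task: str
    steps: list[str]
    successes: int = 0
    failures: int = 0


user = SemanticMemory("Xiao Ming", {"language": "Python", "city": "Hangzhou"})
meeting = EpisodicMemory(date(2026, 2, 11), ["Xiao Ming", "PM"],
                         "Discussed the new feature", "Chose Option B")
deploy = ProceduralMemory("deploy to production",
                          ["run tests", "build", "push", "verify"])
```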

When searching, Unified Search queries all three types simultaneously. The Cognitive Profile then packages these into a system prompt to be injected into the LLM.
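A crude local approximation of that flow: search all three stores at once, then fold the hits into one system prompt. This is purely illustrative keyword matching; Mengram's actual pipeline adds vectors, BM25, graph expansion, and LLM re-ranking:

```python
def unified_search(query: str, stores: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return entries from every memory type that mention any query term."""
    terms = query.lower().split()
    return {
        kind: [e for e in entries if any(t in e.lower() for t in terms)]
        for kind, entries in stores.items()
    }


def cognitive_profile(hits: dict[str, list[str]]) -> str:
    """Package the hits into a single system prompt for injection into an LLM."""
    lines = ["You have the following memories about this user:"]
    for kind, entries in hits.items():
        for e in entries:
            lines.append(f"- [{kind}] {e}")
    return "\n".join(lines)


stores = {
    "semantic": ["User is named Xiao Ming and codes in Python"],
    "episodic": ["Last Wednesday Xiao Ming chose Option B for the new feature"],
    "procedural": ["Deploy: run tests, build, push, verify"],
}
hits = unified_search("Xiao Ming deploy", stores)
prompt = cognitive_profile(hits)
```

The resulting string is what you would drop into a chat model's system prompt, which is the "zero integration cost" idea behind the Cognitive Profile.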

There are also three autonomous Agents:

  • Curator: Automatically cleans up contradictory info (e.g., if a user changes their email, the old one is invalidated).
  • Connector: Discovers hidden relationship patterns.
  • Digest: Generates weekly memory summaries.
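The Curator's contradiction handling can be illustrated with a keyed fact store where a new value supersedes the old one instead of coexisting with it. A toy sketch under assumed semantics, not the actual implementation:

```python
from datetime import datetime, timezone


class Curator:
    """Toy fact store: one live value per (subject, attribute); old values archived."""

    def __init__(self) -> None:
        self.live: dict[tuple[str, str], str] = {}
        self.archive: list[tuple[str, str, str, datetime]] = []

    def assert_fact(self, subject: str, attribute: str, value: str) -> None:
        key = (subject, attribute)
        old = self.live.get(key)
        if old is not None and old != value:
            # Contradiction: invalidate the previous value rather than keep both.
            self.archive.append((subject, attribute, old,
                                 datetime.now(timezone.utc)))
        self.live[key] = value


c = Curator()
c.assert_fact("xiao_ming", "email", "old@example.com")
c.assert_fact("xiao_ming", "email", "new@example.com")  # old email invalidated
```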

Open Source Status

  • Fully Open Source: Apache 2.0, GitHub repo alibaizhanov/mengram
  • Self-Hostable: Yes
  • Similar Projects: Mem0 (Apache 2.0, 41K Stars), Letta/MemGPT, OpenMemory, MemOS
  • Build Difficulty: Medium. The core is PostgreSQL + pgvector for storage, plus an LLM for extraction and ranking. The hard part is the classification logic and self-improving workflows. Expect 1-2 person-months for an MVP.

Business Model

  • Monetization: Currently free; the founder is designing a freemium model.
  • Pricing: $0 (Competitor Mem0 charges $19-249/month).
  • User Base: Not disclosed; 103 votes on PH.

Giant Risk

High. This space is being watched closely:

  • OpenAI already has native Memory features.
  • Anthropic Claude is exploring memory layers within the MCP ecosystem.
  • AWS just chose Mem0 as the exclusive memory provider for its Agent SDK.
  • However, Mengram's "three-tier memory" and workflow learning are unique differentiators; giants usually build more generic solutions.

For Product Managers

Pain Point Analysis

  • Problem Solved: AI Agent "amnesia"—starting every conversation from zero and failing to accumulate experience.
  • Severity: High-frequency essential need. As Agents move from toys to tools in 2026, "memory" is the key bottleneck for production.

User Persona

  • AI Developers: Building Agents on LangChain/CrewAI needing persistent memory.
  • AI Product Teams: Wanting chatbots/copilots to remember user preferences.
  • Power Users: Heavy AI users who want Claude/ChatGPT to truly "know them."

Feature Breakdown

| Feature | Type | Description |
| --- | --- | --- |
| Three Memory Types | Core | Semantic + Episodic + Procedural; the biggest selling point |
| Cognitive Profile | Core | One API call to generate a personalized system prompt |
| Procedural Memory Self-Improvement | Core | Auto-optimizes workflows; evolves on failure, creates on success |
| MCP Server | Core | One-click access to Claude/Cursor/Windsurf |
| Team Shared Memory | Nice-to-have | Shared AI context for team members |
| Webhooks | Nice-to-have | Connects to Slack/Zapier/Notion |
| Autonomous Agents | Nice-to-have | Auto-cleanup, pattern discovery, and summarization |

Competitor Comparison

| Dimension | Mengram | Mem0 | Letta/MemGPT | Supermemory |
| --- | --- | --- | --- | --- |
| Core Difference | 3 Memory Types + Workflows | Fact Memory + Knowledge Graph | OS-style Layered Memory | Universal Memory API |
| Price | Free | $19-249/mo | Open Source + SaaS | Enterprise |
| Maturity | Early | Mature ($24.5M funding) | Growing | Growing |
| LoCoMo Score | Not Tested | 66.9-68.5% | 74.0% | Not Tested |
| GitHub Stars | New Project | 41K+ | Active | Active |
| Best For | Those wanting 3 types + Free | Production-grade stability | Fully transparent Agents | Enterprise users |

Key Takeaways

  1. The "Three Memories" Narrative: Dividing memory into facts/events/workflows is very intuitive. If you're building an AI product, use a similar cognitive science framework to organize features.
  2. Cognitive Profile: An API that returns a ready-to-use system prompt—this "zero integration cost" mindset is worth emulating.
  3. Self-Improving Workflows: Agents shouldn't just remember; they should "learn." This narrative is a level above simple "memory storage."

For Tech Bloggers

Founder Story

  • Founder: Ali Baizhanov
  • Age: 22
  • Location: Almaty, Kazakhstan
  • Background: Bachelor's from Al-Farabi Kazakh National University; Data Engineering at Bank CenterCredit, Software Dev at Prime Source, AWS Certified Expert.
  • Why build this?: As an indie dev, he saw the success of Mem0 ($24.5M funding) and Supermemory but felt they only handled "fact memory," missing "event" and "workflow" memory.

This is a classic "Central Asian indie dev takes on Silicon Valley VC-backed project" story. At 22, alone, he used Python + PostgreSQL to build a memory architecture that is conceptually more complete than Mem0.

Discussion Angles

  • Angle 1 -- Open Source vs. VC Route: Mem0 took $24.5M (YC + Peak XV), while Mengram chose to be free and open-source. Can open-source win the AI memory war?
  • Angle 2 -- Feature Richness vs. Production Verification: Mengram's feature list is longer than Mem0's, but it lacks independent benchmarks. Is more features always better?
  • Angle 3 -- Almaty vs. Silicon Valley: Supermemory's founder got Google exec funding at 19; Mengram's founder is 22 in Almaty. Can tech prowess bridge the geographic gap?

Hype Data

  • PH Ranking: 103 votes, 2 comments (just launched)
  • Twitter Discussion: Almost none; founder @BaizhanovB has low activity.
  • Industry View: UIComet says "this positioning is more important than it looks."

Content Suggestions

  • Headline: "The AI Memory War: A 22-Year-Old Indie Dev Challenges a $24.5M Giant with Open Source"
  • Trend Opportunity: AI Agent memory is a hot topic for 2026; Mengram's framework is perfect for educational content.

For Early Adopters

Pricing Analysis

| Tier | Price | Includes | Enough? |
| --- | --- | --- | --- |
| Free | $0 | All features, Apache 2.0 open source | Totally enough |
| Self-Hosted | Server cost | Full data control | Needs PostgreSQL + pgvector |

Comparison: Mem0 free tier is 10K memories, Pro starts at $19/mo, Enterprise at $249/mo.

Getting Started

  • Setup Time: 5-30 minutes
  • Learning Curve: Low (if you know Python/JS SDKs)
  • Steps:
    1. pip install mengram-ai or npm install mengram-ai
    2. Register at mengram.io for an API Key (Free, no credit card needed)
    3. Call the API: Save memory -> Search memory -> Generate Cognitive Profile
    4. For Claude Desktop: Configure the MCP Server to use it directly in Claude
    5. For LangChain: Use the MengramMemory class to replace default memory

Pitfalls and Critiques

  1. Product is too new, no community: You won't find discussions on Reddit/Twitter; you're stuck with GitHub Issues or contacting the founder.
  2. No independent benchmarks: Mem0 has a LoCoMo score of 66.9%, Letta has 74%, Mengram has zero. Features look good on paper, but data talks.
  3. Solo maintainer risk: If a 22-year-old dev stops updating, what happens to your Agent's memory? Fortunately, it's open-source, so you can fork it.
  4. LLM Summarization Issues: Mem0 was found to lose details during summarization. Mengram likely has similar issues due to its underlying logic.

Security and Privacy

  • Data Storage: Default is sent to mengram.io cloud; can be self-hosted.
  • Self-Hosting: Supported, requires PostgreSQL + pgvector.
  • Advice: Use self-hosting for sensitive data and the cloud API for non-sensitive data.

Alternatives

| Alternative | Pros | Cons |
| --- | --- | --- |
| Mem0 | Mature, $24.5M funding, SOC2/HIPAA, AWS partner | $19-249/mo; only fact memory |
| Letta/MemGPT | LoCoMo 74% (highest), fully open-source | More of a framework than a pure API |
| OpenMemory | Open-source, supports 5 memory types | Even earlier stage |
| Zep | Temporal knowledge graph, enterprise-grade | Enterprise pricing |
| Build your own | Full control | High development cost |

For Investors

Market Analysis

  • AI Agent Market: $8.03B in 2025 -> $11.78B in 2026 -> $2.51T by 2034, CAGR 46.61% (Fortune Business Insights).
  • AI Orchestration and Memory: Expected to reach $33.54B by 2030, CAGR 38.9%.
  • Drivers: Agents moving from demo to production; IDC predicts 80% of enterprise apps will have AI copilots by 2026.

Competitive Landscape

| Tier | Players | Positioning |
| --- | --- | --- |
| Leaders | Mem0 ($24.5M, YC/Peak XV) | Production-grade memory SaaS |
| Mid-Tier | Letta/MemGPT, Zep, Supermemory | Specialized memory solutions |
| New Entrants | Mengram, OpenMemory, MemOS | Open-source/differentiated challengers |
| Giants | OpenAI Memory, AWS Agent SDK | Platform-native memory |

Timing Analysis

  • Why now?: 2024-25 was about "good output"; 2026 is about "right output for specific users." Memory is shifting from nice-to-have to must-have.
  • Tech Maturity: The pgvector + LLM re-ranking stack is mature; no need to reinvent the wheel.
  • Market Readiness: Gartner predicts 40% of enterprise apps will embed AI Agents by 2026, exploding memory demand.

Team Background

  • Founder: Ali Baizhanov, 22, Almaty, Kazakhstan.
  • Education: Bachelor's from Al-Farabi Kazakh National University.
  • Experience: Data Engineer at Bank CenterCredit -> Software Dev at Prime Source + AWS Expert.
  • Team Size: Indie Developer (currently 1 person).

Funding Status

  • Mengram: No funding info, indie project.
  • Competitor Benchmarks: Mem0 $24.5M (YC + Peak XV + Basis Set), Supermemory backed by Google execs.
  • Judgment: As an investment target, Mengram is too early. But as a sector signal, the AI memory layer is one of the most critical infrastructures to watch in 2026.

Conclusion

Bottom Line: Mengram's "three-tier memory" architecture is the right approach, but the product is in its infancy. It's great for tech-curious developers to play with, but not yet ready for mission-critical production.

| User Type | Recommendation |
| --- | --- |
| Developers | Worth a try -- Free, open-source, and the "Procedural Memory" concept is inspiring. |
| Product Managers | Worth watching -- The "three memories" framework is a great reference. |
| Bloggers | Great to write about -- The "indie dev vs. VC giant" narrative is a winner. |
| Early Adopters | Try with caution -- Free and low risk, but support is thin. |
| Investors | Watch the sector -- AI memory is a 2026 infra hotspot, even if Mengram is early. |

Resource Links

| Resource | Link |
| --- | --- |
| Official Site | https://mengram.io/ |
| GitHub | https://github.com/alibaizhanov/mengram |
| Product Hunt | https://www.producthunt.com/products/mengram |
| Founder LinkedIn | https://www.linkedin.com/in/alibaizhanov/ |
| Founder Twitter | https://twitter.com/BaizhanovB |
| Competitor Mem0 | https://mem0.ai/ |
| Competitor Letta | https://www.letta.com/ |

Information Sources


2026-02-20 | Trend-Tracker v7.3

One-line Verdict

An open-source alternative with a forward-thinking architectural approach, suitable for technical exploration and small-to-medium projects. For production environments, it's recommended to evaluate more mature competitors first.

FAQ

Frequently Asked Questions about Mengram

What is Mengram?
An open-source API that gives AI Agents 'memory,' supporting facts, events, and workflow memory.

What are Mengram's main features?
Three memory types (Semantic/Episodic/Procedural), Cognitive Profile one-click prompt generation, a self-improving workflow mechanism, and MCP Server integration.

How much does Mengram cost?
It is free (open-source, with a free API).

Who is Mengram for?
AI Agent developers, teams that need AI to remember context, and developers using LangChain/CrewAI.

What are the alternatives to Mengram?
Mem0, Letta/MemGPT, Supermemory, and Zep.

Data source: Product Hunt | Feb 20, 2026