ChatGPT Memory Is Not Available via API—But You Can Build Your Own

TL;DR: This article shows how to quickly add user memory to the OpenAI API.

OpenAI recently made waves by introducing enhanced memory capabilities for ChatGPT. While this is exciting news, there's an important catch: the feature is only available inside OpenAI's own apps (and the same holds for Gemini, Meta AI, and others), not through their API. This leaves many developers wondering: how can they add similar memory capabilities to their own AI applications?

By the end of this post, you'll know how to add memory to your OpenAI client with a simple user_id field:

response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Who am I?"},
    ],
    model="gpt-4o",
    user_id="test_user" 
)

What Is Memory in LLMs?

Memory lets an LLM digest more information without endlessly growing the context window

Before diving into solutions, let's understand what we mean by "memory" in AI systems. There are two key concepts:

  • Context Window: This is like your AI's short-term memory—it can only hold a limited amount of recent conversation (usually between 8K to 128K tokens). Think of it as what the AI can "see" at any given moment.
  • Long-term Memory: This is where things get interesting. Long-term memory allows AI to remember information across multiple conversations and sessions, creating more personalized and contextual interactions.

The difference is crucial: while context windows are built into LLMs, long-term memory requires additional infrastructure to store and retrieve information over time.
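To make the distinction concrete, here is a minimal, illustrative sketch (all names are hypothetical, not any particular library's API) of a long-term store that survives across sessions and feeds only a compact summary into the prompt, rather than the full conversation history:

```python
class SimpleMemoryStore:
    """Persists facts per user across separate conversations (long-term memory)."""

    def __init__(self):
        self._facts = {}

    def remember(self, user_id, fact):
        self._facts.setdefault(user_id, []).append(fact)

    def recall(self, user_id):
        return self._facts.get(user_id, [])


def build_prompt(store, user_id, message):
    """Inject stored facts instead of replaying the full history into the context window."""
    facts = "\n".join(f"- {f}" for f in store.recall(user_id))
    return f"Known facts about the user:\n{facts}\n\nUser: {message}"


store = SimpleMemoryStore()
store.remember("test_user", "The user's name is Gus.")   # learned in session 1
prompt = build_prompt(store, "test_user", "Who am I?")   # used in a later session
print(prompt)
```

The context window only ever sees the rendered summary; the store itself can grow indefinitely on disk or in a database.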

Why the API Gap Matters

ChatGPT's new memory feature demonstrates how powerful persistent memory can be for user interactions. However, since it isn't available via the API, developers can't integrate this capability directly into their applications. In other words, you can't build on OpenAI's memory feature in your own apps; you need another solution.

Build Your Own Memory Layer

The good news? You can build your own memory layer for your AI applications. Here are a few existing memory solutions for reference:

  • List-Based Memory: mem0
  • Graph-Based Memory: Zep
  • Profile-Based Memory: Memobase

One particularly effective approach is profile-based memory, which offers several advantages:

  • Structured Storage: Instead of keeping raw conversation logs, organize information in meaningful user profiles
  • Controlled Memory: Define what to remember and what to forget
  • Efficient Retrieval: Quick access to relevant information without searching through entire conversation histories
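As an illustration, a profile-based store might look like the sketch below (the slot names and methods are hypothetical, chosen only to show the three advantages above: structured storage, controlled forgetting, and cheap retrieval):

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Structured storage: named slots, not a raw conversation transcript
    basic_info: dict = field(default_factory=dict)   # e.g. name, language
    interests: dict = field(default_factory=dict)    # e.g. topic -> notes

    def update(self, topic, key, value):
        getattr(self, topic)[key] = value

    def forget(self, topic, key):
        # Controlled memory: drop exactly one slot, keep everything else
        getattr(self, topic).pop(key, None)

    def render(self):
        # Efficient retrieval: a compact summary, no log search required
        return "\n".join(
            f"{topic}/{key}: {value}"
            for topic in ("basic_info", "interests")
            for key, value in getattr(self, topic).items()
        )

profile = UserProfile()
profile.update("basic_info", "name", "Gus")
profile.update("interests", "music", "likes jazz")
profile.forget("interests", "music")
print(profile.render())
```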

Meet Memobase: An Open-Source Solution

Memobase is an open-source memory backend that makes it easy to add long-term memory to your AI applications. Here's what makes it special:

  • User, not Agent: Memobase focuses on the user side, not the agent side, making it a fit for chatbots, virtual assistants, education, and more
  • Profile-Based: Organizes memory around user profiles and events for better structure
  • Scalable: Designed to handle many users' memories efficiently
  • Cost-Effective: Memory is not updated in the hot path, so the expensive memory-processing step runs less often
  • LLM-Agnostic: Designed to work with any LLM provider
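The off-hot-path idea can be sketched like this (an assumed design for illustration, not Memobase's actual internals): buffer messages cheaply on every request and run the expensive memory-extraction step only when the buffer flushes:

```python
class BufferedMemory:
    """Buffers messages per request; extracts memory only on flush."""

    def __init__(self, flush_every=4):
        self.buffer = []
        self.flush_every = flush_every
        self.extractions = 0  # counts expensive memory-processing runs

    def add(self, message):
        self.buffer.append(message)          # cheap: just an append
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        self.extractions += 1                # expensive step (an LLM call in practice)
        self.buffer.clear()

mem = BufferedMemory(flush_every=4)
for i in range(8):
    mem.add(f"message {i}")
print(mem.extractions)  # 2 extraction runs for 8 messages, not 8
```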
OpenAI Client

To help developers get started quickly, we have created an OpenAI patch that hides all Memobase-related APIs behind the regular OpenAI client. You simply add a user ID to maintain long-term memory:

from openai import OpenAI
from memobase import MemoBaseClient
from memobase.patch.openai import openai_memory
 
# Setup clients
client = OpenAI()
mb_client = MemoBaseClient(project_url=ENDPOINT, api_key=TOKEN)  # your Memobase endpoint and token
 
# Patch OpenAI client with memory
client = openai_memory(client, mb_client)
 
client.chat.completions.create(
    messages=[{"role": "user", "content": "I'm Gus"}],
    model="gpt-4o",
    user_id="test_user"
)
client.flush()  # process buffered messages into long-term memory
# Use OpenAI with memory - just add user_id!
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Who am I?"}],
    model="gpt-4o",
    user_id="test_user"
)

Check the full script.
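Conceptually, a patch like this can be imagined as a thin wrapper around `create` (the sketch below is an assumed design for illustration, not Memobase's real implementation): it injects the user's memory as system context before the call and queues the new messages for later, off-hot-path processing:

```python
def with_memory(create_fn, get_memory, queue_messages):
    """Wrap a chat-completions `create` so it accepts a `user_id` kwarg."""
    def patched_create(*, messages, model, user_id=None, **kwargs):
        if user_id is not None:
            context = get_memory(user_id)       # e.g. a rendered user profile
            if context:
                messages = [{"role": "system", "content": context}] + messages
        response = create_fn(messages=messages, model=model, **kwargs)
        if user_id is not None:
            queue_messages(user_id, messages)   # buffered, processed later
        return response
    return patched_create

# Fake backend pieces, for demonstration only:
memory = {"test_user": "The user's name is Gus."}
queued = []
fake_create = lambda *, messages, model, **kw: messages  # echoes the final prompt

create = with_memory(fake_create, memory.get, lambda u, m: queued.append(u))
result = create(
    messages=[{"role": "user", "content": "Who am I?"}],
    model="gpt-4o",
    user_id="test_user",
)
print(result[0])  # the injected system message carrying the user's memory
```

Because the wrapper keeps the standard `create` signature, existing OpenAI code keeps working unchanged; only callers that pass `user_id` opt into memory.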

How to Try Memobase

Ready to add memory to your AI application? Here's how to get started:

  1. Visit the Memobase GitHub repository and follow the setup guide
  2. Check out the documentation for detailed guides

This Is Your Moment to "Remember"

ChatGPT's memory feature has shown just how powerful persistent memory can be for AI applications. But you don't need to wait for OpenAI to release an API version. With tools like Memobase, you can start building memory-enabled AI experiences today.

The future of AI isn't just about raw intelligence — it's about creating experiences that feel personal and meaningful. By adding memory to your AI applications, you're taking a significant step toward that future. Ready to get started? Head over to the Memobase repository and begin building more personalized AI applications today!