Dispatches from an Internet Pioneer

Deep Dive: The Hidden Memory of Generative AI

Generative AI tools like ChatGPT, Microsoft Copilot, and Google Gemini retain more user data than most people realize, raising serious ethical, legal, and privacy concerns. In this episode, Olivia and Mark explore real-world cases, including sensitive data retention in a government agency, and the growing challenges of AI-driven profiling and data transparency. They discuss how AI systems silently build user “digital twins,” why privacy compliance is struggling to keep up, and what needs to change. If you think AI forgets, think again—tune in for a deep dive into AI memory, regulation gaps, and the urgent need for user control.

Published On: March 15, 2025
Chapter 1

The GenAI Memory Problem: What You Think vs. What’s Actually Happening

Olivia Carter

Alright, let's dive in. So, when you're chatting with something like ChatGPT, you probably assume that once you're done, the conversation just... disappears, right? Like, delete chat history and—poof!—it’s gone.

Mark Putnam

Exactly, that's the idea most users have. Ephemeral systems, as they're called. The premise is that once a session ends, those interactions aren't saved anywhere permanent.

Olivia Carter

But—but that's not really true, is it?

Mark Putnam

No, it's not. Some generative AI models, like ChatGPT, Copilot, and Gemini, have started rolling out memory features that allow them to retain details across sessions. This can include preferences, context, and sometimes even specifics about past interactions.

Olivia Carter

Wait, are we talking months here? Like—like it remembers something you said way back?

Mark Putnam

Yes, months. For example, OpenAI has been testing memory features with ChatGPT where the model recalls user-specific information in successive conversations. It's new, but it already hints at the kind of persistent memory systems we're dealing with.

Olivia Carter

Okay, that’s... kinda creepy, honestly. What about Copilot? I’ve heard things about it, too.

Mark Putnam

Right, Microsoft Copilot. In enterprise settings, there’s evidence suggesting it retains work-related context between sessions. So, let’s say you’re drafting a project report—Copilot might ‘remember’ earlier edits even after you close that session.

Olivia Carter

And Gemini?

Mark Putnam

Gemini’s a bit different because it’s part of Google’s broader ecosystem. Its memory features could potentially connect across services, suggesting a more integrated approach to retaining user patterns and preferences over time.

Olivia Carter

So, they’re all building these... what? Digital versions of us?

Mark Putnam

Essentially. You can think of them as ‘digital twins.’ These AI systems are collecting data—your tone, your preferences, how you interact—and using that to create profiles. Whether we like it or not, that’s what’s happening.

Chapter 2

The Real-World Legal Risks of GenAI Retaining Memory

Olivia Carter

You know, that's unsettling. I keep wondering—what exactly are we being told about what these AI systems remember? Or, maybe the better question is, what aren't we being told?

Mark Putnam

Not enough, honestly. Companies are vague on this. They'll say things like 'data isn't stored permanently,' but at the same time, they're testing memory features that clearly retain user preferences.

Olivia Carter

So, we’re in this weird spot where we don’t precisely know what’s being kept, right?

Mark Putnam

Exactly. We can't even verify it. If you’re using something like ChatGPT, you have little visibility into whether it’s forgotten or still holding onto data from earlier.

Olivia Carter

That seems like a huge problem. Especially for public institutions—like universities or, I don’t know, the government. Could they get hit with, like, Freedom of Information Act requests for GenAI interactions?

Mark Putnam

Maybe, and that’s where the legal risk comes in. If an AI assistant remembers user interactions, those could be interpreted as retained records. And that raises questions about whether they’re subject to legal discovery or subpoenas in a lawsuit.

Olivia Carter

Whoa, so institutions using GenAI might not even realize they’re, basically, creating records they can’t control.

Mark Putnam

Right. Most privacy laws, like GDPR or CCPA, regulate stored data you can see—emails, databases, that sort of thing. But GenAI operates in a gray area since its memory isn’t technically ‘saved’ the way we traditionally think of it.

Olivia Carter

And that would apply to businesses, too, right? Like, imagine AI tools in the workplace remembering—

Mark Putnam

Exactly. Productivity tools like Microsoft Copilot or Google Workspace AI already do this. They help draft emails or summarize meetings, and that’s helpful, but what happens if that context persists? Could it influence business decisions down the line?

Olivia Carter

So, then the big question is, if GenAI memory isn’t stored in a database but still... exists, does it even count as a record under the law?

Mark Putnam

That’s the crux of it. We don’t have clear answers. Businesses and institutions need to ask themselves how they’ll ensure compliance with privacy standards, especially when it comes to data that can’t easily be audited or erased.

Chapter 3

The Path Forward: What Needs to Change

Olivia Carter

So, if we can’t even define whether GenAI memory counts as a legal record, where does that leave us? I mean, this isn’t some future problem—it’s happening now, and we need to figure it out before it spirals out of control.

Mark Putnam

Right. Companies developing these tools must take responsibility first. Transparency about how memory works—that’s key. Users need to know what’s being retained, how long it sticks around, and, most importantly, how they can delete it permanently.

Olivia Carter

No more vague privacy policies, right? Like, people should be able to see what their AI “knows” about them. A real opt-out option would—would make a huge difference too. Otherwise, using these tools just feels like a massive gamble with your personal data.

Mark Putnam

Exactly. And from a user perspective, we need to stop assuming these tools forget everything. They don’t. It's safer to treat every interaction like it’s building your so-called ‘digital twin.’ That means keeping personal or sensitive details out of conversations, unless you’re absolutely sure about the memory policies.

Olivia Carter

Yeah, which, let’s be real, most of us aren’t gonna read through those terms of service anyway. I mean, who has time for that?

Mark Putnam

That’s why regulations are so important. Privacy laws like GDPR and CCPA need to evolve to explicitly address GenAI memory. Governments and public institutions can’t wait until it’s too late—they need governance policies now, or they’ll face massive headaches later.

Olivia Carter

Exactly! And businesses? Honestly, I think they should be questioning how these memory features might create risks they’re not even aware of, like legal liabilities or non-compliance with data protection laws.

Mark Putnam

And not just risks. They should also think about trust—because trust is what’s at stake here. If companies continue to be opaque about how memory works, they’ll lose public confidence. Plain and simple.

Olivia Carter

Absolutely. So, the big takeaway here? Stop worrying about the risks of AGI for now. It’s—it's GenAI’s memory problem we should be paying attention to. Until these changes happen, I think we all need to assume GenAI knows more than it should—and isn’t forgetting anytime soon.

Mark Putnam

And on that note, that’s all for today. Thanks for joining us—and, uh, stay curious, but stay cautious out there.

Olivia Carter

Catch you next time!

About the podcast

Technology is reshaping higher education, leadership, and the economy—but the biggest challenges aren’t just technical, they’re cultural and structural. Created by Timothy Chester, this podcast explores the real impact of AI, automation, and digital transformation on universities, work, and society. With a sociologist’s lens and decades in higher ed IT leadership, he cuts through the hype to uncover what truly matters.

© 2025 All rights reserved.