Technologies
Gen AI Chatbots Are Starting to Remember You. Should You Let Them?
An AI model’s long memory can offer a better experience — or a worse one. Good thing you can turn it off.

Until recently, generative AI chatbots didn’t have the best memories: You tell one something and, when you come back later, you start again with a blank slate. Not anymore.
OpenAI started testing a stronger memory in ChatGPT last year and rolled out improvements this month. Grok, the flagship tool of Elon Musk’s xAI, also just got a better memory.
It took significant improvements in math and technology to get here, but the real-world benefits seem pretty simple: You can get more consistent and personalized results without having to repeat yourself.
“If it’s able to incorporate every chat I’ve had before, it does not need me to provide all that information the next time,” said Shashank Srivastava, assistant professor of computer science at the University of North Carolina at Chapel Hill.
Those longer memories can help solve some frustrations with chatbots, but they also pose some new challenges. Just as when you talk to a person, what you said yesterday might influence your interactions today.
Here’s a look at how the bots came to have better memories and what it means for you.
Improving an AI model’s memory
For starters, it isn’t quite a “memory.” Mostly, these tools work by incorporating past conversations alongside your latest query. “In effect, it’s as simple as if you just took all your past conversations and combined them into one large prompt,” said Aditya Grover, assistant professor of computer science at UCLA.
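Grover’s description can be sketched in a few lines of Python. This is a toy illustration under that assumption, not any vendor’s actual API (the function and variable names here are hypothetical): past chats are simply stitched together ahead of the newest question.

```python
def build_prompt(past_conversations, new_query):
    # "Memory" as one large prompt: concatenate every prior chat,
    # then append the latest question at the end.
    history = "\n\n".join(past_conversations)
    return f"Previous conversations:\n{history}\n\nCurrent question: {new_query}"

prompt = build_prompt(
    ["User: I live in San Francisco.",
     "User: Suggest a birthday gift for my aunt who likes gardening."],
    "What's the weather like today?",
)
# The model now sees the location mentioned in an earlier chat,
# so it doesn't have to ask where you are.
```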
Those large prompts are now possible because the latest AI models have significantly larger “context windows” than their predecessors. The context window is, essentially, how much text a model can consider at once, measured in tokens. A token might be a word or part of a word (as a rule of thumb, OpenAI estimates one token at about three-quarters of a word).
Early large language models had context windows of 4,000 or 8,000 tokens — a few thousand words. A few years ago, if you asked ChatGPT something, it could consider roughly as much text as is in this recent CNET cover story on smart thermostats. Google’s Gemini 2.0 Flash now has a context window of a million tokens. That’s a bit longer than Leo Tolstoy’s epic novel War and Peace. Those improvements are driven by some technical advances in how LLMs work, creating faster ways to generate connections between words, Srivastava said.
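The arithmetic behind those window sizes is easy to sketch. Here’s a rough estimate assuming OpenAI’s three-quarters-of-a-word rule of thumb; real tokenizers are more precise, and the function names are illustrative.

```python
def estimate_tokens(text):
    # Rule of thumb: 1 token is about 3/4 of a word, so ~4/3 tokens per word.
    return round(len(text.split()) * 4 / 3)

def fits_context(text, window_tokens):
    # Could this text fit in a model's context window?
    return estimate_tokens(text) <= window_tokens

essay = " ".join(["word"] * 3000)          # a roughly 3,000-word article
in_early_window = fits_context(essay, 4_000)       # early 4,000-token window
in_modern_window = fits_context(essay, 1_000_000)  # million-token window
```

By this estimate, a 3,000-word article just fits an early 4,000-token window, while a million-token window has room for hundreds of them.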
Other techniques can also boost a model’s memory and ability to answer a question. One is retrieval-augmented generation, in which the model can run a search or otherwise pull up documents as needed to answer a question, without always keeping all of that information in the context window. Instead of having a massive amount of information available at all times, it just needs to know how to find the right resource, like a researcher perusing a library’s card catalog.
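The card-catalog idea can be illustrated with a toy retriever. This sketch substitutes simple keyword overlap for the vector search a real retrieval-augmented generation system would use, and all names here are hypothetical:

```python
def retrieve(query, documents):
    # Pick the document sharing the most words with the query --
    # a stand-in for a real semantic search index.
    query_words = set(query.lower().split())
    return max(documents, key=lambda doc: len(query_words & set(doc.lower().split())))

def rag_prompt(query, documents):
    # Only the retrieved document enters the prompt, not the whole library.
    context = retrieve(query, documents)
    return f"Using this document:\n{context}\n\nAnswer the question: {query}"

library = [
    "Thermostat manual: set the heating schedule from the mobile app.",
    "Warranty policy: coverage lasts two years from the purchase date.",
]
answer_prompt = rag_prompt("how do i set the thermostat schedule", library)
```

The point of the design is the same as the researcher at the card catalog: the model only needs to know how to find the right resource, not hold every resource in its context window at once.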
Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts
Why context matters for a chatbot
The more an LLM knows about you from its past interactions with you, the better suited to your needs its answers will be. That’s the goal of having a chatbot that can remember your old conversations.
For example, if you ask an LLM with no memory of you what the weather is, it’ll probably follow up first by asking where you are. One that can remember past conversations, however, might know that you often ask it for advice about restaurants or other things in San Francisco, for example, and assume that’s your location. “It’s more user-friendly if the system knows more about you,” Grover said.
A chatbot with a longer memory can provide you with more specific answers. If you ask it to suggest a gift for a family member’s birthday and tell it some details about that family member, it won’t need as much context when you ask again next year. “That would mean smoother conversations because you don’t need to repeat yourself,” Srivastava said.
A long memory, however, can have its downsides.
You can (and maybe should) tell AI to forget
Having a chatbot recommend a gift poses a conundrum that’s all too common in human memories: You told your aunt you liked airplanes when you were 12 years old, and decades later you still get airplane-themed gifts from her. An LLM that remembers things about you could bias itself too much toward something you told it before.
“There’s definitely that possibility that you can lose your control and that this personalization could haunt you,” Srivastava said. “Instead of getting an unbiased, fresh perspective, its judgment might always be colored by previous interactions.”
LLMs typically allow you to tell them to forget certain things or to exclude some conversations from their memory.
You may also be handling things you don’t want an AI model to remember. If you’re sharing private or sensitive information with an LLM (and you should think twice about doing so at all), you probably want to turn off the memory function for those interactions.
Read the guidance on the tool you’re using so you know what it’s remembering, how to turn the feature on and off, and how to delete items from its memory.
Grover said this is an area where gen AI developers should be transparent and offer clear commands in the user interface. “I think they need to be providing more controls that are visible to the user, when to turn it on, when to turn it off,” he said. “Give a sense of urgency for the user base so they don’t get locked into defaults that are hard to find.”
How to turn off gen AI memory features
Here’s how to manage memory features in some common gen AI tools.
ChatGPT
OpenAI has a couple of types of memory in its models. One, called “reference saved memories,” stores details that you specifically ask ChatGPT to save, like your name or dietary preferences. Another, “reference chat history,” remembers information from past conversations (but not everything).
To turn off either of these features, you can go to Settings and Personalization and toggle the items off.
You can ask ChatGPT what it remembers about you and ask it to forget something it has remembered. To completely delete this information, you can delete the saved memories in Settings and the chat where you saved that information.
Gemini
Google’s Gemini model can remember things you’ve discussed or summarize past conversations.
To modify or delete these memories, or to turn off the feature entirely, you can go into your Gemini Apps Activity menu.
Grok
Elon Musk’s xAI announced memory features in Grok this month and they’re turned on by default.
You can turn them off under Settings and Data Controls. The specific setting differs between Grok.com, where it’s “Personalize Grok with your conversation history,” and the Android and iOS apps, where it’s “Personalize with memories.”
Technologies
Tinder Users Must Now Log In With Their Faces Nationwide
The dating app now requires face identification in the US to help quell longstanding problems with catfishing and fake profiles.
US Tinder users will find a new requirement when they open up the dating app starting Wednesday: a mandatory Face Check on their phones before they can log into their profiles.
The Face Check step will begin with a new request to record a video of your face, a more casual version of setting up Apple’s Face ID login. Tinder will then run checks comparing your face data to your current profile pics and automatically create a small face badge for your profile. We already know how it works because Tinder launched the feature in Canada and California before the full US rollout.
The technology, powered by FaceTec, will keep biometric data of the user’s face in encrypted form but discard the scanning video for privacy. Tinder will be able to use the face data to detect duplicate accounts, in an effort to cut down on fake profiles and identity theft.
Tinder’s facial recognition rollout is also meant to prevent catfishing, in which people pretend to be someone else on Tinder to scam or blackmail other users. But that also points to a deeper problem on the rise in dating apps — a growing number of bots, many controlled by AI, designed to glean personal information or fool users into scammy subscriptions, among other problems.
Tinder’s working against these bots on several fronts, including this Face Check push as well as ID Check, which requires a government-issued ID and other types of photo verification.
The dating app also released a feature in June that enables double-dating with your friends, which Tinder reports is especially popular with Gen Z users. If you’re worried about the latest hazards on Tinder, we have a guide to safety practices.
A representative for Tinder did not immediately respond to a request for comment.
Technologies
Today’s NYT Mini Crossword Answers for Thursday, Oct. 23
Here are the answers for The New York Times Mini Crossword for Oct. 23.
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
Mini across clues and answers
1A clue: Like some weather, memories and I.P.A.s
Answer: HAZY
5A clue: Statement that’s self-evidently true
Answer: AXIOM
7A clue: Civic automaker
Answer: HONDA
8A clue: What fear leads to, as Yoda told a young Anakin
Answer: ANGER
9A clue: Foxlike
Answer: SLY
Mini down clues and answers
1D clue: Verbal “lol”
Answer: HAHA
2D clue: Brain signal transmitter
Answer: AXON
3D clue: Hits with a witty comeback
Answer: ZINGS
4D clue: Sing at the top of a mountain, maybe
Answer: YODEL
6D clue: Name of the famous “Queen of Scots”
Answer: MARY
Technologies
Today’s NYT Strands Hints, Answers and Help for Oct. 23 #599
Here are hints and answers for the NYT Strands puzzle for Oct. 23, No. 599.
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle might be Halloween-themed, as the answers are all rather dangerous. Some of them are a bit tough to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: Please don’t eat me!
If that doesn’t help you, here’s a clue: Remember Mr. Yuk?
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints but any words of four or more letters that you find will work:
- POND, NOON, NODE, BALE, SOCK, LOVE, LOCK, MOCK, LEER, REEL, GLOVE, DAIS, LEAN, LEAD
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- AZALEA, HEMLOCK, FOXGLOVE, OLEANDER, BELLADONNA
Today’s Strands spangram
Today’s Strands spangram is POISONOUS. To find it, look for the P that is the first letter on the far left of the top row, and wind down and across.