Gen AI Chatbots Are Starting to Remember You. Should You Let Them?
An AI model’s long memory can offer a better experience — or a worse one. Good thing you can turn it off.

Until recently, generative AI chatbots didn’t have the best memories: You could tell one something and, when you came back later, you’d start again with a blank slate. Not anymore.
OpenAI started testing a stronger memory in ChatGPT last year and rolled out improvements this month. Grok, the flagship tool of Elon Musk’s xAI, also just got a better memory.
It took significant improvements in math and technology to get here, but the real-world benefit seems pretty simple: You can get more consistent, personalized results without having to repeat yourself.
“If it’s able to incorporate every chat I’ve had before, it does not need me to provide all that information the next time,” said Shashank Srivastava, assistant professor of computer science at the University of North Carolina at Chapel Hill.
Those longer memories can help solve some frustrations with chatbots, but they also pose new challenges. As with a person, what you said yesterday might influence your interactions today.
Here’s a look at how the bots came to have better memories and what it means for you.
Improving an AI model’s memory
For starters, it isn’t quite a “memory.” Mostly, these tools work by incorporating past conversations alongside your latest query. “In effect, it’s as simple as if you just took all your past conversations and combined them into one large prompt,” said Aditya Grover, assistant professor of computer science at UCLA.
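To make that concrete, here’s a minimal Python sketch of the idea, with an invented storage format rather than any vendor’s actual API:

```python
# A minimal sketch of the idea Grover describes: "memory" as nothing more
# than past conversations prepended to the newest question. The storage
# format and function here are illustrative, not any vendor's actual API.

past_conversations = [
    "User: I'm planning a trip to San Francisco.\nAssistant: Sounds fun!",
    "User: Any good vegetarian restaurants there?\nAssistant: A few ideas...",
]

def build_prompt(history: list[str], new_question: str) -> str:
    """Combine every prior conversation and the new question into one prompt."""
    context = "\n\n".join(history)
    return f"{context}\n\nUser: {new_question}\nAssistant:"

# Everything the model "remembers" is just text included in the prompt.
print(build_prompt(past_conversations, "What should I pack?"))
```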
Those large prompts are now possible because the latest AI models have significantly larger “context windows” than their predecessors. The context window is, essentially, how much text a model can consider at once, measured in tokens. A token might be a word or part of a word (as a rule of thumb, OpenAI pegs one token at about three-quarters of a word).
Early large language models had context windows of 4,000 or 8,000 tokens — a few thousand words. A few years ago, if you asked ChatGPT something, it could consider roughly as much text as is in this recent CNET cover story on smart thermostats. Google’s Gemini 2.0 Flash now has a context window of a million tokens. That’s a bit longer than Leo Tolstoy’s epic novel War and Peace. Those improvements are driven by some technical advances in how LLMs work, creating faster ways to generate connections between words, Srivastava said.
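You can run the rough numbers yourself with OpenAI’s rule of thumb (real tokenizers vary by model):

```python
# Back-of-the-envelope math using OpenAI's rule of thumb that one token
# is roughly three-quarters of a word. Real tokenizers vary by model.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    return round(tokens * WORDS_PER_TOKEN)

for window in (4_000, 8_000, 1_000_000):
    print(f"{window:>9,}-token window ≈ {tokens_to_words(window):>7,} words")
# An 8,000-token window holds about 6,000 words; a million tokens is about
# 750,000 words, which is indeed longer than War and Peace.
```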
Other techniques can also boost a model’s memory and ability to answer a question. One is retrieval-augmented generation, in which the model can run a search or otherwise pull up documents as needed to answer a question, without always keeping all of that information in the context window. Instead of having a massive amount of information available at all times, it just needs to know how to find the right resource, like a researcher perusing a library’s card catalog.
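Here’s a toy version of that retrieval step, with keyword-overlap scoring standing in for the vector search real systems use, and made-up documents:

```python
# A toy version of the retrieval step in retrieval-augmented generation.
# The keyword-overlap scoring is a stand-in for the vector search real
# systems use, and the documents are invented for illustration.

documents = {
    "thermostats": "Smart thermostats learn your schedule and cut energy use.",
    "returns": "Items may be returned within 30 days with a valid receipt.",
}

def retrieve(question: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs.values(),
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

question = "How do smart thermostats save energy?"
context = "\n".join(retrieve(question, documents))
# Only the retrieved passage, not the whole library, goes into the prompt.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```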
Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts
Why context matters for a chatbot
The more an LLM knows about you from its past interactions with you, the better suited to your needs its answers will be. That’s the goal of having a chatbot that can remember your old conversations.
For example, if you ask an LLM with no memory of you what the weather is, it’ll probably follow up first by asking where you are. One that can remember past conversations, however, might know that you often ask it for advice about restaurants and other things in San Francisco, and assume that’s your location. “It’s more user-friendly if the system knows more about you,” Grover said.
A chatbot with a longer memory can provide more specific answers. If you ask it to suggest a gift for a family member’s birthday and tell it some details about that family member, it won’t need as much context when you ask again next year. “That would mean smoother conversations because you don’t need to repeat yourself,” Srivastava said.
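Here’s a hypothetical sketch of how a few saved facts let a chatbot skip follow-up questions; the store and its fields are invented for illustration, not how any real product works:

```python
# A hypothetical memory store; the fields are invented for illustration
# and don't reflect how any real chatbot stores user data.
user_memory = {
    "location": "San Francisco",         # inferred from past restaurant chats
    "gift_last_year": "airplane model",  # saved from last year's request
}

def answer_weather(question: str, memory: dict) -> str:
    city = memory.get("location")
    if city:
        return f"Here's today's weather in {city}..."  # no follow-up needed
    return "Sure, where are you located?"              # blank-slate fallback

print(answer_weather("What's the weather today?", user_memory))
print(answer_weather("What's the weather today?", {}))  # no memory: must ask
```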
A long memory, however, can have its downsides.
You can (and maybe should) tell AI to forget
Having a chatbot recommend a gift poses a conundrum that’s all too common in human memories: You told your aunt you liked airplanes when you were 12 years old, and decades later you still get airplane-themed gifts from her. An LLM that remembers things about you could bias itself too much toward something you told it before.
“There’s definitely that possibility that you can lose your control and that this personalization could haunt you,” Srivastava said. “Instead of getting an unbiased, fresh perspective, its judgment might always be colored by previous interactions.”
LLMs typically allow you to tell them to forget certain things or to exclude some conversations from their memory.
There may also be things you simply don’t want an AI model to remember. If you’re sharing private or sensitive information with an LLM (and you should think twice about doing that at all), you probably want to turn off the memory function for those interactions.
Read the guidance on the tool you’re using to be sure you know what it’s remembering, how to turn it on and off and how to delete items from its memory.
Grover said this is an area where gen AI developers should be transparent and offer clear controls in the user interface. “I think they need to be providing more controls that are visible to the user, when to turn it on, when to turn it off,” he said. “Give a sense of urgency for the user base so they don’t get locked into defaults that are hard to find.”
How to turn off gen AI memory features
Here’s how to manage memory features in some common gen AI tools.
ChatGPT
OpenAI has a couple of types of memory in its models. One, called “reference saved memories,” stores details you specifically ask ChatGPT to save, like your name or dietary preferences. The other, “reference chat history,” remembers information from past conversations (but not everything).
To turn off either of these features, go to Settings > Personalization and toggle them off.
You can ask ChatGPT what it remembers about you and ask it to forget something it has remembered. To erase the information completely, delete the saved memories in Settings along with the chat where you shared that information.
Gemini
Google’s Gemini model can remember things you’ve discussed or summarize past conversations.
To modify or delete these memories, or to turn off the feature entirely, you can go into your Gemini Apps Activity menu.
Grok
Elon Musk’s xAI announced memory features in Grok this month, and they’re turned on by default.
You can turn them off under Settings > Data Controls. The setting’s name differs by platform: on Grok.com it’s “Personalize Grok with your conversation history,” while on the Android and iOS apps it’s “Personalize with memories.”
Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense
The latest meme around generative AI’s hallucinations proves you can’t lick a badger twice.
Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search’s AI Overviews to define phrases never before uttered.
What, you’ve never heard the phrase “blew up like a brook trout”? Sure, I just made it up, but Google’s AI Overviews result told me it’s a “colloquial way of saying something exploded or became a sensation quickly,” likely referring to the eye-catching colors and markings of the fish. No, it doesn’t make sense.
The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched «peanut butter platform heels.» Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.
It moved to other social media sites, like Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.
Things rolled on from there.
This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.
“They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
Like glue on pizza
The fake meanings of made-up sayings bring back memories of the all too true stories about Google’s AI Overviews giving incredibly wrong answers to basic questions — like when it suggested putting glue on pizza to help the cheese stick.
This trend seems at least a bit more harmless because it doesn’t center on actionable advice. I, for one, hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: A large language model, like Google’s Gemini behind AI Overviews, tries to answer your question and offer a plausible response, even if what it gives you is nonsense.
A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features.
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the Google spokesperson said. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”
This particular case is a “data void,” where there isn’t a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.
You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched “like glue on pizza meaning,” and it didn’t trigger an AI Overview.
The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, though, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you might not survive to do it again.”
Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts
Pulling meaning out of nowhere
This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds plausible or accurate but isn’t rooted in reality.
LLMs are “not fact generators,” Li said. They just predict the next logical bits of language based on their training.
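A toy example makes Li’s point tangible. The bigram model below, a radically simplified stand-in for an LLM, picks a statistically likely next word from its training text; nothing in the loop checks whether the output is true:

```python
# A radically simplified "language model": a bigram table built from a few
# words of training text. It picks a likely next word; nothing checks truth.
# This is a teaching sketch, not how production LLMs are built.
from collections import Counter, defaultdict
import random

training_text = "you can't judge a book by its cover you can't win them all"
words = training_text.split()

next_words = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:
            break
        out.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(out)

print(generate("you"))  # fluent-looking output with no notion of being "true"
```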
A majority of AI researchers in a recent survey reported they doubt AI’s accuracy and trustworthiness issues would be solved soon.
The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often respond with the same confidence they’d have if you were asking about a real idiom.
In this case, Google says the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlights “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.
This humorous trend does have an ominous lesson: Don’t trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won’t necessarily indicate it’s uncertain.
“This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”
Be careful what you search for
Since you can’t trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.
“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely accurate answer for that,” Li said.
The solution is to introduce skepticism into your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom; ask whether it’s real. Li suggested asking, “Is this a real idiom?”
“That may help the model to recognize the phrase instead of just guessing,” she said.
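In code, the difference between the two prompts is small; ask_llm below is a placeholder for whichever chatbot API you use, not a real library call:

```python
# `ask_llm` is a placeholder for whichever chat API you use; it is not a
# real library call, and the prompts are the only point of this sketch.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your chatbot of choice")

phrase = "you can't lick a badger twice"

# Naive prompt: presumes the phrase is real, which invites a made-up definition.
naive_prompt = f"What does '{phrase}' mean?"

# Skeptical prompt, per Li's advice: ask the model to verify before explaining.
skeptical_prompt = f"Is '{phrase}' a real idiom? Only explain it if it is."
```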
Disable These 3 iOS Settings to Extend Your iPhone’s Battery Life
Switching off these features can deliver better battery life.
Do you find yourself constantly charging your iPhone when the Low Power Mode warning pops up? While phones hold less of a charge over time, you don’t want your phone to die on you while you’re using it to navigate on the road or in the middle of a conversation.
While your phone’s battery might not have the capacity to hold the charge it did when it was fresh out of the box, there are options that can help you squeeze more juice out of each charge. By disabling certain settings, you can ensure your iPhone battery can go the distance when you need it most.
You can also keep an eye on your Battery Health menu — it’ll tell you your battery health percentage (80% or higher is considered good), as well as show you how many times you’ve cycled your battery and whether or not your battery is “normal.”
We’ll explain three iOS features that put a strain on your iPhone’s battery to varying degrees, and show how you can turn them off to help preserve battery life. Here’s what you need to know.
Turn off widgets on your iPhone lock screen
Widgets on your lock screen force their apps to run in the background, constantly fetching data to update the information they display, like sports scores or the weather. Because those apps never stop working, they continuously drain power.
If you want to help preserve some battery on iOS 18, the best thing to do is simply avoid widgets on your lock screen (and home screen). The easiest way to do this is to switch to another lock screen profile: Press your finger down on your existing lock screen and then swipe around to choose one that doesn’t have any widgets.
If you want to just remove the widgets from your existing lock screen, press down on your lock screen, hit Customize, choose the Lock Screen option, tap on the widget box and then hit the “—” button on each widget to remove it.
Reduce the motion of your iPhone UI
Your iPhone user interface has some fun, sleek animations. There’s the fluid motion of opening and closing apps, and the burst of color that appears when you activate Siri with Apple Intelligence, just to name a couple. These visual tricks help bring the slab of metal and glass in your hand to life. Unfortunately, they can also reduce your phone’s battery life.
If you want subtler animations across iOS, you can enable the Reduce Motion setting. To do this, go to Settings > Accessibility > Motion and toggle on Reduce Motion.
Switch off your iPhone’s keyboard vibration
For years, the iPhone keyboard couldn’t vibrate as you type; that changed with iOS 16, which added an option called “haptic feedback.” Instead of just hearing click-clack sounds, haptic feedback gives each key a vibration, providing a more immersive experience as you type. According to Apple, the very same feature may also affect battery life.
According to this Apple support page about the keyboard, haptic feedback “might affect the battery life of your iPhone.” No specifics are given as to how much battery life the keyboard feature drains, but if you want to conserve battery, it’s best to keep this feature disabled.
Fortunately, it is not enabled by default. If you’ve enabled it yourself, go to Settings > Sounds & Haptics > Keyboard Feedback and toggle off Haptic to turn off haptic feedback for your keyboard.
For more tips on iOS, learn how to download iOS 18 and how to automatically delete multifactor authentication messages from texts and emails.