Anticipating AI: When LLMs think ahead

Picture of a predictive search query

Before we begin

Most of us use search engines daily: asking all kinds of questions, clicking links, and piecing together answers. But what if search didn’t just respond to what we ask, but anticipated what we needed before we asked it? That’s the shift we’re beginning to see with large language models (LLMs), especially the newly released GPT-5. It’s not just an upgrade in capability; it’s a fundamental change in how we interact with information.

So what is happening with search?

Search is user-driven and reactive by nature

For most of its history, Google Search has been reactive. You have a question, it finds the most relevant pages, and you, hopefully armed with a mix of trust and skepticism, click through to get your answer. It’s a system that leaves the final judgment and the actual doing up to you, the user. This is important! 

It meant that search was (and for the most part still is) a tool, not a decision-maker. You stayed in control, weighing sources, piecing things together, deciding what mattered. But now, something is shifting. Search isn’t just surfacing information; it’s starting to shape it. The line between tool and advisor is beginning to blur.

Search gets smarter with GPT-5: A fictional scenario 

Now picture that system reimagined with an LLM like GPT-5 at its core. You type, “What should I do this weekend?” and instead of a list of links, you get a full itinerary: a Saturday hike with friends, a Sunday afternoon art exhibit, dinner reservations at your favorite restaurant, and tickets already purchased for a local concert. It checks the weather, considers your past preferences, and even factors in your budget. In short, it moves from “answering” to anticipating.
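To make that shift a bit more concrete, here is a rough Python sketch of what an anticipatory planning step might look like. Everything in it is made up for illustration: the context fields, the prompt shape, and the llm_complete() placeholder stand in for whatever a real GPT-5-backed product would actually use.

```python
from dataclasses import dataclass

# Hypothetical sketch: turn passive context (weather, preferences, budget)
# into a proactive planning request, instead of waiting for a precise query.

@dataclass
class UserContext:
    weather_forecast: str        # e.g. "sunny Saturday, rain Sunday afternoon"
    past_preferences: list[str]  # e.g. ["hiking", "modern art", "thai food"]
    weekend_budget_eur: float

def build_anticipatory_prompt(ctx: UserContext, question: str) -> str:
    """Combine the user's question with known context into one planning prompt."""
    return (
        f"User question: {question}\n"
        "Known context:\n"
        f"- Weather: {ctx.weather_forecast}\n"
        f"- Likes: {', '.join(ctx.past_preferences)}\n"
        f"- Budget: {ctx.weekend_budget_eur} EUR\n"
        "Propose a full weekend itinerary, not a list of links."
    )

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer here."""
    return "Sat: morning hike with friends; Sun: art exhibit, then dinner at 19:00."

if __name__ == "__main__":
    ctx = UserContext(
        weather_forecast="sunny Saturday, rain Sunday afternoon",
        past_preferences=["hiking", "modern art", "thai food"],
        weekend_budget_eur=150.0,
    )
    prompt = build_anticipatory_prompt(ctx, "What should I do this weekend?")
    print(llm_complete(prompt))
```

The point of the sketch is the direction of the data flow: the system gathers context about you before you ask, so the “answer” is already a plan.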

That shift feels almost magical. When AI takes the initiative, we move from researchers to recipients, from deciding what to do to accepting what’s been decided for us. For a real-life case, think about how Spotify serves you the perfect playlist, or how your calendar auto-fills with meeting suggestions from Teams. In Spotify’s case, the algorithms aren’t just reacting to your choices; they’re anticipating them. This creates a seamless flow where you don’t have to think about what comes next. As Gustav Söderström, Spotify’s Chief R&D Officer, put it:

“The magic happens when you stop thinking about what to play next and just let the system do it for you.”

The comfort of not having to choose can quietly turn into a habit of letting the system choose for you, not just for songs, but for how you spend your time, where you go, and even what you believe.

This is where the “counting problem” comes in

Where I dive a bit deeper

Imagine you’re doing math in your head, and you let someone else handle all the adding. If a small mistake creeps in, you might never notice, because you no longer see where or why it went wrong. Anticipatory AI systems create a similar blind spot: the output looks so polished and complete that it’s tempting to go with your gut and trust it.

Convenience can breed complacency.

When the system is right, that’s wonderful; you save time and mental energy. But when it’s wrong, the errors can be subtle and you might miss them entirely. Maybe it books you for an event that was canceled last week. Maybe it suggests a restaurant that’s closed for renovations. The details may seem small, but if they cascade, your “perfect weekend” can unravel quickly.

This isn’t an argument against AI-powered search. I, for one, think it is inevitable. It’s instead a nudge to use it with awareness. Just as GPS navigation has made us more mobile yet sometimes hilariously wrong about directions, LLM-powered search could make us more informed while also introducing new types of mistakes.

When AI systems produce sleek and seemingly complete outputs, we often over-trust them, even when they’re wrong, since we don’t fully understand their reasoning. - Buçinca et al. (2021)

An interesting article exploring this shows that prompting users to pause and explicitly consider alternatives (“cognitive forcing functions”) significantly reduces this blind trust compared to standard explainable AI approaches.
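As a toy illustration of what a “cognitive forcing function” could look like in practice, here is a small Python sketch. The interaction flow and function names are invented for this post, not taken from Buçinca et al. (2021): the idea is simply to make the user commit to their own answer before the AI’s suggestion is revealed.

```python
# Toy cognitive forcing function: the user states their own pick first,
# then sees the AI suggestion, then explicitly chooses between the two.

def present_with_forcing_function(ai_suggestion: str) -> str:
    own_guess = input("Before seeing the AI's plan, what would YOU pick? ")
    print(f"Your answer: {own_guess}")
    print(f"AI suggestion: {ai_suggestion}")
    choice = input("Keep your own answer (o) or take the AI's (a)? ").strip().lower()
    return own_guess if choice == "o" else ai_suggestion

if __name__ == "__main__":
    final = present_with_forcing_function("Sunday: art exhibit, then dinner at 19:00")
    print(f"Final plan: {final}")
```

Even a small pause like this shifts you back from recipient to decision-maker, which is exactly the habit the research suggests we should protect.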

So the question isn’t just what anticipatory AI can do, but how we’ll engage with it. Will we treat it as a partner we double-check, or a butler we never question? The leap from reactive search to anticipatory planning is exciting, with a lot of unanswered questions. At the same time, it’s a chance to have information work for us, not just with us. Looking forward to more in this field.