In a podcast interview, Google VP of Search Liz Reid described two ways LLMs are changing what Google can index and how it ranks results for individual users.
Reid told the Access podcast that multimodal AI models now allow Google to understand audio and video content at a deeper level than was previously possible. She also pointed to a future where search results adapt based on a user’s paid subscriptions.
What’s New
Multimodal Understanding Is Expanding What Google Can Index
Reid said LLMs being multimodal has opened up content formats that Google previously struggled to process.
Reid told the hosts:
“The beauty of LLMs is that they’re multimodal. So we can actually understand audio content and video content actually at a level we couldn’t years ago.”
She went further, describing how Google can now go beyond basic transcription when analyzing video.
“Now you can understand audio much better. Now you can understand video much better. Now you can understand not just the video transcription but like what’s the video more about or what’s the style or other things like that.”
Reid connected this to a long-standing gap in how search works for non-English speakers. For users in India who speak Hindi or other languages, the web often lacks the information they need in their language. Previously, translating all web content into every language wasn’t scalable. LLMs changed that.
“Now with an LLM, you can take information in one language, understand it, and then output in another. Like that opens up information.”
Google has been moving in this direction for some time. In October 2025, Reid told the Wall Street Journal that Google had adjusted ranking to surface more short-form video, forums, and user-generated content.
The comments also add context to Google’s Audio Overviews experiment launched in Search Labs last June, which generates spoken AI summaries of search results.
That wasn’t possible a few years ago. In 2021, Google and KQED tested whether audio content could be made searchable and found that speech-to-text accuracy wasn’t high enough, particularly for proper nouns and regional references. Reid’s comments suggest that barrier has fallen.
Subscription-Aware Search Could Change How Results Are Personalized
Reid also outlined a direction for personalization that goes beyond Google’s current Preferred Sources feature.
She told the hosts Google wants to surface content from outlets a user pays for, not paywalled results from sources they can’t access.
“If you love this source and you do have a relationship with it then that content should surface more easily for you on Google.”
Reid gave a practical example. Say 20 interviews on a topic are paywalled but a user subscribes to one outlet. Google should make it easy to find the one they can read.
“We should surface the one that they’re paying for and not the six that they can’t get access to more.”
She suggested the company has “taken small steps so far but want to do more” to strengthen how audiences and trusted sources connect through search. She also mentioned the possibility of micropayments for individual articles, though she acknowledged that model hasn’t taken off historically.
Google expanded Preferred Sources globally for English-language users in December, and announced a feature that highlights links from users’ paid news subscriptions. Google said it would prioritize those links in a dedicated carousel, starting in the Gemini app, with AI Overviews and AI Mode to follow. At the time, Google said users who select a preferred source click through to that site twice as often on average. Reid’s comments suggest the company sees subscription-aware search as a broader evolution of that same direction.
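To make the idea concrete, here is a minimal sketch of what subscription-aware reranking could look like. Everything in it, from the names (SearchResult, rerank_for_user) to the boost and demotion weights, is an illustrative assumption for this article, not anything Google has described.

from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    outlet: str
    paywalled: bool
    score: float  # base relevance score

def rerank_for_user(results: list[SearchResult],
                    subscriptions: set[str]) -> list[SearchResult]:
    # Free content keeps its score. Paywalled content the user pays for
    # surfaces more easily; paywalled content they can't read is demoted.
    # The 1.5x / 0.5x weights are placeholders, not known values.
    def adjusted(r: SearchResult) -> float:
        if not r.paywalled:
            return r.score
        return r.score * 1.5 if r.outlet in subscriptions else r.score * 0.5
    return sorted(results, key=adjusted, reverse=True)

# Reid's scenario: several paywalled interviews, one readable subscription.
results = [SearchResult(f"Interview {i}", f"Outlet {i}", True, 1.0) for i in range(7)]
print(rerank_for_user(results, {"Outlet 3"})[0].outlet)  # Outlet 3 ranks first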
Why This Matters
The multimodal capabilities Reid pointed to expand which content formats get discovered through search. Podcasts, video series, and audio-first content have historically been harder for Google to evaluate beyond metadata and transcripts. Google’s growing ability to assess relevance and depth from audio and video directly changes who can be found through search and how.
For brands and creators investing in non-text formats, Google’s ability to surface that work is catching up to where the audience already is.
The subscription-aware personalization direction matters for any publisher with a paywall or membership model. Search results that adapt to what individual users pay for would tighten the connection between subscriber retention and search visibility. Paywalled content could perform better for the audience that matters most to the publisher, rather than being deprioritized because most users can’t access it.
Looking Ahead
Reid didn’t attach timelines to either development. The multimodal indexing capabilities she mentioned appear to be current, while subscription-aware personalization is a stated direction with some existing features already in place.
Google I/O is scheduled for May 19-20. Reid said on the podcast that the company is “actively building” but that the pace of AI development means some features may come together as late as April and still make it to the stage.
Featured Image: Mawaddah F/Shutterstock