Google Gemini Can Now Generate Music Using DeepMind’s Lyria 3 Model

By Tech Daffy

Google just made its Gemini AI app a lot more creative. The company announced Wednesday that it’s rolling out a music generation feature to Gemini, powered by DeepMind’s Lyria 3 — its latest and most advanced music generation model. The feature is currently in beta, but it’s already available to Gemini users aged 18 and older in multiple languages across the globe.

Whether you want a heartfelt ballad, an upbeat pop track, or a “comical R&B slow jam about a sock finding its match,” Gemini will take your description and turn it into a fully realized 30-second song — complete with lyrics and AI-generated cover art courtesy of Nano Banana.

What Lyria 3 Brings to the Table

Lyria 3 is a meaningful step forward from Google’s previous music generation models. According to the company, the new model produces more realistic and structurally complex tracks — music that sounds less like a rough AI approximation and more like something a producer might actually work with.

Users have direct control over several creative elements: style, vocals, and tempo can all be adjusted, giving you more say over the final sound than a simple text prompt alone would allow. The result is a tool that functions less like a novelty and more like a genuine creative starting point.

One of the more interesting features is the ability to generate music from visual media. Upload a photo or a short video clip, and Gemini will analyze the mood of the content and compose a track to match it — a capability that could be especially useful for content creators building social media videos, reels, or short films.

Artist Style: Inspiration, Not Imitation

Predictably, Google has taken a careful position on one of the thorniest issues in AI music: the question of artist style replication. You can include an artist’s name in your prompt, but Gemini won’t reproduce their sound directly. Instead, the model will treat the reference as “broad creative inspiration,” generating a track that shares a similar mood or stylistic feel.

“Music generation with Lyria 3 is designed for original expression, not for mimicking existing artists,” Google said in a blog post. “We also have filters in place to check outputs against existing content.”

Whether those filters are robust enough to satisfy the music industry remains to be seen — and Google itself acknowledged that the line between "stylistic inspiration" and outright imitation of an artist's style isn't entirely clear-cut. It's a nuance the company will likely be pressed on as the feature scales.

SynthID: Watermarking Every Track

Every song created through Lyria 3 will carry a SynthID watermark — an invisible, embedded signal that identifies the track as AI-generated content. This is part of Google’s broader commitment to making AI-created media traceable and transparent.

Going a step further, Google is also adding SynthID detection capabilities directly into Gemini. Users will be able to upload any audio track and ask Gemini whether it was AI-generated — a feature that could prove valuable for platforms, labels, and listeners trying to distinguish human-made music from synthetic output.

YouTube Dream Track Goes Global

Alongside the Gemini app launch, Google is expanding its Dream Track feature on YouTube — a tool that helps creators build AI-generated music for their content. Previously limited to YouTube creators in the United States, Dream Track is now rolling out globally, opening up Lyria 3-powered music creation to a much wider base of video creators.

It’s a strategic move that positions Google’s AI music tools as a core part of the YouTube creator workflow — not just a standalone app feature, but an integrated creative asset within one of the world’s largest content platforms.

AI Music: A Complicated Landscape

Gemini’s new feature arrives at a genuinely contested moment for AI and the music industry. On one side, major platforms are actively embracing generative audio: YouTube and Spotify have both been negotiating licensing agreements with music labels to enable and monetize AI-generated content in controlled, revenue-sharing ways.

On the other side, a wave of lawsuits from artists and rights holders has targeted AI companies over the use of copyrighted music in training data — a legal battle with no clear resolution yet in sight. The outcomes of those cases will likely shape how broadly and confidently companies like Google can deploy music generation tools going forward.

Detection is also becoming a priority for the industry. Streaming platform Deezer has already deployed tools designed to identify and flag AI-generated music to prevent fraudulent stream manipulation — an early sign that platforms are building infrastructure to handle the flood of synthetic content that tools like Lyria 3 will inevitably produce.

Who Can Use It and When

Music generation via Lyria 3 is rolling out now to all Gemini users aged 18 and older worldwide. The feature currently supports eight languages: English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese.

For casual users, it’s a genuinely fun creative tool. For content creators, it could reduce the friction of sourcing licensed background music. And for the music industry, it’s yet another signal that AI-generated audio is no longer a hypothetical — it’s here, it’s scaling, and it’s time to figure out what comes next.
