Want to learn how to USE AI technology to make money and/or your life easier? Join our FREE AI community here: https://www.skool.com/ai-with-apex/about
Gemini Gets Generative Music, Real-Time Emotional Avatars Arrive, and EEG Foundation Models Take Shape
Today’s theme: AI is moving from “outputs” to experiences—music inside the chat app, video agents that react in real time, and foundation models aimed at messy biosignals.
TL;DR
- Google says the Gemini app is rolling out Lyria 3 (beta) for 30-second music generation with SynthID watermarking.
- Tavus announced Phoenix-4, pitching real-time conversational video with “active listening” behaviors and emotional controls.
- Zyphra introduced ZUNA, a 380M-parameter EEG foundation model positioned around denoising/reconstruction and future noninvasive “thought-to-text” work.
- KDnuggets shared a practical set of Python/pandas preprocessing patterns for building sturdier data pipelines.
- KDnuggets also rounded up five AI code review tools to help teams handle rising PR volume and review bottlenecks.
Google DeepMind / Gemini: Lyria 3 music generation ships in the Gemini app
What happened
Google says the Gemini app now includes Lyria 3 in beta, letting users generate 30-second music clips from prompts (and, per Google, also from uploaded images/video). Google also says every generated track is embedded with SynthID watermarking for identification.
Why it matters
This is a notable shift toward “creative tools inside the assistant,” not a standalone music-gen product. Just as importantly, Google is pairing consumer distribution with provenance-by-default: watermarking is presented as part of the product, not an afterthought.
Key details
- Google describes Lyria 3 as producing 30-second tracks inside the Gemini app. (link)
- Google highlights new features including auto-lyrics, more creative control (e.g., style/vocals/tempo), and improved musical complexity/realism. (link)
- Google says generated audio includes SynthID watermarking for AI-audio identification/verification. (link)
- Availability notes from Google include an 18+ requirement, support for multiple languages, and higher generation limits for subscribers. (link)
- Google also points to YouTube’s Dream Track as another place Lyria 3 appears (with rollout details described by Google). (link)
Source links
https://blog.google/innovation-and-ai/products/gemini-app/lyria-3/?utm_source=openai
https://deepmind.google/models/lyria/?utm_source=openai
Tavus: Phoenix-4 pitches real-time “emotional intelligence” for generative video
What happened
Tavus announced Phoenix-4, describing it as a real-time human rendering system designed for conversational video interfaces, including “active listening” behaviors and emotional control. Tavus also positions Phoenix-4 as part of a broader “behavioral stack” alongside Raven-1 (perception) and Sparrow-1 (timing).
Why it matters
If the next wave of agents is meant to feel present, the hard part isn’t just photorealism—it’s timing, turn-taking, and subtle reactions while the user is speaking. Tavus is explicitly competing on that behavioral layer, which could reshape how teams design customer-facing avatars (and how they think about disclosure and consent).
Key details
- Tavus frames Phoenix-4 as real-time rendering with “emotional intelligence” and active listening behaviors. (link)
- Tavus describes 10+ emotion states and the ability to control them. (link)
- Performance claims include 40fps at 1080p and millisecond-level latency in Tavus’s description. (link)
- MarkTechPost summarizes Tavus’s conversational interface as claiming sub-600ms end-to-end latency. (link)
- Tavus positions Phoenix-4 as working with Raven-1 (perception) and Sparrow-1 (timing) in a behavioral stack. (link)
Source links
https://www.tavus.io/post/phoenix-4-real-time-human-rendering-with-emotional-intelligence?utm_source=openai
https://www.marktechpost.com/2026/02/18/tavus-launches-phoenix-4-a-gaussian-diffusion-model-bringing-real-time-emotional-intelligence-and-sub-600ms-latency-to-generative-video-ai/?utm_source=openai
Zyphra: ZUNA, a 380M-parameter EEG foundation model aimed at noninvasive BCI workflows
What happened
Zyphra announced ZUNA, a 380M-parameter foundation model for EEG (scalp brain-signal) data. Zyphra describes ZUNA as a diffusion autoencoder trained to denoise and reconstruct EEG, with capabilities like upsampling and reconstructing missing channels across varying electrode layouts.
Why it matters
EEG is notoriously noisy and inconsistent across devices and lab protocols, which makes scaling datasets—and model reuse—hard. A foundation model tuned for reconstruction and representation learning could become shared infrastructure for downstream BCI tasks, even as “thought-to-text” remains a longer-term and sensitive frontier.
Key details
- Zyphra positions ZUNA as a 380M-parameter EEG foundation model and connects it to work toward noninvasive “thought-to-text.” (link)
- Zyphra describes ZUNA as a diffusion autoencoder trained to denoise and reconstruct EEG signals. (link)
- Capabilities Zyphra lists include upsampling EEG, reconstructing missing channels, and working across arbitrary channel layouts. (link)
- Zyphra cites dataset fragmentation (small datasets, differing protocols) as a motivation for why the field needs more general models. (link)
Source links
https://www.zyphra.com/post/zuna?utm_source=openai
Builder corner: 8 Python tricks for sturdier data preprocessing
What happened
KDnuggets published a hands-on list of Python/pandas preprocessing patterns aimed at taking messy datasets into more reliable, pipeline-friendly shapes. The emphasis is on practical fixes: standardizing columns, safely coercing types, and handling duplicates/outliers.
Why it matters
Many “AI problems” fail upstream—bad types, hidden whitespace, inconsistent timestamps, and outliers that quietly skew training and analytics. Small, repeatable cleaning patterns reduce silent errors and make downstream modeling and BI more trustworthy.
Key details
- Patterns include normalizing column names and stripping whitespace across datasets. (link)
- Recommendations include safe numeric conversion with errors="coerce". (link)
- Recommendations include safe datetime parsing with errors="coerce". (link)
- The article also covers de-duplication patterns and quantile clipping for outlier handling. (link)
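Taken together, the patterns above form a short cleaning pass. A minimal sketch in pandas (the toy DataFrame, column names, and quantile bounds below are illustrative assumptions, not from the article):

```python
import pandas as pd

# Toy messy dataset -- stand-in for whatever you load in practice.
df = pd.DataFrame({
    "  Order ID ": [1, 2, 2, 3],
    "Amount": ["10.5", "oops", "20.0", "1000000"],
    "Signup Date": ["2024-01-05", "not a date", "2024-02-10", "2024-03-01"],
})

# 1. Normalize column names: strip whitespace, lowercase, snake_case.
df.columns = (
    df.columns.str.strip().str.lower().str.replace(r"\s+", "_", regex=True)
)

# 2. Safe numeric conversion: unparseable values become NaN instead of raising.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

# 3. Safe datetime parsing: bad strings become NaT instead of raising.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

# 4. De-duplicate on the key column, keeping the first occurrence.
df = df.drop_duplicates(subset="order_id", keep="first")

# 5. Quantile clipping: cap extremes at the 5th/95th percentiles.
lo, hi = df["amount"].quantile([0.05, 0.95])
df["amount"] = df["amount"].clip(lower=lo, upper=hi)

print(df)
```

The errors="coerce" flag is the key move: bad values surface as NaN/NaT you can count and handle, rather than exceptions that kill the pipeline mid-run.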
Source links
https://www.kdnuggets.com/from-messy-to-clean-8-python-tricks-for-effortless-data-preprocessing?utm_source=openai
Dev productivity: Top 5 AI code review tools (and what each is trying to solve)
What happened
KDnuggets published a roundup of five AI-assisted code review tools: Graphite, Greptile, Qodo, CodeRabbit, and Ellipsis. The list focuses on different approaches—workflow, context building, test generation, PR summaries, and even auto-fixing.
Why it matters
As teams adopt coding assistants, pull request volume can climb faster than review capacity. AI review tools are increasingly positioned as the quality gate: summarizing changes, proposing test plans, and enforcing standards—without forcing senior engineers to read every line first.
Key details
- KDnuggets lists five tools (not ranked): Graphite, Greptile, Qodo, CodeRabbit, Ellipsis. (link)
- Graphite is highlighted for a stacked PR workflow plus AI summaries/test plans. (link)
- Greptile is described as building a repo “knowledge graph” for deeper context beyond the diff. (link)
- Qodo is framed around test generation and quality analysis. (link)
- CodeRabbit is positioned as a PR bot offering walkthrough summaries and configurable rules/PR chat. (link)
- Ellipsis is described as being able to implement fixes from review comments. (link)
Source links
https://www.kdnuggets.com/top-5-ai-code-review-tools-for-developers
Takeaway: The product surface area for AI is expanding in two directions at once—more creative power for everyday users (music in chat) and more “human-facing” interfaces for agents (real-time video behavior)—while the builder stack keeps tightening with practical tools that make data and code review less brittle.