Want to learn how to USE AI technology to make money and/or your life easier? Join our FREE AI community here: https://www.skool.com/ai-with-apex/about
Today in AI: MIT Designs Proteins by Motion, AI Turns Sound Into Visuals, and Google Upgrades Gemini Live
Today’s AI news shares a common thread: the most interesting systems are no longer built around static outputs. They are being designed for motion, responsiveness, and real-time interaction.
TL;DR
- MIT researchers introduced VibeGen, a model that designs proteins based on desired motion patterns rather than only static structure.
- MIT says the VibeGen work appeared in Matter on March 24, 2026, and the team validated designs with physics-based molecular simulations.
- MIT also profiled graduate student Carlos Mariano Salcedo, who is using neural cellular automata to turn music and sound into evolving visual systems.
- Google announced Gemini 3.1 Flash Live, which it describes as its highest-quality audio and voice model yet for real-time conversation.
- Across biotech, creative tools, and voice software, the day’s stories point to AI systems that work with movement, sound, and longer-running interaction.
MIT wants AI-designed proteins to move with purpose
What happened
MIT researchers unveiled VibeGen, a system for designing proteins around how they move, not just how they look once folded. The project shifts protein design toward dynamics, treating motion as a core part of biological function rather than a secondary detail.
Why it matters
That is a meaningful change in emphasis. In biology, many proteins act like molecular machines, and their usefulness depends on flexing, vibrating, and changing shape over time. If AI can reliably design for those behaviors, it could expand the toolkit for biomaterials and future therapeutic research.
Key details
- MIT published the research story on March 26, 2026, and said the underlying paper was published March 24 in Matter. MIT News
- VibeGen is designed to generate amino-acid sequences that match a target motion pattern rather than only a target structure. MIT News
- MIT says the system uses diffusion-model techniques, starting from a random sequence and iteratively refining it toward a desired vibrational behavior. MIT News
- The approach includes a second predictor model that evaluates whether candidate sequences match the intended motion, creating a generate-and-check loop. MIT News
- MIT reports that many outputs were de novo proteins and that the team validated behavior with physics-based molecular simulations. MIT News
- The researchers also highlight functional degeneracy: different sequences and folds may achieve the same vibrational goal. MIT News
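The MIT article describes the generate-and-check pattern only at a high level and does not publish code, but the core loop is easy to picture. Below is a deliberately toy sketch of that pattern: a generator proposes sequence edits and a predictor scores how well each candidate matches a target motion profile. Everything here is an illustrative assumption, not VibeGen itself: the `motion_score` heuristic stands in for a trained predictor network, and simple random mutation stands in for the diffusion-based generator.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def motion_score(sequence, target):
    """Stand-in for a learned motion predictor. Toy heuristic:
    glycine (G) counts as flexible (+1), proline (P) as rigid (-1),
    everything else neutral. Returns 0 for a perfect match,
    more negative for worse matches."""
    profile = [1.0 if aa == "G" else -1.0 if aa == "P" else 0.0
               for aa in sequence]
    return -sum((p - t) ** 2 for p, t in zip(profile, target))

def generate_and_check(target, length=12, steps=500, seed=0):
    """Refine a random sequence toward a target motion profile.
    Mirrors the generate-and-check loop described for VibeGen,
    but uses random mutation instead of a diffusion model."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best = motion_score(seq, target)
    for _ in range(steps):
        i = rng.randrange(length)
        candidate = seq[:]
        candidate[i] = rng.choice(AMINO_ACIDS)
        score = motion_score(candidate, target)
        if score >= best:  # the predictor acts as the "check" step
            seq, best = candidate, score
    return "".join(seq), best

# Desired flexibility pattern: flexible / neutral / rigid, repeated.
target = [1.0, 0.0, -1.0] * 4
sequence, score = generate_and_check(target)
print(sequence, score)
```

The point of the sketch is the division of labor: one model proposes, a second model evaluates against the motion target, and only improvements are kept, which is also why different sequences can reach the same score (the "functional degeneracy" MIT highlights).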
Source links
https://news.mit.edu/2026/mit-engineers-design-proteins-by-motion-not-just-shape-0326
An MIT student is building AI that lets audiences see sound
What happened
MIT profiled Carlos Mariano Salcedo, a master’s student in the institute’s Music Technology and Computation Graduate Program, for work that uses AI to visualize music and other sounds. His system turns audio into evolving visual behavior rather than fixed rendered effects.
Why it matters
This story adds a useful counterpoint to the day’s harder science news. It shows AI not only as a tool for prediction or automation, but as a medium for creative expression and new sensory experiences built around sound, emergence, and live response.
Key details
- MIT published the profile on March 26, 2026. MIT News
- Salcedo is part of MIT’s new Music Technology and Computation Graduate Program, which MIT says enrolled five master’s students in its first cohort. MIT News
- His work uses neural cellular automata, combining cellular automata concepts with machine learning to grow images that can regenerate. MIT News
- MIT says the system can be driven by audio so that visuals react to sound in real time. MIT News
- He also built a web interface that lets users tune how musical energy affects the visual system during performance. MIT News
- MIT notes that Salcedo was selected to deliver the student address at the 2026 Advanced Degree Ceremony for the School of Humanities, Arts, and Social Sciences. MIT News
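The profile does not share Salcedo's code, but the core idea of coupling audio energy to an evolving visual system can be sketched with an ordinary (non-neural) cellular automaton. The rule-90 update and the energy-to-generations mapping below are illustrative assumptions, not his system: louder audio simply makes the pattern evolve faster.

```python
def ca_update(cells):
    """One generation of an elementary cellular automaton
    (rule 90: each cell becomes the XOR of its two neighbors,
    with wraparound at the edges)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def audio_driven_step(cells, energy):
    """Advance the automaton 0-3 generations per frame depending on
    audio energy in [0.0, 1.0], a stand-in for loudness from a live
    audio analyser: silence freezes the visuals, loud sound makes
    them grow faster."""
    generations = int(energy * 3)
    for _ in range(generations):
        cells = ca_update(cells)
    return cells

# Seed a single live cell, then step it with a "loud" frame.
frame = [0] * 16
frame[8] = 1
print(audio_driven_step(frame, 0.9))
```

A neural cellular automaton replaces the fixed XOR rule with a small learned network applied at every cell, which is what gives Salcedo's visuals their organic, regenerating quality; the audio-coupling idea, though, is the same as in this toy.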
Source links
https://news.mit.edu/2026/seeing-sounds-mariano-salcedo-0326
Google says Gemini 3.1 Flash Live makes voice AI faster and more natural
What happened
Google announced Gemini 3.1 Flash Live as a new step forward for its real-time audio and voice stack. The company is positioning it as a more natural and reliable model for continuous conversation across developer, enterprise, and consumer products.
Why it matters
Voice AI is moving from simple command-and-response systems toward longer, more fluid interaction. Google’s latest update shows how major AI companies are competing on latency, tonal understanding, and conversational continuity, not just text quality.
Key details
- Google published the announcement on March 26, 2026, and described Gemini 3.1 Flash Live as its “highest-quality audio and voice model yet.” Google DeepMind Blog
- Google says the model offers improved precision, lower latency, better tonal understanding, and more reliable handling of complex tasks in voice settings. Google DeepMind Blog
- The company says Gemini Live can now follow conversation threads for twice as long as the previous model. Google DeepMind Blog
- Google reports a 90.8% score on ComplexFuncBench Audio and a 36.1% result on Scale AI’s Audio MultiChallenge with thinking enabled. Google DeepMind Blog
- According to Google, the model is available in preview through the Gemini Live API in Google AI Studio, in Gemini Enterprise for Customer Experience, and for end users through Search Live and Gemini Live. Google DeepMind Blog
- Google also says all audio from 3.1 Flash Live is watermarked, and its developer changelog shows the company has been iterating on native audio dialog models since 2025. Google DeepMind Blog; Google AI Developer Changelog
Source links
https://deepmind.google/blog/gemini-3-1-flash-live-making-audio-ai-more-natural-and-reliable/
https://ai.google.dev/gemini-api/docs/changelog
The connective idea across all three stories is simple: AI is becoming less static. Whether it is designing protein behavior, translating sound into living visuals, or powering longer real-time conversations, the field is increasingly focused on systems that respond, adapt, and move through time.