Also be sure to follow our Google Research Kaggle site as we plan more challenges in the coming months: https://www.kaggle.com/organizations/google-research (5/5)
Thank you to every participant for helping us bridge the gap between AI research and clinical impact. Stay tuned as we review the submissions and announce the winners. (4/5)
These open models are designed to accelerate the development of privacy-focused, adaptable AI for the front lines of care, and the community delivered. (3/5)
We invited developers to prototype human-centered AI applications using Google's Health AI Developer Foundations (HAI-DEF), and the response was inspiring. (2/5)
The MedGemma Impact Challenge has come to a close with a remarkable 850+ submissions in six weeks. (1/5)
By using Google Cloud AI to handle the "heavy lifting" of network maintenance and customer support, mobile companies can lower their costs while providing customers with a faster, more reliable, and more secure service. Learn more. https://goo.gle/40gcXKQ
This means you can log into apps securely and instantly, without waiting for SMS codes. (4/4)
-Instead of waiting on hold to fix a billing error or a connection issue, AI agents can spot the mistake and fix it automatically. They can even apply a credit to your bill without you having to call.
-We're helping telcos use the network itself to verify your identity. (3/4)
-We're working with partners like Deutsche Telekom and Vodafone to build networks that sense trouble and repair it instantly. If a tower goes down or a connection drops, the AI re-routes traffic in seconds, often before you notice. (2/4)
At #MWC26 this week, @GoogleCloud shared how we're helping the world's largest mobile companies move into the era of agentic AI. (1/4)
Cinematic Video Overviews in @NotebookLM are rolling out now for Ultra users in English.
Learn more → https://blog.google/innovation-and-ai/products/notebooklm/generate-your-own-cinematic-video-overviews-in-notebooklm/
Video: https://twitter.com/google/status/2030040439642783914 (2/2)
You can now turn your sources into cinematic video explainers in @NotebookLM
It uses our most advanced models, like Gemini 3, Nano Banana Pro and Veo 3, to animate the visual story behind your sources.
Here's a Cinematic Video Overview explaining... Cinematic Video Overviews ↓ (1/2)
https://blog.google/innovation-and-ai/products/google-ai-updates-february-2026/?utm_source=tw&utm_medium=social&utm_campaign=nfg&utm_content=&utm_term= (3/3)
Here's what you may have missed from February, including new releases of Gemini 3.1 Pro and Nano Banana 2, new skills training from Grow with Google, and @SundarPichai's AI Impact Summit announcements of Google's investments in AI infrastructure. (2/3)
Weβre back with a monthly recap of our biggest AI news moments. (1/3)
Learn more and check out the WAXAL dataset here: http://goo.gle/4cxNHae
We just released WAXAL. This open-access dataset delivers 2,400+ hours of high-quality speech data for 27 Sub-Saharan African languages, serving 100M+ speakers. Crucially, this community-rooted effort, led by African organizations, changes the roadmap for truly inclusive voice AI. (2/2)
The biggest barrier for AI applications in Africa isn't model complexity; it's the scarcity of data for the 2,000+ languages spoken there. (1/2)
See how our open-source Earth AI model SpeciesNet is helping to promote wildlife monitoring and conservation worldwide. https://goo.gle/4rgCqyG
We launched it as a free, open-source tool a year ago, and today, research groups are using it to make sense of their camera data faster than ever. (2/2)
A collage shows wildlife in North American forests. Each animal, including three black bears and a coyote, is framed by a colored box with a species label and a high confidence percentage.
The four photos show African wildlife in their natural habitat at various times of day. Digital overlays identify the animals, such as an "elephant: 99.8%" and a "lion: 99.4%," for data collection purposes.
SpeciesNet is an Earth AI model trained to automatically identify nearly 2,500 categories of mammals, birds and reptiles from data captured by motion-triggered wildlife cameras. (1/2)
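The image descriptions above show each detection tagged with a species label and a confidence percentage. As a rough illustration of how such camera-trap classifications might be filtered before analysis, here's a minimal Python sketch; the labels, scores, and threshold are hypothetical, and this does not reflect SpeciesNet's actual output schema:

```python
# Hypothetical classifier output: one dict per camera-trap image,
# mapping a predicted species label to a confidence score in [0, 1].
predictions = [
    {"elephant": 0.998},
    {"lion": 0.994},
    {"unknown": 0.41},
]

CONFIDENCE_THRESHOLD = 0.90  # keep only high-confidence detections


def high_confidence_labels(preds, threshold=CONFIDENCE_THRESHOLD):
    """Return (label, score) pairs whose confidence meets the threshold."""
    kept = []
    for pred in preds:
        for label, score in pred.items():
            if score >= threshold:
                kept.append((label, score))
    return kept


print(high_confidence_labels(predictions))
# → [('elephant', 0.998), ('lion', 0.994)]
```

In practice, a research group would tune the threshold per species and survey, trading missed detections against false positives.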
• @Producer_AI: We welcomed ProducerAI to Labs! ProducerAI is a creative collaborator, whether you're writing lyrics, refining a melody, or inventing entirely new genres.
What a start to the year. 2026 is just getting warmed up.
Explore all of our experiments at: https://labs.google/ (13/13)
• Opal: Introduced a new agent step that analyzes a goal, determines the next best step, and automatically calls models and tools to finish the task, such as Veo for video or web search for research. (12/13)
• Pomelli: Launched "Photoshoot," allowing users to take a single image of their product and easily create high-quality, customized product shots to elevate their marketing. (11/13)
Plus, NotebookLM partnered with @Zillow to launch a featured notebook pre-loaded with their extensive home buying resources. Be on the lookout for other exciting partnerships to come! (10/13)
• @NotebookLM: A massive month for mobile. Users can now add customization prompts to Infographics and Slide Decks directly and generate video overviews on the mobile app. (9/13)
• Project Genie: An early research prototype powered by @GoogleDeepMind's Genie 3 model, Project Genie lets users create and explore infinitely diverse worlds. Available to AI Ultra subscribers in the US. (8/13)
We also joined the Antigravity MCP store and added new tools a coding agent can use to edit screens and generate screen variants. (7/13)
• @StitchbyGoogle: We leveled up the MCP ecosystem. Users can now get step-by-step MCP client instructions and grab their API key directly from the Exports panel. (6/13)