In this post:
- Google revealed that it is rolling out Gemini’s real-time AI video features.
- The company also said that Gemini can now “see” screens and camera feeds in real time.
- The new feature is available to select Google One AI Premium subscribers.
Google announced that it is rolling out Gemini’s real-time AI capabilities, which will allow the AI system to analyze smartphone screens and camera feeds instantly. These features will be accessible to select Google One AI Premium subscribers.
The new feature follows Google’s first “Project Astra” demonstration nearly a year ago, bringing AI-powered camera and screen-sharing capabilities to Gemini Live. Google said the new live video feature lets Gemini interpret the feed from a user’s smartphone camera in real time and answer questions about it.
The tech company released a video earlier this month demonstrating the use of Gemini’s live video feature to choose a paint color for freshly glazed pottery.
Google rolls out Gemini’s real-time AI video features
The new feature also enables users to have a back-and-forth conversation with Gemini about what is on their screen in real time. A Reddit user accessed the “Share screen with Live” option by tapping the button above the “Ask Gemini” text field on the Gemini overlay.
Source: Reddit. A Reddit user demonstrates how to share a mobile screen with Gemini Live.
The Reddit user also posted a video demonstrating Gemini’s new screen-reading ability. It is one of the two features the company said it would “start rolling out to Gemini Advanced Subscribers as part of the Google One AI Premium Plan” later this month. The real-time camera capability can be accessed by opening the full Gemini Live interface and starting a video stream.
Google maintained that Gemini Live would use a new phone call-style notification and a more compact full-screen interface, though these had not yet been widely rolled out. The company also said in January that Pixel and Galaxy S25 series owners would be “among the first to get Project Astra capabilities like screen sharing and live video streaming.”
Google rolls out Canvas and audio overview features
Google also released another new Gemini feature on March 18 called “Canvas,” which lets users refine their documents and code. Users can select “Canvas” in the prompt bar to write and edit documents or code, with changes appearing in real time.
Google highlighted that Canvas will streamline the process of turning coding ideas into working prototypes for web apps, Python scripts, games, simulations, and other interactive apps. The feature lets users focus on creating, editing, and sharing their code and designs in one place, without the hassle of switching between multiple applications. Canvas rolled out to Gemini and Gemini Advanced subscribers globally in all languages.
Google also introduced Audio Overview, which transforms users’ documents, slides, and even Deep Research reports into engaging podcast-style discussions between two AI hosts.