Google I/O 2024 centered on the company's current and upcoming artificial intelligence features, showcasing Gemini in a more assistive role alongside AI-powered upgrades across other products.

First, Gemini is being updated to work as an overlay on top of other applications, assisting the user in several ways. For instance, after watching a YouTube video, a user can open Gemini and tap the "Ask this video" option to ask questions about the video or have it summarized.

Users with a Gemini Advanced subscription can additionally upload PDFs and take advantage of the model's extended context window.

(Photo : SEBASTIEN BOZON/AFP via Getty Images)

Additionally, Gemini will function more naturally inside apps; drag-and-drop is one such example. At the keynote, Google showed how users could instruct the chatbot to generate an image and then, once it was complete, drag and drop it into a messaging app to send it to a friend.

Google promises that, as time passes, Gemini will learn more about each app on a user's phone and provide Dynamic Suggestions that will make navigating between them easier.

Read Also: Google Gemini Tiers: Nano, Flash, Pro, and Ultra-What Are the Differences? 

Google AI Beyond Gemini

Next, Circle to Search, currently available on over 100 million Android smartphones, is gaining improved problem-solving capabilities.

In particular, it will assist students with their homework by helping them work through difficult math and physics word problems.

Users won't need to consult digital syllabi or info sheets, since they will receive a thorough, step-by-step explanation of how to solve the problem.

The new capability is powered by Google's LearnLM, a family of models aimed at making learning easier with AI. Google says Circle to Search will eventually be able to tackle increasingly complex problems, including symbolic formulas, diagrams, and graphs.

Google also revealed that Gemini Nano, the on-device model built into Android, will get an update known as "Gemini Nano with Multimodality."

With this updated model, users will be able to interact with Gemini and get answers to their questions using a variety of media inputs, including photos, text, voice, and video.

The model will power features like TalkBack, which provides spoken descriptions of images for users with visual impairments, as well as real-time scam alerts during phone calls.

Google Chrome AI

Google revealed that Chrome on desktop will now include its Gemini AI. Starting with Chrome 126, users can access Gemini Nano, an on-device model that powers text-generation features.

Last year, the Pixel 8 Pro and Pixel 8 were the first devices to use the tech giant's lightweight large language model (LLM), Gemini Nano. Google has since adapted both the browser and the model so that Nano can run efficiently inside Chrome.

According to the official blog, Gemini Nano runs on AICore, a system-level capability introduced in Android 14 that makes foundation models available directly on mobile devices.

AICore eliminates the need for apps to distribute and download models themselves by pre-installing foundation models on the device. Developers can adapt these models using LoRA (low-rank adaptation) fine-tuning. Android AICore, which enables new Google app features, runs on devices such as the Samsung Galaxy S24 series and the Google Pixel 8 Pro.

Related Article: Google Boosts Trust in AI with Enhanced Content Watermarking for Ethical Innovation 

Written by Aldohn Domingo


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.