Google Is Adding New Accessibility and AI Features to Chrome and Android

Google announced on Thursday that Chrome and Android will soon get new AI and accessibility features. Most notably, you can now ask Gemini about what's on your screen and what's in your pictures using TalkBack, Android's screen reader.
Google Unveils New Accessibility Features
Google brought Gemini's capabilities to TalkBack last year, enabling blind and low-vision users to receive AI-generated descriptions of images even when alt text is missing. Users can now also ask follow-up questions about their photos and get answers.
Google also said it is improving Expressive Captions, Android's real-time captioning feature, which uses AI to capture not only what a speaker says but how they say it.
Because drawing out the sound of a word is one way people express themselves, Google has added a new duration feature to Expressive Captions. You'll now be able to tell when a sports commentator is calling an "amaaazing shot" or when someone is shouting "nooooo" rather than just saying "no." Labels for new sounds, such as whistling or throat clearing, will also begin to appear.
A few new accessibility features are also coming to Google Chrome. Previously, the desktop version of the browser did not support screen readers for scanned PDF files. Google is changing that with optical character recognition (OCR): Chrome can now recognize the text in scanned PDFs, letting you highlight, copy, and search it, and even have it read aloud by a screen reader.
The update is rolling out in English to devices running Android 15 in the United States, United Kingdom, Canada, and Australia.