Samsung's New Bixby in One UI 8.5 for Galaxy Devices Is Laced with Perplexity AI

Person with smartphone in city

The new Bixby debuting with One UI 8.5 is a major step toward turning Samsung's assistant into a truly conversational device agent instead of a simple voice command tool. Built on deeper integration with system settings and real-time web search, it lets users describe what they want in natural language and have the phone take care of the details. For example, saying "I don't want the screen to time out while I'm still looking at it" will automatically switch on Keep screen on while viewing, with no menus and no hunting through sub-pages. Because results from the open web now appear directly in the Bixby interface through a new Perplexity-powered search experience, users can research, compare, and take action in one seamless flow without bouncing between apps.

Under One UI 8.5, Bixby becomes a more intuitive control hub for Galaxy devices, with richer context awareness and smarter responses. The assistant can interpret intent, reference the phone's current settings, and propose concrete actions: asked why the screen keeps activating in a pocket, it can surface accidental touch protection and let that setting be toggled on immediately, a real benefit for screen reader users. This conversational layer sits on top of existing device controls, so users can adjust display, sound, connectivity, and privacy preferences entirely by voice, building on earlier natural-language search in Settings and extending it system-wide. Early beta deployments on Galaxy S25 devices show this revamped Bixby being refined ahead of a broader public rollout alongside the Galaxy S26 series, with additional markets to follow after initial testing.

For blind and low-vision users, the new Bixby in One UI 8.5 ties into Samsung's accessibility stack, including Bixby Vision features such as the scene describer, text reader, object identifier, and color detection, to support everyday orientation and information access. With accessibility modes enabled, pointing the camera at a scene or object can trigger Bixby to announce what is in front of the user, read on-screen or printed text aloud, or describe colors, capabilities that pair well with the assistant's new conversational controls for settings and apps. Because Bixby can both fetch live information from the web and manipulate device behavior, a blind user can, for example, ask for nearby cafés that match specific criteria, have the results presented in a consistent interface, and then immediately adjust navigation, display, or audio preferences through further voice instructions. This reduces the cognitive and physical load of memorizing menu paths or switching between multiple inaccessible web views.

*Written by the PASS Power Blog Team
