
OpenAI introduces ChatGPT Atlas, a new AI-powered web browser that brings real-time assistance, automation, and privacy controls directly into browsing
What is the Atlas Browser?
On October 21, 2025, OpenAI launched ChatGPT Atlas, a web browser with ChatGPT built directly in. Users can now access AI assistance anywhere on the web, with no need to copy-paste or switch tabs. Atlas puts AI at the center of browsing: it understands user context, draws on memory from previous chats, and helps with tasks. The company also says the browser offers a new way to view the web. ChatGPT previously offered a search feature; Atlas now brings it into the browser, letting users rely on AI to perform tasks such as research or booking appointments.
Key Features of Atlas
In-browser assistance: ChatGPT understands the page and the user's intent and provides instant help. For example, it can analyze slides during a lecture and generate practice questions. You can interact with the sidebar as well as the New Tab page, which is your starting point: enter a question or URL in the search bar, get autocomplete suggestions, and browse tabs for links, images, videos, and news.
Suggestions and automation: Atlas remembers browsed content, suggests next steps (such as returning to a page or deepening research), and creates to-do lists from recent activity. It can, for instance, pick up where earlier holiday gift research left off.
Agent mode: an enhanced version of ChatGPT that performs tasks such as research, analysis, task automation, event planning, or appointment booking, using browsing context to work faster in Atlas. Examples include adding recipe ingredients to a cart on Instacart or compiling team insights. It starts from a prompt or a button, can open tabs or click page elements, and stops on sensitive sites such as financial institutions.
Control and Privacy
Users keep control. They can block ChatGPT from viewing a page with a toggle in the address bar. Incognito mode logs users out of ChatGPT and does not save chats or activity. Browser memories remain private and are not used for model training unless users opt in. ChatGPT also provides parental controls: memories or Agent mode can be disabled, and safety safeguards restrict Agent mode. A logged-out mode is supported, and OpenAI's red-teaming practices help mitigate vulnerabilities, such as malicious instructions hidden in webpages or emails.
How Atlas Works
Atlas works within the user's flow. You interact through the sidebar or prompts and get real-time help based on page content, such as suggested relevant links on vrbo.com or adding items to a cart on shopping sites. Agent mode starts only upon approval, with safeguards in place. Dark mode and a security dropdown are useful, as is the integration with ChatGPT: Atlas natively embeds ChatGPT and syncs memories, chats, and settings. Website owners can add ARIA tags to expand agent capabilities. Atlas is available on macOS for Free, Plus, Pro, and Go users, in beta for Business users, and admin-enabled for Enterprise and Edu; Windows, iOS, and Android versions are coming soon. Download: https://chatgpt.com/atlas.
Competition from Google
Atlas challenges traditional browsers by putting AI at the core, reducing the need for external tools, and enabling agentic web use. It challenges Google Chrome's dominance by delegating routine tasks. Rumors and other reports suggest OpenAI wants to acquire Chrome and make it AI-first, and that this launch is a step towards agentic systems. The future roadmap includes multi-profile support, improved developer tools, and discoverability for the Apps SDK (see the Website Publishers FAQ; new features will appear in the release notes). Early testers liked the integration, as well as no longer needing screenshots for lecture aids. Agent mode is in preview, so errors may occur on complex tasks; OpenAI says it will improve reliability and latency. For now, let's move on to another topic: AI that reads human brains.
Brain Decoder that Reads Thoughts
Researchers at the University of Texas, Alexander Huth and Jerry Tang, created an AI brain decoder that converts thoughts into text using fMRI brain scans. It could help people with aphasia, a condition that impairs understanding or producing language. Here is how it works: the system is first trained on reference participants, who listen to 10 hours of radio stories while fMRI data is collected. To adapt the decoder to newcomers, two converter algorithms are used: one trained on 70 minutes of story data, the other on 70 minutes of silent Pixar films; functional alignment then maps the new participant's brain responses onto the reference model. The decoder interprets new brain activity and predicts semantically related text, not the exact words.
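The core idea, decoding a semantic representation from brain activity and then choosing the text that best matches it, can be illustrated with a toy sketch. Everything below is an assumption for illustration (synthetic data, a tiny vocabulary, ridge regression); the actual decoder uses language-model features and searches over candidate word sequences.

```python
# Illustrative sketch only: data, dimensions, and vocabulary are made up.
# The real system decodes semantic features from fMRI and scores candidate
# text continuations against them, rather than classifying single words.
import numpy as np

rng = np.random.default_rng(0)

# Pretend semantic embeddings for a few concepts (stand-ins for LM features).
vocab = ["storm", "rain", "sunlight", "music"]
emb = rng.normal(size=(len(vocab), 16))           # 16-dim semantic space
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Synthetic training data: fMRI is a noisy linear readout of semantics.
true_map = rng.normal(size=(16, 100))             # semantics -> 100 voxels
X_sem = rng.normal(size=(500, 16))                # 500 training stimuli
Y_fmri = X_sem @ true_map + 0.1 * rng.normal(size=(500, 100))

# Fit a ridge-regression decoder: voxels -> semantic embedding.
lam = 1.0
W = np.linalg.solve(Y_fmri.T @ Y_fmri + lam * np.eye(100), Y_fmri.T @ X_sem)

def decode(brain_activity):
    """Predict a semantic vector from fMRI, then pick the closest concept."""
    sem_hat = brain_activity @ W
    sem_hat /= np.linalg.norm(sem_hat)
    return vocab[int(np.argmax(emb @ sem_hat))]

# A new (noisy) scan evoked by the concept "rain".
target = emb[vocab.index("rain")]
scan = target @ true_map + 0.1 * rng.normal(size=100)
print(decode(scan))  # the semantically closest vocabulary item
```

This mirrors why the decoder recovers the gist rather than exact wording: it selects whatever text is semantically closest to the predicted brain state, so paraphrases score nearly as well as the original.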
AI that recreates images from brain scans
Researchers at Osaka University created an AI system that recreates images people have seen from fMRI scans. It uses a modified version of Stable Diffusion in which brain activity is linked to text descriptions. The data come from a University of Minnesota dataset of 4 participants who each viewed 10,000 photos. Image layout is decoded from the occipital lobe and image content from the temporal lobe; the system then starts from noise, matches the decoded brain patterns, adds keywords, and generates realistic images that are accurate in layout and perspective. Limitations remain: problems with objects and keywords, nothing outside the training set, and only 4 participants. Its importance: it helps explain how the brain processes visual input, and it opens the door to exploring animal perception, recording dreams, communication for people with paralysis, and new possibilities in cognitive neuroscience.
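The two-pathway design described above, layout from the occipital lobe and content from the temporal lobe, can be sketched with a toy model. All names and data below are assumptions for illustration; the real system maps occipital activity into Stable Diffusion's image latent and temporal activity into its text-conditioning embedding, then lets the diffusion process denoise toward both.

```python
# Toy sketch of the two-pathway decoding idea (all data synthetic).
# Two linear decoders recover two conditioning signals from two brain regions:
# a "layout" latent (occipital) and a "content" embedding (temporal).
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels, latent_dim, text_dim = 300, 80, 8, 8

# Synthetic ground-truth mappings from stimulus features to brain activity.
map_layout = rng.normal(size=(latent_dim, n_voxels))   # "occipital" voxels
map_content = rng.normal(size=(text_dim, n_voxels))    # "temporal" voxels

Z_layout = rng.normal(size=(n_trials, latent_dim))     # image-latent features
Z_content = rng.normal(size=(n_trials, text_dim))      # text-embedding features
occ = Z_layout @ map_layout + 0.1 * rng.normal(size=(n_trials, n_voxels))
temp = Z_content @ map_content + 0.1 * rng.normal(size=(n_trials, n_voxels))

def fit_ridge(Y, Z, lam=1.0):
    """Linear decoder voxels -> features, fit with ridge regression."""
    return np.linalg.solve(Y.T @ Y + lam * np.eye(Y.shape[1]), Y.T @ Z)

W_layout = fit_ridge(occ, Z_layout)
W_content = fit_ridge(temp, Z_content)

# Decoding a scan yields two conditioning signals. A diffusion model would
# start from noise and denoise toward an image matching both: the latent
# fixes the layout, the embedding supplies the semantic "keywords".
layout_hat = occ[0] @ W_layout
content_hat = temp[0] @ W_content
print(np.corrcoef(layout_hat, Z_layout[0])[0, 1])  # should be close to 1
```

The split explains the reported failure mode: when the content pathway decodes the wrong keywords, the generated image keeps a plausible layout but shows the wrong objects.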



