HÀ NỘI — ChatGPT, the artificial intelligence (AI) tool developed by OpenAI, is attracting widespread attention for its striking ability to deduce geographic locations from ordinary images.
According to TechSpot, OpenAI’s latest models, including o3 and o4-mini, can analyse images with astonishing precision, even if the photos are low-quality, cropped or rotated. A recent social media trend has seen users challenge ChatGPT to “guess the location” by submitting selfies, snapshots of restaurant menus, or even a simple corner of a room.
What has shocked many is the eerie accuracy of the AI’s guesses. In one test, the o3 model correctly identified a specific bar in Williamsburg from nothing more than a purple rhino head decoration visible in the background.
According to TechCrunch, even the previous GPT-4o model — which lacked the advanced image reasoning of o3 — had already demonstrated unexpectedly high precision in similar tasks.
The problem, however, lies in the sheer power of this intelligence. Seemingly innocuous details — the shape of a window, a wall tile pattern, or part of a signboard — can become “golden clues” that allow the AI to trace where a photo was taken. This raises significant concerns about doxxing, the deliberate identification and exposure of someone’s private information, such as their home or workplace, without their consent. Doxxing is a growing problem across social media platforms.
“Just a glimpse of a window frame, a sign or even the lighting in a room can be enough for the AI to deduce the location,” one user warned on TechSpot. The concern is particularly acute given that casually taking photos, posting stories and sharing daily moments online is now a routine behaviour, especially among younger users.
In response to rising concerns, OpenAI acknowledged the advanced capabilities of its latest models but noted that technical safeguards had been built in to prevent the AI from responding to sensitive or identifying prompts. The company also emphasised the positive potential of image reasoning in fields such as accessibility for people with disabilities, scientific research and emergency response operations.
Nonetheless, cybersecurity experts continue to sound the alarm. They urge users to be more cautious about the images they share publicly. The line between a casual photo and an inadvertent leak of one’s location has never been thinner, especially in an age where AI can dissect visual information with such granularity.
As artificial intelligence grows more perceptive, protecting personal privacy is becoming a far more complex and urgent challenge — and one that both tech developers and users must address with care. — VNS