Pokémon Go's AR data has been turned into centimeter-accurate navigation for delivery robots

A dataset built from billions of player images is now guiding real-world robots

Connecting the dots: Pokémon Go's global AR craze is now steering something far more prosaic than virtual Pikachu: real delivery robots trying to find the right doorway on a crowded city block. The same location data and street-level imagery that once anchored monsters to sidewalks and plazas have been repurposed by Niantic. Coco Robotics is now using that technology to guide its sidewalk bots through dense urban areas where GPS alone is too unreliable to keep them on course.

Niantic Spatial, an AI spinout formed in 2025, has turned years of mobile gaming data into what it describes as a high-precision world model of the physical environment. The company is now commercializing that work through a visual positioning system that can locate devices to within a few centimeters using only camera input and map context.

Its first large-scale deployment is with Coco Robotics, a last-mile delivery startup operating roughly a thousand sidewalk robots in US and European cities, environments where satellite signals are often too noisy to support reliable autonomy.

The technical problem Niantic Spatial is tackling is straightforward to describe but difficult to solve. GPS degrades badly in dense cities, with position estimates drifting by tens of meters as signals bounce off glass and concrete.

That level of error can place a delivery bot on the wrong block or even the wrong side of the street. Coco's robots, which travel at about five miles per hour and carry loads ranging from multiple extra-large pizzas to several grocery bags, must hit promised arrival times and precise pickup and drop-off points if they are to match or exceed human couriers.

Niantic Spatial's alternative is a visual positioning system (VPS) that localizes a device based on what it sees rather than relying on radio signals alone. Over the past several years, the company has aggregated data from Pokémon Go and its earlier augmented-reality title, Ingress. Both games encouraged players to visit specific real-world locations such as PokéStops, gyms, and Ingress portals.

Those gameplay loops produced a dense global dataset of images captured in urban settings, each paired with rich metadata from the phone, including latitude and longitude, camera orientation, device pose, motion data, and other sensor readings.
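
To make the shape of that dataset concrete, the following Python sketch shows what a single record might look like. The field names and types here are assumptions for illustration; Niantic Spatial has not published its actual schema.

```python
from dataclasses import dataclass

@dataclass
class GeoTaggedFrame:
    """One hypothetical crowdsourced training record (illustrative fields only)."""
    image_path: str                    # street-level photo from a player's phone
    latitude: float                    # GPS fix at capture time, in degrees
    longitude: float
    heading_deg: float                 # compass orientation of the camera
    pitch_deg: float                   # camera tilt, from the device IMU
    roll_deg: float
    accel: tuple[float, float, float]  # accelerometer sample, in m/s^2
    timestamp: float                   # Unix time, for time-of-day grouping

frame = GeoTaggedFrame(
    image_path="frames/00042.jpg",
    latitude=34.0522, longitude=-118.2437,
    heading_deg=93.5, pitch_deg=12.0, roll_deg=-1.4,
    accel=(0.02, 9.79, 0.11),
    timestamp=1700000000.0,
)
```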

Niantic Spatial says it trained its models on roughly 30 billion images, heavily clustered around more than a million "hot spot" locations photographed from many angles, at different times of day, and under varied weather conditions.

Because each frame is tied to a centimeter-scale pose estimate, the training set effectively functions as a multi-view 3D sampling of city streets, crosswalks, storefronts, and building facades. The models were then trained to infer a precise location and orientation from a handful of current images, even in areas less thoroughly covered than the original hot spots.
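
Conceptually, inference reduces to a function that maps a few current observations to a single pose. The Python stand-in below assumes each image has already been converted into an (x, y, heading) hypothesis and simply fuses them with a per-axis median; the real system is a learned model, and these names and units are illustrative.

```python
import statistics

def localize(pose_hypotheses: list[tuple[float, float, float]]) -> tuple[float, float, float]:
    """Fuse per-image (x_m, y_m, heading_deg) hypotheses into one robust estimate."""
    xs, ys, headings = zip(*pose_hypotheses)
    return (statistics.median(xs), statistics.median(ys), statistics.median(headings))

# Three images of the same doorway yield three nearby hypotheses.
print(localize([(12.01, 7.43, 88.0), (12.05, 7.40, 91.5), (11.98, 7.44, 90.2)]))
# -> (12.01, 7.43, 90.2)
```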

For Coco, that means its robots can fuse GPS with camera-based localization from Niantic Spatial's model. Each unit carries four hip-height cameras that look in all directions, a perspective different from that of a person holding up a phone, but one that Coco says was straightforward to adapt to the existing data.
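
A standard way to combine a noisy GPS fix with a far more precise visual fix is an inverse-variance weighted average, the same arithmetic a Kalman filter applies to a pair of measurements. Neither company has detailed its fusion method, so the sketch below, including the error figures, is an assumption meant only to show why the visual fix dominates.

```python
def fuse(gps: float, gps_var: float, vps: float, vps_var: float) -> float:
    """Inverse-variance weighted average of two position estimates along one axis."""
    w_gps = 1.0 / gps_var
    w_vps = 1.0 / vps_var
    return (w_gps * gps + w_vps * vps) / (w_gps + w_vps)

# Urban-canyon GPS: ~10 m standard deviation; VPS: ~5 cm.
x = fuse(gps=118.0, gps_var=10.0 ** 2, vps=112.4, vps_var=0.05 ** 2)
print(round(x, 3))  # 112.4 -- the centimeter-scale visual fix wins almost entirely
```

Because the visual estimate's variance is several orders of magnitude smaller, the fused position sits essentially on top of it; that gap is the practical payoff of centimeter-scale localization in places where GPS drifts by tens of meters.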

Coco's robots have already logged hundreds of thousands of deliveries and more than a million miles across Los Angeles, Chicago, Miami, Jersey City, and Helsinki, giving the company a baseline against which to measure improvements in reliability from the new system.

Visual positioning itself is not new, but it has historically been constrained by the availability and coverage of high-quality imagery. Niantic Spatial's bet is that the sheer volume and diversity of its crowdsourced data gives it an advantage over rivals that build maps primarily using their own sensor fleets.

Other delivery-robot vendors, such as Starship Technologies, use their sensors to build local 3D maps of edges, poles, and building outlines as they move through an area, then rely on those maps for subsequent runs. By contrast, Niantic Spatial aims to maintain a global, shared geospatial model and expose it through an API to any robot, phone, or headset that needs to know exactly where it is.
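
Niantic Spatial has not published the interface, so the client below is hypothetical: the endpoint URL, request fields, and response keys are all assumptions, sketched only to show the kind of round trip a robot, phone, or headset might make against a shared geospatial model.

```python
import requests  # third-party HTTP client: pip install requests

def query_vps(image_bytes: bytes, approx_lat: float, approx_lon: float) -> dict:
    """Send one camera frame plus a coarse GPS prior; receive a refined pose."""
    resp = requests.post(
        "https://api.example.com/vps/localize",  # placeholder endpoint, not a real URL
        files={"image": ("frame.jpg", image_bytes, "image/jpeg")},
        data={"lat": approx_lat, "lon": approx_lon},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape: {"lat": ..., "lon": ..., "heading_deg": ..., "error_m": ...}
    return resp.json()
```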

The company calls that model a "living map": a virtual representation of the world that is constantly updated as machines move through it. As Coco's robots and other future partners traverse sidewalks and streets, their sensors can contribute fresh observations that refine and extend Niantic Spatial's underlying maps. The aim is not just geometric accuracy but also semantic understanding, with objects tagged and described in ways that make sense to machines.
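
A toy version of that loop might look like the sketch below, where each pass of a robot increments a tile's observation count, refreshes its timestamp, and can attach a semantic label. The tiling scheme and fields are assumptions for illustration.

```python
import time
from collections import defaultdict

class LivingMap:
    """Toy shared map that accumulates observations contributed by machines."""

    def __init__(self) -> None:
        self.tiles = defaultdict(
            lambda: {"observations": 0, "last_seen": None, "labels": set()}
        )

    @staticmethod
    def tile_of(lat: float, lon: float) -> tuple[int, int]:
        # 1e-4 degree tiles (~11 m of latitude); purely illustrative resolution.
        return (int(lat * 1e4), int(lon * 1e4))

    def contribute(self, lat: float, lon: float, label: str | None = None) -> None:
        tile = self.tiles[self.tile_of(lat, lon)]
        tile["observations"] += 1
        tile["last_seen"] = time.time()
        if label:
            tile["labels"].add(label)  # semantic tags like "doorway" or "crosswalk"

world = LivingMap()
world.contribute(34.0522, -118.2437, label="doorway")  # a robot reports fresh data
```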

Niantic's leaders describe this effort as a continuation of long-running work in digital mapping rather than a departure from it.

As mapping has evolved from 2D to 3D and into dynamic "digital twin" simulations, the core link between map coordinates and physical locations has remained constant. What is changing is the primary consumer of those maps. Increasingly, it is machines rather than humans.

In that view, the same spatial intelligence that once kept virtual Pikachu aligned with the sidewalk is now being repurposed to keep a 100-pound delivery robot on course through traffic and weather.