We're a spatial computing company creating city-scale augmented reality in a real landscape  

And it all begins with a simple phone camera

Data Collection

First, we shoot videos of the location.
A smartphone is all it takes to capture the surrounding landscape.
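
As a rough illustration only (not our production code), the capture step can be sketched with OpenCV; the file name and sampling rate below are assumptions.

```python
# Illustrative sketch: sample frames from a smartphone video as input for reconstruction.
import cv2

def extract_frames(video_path: str, every_nth: int = 10) -> list:
    """Keep every N-th frame of the video for later 3D reconstruction."""
    capture = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            frames.append(frame)
        index += 1
    capture.release()
    return frames

frames = extract_frames("street_walkthrough.mp4")  # hypothetical file name
print(f"Kept {len(frames)} frames")
```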

Processing: Stage 1

Then the AI builds a 3D reconstruction. It analyzes the video footage frame by frame, identifying key points and transforming them into a spatial representation of the environment.

In other words, algorithms turn flat frames into three-dimensional objects.

The AI constantly cross-checks the point clouds against the original footage. This is how we obtain precise 3D objects even when the camera is moving and the angle of view is changing.
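
To give a feel for the key-point idea, here is a minimal sketch using classical ORB features from OpenCV as a stand-in for the learned detectors in our pipeline.

```python
# Illustrative sketch: find corresponding key points in two consecutive frames.
import cv2

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_frames(frame_a, frame_b):
    """Detect key points in both frames and link matching ones."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    # Each match links a pixel in frame A to a pixel in frame B;
    # triangulating these correspondences yields the 3D points.
    return kp_a, kp_b, matches
```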

Processing: Stage 2

Using the point clouds, we reconstruct the spatial geometry of the environment, an approximation of the real scene. This digital replica is as close to the real landscape as it can be.
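
As an illustration, a point cloud can be turned into surface geometry with an off-the-shelf method such as Open3D's Poisson reconstruction; the file names and parameters below are assumptions, not our production setup.

```python
# Illustrative sketch: reconstruct a mesh surface from a point cloud.
import open3d as o3d

point_cloud = o3d.io.read_point_cloud("reconstruction.ply")  # hypothetical input
point_cloud.estimate_normals()  # surface normals are required for Poisson

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    point_cloud, depth=9
)
o3d.io.write_triangle_mesh("spatial_geometry.ply", mesh)  # hypothetical output
```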

Spatial Map

Here, the spatial geometry is transformed into a 3D representation of reality. Now we have a machine-readable format that the visual positioning system (VPS) can interpret through its algorithms.

This is the final step of landscape digitization and the first step of AR integration.
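
A sketch of what such a machine-readable map could contain; the field names are illustrative assumptions rather than our actual format.

```python
# Illustrative sketch: a minimal spatial-map data structure.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Keyframe:
    pose: np.ndarray          # 4x4 camera-to-world transform
    keypoints: np.ndarray     # N x 2 pixel coordinates
    descriptors: np.ndarray   # N x D feature descriptors

@dataclass
class SpatialMap:
    points_3d: np.ndarray                              # M x 3 world coordinates
    keyframes: list[Keyframe] = field(default_factory=list)
    # Observations link each 3D point to the keyframes (and key points) that saw it,
    # which is exactly what the VPS needs to match a live camera image against.
    observations: dict[int, list[tuple[int, int]]] = field(default_factory=dict)
```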

Visual Positioning

Using cues from the footage, the VPS locates the position and orientation of the camera within the environment in real time.

It allows us to integrate AR objects into the landscape precisely and seamlessly, making you feel as if you're seeing it with your own eyes.
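
Conceptually, once pixels in the live image are matched to 3D points in the spatial map, the camera pose can be estimated with a perspective-n-point solver. The sketch below uses OpenCV's RANSAC PnP; the intrinsics are assumed values, not measured ones.

```python
# Illustrative sketch: estimate camera pose from 2D-3D correspondences.
import cv2
import numpy as np

def locate_camera(points_3d: np.ndarray, points_2d: np.ndarray, camera_matrix: np.ndarray):
    """Return rotation and translation of the camera in map coordinates."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        camera_matrix,
        None,  # assume no lens distortion for this sketch
    )
    if not ok:
        raise RuntimeError("Not enough consistent matches to localize")
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return rotation, tvec

# Assumed intrinsics for a 1080p smartphone camera (illustrative only).
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
```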

Realistic Augmented Reality

With a spatial map and VPS, we can integrate virtual objects of any size into any real landscape at any time.

They say the sky is the limit, but not for us.
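
As a simplified illustration, placing a virtual object comes down to projecting its 3D anchor point into the current frame using the camera pose from the VPS; the names and values below are assumptions.

```python
# Illustrative sketch: project a virtual object's anchor point into the live frame.
import cv2
import numpy as np

def project_virtual_object(anchor_xyz: np.ndarray, rotation: np.ndarray,
                           tvec: np.ndarray, camera_matrix: np.ndarray) -> tuple:
    """Return the pixel coordinates where the AR content should be drawn."""
    rvec, _ = cv2.Rodrigues(rotation)  # rotation matrix -> rotation vector
    pixels, _ = cv2.projectPoints(
        anchor_xyz.reshape(1, 3).astype(np.float64),
        rvec, tvec, camera_matrix, None,  # assume no lens distortion
    )
    u, v = pixels.ravel()
    return int(u), int(v)
```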

Technologies

The pipeline combines both classical methods and deep learning techniques. We utilise neural networks that we train ourselves to tackle tasks such as image retrieval and the extraction of key points from images.

To support this, we've developed and continuously expand our own datasets, gathered from thousands of videos taken in various cities worldwide.

Neural networks play a key role in both processes: during real-time operations within our Visual Positioning System (VPS) and in offline data processing as a part of our Spatial Mapping Pipeline.

Storing map data in the cloud has the benefit of eliminating the need for users to download map data to their devices. It also allows us to continually enhance our maps and algorithms without requiring any action from our clients.
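
To illustrate the image-retrieval idea only (our own networks and training data are not shown), a pretrained backbone can turn every map image into a global descriptor, and a query frame is then matched to its nearest neighbour.

```python
# Illustrative sketch: neural image retrieval with global descriptors.
import torch
import torchvision.models as models
import torchvision.transforms as T

# A generic pretrained backbone stands in for our own retrieval networks.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d embedding, drop the classifier
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(image) -> torch.Tensor:
    """Compute a normalized global descriptor for one PIL image."""
    embedding = backbone(preprocess(image).unsqueeze(0)).squeeze(0)
    return embedding / embedding.norm()

def retrieve(query_desc: torch.Tensor, map_descs: torch.Tensor) -> int:
    """Return the index of the most similar map image (cosine similarity)."""
    return int(torch.argmax(map_descs @ query_desc))
```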

Subview App

Our in-house real-time AR app lets you see what we're talking about.

Coming in 2024

Join Our Team

We are looking for specialists who are deep into spatial computing and beyond.