
DepthLens Depth Estimation

Upload any photo and get an AI-generated depth map — warm colors show nearby objects, cool colors show distant ones. All from a single image, no special hardware needed.

MiDaS · EfficientNet · Monocular Depth · PyTorch · Resource-Constrained Inference
Upload an image and click "Estimate Depth" to see the AI-generated depth map. The Space may need ~1-2 minutes to wake up on the first request.

How it works

01

Image Preprocessing

Your photo is resized and normalized to match the model's expected input format. MiDaS transforms handle aspect ratio and color channel normalization automatically.
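The normalization step can be sketched in NumPy. This is a minimal, hypothetical stand-in for the transforms that ship with MiDaS (loaded via torch.hub), assuming standard ImageNet channel statistics; the bundled transforms also handle resizing and aspect ratio.

```python
import numpy as np

# Standard ImageNet per-channel statistics (assumption: the MiDaS transform
# for this model normalizes with these values).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess(image_uint8: np.ndarray) -> np.ndarray:
    """Scale uint8 RGB pixels to [0, 1], then normalize per channel."""
    img = image_uint8.astype(np.float32) / 255.0
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# A mid-gray image normalizes to values near zero.
out = preprocess(np.full((4, 4, 3), 128, dtype=np.uint8))
```

In the real app, resizing to the model's expected resolution happens before this step, so the sketch only covers the color normalization half.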

02

Depth Prediction

The image passes through MiDaS, a deep neural network with an EfficientNet-Lite backbone, which outputs a per-pixel inverse depth map predicting relative distances.
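The key detail is that the raw output is *inverse* depth: larger values mean closer pixels, and the scale is relative, not metric. A toy sketch with hypothetical values standing in for a model forward pass:

```python
import numpy as np

# Hypothetical raw MiDaS output for a 2x2 image (inverse relative depth):
inverse_depth = np.array([[8.0, 2.0],
                          [4.0, 1.0]])

# Relative depth (up to unknown scale and shift) is the reciprocal.
relative_depth = 1.0 / inverse_depth

# The pixel with the largest inverse-depth value is the nearest one.
nearest = np.unravel_index(np.argmax(inverse_depth), inverse_depth.shape)
```

Because the prediction is only relative, downstream steps work with orderings and normalized values rather than absolute distances.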

03

Depth Normalization

Raw depth values are rescaled to a [0, 1] range and resized to the original image dimensions using bicubic interpolation for smooth, high-resolution output.
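The rescaling is a plain min-max normalization; a minimal sketch (the bicubic upsampling back to the original resolution would be done separately, e.g. with OpenCV's `cv2.resize` using `INTER_CUBIC`, and is not shown here):

```python
import numpy as np

def normalize_depth(raw: np.ndarray) -> np.ndarray:
    """Min-max rescale raw depth predictions to the [0, 1] range."""
    lo, hi = raw.min(), raw.max()
    if hi - lo < 1e-8:               # flat prediction: avoid divide-by-zero
        return np.zeros_like(raw)
    return (raw - lo) / (hi - lo)

n = normalize_depth(np.array([2.0, 4.0, 6.0]))   # -> [0.0, 0.5, 1.0]
```

The divide-by-zero guard matters for degenerate inputs (e.g. a solid-color image) where the model may predict a nearly constant depth.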

04

Visualization

Scientific colormaps (inferno, magma, viridis, plasma) are applied to produce striking false-color depth images that clearly show spatial relationships in the scene.
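In practice this is one call to a matplotlib colormap (e.g. `plt.get_cmap("inferno")`). To show the idea without that dependency, here is a minimal sketch that linearly interpolates between the two endpoint colors of inferno (RGB values below are approximate endpoints, listed as an assumption):

```python
import numpy as np

# Approximate endpoints of the 'inferno' colormap (assumed values):
NEAR = np.array([252, 255, 164], dtype=np.float32)   # bright yellow (close)
FAR = np.array([0, 0, 4], dtype=np.float32)          # near-black (distant)

def colorize(depth01: np.ndarray) -> np.ndarray:
    """Map an HxW depth map in [0, 1] (larger = closer) to HxWx3 uint8 RGB."""
    t = depth01[..., None]                    # broadcast over the color axis
    rgb = t * NEAR + (1.0 - t) * FAR          # linear blend between endpoints
    return rgb.round().astype(np.uint8)
```

A real colormap interpolates through many intermediate control points rather than two, which is what gives the published maps their perceptual uniformity.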
