Upload any photo and get an AI-generated depth map — warm colors show nearby objects, cool colors show distant ones. All from a single image, no special hardware needed.
Your photo is resized and normalized to match the model's expected input format. MiDaS transforms handle aspect ratio and color channel normalization automatically.
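The preprocessing step can be sketched in plain NumPy. This is a simplified stand-in for the official MiDaS transforms (which handle resizing and normalization for you via `torch.hub.load("intel-isl/MiDaS", "transforms")`); the ImageNet-style mean/std values and the nearest-neighbor resize here are assumptions made to keep the sketch dependency-free.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 256) -> np.ndarray:
    """Sketch of MiDaS-style preprocessing: HxWx3 uint8 RGB -> 1x3xSxS float32."""
    h, w, _ = img.shape
    # Nearest-neighbor resize via index sampling (the real transform uses
    # higher-quality interpolation; this keeps the sketch self-contained).
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = img[ys][:, xs].astype(np.float32) / 255.0
    # Channel-wise normalization (ImageNet mean/std here are an assumption).
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    normed = (resized - mean) / std
    # HWC -> NCHW batch layout, as expected by the network.
    return normed.transpose(2, 0, 1)[None]
```

In the real app the bundled MiDaS transform replaces all of this with a single call, but the shape and normalization conventions are the same.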
The image passes through MiDaS, a deep neural network with an EfficientNet-Lite backbone, which outputs a per-pixel inverse depth map: larger values mean nearer surfaces, and the predicted distances are relative rather than absolute.
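A small illustration of what "inverse depth" means (this is not the model itself, just the convention its output follows): for a point at metric depth d, inverse depth is 1/d, so nearby points produce the largest values.

```python
import numpy as np

# Hypothetical distances in meters for three points in a scene.
depth_m = np.array([0.5, 2.0, 10.0])

# Inverse depth, the quantity MiDaS-style networks predict (up to an
# unknown scale and shift, since only relative depth is recoverable
# from a single image).
inv_depth = 1.0 / depth_m

# The nearest point (0.5 m) yields the largest inverse-depth value.
nearest = int(inv_depth.argmax())
```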
Raw depth values are rescaled to a [0, 1] range and resized to the original image dimensions using bicubic interpolation for smooth, high-resolution output.
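The rescaling step is a standard min-max normalization. A minimal sketch (the guard against a constant map is an added assumption; the subsequent bicubic resize would be done with, e.g., OpenCV's `cv2.resize(..., interpolation=cv2.INTER_CUBIC)` and is omitted here to keep the example dependency-free):

```python
import numpy as np

def normalize_depth(raw: np.ndarray) -> np.ndarray:
    """Min-max rescale raw inverse-depth values into [0, 1]."""
    lo, hi = float(raw.min()), float(raw.max())
    if hi == lo:
        # Constant map: avoid division by zero, return all zeros.
        return np.zeros_like(raw, dtype=np.float32)
    return ((raw - lo) / (hi - lo)).astype(np.float32)
```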
Scientific colormaps (inferno, magma, viridis, plasma) are applied to produce striking false-color depth images that clearly show spatial relationships in the scene.
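Applying one of these colormaps to a normalized depth map is a one-liner with Matplotlib, whose `cm.inferno`, `cm.magma`, `cm.viridis`, and `cm.plasma` objects map values in [0, 1] to RGBA colors. A minimal sketch, assuming Matplotlib is available:

```python
import numpy as np
from matplotlib import cm

# Toy normalized depth map: values from 0 (far) to 1 (near).
depth01 = np.linspace(0.0, 1.0, 5).reshape(1, 5)

# cm.inferno maps low values to near-black purple and high values to
# bright yellow, so near surfaces come out warm and bright.
rgba = cm.inferno(depth01)                    # HxWx4 floats in [0, 1]
rgb8 = (rgba[..., :3] * 255).astype(np.uint8)  # drop alpha, convert to 8-bit
```

Swapping in `cm.viridis` or `cm.plasma` changes only the palette; the mapping from normalized depth to color is otherwise identical.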