How Snacks actually works
Most AI food loggers send your photo to a vision model and get back a guess. Snacks measures your food in 3D.
The problem with every other app
Apps like Cal AI and MacroFactor use vision AI to identify the foods in your photo. But a vision model cannot measure how much food is there: it has no depth, no sense of scale, and no volumetric training data. The result is a confident-sounding guess.
This isn't a problem that gets fixed in the next model update — it's a structural limitation of estimating 3D quantities from 2D images.
The Snacks pipeline
Detect foods
Our fine-tuned detection model identifies every food on your plate. A reasoning agent determines which items to measure.
Segment each food
A segmentation model isolates the exact pixels of each food — the actual shape, not a bounding box.
Measure volume in 3D
We project the image into 3D using depth data (enhanced with LiDAR when available) and calculate the real volume of each food.
Convert to macros
Each food is broken into editable ingredients and matched to our nutrition database, whose known weight-to-volume ratios convert the measured volume into grams and then into macros. You can always adjust the result.
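To make the measurement step concrete, here is a minimal sketch of how volume can be recovered from a depth map and a segmentation mask. This is an illustration under simplifying assumptions, not Snacks's actual implementation: the function name, the flat-plate assumption, and the pinhole-camera math are ours.

```python
import numpy as np

def food_volume_ml(depth, mask, fx, fy, plate_depth):
    """Estimate food volume from a depth map and a segmentation mask.

    depth:       (H, W) per-pixel depth in metres (camera to surface)
    mask:        (H, W) boolean mask covering the food's pixels
    fx, fy:      camera focal lengths in pixels (from the intrinsics)
    plate_depth: depth of the plate surface in metres (assumed flat and
                 roughly parallel to the image plane -- a simplification)
    """
    z = depth[mask]
    # Height of the food surface above the plate at each masked pixel.
    height = np.clip(plate_depth - z, 0.0, None)
    # Real-world area covered by one pixel at depth z under a pinhole
    # camera model: (z / fx) * (z / fy).
    pixel_area = (z / fx) * (z / fy)
    # Sum the per-pixel columns (area * height) and convert m^3 to ml.
    return float(np.sum(pixel_area * height) * 1e6)
```

The idea is the same one the pipeline describes: once each food's pixels are isolated, depth turns every pixel into a tiny 3D column, and summing those columns gives a physical volume rather than a guess.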
Built by someone who knows this space
Snacks is built by an engineer with 10 years in AI + cameras — including leading engineering at Mira (AR headset company, acquired by Apple) and running product for Camera at Snapchat.
Ready to stop guessing?
Try the only food logger that actually measures your food.