
RealTime AI Camera banner

⭐ Now live on the App Store — 100% free

RealTime AI Camera

A free iPhone app that identifies 601 different objects in real time, fully offline. YOLOv8 with the complete Open Images V7 class set. No network, no account, no ads.

⭐ 5 stars · 🍴 0 forks · 📱 iPhone · 🔒 100% offline · 💝 Free forever
601 object classes
10 FPS average on iPhone
iOS 15+ (iPhone X and newer)
$0 (no ads or IAPs)

What the app does

🎯 Object Detection

YOLOv8 with all 601 object classes from Open Images V7. Most iPhone detection apps stop at the 80-class COCO set; this one recognizes 7.5× more categories: musical instruments by type, kitchen appliances, rare animals, scientific instruments, the works.
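For comparison, the same 601-class model family can be tried on a desktop. A minimal sketch assuming the `ultralytics` package; `yolov8n-oiv7.pt` is Ultralytics' published Open Images V7 checkpoint (601 classes), not necessarily the exact weights this app ships, and `kitchen.jpg` is a placeholder image path:

```python
# Desktop-side sketch: YOLOv8 pretrained on Open Images V7 (601 classes).
# Requires the `ultralytics` package; model and image names are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n-oiv7.pt")   # Open Images V7 checkpoint, 601 classes
results = model("kitchen.jpg")     # run detection on a single image

for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]   # class index -> human-readable label
    print(cls_name, float(box.conf))       # e.g. "Toaster 0.87"
```

Running the same checkpoint on a laptop is a quick way to see how much wider the Open Images label space is than COCO's 80 classes.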

📝 On-Device OCR

English optical character recognition using Apple’s Vision framework. Point the camera at a sign, label, menu, or document — it reads the text without sending a frame anywhere.

🌎 Offline Translation

Spanish → English translation using a rule-based engine + dictionary. No cloud translation service, no Google Translate API call. Works in airplane mode.
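The rule-based approach can be sketched in a few lines. This is a minimal illustration with a tiny hand-written lexicon and a single reordering rule, not the app's actual dictionary or rule set, which is far larger:

```python
# Minimal sketch of dictionary-plus-rules offline translation.
# The lexicon and the adjective list here are tiny illustrative stand-ins.
LEXICON = {
    "el": "the", "la": "the", "perro": "dog", "gato": "cat",
    "grande": "big", "rojo": "red", "come": "eats",
}
ADJECTIVES = {"big", "red"}

def translate(sentence: str) -> str:
    # Word-for-word lookup; unknown words pass through unchanged.
    out = [LEXICON.get(w, w) for w in sentence.lower().split()]
    # One example rule: Spanish places adjectives after nouns ("perro grande"),
    # English before ("big dog") -- swap noun/adjective pairs we recognize.
    i = 0
    while i < len(out) - 1:
        if out[i + 1] in ADJECTIVES and out[i] not in ADJECTIVES:
            out[i], out[i + 1] = out[i + 1], out[i]
        i += 1
    return " ".join(out)

print(translate("el perro grande come"))  # the big dog eats
```

Because everything is a lookup plus deterministic rules, there is nothing to phone home about: the whole engine ships inside the binary.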

📏 LiDAR Distance

On Pro iPhone models, per-object depth measurement using the built-in LiDAR scanner. Every detected object gets a distance overlay so the app can tell you how far away everything actually is.

Why this is a bigger deal than it sounds: we’ve quietly crossed a line where a machine running on the 6-ounce thing in your pocket can recognize 601 different objects in the world around it without phoning anywhere. No cloud. No account. No waiting. That’s an extraordinary amount of sight to hand to a piece of consumer hardware — and it’s available right now, for free.

📸 App Screenshots

RealTime AI Camera home screen
RealTime AI Camera detecting objects live
RealTime AI Camera with bounding boxes
RealTime AI Camera LiDAR depth overlay
RealTime AI Camera OCR or translation mode

Performance

Average 10 FPS across supported iPhone models. Optimizations include CoreML with Metal acceleration, Neural Engine utilization on A12+ chips, smart thermal and battery management, and adaptive frame-rate based on device capability.
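The thermal and battery management amounts to capping inference frame rate as conditions degrade. A hedged sketch of one possible policy (not the app's actual one), with tiers mirroring iOS's nominal/fair/serious/critical thermal states:

```python
# Illustrative thermal- and battery-aware frame-rate capping.
# Tier names mirror ProcessInfo.thermalState on iOS; the exact caps are
# made up for this sketch, not taken from the app.
def target_fps(thermal_state: str, low_power_mode: bool, max_fps: int = 10) -> int:
    caps = {"nominal": max_fps, "fair": max_fps,
            "serious": max_fps // 2, "critical": 2}
    fps = caps.get(thermal_state, max_fps)   # unknown state: assume full speed
    if low_power_mode:
        fps = min(fps, max_fps // 2)         # halve the budget on low power
    return max(fps, 1)                       # never stall the preview entirely
```

Dropping inference FPS while keeping the camera preview smooth is the usual trade: detection boxes update a little slower, but the device stays cool and the battery lasts.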

CoreML: Apple-native inference
Metal: GPU acceleration
Neural Engine: optimized for A12+ chips
SwiftUI: native UI

Compatibility

Platform: iPhone (iOS 15+)
Compatible: iPhone X and newer
Optimized for: iPhone 12+
LiDAR features: Pro models only

Privacy-first by design

🔒 Works Offline

Put the phone in airplane mode. Every feature — detection, OCR, translation, LiDAR depth — still works. The ML runs on-device end to end.

🚫 No Tracking

No analytics SDKs. No user identifiers. No usage telemetry. The app doesn’t know who you are and neither do we.

🖥️ No Servers

There is no backend. The app never makes an outbound network request to any server, ours or anyone else’s.

💝 Free, No Ads

Not a trial. Not a freemium tier. No ads, no in-app purchases, no subscriptions. Free as in actually free.

The build story

Getting 601-class YOLO to run on an iPhone at 10 FPS wasn’t a weekend project. The hard parts were the PyTorch → CoreML conversion (some ops don’t translate cleanly and silently produce garbage), hallucination tuning across the extra 521 classes, responsive layout across every iPhone form factor from SE to 17 Pro Max, and the memory-bandwidth bottleneck in the camera pipeline — zero-copy plumbing from AVCaptureSession straight through to the model was what actually got the frame rate up.
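The conversion step looks roughly like the following. A sketch assuming the `torch` and `coremltools` packages; the traced-model filename and output name are illustrative, not the app's actual artifacts:

```python
# Hedged sketch of the PyTorch -> CoreML conversion step.
# Requires `torch` and `coremltools`; file names are placeholders.
import torch
import coremltools as ct

model = torch.jit.load("yolov8_oiv7_traced.pt").eval()  # pre-traced model

mlmodel = ct.convert(
    model,
    inputs=[ct.ImageType(name="image", shape=(1, 3, 640, 640))],
    compute_units=ct.ComputeUnit.ALL,          # CPU + GPU + Neural Engine
    minimum_deployment_target=ct.target.iOS15,
)
mlmodel.save("RealTimeAICamera.mlpackage")
```

The "silently produce garbage" failure mode shows up as numerical drift rather than a crash, so the practical defense is comparing CoreML outputs against the PyTorch model on a batch of test images before shipping.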

Full write-up on the engineering: This Is What a Robot Can See Now.

Try it now

Free on the App Store. Source on GitHub. Model weights on HuggingFace.
