Add Depth to Dual Cameras

Leverage the ordinary video streams of just two standard cameras to generate depth in real time with Lucid’s solution. The complete Android-based Lucid engine runs its algorithms on the GPU, DSP, and Lucid’s cloud, delivering 3D video streams and images in MP4, JPG, or PNG format. In addition, it generates a depth map that enables a wide range of applications, from VR/AR, 3D scanning, and modeling to mapping and gesture tracking.
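As an illustration of the general idea (not Lucid’s proprietary pipeline), the sketch below derives a depth map from two ordinary camera views using OpenCV’s stereo matcher. The image paths and calibration values are placeholders.

```python
import numpy as np
import cv2

# Illustrative sketch only: Lucid's engine runs its own algorithms on GPU/DSP/cloud.
# This shows the generic principle of computing depth from two standard camera views.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder left view
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # placeholder right view

# Semi-global block matching: numDisparities must be divisible by 16, blockSize odd.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity: Z = f * B / d, with focal length f
# in pixels and baseline B in meters. Values below are hypothetical calibration data.
focal_px, baseline_m = 700.0, 0.06
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]

# Save a normalized 8-bit visualization of the depth map.
vis = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth.png", vis)
```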

Machine Learning With Data

Just as our brain processes the images from our eyes to drive actions or judge an approaching object, the Lucid cloud applies machine learning and neural networks to vast amounts of depth data. Every dual camera starts with an initial Lucid “infant” profile that keeps learning and growing the more it captures of the world, until at maturity it outperforms hardware depth solutions such as depth sensors, structured light, and time-of-flight systems.
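To make the “profile that keeps learning” idea concrete, here is a hypothetical sketch of a tiny per-camera refinement network updated incrementally as new captures arrive. Lucid’s actual cloud models and training procedure are not public; the network, loss, and tensor shapes below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical per-camera "profile": a small network that refines raw depth maps
# and is nudged a little further with every new capture it sees.
class DepthRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, depth):
        return depth + self.net(depth)  # predict a residual correction

profile = DepthRefiner()                # the "infant" profile for one dual camera
optimizer = torch.optim.Adam(profile.parameters(), lr=1e-4)

def update_profile(raw_depth, reference_depth):
    """One incremental learning step on a newly captured frame."""
    loss = nn.functional.l1_loss(profile(raw_depth), reference_depth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a freshly computed depth map and a higher-quality
# reference (e.g., aggregated or relabeled in the cloud).
raw = torch.rand(1, 1, 120, 160)
reference = torch.rand(1, 1, 120, 160)
print(update_profile(raw, reference))
```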

Integrated SLAM and Gesture Controls

Lucid’s core rendering framework includes built-in high-speed, low-latency, low-power localization, mapping, and gesture tracking. Real-time stereo 3D input lets it track movement and position through space more accurately. Gestures are learned and recognized through Lucid’s AI Cloud database.
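The toy sketch below shows one simple way gestures could be “learned” and matched against a small database, using nearest-neighbour comparison of depth patches. It stands in for Lucid’s unspecified AI Cloud recognition models; the gesture names and patch size are assumptions.

```python
import numpy as np

# Toy nearest-neighbour gesture matcher over depth patches (illustrative only).
PATCH = (32, 32)
gesture_db = {}  # gesture name -> list of flattened depth patches

def learn_gesture(name, depth_patch):
    """Add one example depth patch for a named gesture."""
    gesture_db.setdefault(name, []).append(depth_patch.reshape(-1))

def recognize_gesture(depth_patch):
    """Return the name of the stored example closest to the query patch."""
    query = depth_patch.reshape(-1)
    best_name, best_dist = None, np.inf
    for name, samples in gesture_db.items():
        for sample in samples:
            dist = np.linalg.norm(query - sample)
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name

# Usage with random stand-in depth data.
rng = np.random.default_rng(0)
learn_gesture("open_hand", rng.random(PATCH))
learn_gesture("fist", rng.random(PATCH))
print(recognize_gesture(rng.random(PATCH)))
```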