Duration: October 2019 - March 2020
Technologies: OpenCV, TensorFlow

Since Fall 2019, I've been exploring the idea of making a Lightform device automatically find a pre-selected object and map content to it. The detection part is fairly straightforward; the challenge is getting pinpoint accuracy, since a misalignment of even a few pixels results in a blurry, not-so-magical experience. Another challenge is that detection has to work 99% of the time, or it wouldn't meet the bar of a product feature and would remain a prototype. The device should also be able to quickly realign content if the object or the projector moves. We'd gone down this path before but quickly abandoned it, since making it work 99% of the time without any user input proved challenging. Well, second time's the charm, right?
Keeping both challenges in mind, I began with a marker-based approach: I added 4 ArUco markers to a test surface, which guarantees both detection accuracy and repeatability.
The next iteration cut the dependency from 4 markers down to 1, and I replaced the generic marker with the Lightform logo. The single logo as the marker looked good and worked well enough to green-light the project from an R&D experiment to a product feature.
I also added the ability to quickly realign content if it drifts out of alignment. It took some trickery with the projector-camera correspondences, but in the end it worked pretty damn well!
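One way to frame the realignment idea (a sketch of the general technique, not necessarily the exact implementation): since the camera and projector are rigidly mounted, the camera-to-projector homography from the one-time calibration stays valid, so a quick realign only has to re-estimate the object-to-camera map and recompose the chain. All matrices below are hypothetical:

```python
import numpy as np

def normalize(H):
    return H / H[2, 2]

def to_proj(H_obj2proj, pt):
    """Map an object-space point to projector pixels."""
    x, y, w = H_obj2proj @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Fixed camera -> projector map from the one-time calibration
# (hypothetical values: roughly a 2x scale plus an offset).
H_cam2proj = np.array([[2.0, 0.0, 40.0],
                       [0.0, 2.0, 25.0],
                       [0.0, 0.0, 1.0]])

# Object -> camera map, re-estimated each time the target is re-detected;
# here the object has simply shifted by (10, 5) pixels in the camera frame.
H_obj2cam = np.array([[1.0, 0.0, 10.0],
                      [0.0, 1.0, 5.0],
                      [0.0, 0.0, 1.0]])

# Realignment is just recomposing the chain: object -> camera -> projector.
H_obj2proj = normalize(H_cam2proj @ H_obj2cam)
```

Because only the cheap object-to-camera estimate is redone, realignment can be fast enough to run whenever drift is detected.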
In the last couple of months, I've worked on removing the dependency on visual markers entirely: going fully markerless while still guaranteeing the accuracy of content alignment. The current prototype runs on an embedded device and will soon launch as a feature on the latest generation of Lightform devices.
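Markerless alignment typically means matching natural features (e.g. ORB or SIFT keypoints) between what the camera sees and a reference, and those matches are noisy, so accuracy hinges on robust estimation. Below is a self-contained numpy sketch of RANSAC-style homography fitting over synthetic matches; it illustrates the principle, not Lightform's actual pipeline:

```python
import numpy as np

def dlt_homography(src, dst):
    """Exact homography from four point pairs (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A += [[-x, -y, -1, 0, 0, 0, u * x, u * y, u],
              [0, 0, 0, -x, -y, -1, v * x, v * y, v]]
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return None if abs(H[2, 2]) < 1e-9 else H / H[2, 2]

def project(H, pts):
    """Apply a homography to an (N, 2) array of points."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    """Fit a homography that tolerates bad matches, RANSAC-style."""
    rng = np.random.default_rng(seed)
    best_H, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)  # minimal sample
        H = dlt_homography(src[idx], dst[idx])
        if H is None:
            continue
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        count = int((err < thresh).sum())
        if count > best_count:
            best_H, best_count = H, count
    return best_H, best_count

# Synthetic stand-in for feature matches: 50 pairs, 40 consistent with a
# ground-truth warp and 10 corrupted to act as mismatches.
rng = np.random.default_rng(1)
H_true = np.array([[1.02, 0.03, 12.0],
                   [-0.01, 0.98, -7.0],
                   [1e-5, 2e-5, 1.0]])
src = rng.uniform(0, 600, (50, 2))
dst = project(H_true, src)
dst[:10] += rng.uniform(50, 120, (10, 2))  # corrupt 10 matches

H, inlier_count = ransac_homography(src, dst)
```

The point of the sketch is the tolerance: even with 20% of the matches being garbage, the estimator locks onto the consistent majority, which is the kind of robustness a no-user-input feature needs.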