Layering digital information onto the physical world
Real-time hand perception is a challenging computer vision task, but it holds great potential for layering digital information onto the physical world, for example in augmented reality.
A great resource for hand perception can be found on ai.googleblog.com. The site describes a cutting-edge machine learning library that can detect hand poses with just a webcam. This kind of tracking used to require more advanced stereoscopic depth-sensing cameras; now it can be done with an ordinary webcam and a few lines of code in the web browser.
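As a sketch of what those few lines might look like, the snippet below assumes the library in question is Google's MediaPipe Hands, loaded from its CDN (the `Hands` and `Camera` globals come from the @mediapipe/hands and @mediapipe/camera_utils scripts). The `indexFingertip` helper and the `videoElement` lookup are illustrative, not part of the library's API.

```javascript
// Pure helper: extract the index-fingertip position from a MediaPipe
// results object, or return null if no hand was detected.
// MediaPipe Hands reports 21 landmarks per hand; index 8 is the index fingertip.
function indexFingertip(results) {
  if (!results.multiHandLandmarks || results.multiHandLandmarks.length === 0) {
    return null;
  }
  const tip = results.multiHandLandmarks[0][8];
  return { x: tip.x, y: tip.y }; // normalized [0, 1] image coordinates
}

// Browser-only wiring: feed webcam frames to the model.
// Guarded so the helper above can also run outside a browser.
if (typeof window !== "undefined" && typeof Hands !== "undefined") {
  const videoElement = document.querySelector("video"); // assumed <video> on the page
  const hands = new Hands({
    locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
  });
  hands.setOptions({ maxNumHands: 2, minDetectionConfidence: 0.5 });
  hands.onResults((results) => {
    const tip = indexFingertip(results);
    if (tip) console.log(`fingertip at (${tip.x.toFixed(2)}, ${tip.y.toFixed(2)})`);
  });
  const camera = new Camera(videoElement, {
    onFrame: async () => {
      await hands.send({ image: videoElement });
    },
  });
  camera.start();
}
```

Because the landmarks are returned in normalized image coordinates, overlaying graphics for augmented reality is a matter of scaling them to the canvas size and drawing on top of the video element.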