Layering digital information onto the physical world
Real-time hand perception is challenging for computer vision, but it holds great potential for layering digital information onto the physical world, for example in augmented reality.
A great resource for hand perception can be found on ai.googleblog. The site presents a cutting-edge machine learning library that can detect hand poses with just a webcam. This capability used to be reserved for more advanced stereographic depth-sensing cameras; now it can be achieved with a webcam and a few lines of code in the web browser.
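The library described in the blog post is, to our reading, Google's MediaPipe Hands, which reports 21 (x, y, z) landmarks per detected hand. To illustrate what an application can do with that output, here is a minimal sketch that classifies whether the index finger is extended by comparing fingertip and joint positions. The landmark indices follow MediaPipe's published hand model (8 = index fingertip, 6 = index PIP joint); the input data below is synthetic, for illustration only.

```python
# Sketch: interpreting hand-landmark output of the kind produced by
# MediaPipe Hands, which reports 21 (x, y, z) landmarks per hand.
# Indices follow the MediaPipe hand model: 8 = index fingertip,
# 6 = index PIP joint. The landmark data here is synthetic.

INDEX_TIP = 8
INDEX_PIP = 6

def index_finger_extended(landmarks):
    """Return True if the index fingertip lies above its PIP joint.

    `landmarks` is a list of 21 (x, y, z) tuples in normalized image
    coordinates, where y grows downward (as in MediaPipe's output).
    """
    tip_y = landmarks[INDEX_TIP][1]
    pip_y = landmarks[INDEX_PIP][1]
    return tip_y < pip_y  # smaller y means higher in the image

# Synthetic example: fingertip higher in the frame than the PIP joint,
# i.e. an extended (pointing-up) index finger.
landmarks = [(0.5, 0.5, 0.0)] * 21
landmarks[INDEX_PIP] = (0.5, 0.4, 0.0)
landmarks[INDEX_TIP] = (0.5, 0.2, 0.0)
print(index_finger_extended(landmarks))  # True
```

In a real browser application the landmark list would arrive per video frame from the library's results callback; simple geometric rules like this one are a common starting point for gesture-driven interaction in augmented reality.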
Ates, Morten and I will meet to discuss visions and brainstorm ideas for the content of an application that will address learning situations, new immersive and intelligent learning spaces, and embodied interaction.
The questions we will ask are: What are the main challenges and problems that we face in the project-organised, problem-oriented and interdisciplinary education at RUC? How can the design of new learning spaces, which integrate physical space with virtual 3D models and virtual layers of augmented reality, help bring forward the particular educational approach at RUC?