I led R&D of this autonomous drone at NVIDIA in 2016-2017. It uses a deep neural network for model-free, end-to-end navigation from pixels to control. Our drone, named Redtail, can fly along forest trails autonomously, achieving record-breaking long-range flights of more than 1 kilometer (about six-tenths of a mile) under the lower forest canopy. We released our navigation DNN (TrailNet), along with other models and control code, on GitHub as an open-source project (Redtail). The tech can be used to create autonomous drones or ground rovers that navigate complex, unmapped places without GPS. I presented this work at the IROS 2017 conference.
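The pixels-to-control idea can be sketched in a few lines. This is a hypothetical, minimal version (not the Redtail code): it assumes the network outputs three view-orientation logits — drone rotated left of the trail, facing straight, rotated right — similar to TrailNet's orientation head, and turns the drone back toward the trail center. The function names and the gain are illustrative only.

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def steering_from_logits(logits, gain=1.0):
    """Map 3-way view-orientation logits [rotated_left, straight, rotated_right]
    to a yaw-rate command. Convention here: positive = turn right. If the drone
    appears rotated left of the trail, it should yaw right to recenter."""
    p_left, p_straight, p_right = softmax(logits)
    return gain * (p_left - p_right)
```

For example, logits of `[2.0, 0.0, 0.0]` (drone rotated left) produce a positive command (yaw right), and a confident "straight" prediction produces a command of zero.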

Continue Reading...

This is a mixed-reality / VR drone prototype I created in 2015. It allows a remote pilot to fly a drone via a VR headset such as the Oculus Rift. The pilot gets immersive stereoscopic telepresence with a 180-degree field of view, which greatly simplifies flight maneuvers compared to traditional narrow-field-of-view monocular systems. I was also planning to use the system to train an autopilot AI via deep imitation learning.

Continue Reading...

Sailing Boat Simulation

February 18, 2014

I made this sailing simulation demo back in 2009-2010 while learning how to sail. The sailboat model is motivated by the real physics of sailing, but it is not 100% accurate: it is a dynamical system that roughly approximates the real thing. NPC agents use optimization techniques to race against (and catch) the player's boat.
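One way an NPC racing agent can be built on top of such a model is to optimize its heading every step. The sketch below is not the demo's code — it is a toy version I am using for illustration: a crude polar speed model (zero speed in the no-go zone near dead upwind, fastest near a beam reach) and a brute-force search for the heading that maximizes velocity-made-good (VMG) toward the next mark.

```python
import math

def ang_diff(a, b):
    """Signed smallest angle a - b, wrapped to (-pi, pi]."""
    return math.atan2(math.sin(a - b), math.cos(a - b))

def boat_speed(angle_off_wind):
    """Toy polar model: no drive within ~30 degrees of dead upwind,
    peaking near 90 degrees off the wind. Not real sailing physics."""
    a = abs(angle_off_wind)
    if a < math.radians(30):
        return 0.0
    return math.sin(a)

def best_heading(wind_from, bearing_to_mark, step_deg=1):
    """Brute-force the heading (radians) that maximizes VMG:
    boat speed projected onto the bearing to the next mark."""
    best_h, best_vmg = 0.0, -float("inf")
    for d in range(0, 360, step_deg):
        heading = math.radians(d)
        speed = boat_speed(ang_diff(heading, wind_from))
        vmg = speed * math.cos(ang_diff(heading, bearing_to_mark))
        if vmg > best_vmg:
            best_h, best_vmg = heading, vmg
    return best_h
```

With the wind and the mark both due north, this agent correctly refuses to sail dead upwind and instead picks a close-hauled tack of about 45 degrees, which is where this toy polar's VMG peaks.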

Continue Reading...

This is the high-definition real-time face tracking and face modelling tech that my team at Microsoft shipped as part of Xbox One Kinect and Kinect for Windows 2 in 2013-2014. I contributed many of the algorithms and much of the code for the 3D face tracker and the face geometry computation. More information can be found in my IMAVIS 2014 paper.

Continue Reading...

Check out these facial animation videos made with the DIY UCap System:

This one shows the underlying tech:


Kinect has two cameras, RGB and depth (IR), and therefore two different camera coordinate frames. Here I describe how to convert from the RGB camera space to the depth camera space (with the code).
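The core of such a conversion is a rigid-body transform between the two camera frames. The sketch below shows just that math in pure Python; the rotation and translation values are made up for illustration (the real extrinsics come from the device calibration, and the Kinect SDK exposes its own coordinate-mapping functions).

```python
def transform_point(R, t, p):
    """Apply a rigid transform: p_depth = R @ p_rgb + t.
    R is a 3x3 rotation (list of rows), t and p are 3-vectors (meters)."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Hypothetical extrinsics: the depth camera sits a few centimeters to the
# side of the RGB camera with (nearly) parallel optical axes, so R is close
# to identity and t is a small baseline. These numbers are made up.
R_rgb_to_depth = [[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]]
t_rgb_to_depth = [-0.025, 0.0, 0.0]  # ~2.5 cm baseline along x

p_rgb = [0.1, 0.2, 1.5]  # a 3D point in RGB camera space
p_depth = transform_point(R_rgb_to_depth, t_rgb_to_depth, p_rgb)
```

After the transform, the point can be projected through the depth camera's intrinsics to get depth-image pixel coordinates; that projection step is omitted here.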

Continue Reading...

After a long journey, my team at Microsoft shipped our real-time face tracking technology as an API in Kinect for Windows 1.5 in 2012. I worked on the 2D and 3D face tracking technology. Here I describe its capabilities and limitations. For more information, see my paper “Real-time 3D face tracking based on active appearance model constrained by depth data” in IMAVIS 2014. Later on, this technology was used in Skype’s face augmentations.

Continue Reading...