I led the R&D of this autonomous drone at NVIDIA in 2016-2017. It uses a deep neural network for model-free, end-to-end navigation from pixels to control. Our drone, named Redtail, can fly along forest trails autonomously and achieved record-breaking long-range flights of more than 1 kilometer (about six-tenths of a mile) under the lower forest canopy. We released our navigation DNN (TrailNet), along with other models and control code, on GitHub as an open-source project (Redtail). The tech can be used to create autonomous drones or ground rovers that navigate complex, unmapped places without GPS. I presented this work at the IROS 2017 conference.

Continue Reading...

This is a mixed reality / VR drone prototype I created in 2015. It allows a remote pilot to fly a drone via a VR headset such as the Oculus Rift. The pilot gets immersive stereoscopic telepresence with a 180-degree field of view, which greatly simplifies flight maneuvers compared to traditional narrow-field-of-view monocular systems. I was also planning to use the system to train an autopilot AI via deep imitation learning.

Continue Reading...

Sailing Boat Simulation

February 18, 2014 — Leave a comment

I made this sailing simulation demo back in 2009-2010 while learning how to sail. The sailboat model is motivated by the real physics of sailing, but it is not 100% accurate: I built a dynamical system that roughly approximates the real thing. NPC agents use optimization techniques to race (and catch) the player's boat.
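To give a flavor of what such an approximation looks like, here is a minimal Python sketch of a toy sailboat model in the same spirit. All names, constants, and the shape of the polar curve are hypothetical stand-ins, not the demo's actual code:

```python
import math

def polar_speed(twa_deg, wind_speed):
    """Toy boat polar: target speed as a function of true wind angle.
    Zero in the ~30-degree no-go zone, peaking near a beam reach."""
    twa = abs(twa_deg) % 360
    if twa > 180:
        twa = 360 - twa
    if twa < 30:                     # no-go zone: sails luff, no drive
        return 0.0
    return wind_speed * 0.6 * math.sin(math.radians(twa))

def step(x, y, heading_deg, speed, rudder_deg, wind_dir_deg, wind_speed, dt=0.1):
    """One Euler step: the rudder turns the boat, and speed relaxes
    toward the polar target with a first-order lag."""
    heading_deg = (heading_deg + rudder_deg * 20.0 * dt) % 360
    twa = (wind_dir_deg - heading_deg + 180) % 360 - 180   # signed wind angle
    target = polar_speed(twa, wind_speed)
    speed += (target - speed) * 0.5 * dt                   # first-order lag
    x += speed * math.sin(math.radians(heading_deg)) * dt  # heading measured
    y += speed * math.cos(math.radians(heading_deg)) * dt  # clockwise from north
    return x, y, heading_deg, speed
```

With a model like this, an NPC only has to search over rudder inputs (e.g. by sampling candidate headings and scoring progress toward the next mark) to produce plausible racing behavior.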

Continue Reading...

This is the high-definition, real-time face tracking and face modeling tech that my team at Microsoft shipped as part of Xbox One Kinect and Kinect for Windows 2 in 2013-2014. I contributed many of the algorithms and much of the code for the 3D face tracker and the face geometry computation. More information can be found in my IMAVIS 2014 paper.

Continue Reading...

Check out these facial animation videos made with the DIY UCap System:

http://vimeo.com/64562384

This one shows the underlying tech:

http://vimeo.com/64563659


Kinect has two cameras, RGB and depth (IR), and therefore two different camera coordinate frames. Here I describe how to convert from the RGB camera space to the depth camera space (with code).
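The full write-up and code are in the post; the core idea is a rigid transform between the two camera frames followed by a pinhole projection into the depth image. A minimal sketch, with made-up calibration numbers standing in for the real Kinect extrinsics and intrinsics:

```python
import numpy as np

# Hypothetical calibration: R and T map a 3D point from the RGB camera frame
# into the depth (IR) camera frame; fx/fy/cx/cy are depth camera intrinsics.
# Real values come from the device calibration, not these stand-ins.
R = np.eye(3)                        # rotation, RGB -> depth
T = np.array([-0.025, 0.0, 0.0])     # ~2.5 cm baseline between the sensors
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0

def rgb_to_depth_pixel(p_rgb):
    """Transform a 3D point (in meters) from RGB camera space to depth
    camera space, then project it to depth image pixel coordinates."""
    p_depth = R @ p_rgb + T                  # rigid transform between frames
    u = fx * p_depth[0] / p_depth[2] + cx    # pinhole projection
    v = fy * p_depth[1] / p_depth[2] + cy
    return p_depth, (u, v)
```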

Continue Reading...

After a long journey, my team at Microsoft shipped our real-time face tracking technology as an API in Kinect for Windows 1.5 in 2012. I worked on the 2D and 3D face tracking technology. Here I describe its capabilities and limitations. For more information, see my paper “Real-time 3D face tracking based on active appearance model constrained by depth data” in IMAVIS 2014. Later, this technology was used in Skype’s face augmentations.

Continue Reading...

Here I describe how to control Robotis Dynamixel smart actuators from a PC via their serial protocol, and I provide code for it.
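For reference, here is a small Python sketch of how a Dynamixel Protocol 1.0 instruction packet is assembled (the register address and value range below are for the AX-12 series; this is an illustration, not the post's exact code):

```python
WRITE_DATA = 0x03          # Protocol 1.0 "write data" instruction
GOAL_POSITION_ADDR = 0x1E  # AX-12 goal-position register (2 bytes, little-endian)

def make_write_packet(servo_id, address, values):
    """Build [0xFF, 0xFF, ID, LENGTH, INSTRUCTION, PARAMS..., CHECKSUM]."""
    params = [address] + list(values)
    length = len(params) + 2                  # instruction byte + checksum byte
    body = [servo_id, length, WRITE_DATA] + params
    checksum = (~sum(body)) & 0xFF            # ones' complement of the byte sum
    return bytes([0xFF, 0xFF] + body + [checksum])

def goal_position_packet(servo_id, position):
    """Command servo `servo_id` to move to `position` (0..1023 on an AX-12)."""
    lo, hi = position & 0xFF, (position >> 8) & 0xFF
    return make_write_packet(servo_id, GOAL_POSITION_ADDR, [lo, hi])
```

The resulting bytes are then written to the serial port (e.g. with pyserial, at whatever baud rate the actuator is configured for).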

Continue Reading...

Avatar Kinect

May 19, 2012 — 4 Comments

In the summer of 2011, my group at Microsoft shipped an interesting computer vision app / mini-game for Xbox 360 called “Avatar Kinect”. I worked on the 2D/3D face tracking technology. You pose in front of the Kinect camera, and the application tracks the movements of your head and facial features (lips, eyebrows) and renders you as an Xbox avatar. It's a pretty cool app if you want to record short videos of yourself as an animated cartoon avatar and post them to YouTube, or to talk to your friends as avatars: the app supports multiparty avatar chat.

Continue Reading...

Separable Subsurface Scattering makes it possible to render pretty impressive human faces. The next step is to animate them with some level of realism; then we can finally cross the “uncanny valley” 🙂