Finally, I decided to do some inverse kinematics and control my assembled Bioloid humanoid “type A” from a PC:

Bioloid Humanoid Type A

Unfortunately, it turned out to be not as easy as I initially thought…

I read some online material from Robotis (their site lacks good documentation, by the way) and expected that I could use the USB2Dynamixel connector to connect my Bioloid to the PC and then use the Dynamixel SDK from Robotis to control the Dynamixels programmatically from a C++ program. After some trial and error and searching for answers online, I found to my huge surprise that you cannot use the Dynamixel SDK when the kit is connected to the PC via the CM-510 or CM-5 controller.

This setup (which I think should be fairly common): PC<->USB2Dynamixel<->CM-510<->Dynamixels is NOT supported by the Dynamixel SDK 😦 You can only use this SDK if you hook the Dynamixels directly to the USB2Dynamixel and connect them to a battery/power supply. I know that many “advanced” people do it this way, but I don’t like it when the most obvious setup is not supported out of the box.

Fortunately, after some searching I found out how to do it yourself (no soldering required, only programming). It is an undocumented solution, and it is fairly easy to do once you know how, but it is not easy to find a good explanation of it with working code. So I decided to document it here step by step and provide a working C++ code sample for Windows (see below).

BIG thanks to Aprendiendo, whose site contains the solution, but it is not complete for C++ on Windows. So I rewrote some of that code to make a complete C++ sample that you can download and run on Windows. This could save you some time if you want to do the same 🙂

So, to control your Bioloid from PC you need to:

1) Use a USB2Dynamixel or another USB-to-serial connector to connect the CM-510 (or CM-5) controller via a serial cable to the PC’s USB port. If you have a USB2Dynamixel, you have to switch it to RS232 mode (position #3).

2) Usually, the USB2Dynamixel appears as the COM3 port on your PC. On Windows, you have to open this port’s properties (in Device Manager), go to the “Advanced” settings, and set “Latency Timer” to 1 (msec) as described here.

3) Power up your CM-510 (or CM-5). The “manage” mode indicator should start blinking. Press the “Start” button to enter “manage” mode; the indicator next to the “manage” title should light up. If you skip this step, the Bioloid’s controller will not be in the right state to be controlled from the PC.

4) Now, the most important part: the Bioloid controller (CM-510 or CM-5) needs to be switched to “Toss mode“. In this mode, the CM-510 firmware works as a bridge: it forwards all commands it receives from the PC to the Dynamixels connected to it, and forwards all Dynamixel responses back to the PC. This mode is not documented by Robotis for some reason. To switch to it, you need to send “t\r” (a ‘t’ + 0x0D sequence) from your C/C++ program via the serial port to the CM-510. See the SendTossModeCommand() function in the C++ code below.

5) After that, you can create Dynamixel protocol commands and send them to the CM-510 via the serial port. Make sure the program initializes the COM port at 57600 baud, since this is the maximum speed that the CM-5 and CM-510 support. See the C++ code below for details.

NOTE 1: I found that on some PCs this process was not enough. After powering up your Bioloid, you need to run RoboPlus Manager from Robotis and connect the app to your Bioloid (press the “connect” button) before controlling it from your own PC program. RoboPlus Manager does tons of obfuscated, undocumented communication, and for some reason “Toss mode” can only be initialized after that!

NOTE 2: “Toss mode” cannot be turned OFF (at least I don’t know how at the moment); to exit it, you need to turn your Bioloid’s power OFF.

NOTE 3: I am still not sure whether I can read sensors connected to the CM-510 without custom firmware. See the Aprendiendo website for more information about custom firmware that does this.

This BioloidAPI.ZIP file contains a C++ project for Windows (a VS 2010 project) that demonstrates how to switch to “Toss mode” and how to send Dynamixel commands to move the servos. I tried to make it easy to understand. Here is most of the C++ code:


#include <windows.h>
#include <stdio.h>
#include <conio.h>
#include <tchar.h>

class SerialPort
{
public:
    SerialPort();
    virtual ~SerialPort();

    HRESULT Open(const wchar_t* szPortName, DWORD baudRate);
    void Close();
    void Clear();

    HRESULT SendData(BYTE* pBuffer, unsigned long* pSize);
    HRESULT ReceiveData(BYTE* pBuffer, unsigned long* pSize);

private:
    HANDLE serialPortHandle;
};

SerialPort::SerialPort() :
serialPortHandle(INVALID_HANDLE_VALUE)
{
}

SerialPort::~SerialPort()
{
    Close();
}

HRESULT SerialPort::Open(const wchar_t* szPortName, DWORD baudRate)
{
    HRESULT hrResult = S_OK;
    DCB dcb;

    memset( &dcb, 0, sizeof(dcb) );

    dcb.DCBlength = sizeof(dcb);
    dcb.BaudRate = baudRate;
    dcb.Parity = NOPARITY;
    dcb.fParity = 0;
    dcb.StopBits = ONESTOPBIT;
    dcb.ByteSize = 8;

    serialPortHandle = CreateFile(szPortName, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, NULL, NULL);
    if ( serialPortHandle!=INVALID_HANDLE_VALUE )
    {
        if( !SetCommState(serialPortHandle, &dcb) )
        {
            hrResult = E_INVALIDARG;
            Close();
        }
    }
    else
    {
        hrResult = ERROR_OPEN_FAILED;
    }

    return hrResult;
}

void SerialPort::Close()
{
    if (serialPortHandle!=INVALID_HANDLE_VALUE && serialPortHandle!=NULL)
    {
        PurgeComm(serialPortHandle, PURGE_RXCLEAR | PURGE_TXCLEAR);
        CloseHandle(serialPortHandle);
    }
    serialPortHandle = INVALID_HANDLE_VALUE;
}

void SerialPort::Clear()
{
    if (serialPortHandle!=INVALID_HANDLE_VALUE && serialPortHandle!=NULL)
    {
        PurgeComm(serialPortHandle, PURGE_RXCLEAR | PURGE_TXCLEAR);
    }
}

HRESULT SerialPort::SendData(BYTE* pBuffer, unsigned long* pSize)
{
    HRESULT hrResult = ERROR_WRITE_FAULT;

    if (serialPortHandle!=INVALID_HANDLE_VALUE && serialPortHandle!=NULL)
    {
        if( WriteFile(serialPortHandle, pBuffer, *pSize, pSize, NULL) &&
            FlushFileBuffers(serialPortHandle)
            )
        {
            hrResult = S_OK;
        }
    }

    return hrResult;
}

HRESULT SerialPort::ReceiveData(BYTE* pBuffer, unsigned long* pSize)
{
    HRESULT hrResult = ERROR_READ_FAULT;

    if (serialPortHandle!=INVALID_HANDLE_VALUE && serialPortHandle!=NULL)
    {
        if( ReadFile(serialPortHandle, pBuffer, *pSize, pSize, NULL) )
        {
            hrResult = S_OK;
        }
    }

    return hrResult;
}

bool CreateAX12SetPositionCommand(BYTE id, short goal, BYTE* pBuffer, DWORD* pSize)
{
    const unsigned int packetSize = 9;

    if(*pSize < packetSize)
    {
        return false;
    }

    // PACKET STRUCTURE: 0xFF 0xFF ID LENGTH INSTRUCTION PARAMETER_1 ... PARAMETER_N CHECKSUM
    *pSize = packetSize;

    pBuffer[0] = 0xFF;
    pBuffer[1] = 0xFF;
    pBuffer[2] = id;
    pBuffer[3] = 2 /* number of parameters */ + 3;  // packet body length
    pBuffer[4] = 3;                                 // instruction id = write data
    // Parameters
    pBuffer[5] = 30;                    // start address of the goal position setting
    pBuffer[6] = BYTE(goal);            // goal low byte (to address 30)
    pBuffer[7] = BYTE(goal>>8);         // goal high byte (to address 31)

    // Checksum: bitwise NOT of the sum of all bytes between the
    // start bytes and the checksum itself, truncated to one byte
    DWORD packetSum = 0;
    for(size_t i=2; i<packetSize-1; i++)
    {
        packetSum += pBuffer[i];
    }
    pBuffer[8] = BYTE(~packetSum);

    return true;
}

bool SetDynamixelPosition(SerialPort* pSerialPort, BYTE id, short position)
{
    BYTE buffer[1024];
    DWORD size = sizeof(buffer);

    if(!CreateAX12SetPositionCommand(id, position, buffer, &size))
    {
        return false;
    }

    HRESULT hr = pSerialPort->SendData(buffer, &size);
    if(FAILED(hr))
    {
        printf("Failed to send set dynamixel position command\n");
        return false;
    }
    Sleep(10);

    memset(buffer, 0, sizeof(buffer));
    size = sizeof(buffer);
    pSerialPort->ReceiveData(buffer, &size);

    if (size>4 && buffer[4] == 0)
    {
        printf("id=%d set to position=%d\n", id, position);
    }
    else
    {
        printf("Error while setting id=%d position=%d, error:%d\n", id, position, buffer[4]);
        return false;
    }

    return true;
}

bool SendTossModeCommand(SerialPort* pSerialPort)
{
    BYTE buffer[1024];
    buffer[0]='t';
    buffer[1]='\r';
    DWORD size = 2;

    HRESULT hr = pSerialPort->SendData(buffer, &size);
    if(FAILED(hr))
    {
        printf("Failed to send TOSS model command\n");
        return false;
    }
    Sleep(100);

    size = sizeof(buffer);
    pSerialPort->ReceiveData(buffer, &size);

    return true;
}

int _tmain(int argc, _TCHAR* argv[])
{
    DWORD baudRate = 57600;
    SerialPort comPort;

    HRESULT hr = comPort.Open(L"COM3", baudRate);
    if(FAILED(hr))
    {
        printf("Cannot open COM3 port\n");
        return 0;
    }

    SendTossModeCommand(&comPort);

    while(1)
    {
        printf( "Enter dynamixel ID and goal position:\n" );

        int id = 0;
        int position = 0;
        scanf("%d %d", &id, &position);

        SetDynamixelPosition(&comPort, id, position);

        printf("Press ESC to terminate, otherwise press any other key to continue\n");
        if(_getch() == 0x1b)
        {
            break;
        }
    }

    comPort.Close();

    return 0;
}

Avatar Kinect

May 19, 2012

Some time ago (summer 2011), my group at Microsoft shipped an interesting computer vision app/mini-game for XBox 360 called “Avatar Kinect” (I worked on the face tracking technology). You pose in front of the Kinect camera, and the application tracks the movements of your head and facial features (lips, eyebrows) and renders you as an XBox avatar. It is a pretty cool app if you want to record little videos of yourself as an animated cartoonish avatar and then post them to YouTube, or if you want to talk to your friends as an avatar (it allows multiparty avatar chat). For example, you can make something like this:

The face tracking technology demo can be seen here:

We used a combination of Active Appearance Models on “steroids” plus a few other things, like a neural network, a face detector, and various classifiers, to make it stable and robust. You can read more about Active Appearance Models here, here and here. Of course, the Kinect camera improved precision and robustness a lot (due to its depth camera).

And next week, you’ll see our Face Tracking technology in a different, better, and easier-to-use form 🙂

 

Separable Subsurface Scattering allows making pretty impressive human faces. The next step is to animate them with some level of realism, and then we can finally cross the “uncanny valley” 🙂

Found a pretty interesting article about CS progress over the last 20 years – http://tekkie.wordpress.com/2009/08/12/does-computer-science-have-a-future/

The results are not that great, which is also clearly demonstrated in this video –

 

Microsoft just demoed a very cool game, Kinectimals, at E3 that uses Kinect – the computer vision and natural UI system (formerly known as “Natal”). The game is for kids, but I think adults will enjoy it as well. See this video:

Looks pretty awesome to me. But oddly enough, some online comments called it “creepy” or “scary”, or people could not believe that Kinect/Natal actually works. Well, I know how to address the last concern: Kinect does work. But I wonder why some people think of virtual pets with AI and Kinect-style interaction as “scary” or “creepy”… I am working on one project that might get a similar response, and I wonder if people are really that scared of AI and virtual worlds. Maybe older folks are scared, but kids are not? To me, it looks very exciting and awesome, but I am an engineer. We’ll see in November 2010 if Kinect becomes a hit. It probably will be a killer platform with killer games, and it is very bad news for Sony and Nintendo, because it will be very, very hard to replicate anything like this.

Here is the walking quadruped robot that I made some time ago –

It turned out that four-legged motion is pretty hard to get right. It took me two weeks to fix all the balancing and motion issues so it can walk well. Turning left and right was especially challenging, since it does not have enough degrees of freedom in some directions. It uses 12 servos but can still run for a pretty long time; it seems like Bioloid servos are pretty efficient. It makes lots of noise when walking, though. The Bioloid coding language also does not scale well when you try to implement complex logic. I guess the best approach is to control the robot from a PC (or netbook) and use the onboard controller only for low-level motion control, since it is pretty limited.

Today, I saw the live broadcast of the demo of the new XBox interface (project “Natal”) at the E3 conference. It stunned me! This is a groundbreaking event in interface design and can be compared to the invention of the mouse (but it goes much, much further). Microsoft did something revolutionary!

This new interface is based on video cameras that watch your movements, calculate the positioning of your body, and then translate it into the movements of your avatar in a game. It also includes speech recognition, facial recognition, and probably a set of other features. They demoed sports games where you control your avatar by literally moving in front of the TV, and it does exactly what you do (kicks balls, moves legs up and down, etc.). It is much better than the Wii, since you don’t need any device/controller at all: your body controls your avatar in the game.

These demos were pretty impressive and showed that MS just outdid everyone else in this space (I think the competition will have big problems catching up). But it got even better! Lionhead founder Peter Molyneux kicked off another demo where a woman was interacting with a virtual boy. It looked like a very real conversation, where the virtual boy actually knew where the woman was and was talking to her, not to empty space like in today’s games. At some point, the woman drew a picture and gave it to the boy (handed it to the front camera), and the boy took a virtual piece of paper with THIS picture on it! Another interesting moment was when the woman was making patterns on the water and you could see her reflected image in it. Oh yeah, and they were also talking in plain English; it seems like the speech recognition also works really well.

Virtual boy Milo

Project Natal in action