Working with Kinect 3D Sensor in Robotics – Setup, Tutorials, Applications

 

A revolutionary 3D sensor from the gaming industry, originally designed to capture the motion of players, is now used effectively in robotics for a wide range of applications, including object recognition and tracking, 3D environment mapping, distance measurement, and voice recognition and control. These capabilities make Kinect the subject of this article, which gathers a set of setup and application tutorials.

Below you will find setup tutorials for different operating systems and versions, including Windows, Linux, and Mac. All of the Kinect features listed above are used in robotic applications, and a series of tutorials on how to work with each of them is also included below.

With almost endless possibilities for robotic applications, Kinect is a human-robot interaction tool that combines an RGB camera with an infrared depth camera. An RGB camera is nothing unusual, but the depth camera is what enables a robot to build a 3D view of its environment.

The biggest disadvantage of Kinect is its poor performance outdoors. The sensor is designed for indoor use, and at least for the moment it can only be relied on for high-accuracy work in indoor robotic applications.

The Kinect SDK is a powerful tool released by Microsoft, one that can be used to build applications such as people detection without writing a single line of code.

Why Kinect? Because it is an affordable sensor that captures 3D images in real time, whereas a laser scanner has a high price and captures only 2D scans, and a stereo camera comes with high computing power requirements.

Setting Up Kinect

The Kinect sensor can be used with Windows, Linux, or Mac devices.
We are very close to using Kinect for robotic applications, but first we have to set up the device on an operating system: Windows, Linux, or Mac. The sensor can serve a wide range of applications, including mapping, object recognition and tracking, voice control, and measuring the distance between a robot and surrounding objects. All of these applications require a lot of processing power, so in practice we have to use a mobile yet powerful device such as a laptop.

The computer connected to Kinect must have a 32-bit or 64-bit dual-core processor running at 2.66 GHz or faster, at least 2 GB of RAM, and a dedicated USB 2.0 bus. These hardware requirements apply to devices running Windows. They are completed by the software requirements: Microsoft Visual Studio 2010 Express (or another Visual Studio 2010 edition), the .NET Framework 4.0, and the Microsoft Speech Platform SDK if the sensor is used for speech applications.
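
If the installation succeeded, a few lines of code are enough to confirm that the sensor responds. The sketch below is a minimal C++ check, assuming the Kinect for Windows SDK v1.x and its NUI API (linked against Kinect10.lib); it only initializes the runtime and opens the depth stream, nothing more.

    // Minimal check: initialize Kinect and open the depth stream.
    // Assumes the Kinect for Windows SDK v1.x; link against Kinect10.lib.
    #include <Windows.h>
    #include <NuiApi.h>
    #include <iostream>

    int main()
    {
        // Initialize the NUI runtime for depth data only.
        HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH);
        if (FAILED(hr))
        {
            std::cerr << "No Kinect sensor found or initialization failed." << std::endl;
            return 1;
        }

        // Open a 640x480 depth stream; the handle is used later to grab frames.
        HANDLE nextFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        HANDLE depthStream = NULL;
        hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                                0, 2, nextFrameEvent, &depthStream);
        if (FAILED(hr))
        {
            std::cerr << "Could not open the depth stream." << std::endl;
            NuiShutdown();
            return 1;
        }

        std::cout << "Kinect initialized and depth stream opened." << std::endl;

        // Release the sensor when done.
        NuiShutdown();
        return 0;
    }

If this prints the success message, the driver, the USB connection, and the SDK are all in place and you can move on to the application tutorials.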

Below is a set of tutorials with steps and information on how to set up the Kinect device on different operating systems, including Windows, Linux, and Mac.

Kinect Applications in Robotics

Navigation, recognition, tracking, distance measurement, and voice commands are the most common applications for the Kinect sensor in robotics.
From DIY robotic projects to advanced humanoid robots, Kinect is a precise and cheap tool for indoor navigation, recognition, tracking, distance measurement, and voice commands. A long list of innovative robots has been built around this advanced sensor, including a Kinect Quadrotor Bolting robot able to navigate and avoid obstacles autonomously while creating a 3D map, and the iRobot AVA telepresence robot, which carries two Kinect sensors in its body to help it navigate by itself and detect motion and gestures. In the same category we can include other technologically advanced projects, such as the DaVinci robotic surgical system, which uses a Kinect sensor for hand-gesture control, and industrial applications where the sensor is used for 3D object scanning.

The same sensor and features are used in educational and research projects as well as by hobbyists, where the first applications in the field actually started. Engineers at Rensselaer Polytechnic Institute (RPI) built a two-armed robot that replicates the movements of its user, while researchers from Southwest Research Institute developed an autonomous robot designed to grasp objects using a Kinect sensor and an Adaptive Robot Gripper.

Mapping, detection, and tracking are just a few of the robotic applications where the sensor can be used. The sensor has to be attached to a robot, as in this example, a process that requires technical skills, while interfacing with the sensor requires software skills. A series of tools has been developed that is very useful when starting to work with Kinect; most of them are middleware packages that handle input/output communication with the device.

High accuracy in machine vision implies good calibration of the tools used. Calibrating a visual sensor is an advanced technical task, and at the same time a crucial step in ensuring high accuracy and good performance in 3D vision. Kinect camera calibration is the process of determining the relationship between the physical dimensions of an object and its digital image. A comprehensive guide to device calibration using OpenCV calibration routines is available here.
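
As an illustration of what that calibration step involves, the sketch below runs the standard OpenCV chessboard routines on images captured with the Kinect RGB camera. The 9x6 pattern, the 25 mm square size, the 15 views, and the rgb_N.png file names are assumptions made for the example; the guide linked above remains the reference.

    // Sketch: intrinsic calibration of the Kinect RGB camera from chessboard images.
    // Assumes OpenCV and a set of captured views named rgb_0.png, rgb_1.png, ...
    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <iostream>

    int main()
    {
        const cv::Size patternSize(9, 6);     // inner chessboard corners (assumption)
        const float squareSize = 0.025f;      // square edge in meters (assumption)

        // One set of 3D reference corners, reused for every view.
        std::vector<cv::Point3f> corners3d;
        for (int y = 0; y < patternSize.height; ++y)
            for (int x = 0; x < patternSize.width; ++x)
                corners3d.push_back(cv::Point3f(x * squareSize, y * squareSize, 0.0f));

        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> imagePoints;
        cv::Size imageSize;

        for (int i = 0; i < 15; ++i)          // 15 captured chessboard images assumed
        {
            cv::Mat img = cv::imread("rgb_" + std::to_string(i) + ".png", cv::IMREAD_GRAYSCALE);
            if (img.empty()) continue;
            imageSize = img.size();

            std::vector<cv::Point2f> corners2d;
            if (cv::findChessboardCorners(img, patternSize, corners2d))
            {
                // Refine the corner locations to sub-pixel accuracy.
                cv::cornerSubPix(img, corners2d, cv::Size(11, 11), cv::Size(-1, -1),
                                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
                imagePoints.push_back(corners2d);
                objectPoints.push_back(corners3d);
            }
        }

        // Estimate the camera matrix and distortion coefficients.
        cv::Mat cameraMatrix, distCoeffs;
        std::vector<cv::Mat> rvecs, tvecs;
        double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                         cameraMatrix, distCoeffs, rvecs, tvecs);

        std::cout << "Reprojection error: " << rms << "\nCamera matrix:\n" << cameraMatrix << std::endl;
        return 0;
    }

A reprojection error well below one pixel is the usual sign that the views were varied enough and the calibration can be trusted.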

Tracking and Finding Objects

Detecting the Distance to an Object

  • Measuring using Kinect – a guide on how to use the sensor to measure the distance to an object;
  • Getting distance-data from the Depth Sensor – a comprehensive tutorial on how to convert the distance data into an image and express it as a map;
  • Working with Depth Data (Beta 2 SDK) – using C# or Visual Basic code, this tutorial shows you how to calculate the distance between the sensor and an object;
  • Kinect SDK C++ – 2. Kinect Depth Data – code to get depth data from the sensor (a short C++ sketch along the same lines follows this list);
  • Kinect Part 4 – Kinect Depth Camera – a simple method for calculating the distance using the sensor's infrared projector and infrared camera;
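
To make the idea concrete, here is a minimal C++ sketch in the spirit of the tutorials above. It assumes the Kinect for Windows SDK v1.x, the 640x480 depth stream opened as in the setup example, and the NuiDepthPixelToDepth helper shipped with recent 1.x releases; all of these are assumptions rather than part of the linked tutorials.

    // Sketch: grab one depth frame and read the distance at a given pixel.
    // Assumes the Kinect for Windows SDK v1.x and a 640x480 depth stream.
    #include <Windows.h>
    #include <NuiApi.h>

    // Returns the distance in millimeters at pixel (x, y), or 0 on failure.
    USHORT DistanceAtPixel(HANDLE depthStream, int x, int y)
    {
        const NUI_IMAGE_FRAME* frame = NULL;
        if (FAILED(NuiImageStreamGetNextFrame(depthStream, 100, &frame)))
            return 0;

        INuiFrameTexture* texture = frame->pFrameTexture;
        NUI_LOCKED_RECT lockedRect;
        texture->LockRect(0, &lockedRect, NULL, 0);

        USHORT distanceMM = 0;
        if (lockedRect.Pitch != 0)
        {
            // Each pixel is a packed 16-bit value: depth in millimeters in the
            // upper 13 bits, player index in the lower 3 (SDK 1.x packing).
            const USHORT* pixels = reinterpret_cast<const USHORT*>(lockedRect.pBits);
            USHORT packed = pixels[y * 640 + x];   // 640x480 resolution assumed
            distanceMM = NuiDepthPixelToDepth(packed);
        }

        texture->UnlockRect(0);
        NuiImageStreamReleaseFrame(depthStream, frame);
        return distanceMM;
    }

Calling DistanceAtPixel(depthStream, 320, 240) gives the range to whatever sits in front of the center of the sensor, which is usually all a simple obstacle-avoidance behavior needs.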

Mapping With Kinect 3D

An autonomous robot requires a mapping system, and a cheap sensor like Kinect is a perfect tool for building the maps it needs for localization. With the right software tools, the Microsoft vision sensor can be put to work on our autonomous robots; below is a series of Kinect-compatible tools for mapping rooms, followed by a short sketch of the depth-to-point-cloud step they all rely on.

  • Kinect 3D Mapping – a tutorial on how to use the open-source software Kinect RGB and start building maps in 3D space;
  • Skanect – a powerful tool compatible with the Microsoft sensor, designed to create 3D meshes of the environment;
  • KinFu – a tool for real-time 3D scanning with the ability to save files in different formats;
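
The step all of these tools share is turning each depth frame into a cloud of 3D points that can be registered into a map. The sketch below shows that back-projection through the pinhole camera model; the focal length is only a commonly quoted approximation for the Kinect depth camera, so treat it as an assumption and use calibrated values for real work.

    // Sketch: convert one depth frame (in millimeters) into a 3D point cloud.
    // The intrinsics below are rough, commonly quoted Kinect values (assumption).
    #include <cstdint>
    #include <vector>

    struct Point3D { float x, y, z; };

    std::vector<Point3D> DepthToPointCloud(const uint16_t* depthMM, int width, int height)
    {
        const float fx = 580.0f, fy = 580.0f;              // approximate focal lengths in pixels
        const float cx = width / 2.0f, cy = height / 2.0f; // principal point at the image center

        std::vector<Point3D> cloud;
        cloud.reserve(static_cast<size_t>(width) * height);

        for (int v = 0; v < height; ++v)
        {
            for (int u = 0; u < width; ++u)
            {
                uint16_t d = depthMM[v * width + u];
                if (d == 0) continue;                      // 0 means "no reading" on Kinect

                float z = d / 1000.0f;                     // millimeters to meters
                // Back-project the pixel through the pinhole camera model.
                Point3D p;
                p.x = (u - cx) * z / fx;
                p.y = (v - cy) * z / fy;
                p.z = z;
                cloud.push_back(p);
            }
        }
        return cloud;
    }

Feeding successive clouds into a registration or SLAM pipeline is what tools such as KinFu automate, so this function is only the very first link in the mapping chain.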

Kinect Voice Commands for a Robot
