ROBOTC.net Blog  

ROBOTC News

Archive for the ‘Cool projects’ Category

VEX Balancing Robot


[Thanks to hmoor14 for submitting this project!]

hmoor14 put together a fun little (OK, it's not THAT little…) robot: a VEX robot that keeps itself upright while simultaneously serving as a punching bag! Take a look:

I asked hmoor14 a few questions about his robot:

1) What inspired you to build this robot?

I wanted to start learning about robots and how to control them. So, when I saw a video on a balancing robot, I decided I would try that project.

2) How long did it take you to make this?

This was my first robot, so it probably took longer than it should have!
I pretty much did it over the Christmas holidays and then some, so about a month part-time. Most of that time was spent not on building the actual robot but on learning how to design it and testing the pieces. Just getting around the deadzone in the motors took me a few days.
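The deadzone hmoor14 mentions is the band of small power values where a motor hums but doesn't actually turn. A common workaround is to remap any nonzero command so it starts just above that band. Here is a minimal ROBOTC sketch of the idea; the port and the DEADZONE value are illustrative guesses, not hmoor14's actual code:

// Deadzone compensation sketch (ROBOTC for VEX). DEADZONE and port2 are
// placeholder assumptions; tune them for the actual motors.
const int DEADZONE = 15;   // smallest command that visibly moves the motor

int compensate(int power)
{
  if (power == 0)
    return 0;                                          // allow a true stop
  if (power > 0)
    return DEADZONE + power * (127 - DEADZONE) / 127;  // map 1..127 onto DEADZONE..127
  return -DEADZONE + power * (127 - DEADZONE) / 127;   // mirrored for reverse
}

task main()
{
  while (true)
  {
    int balanceOutput = 40;                    // stand-in for the balance controller's output
    motor[port2] = compensate(balanceOutput);  // motor never idles inside the deadzone
    wait1Msec(20);
  }
}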

3) What are your future plans with the robot?

I’m fixing to take it apart; I need the parts for my next robot. :( But I am going to keep what I’ve learned (which was so, so much).

Close up of the robot:

Great job hmoor14!

Written by Vu Nguyen

February 9th, 2012 at 11:17 am

Mindsensors RCX Multiplexer controlled via Android and ROBOTC


[All work done by Burf, original link: http://www.burf.org.uk/2012/01/01/mindsensors-rcx-multiplexer-controlled-via-android-and-robotc/]

We found another one of Burf’s projects on his blog. If you don’t know Burf, he’s the creator of a previous Cool Project featured here: LEGO George.

Here’s another amazing project from his blog, this one utilizing the RCX Multiplexer and an Android phone!

His blog reads,

——————————————————————————————————————————————

As you may be aware, I have been building a robot called Wheeler out of old parts (old grey and RCX 9V motors, etc.). I was hoping to have it finished over the Christmas break, but I hit a small issue with driving the wheels under the new weight of the body. Anyway, what I did manage to get up and running is the top half of Wheeler and the controller, which is an Android phone (Dell Streak).

Mindsensors RCX Multiplexer

I was utterly impressed with the Mindsensors.com RCX Multiplexer and with how fast I was up and running using Xander’s driver suite (check out BotBench). I wish there were a way to run the RCX Multiplexer off the NXT power supply, but that’s a small thing compared to how useful it is. I wish I had 3 more of them so that I could control 16 RCX motors!

Android NXT Remote Control

So, to work out how to control the NXT via Android, I stumbled across the NXT Remote Control project, which is free to download. This uses LEGO’s Direct Commands to control the 3 motor ports on the NXT, which means it bypasses your own code and you have no control over it. However, what I managed to do is reduce it down to a very simple program that sends messages to the NXT which you can deal with in your own program. It sends messages that are compatible with ROBOTC’s MessageParam command, so you can send a message ID and 2 params to the NXT and handle them in ROBOTC any way you want to. Code will be available soon once I have tidied it up.
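As a rough illustration of the receive side Burf describes (this is not his code, and the command IDs are invented), a ROBOTC program on the NXT can poll the built-in message globals and unpack an ID plus two parameters:

// Receive-side sketch, assuming ROBOTC's NXT message globals
// (bQueuedMsgAvailable, messageParm[], ClearMessage) behave as the post suggests.
task main()
{
  while (true)
  {
    if (bQueuedMsgAvailable())     // a message arrived from the phone
    {
      int id = messageParm[0];     // message ID chosen by the Android app
      int p1 = messageParm[1];
      int p2 = messageParm[2];

      if (id == 1)                 // hypothetical "drive" command
      {
        motor[motorB] = p1;        // left power
        motor[motorC] = p2;        // right power
      }

      ClearMessage();              // mark the message as handled
    }
    wait1Msec(10);
  }
}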

Written by Vu Nguyen

January 20th, 2012 at 2:24 pm

Posted in Cool projects,NXT

Skype-Controlled Mindstorms NXT Car


First of all, let me introduce myself: I’m Leon (aka dimastero/dimasterooo), and I was recently invited to contribute to this blog. So, as my first post, I’d like to tell you about my new Skype-controlled LEGO Mindstorms NXT car.

I’ve been creating websites for a while now, and I was trying to think of a way to combine that with Mindstorms NXT. This project is the result. The project’s webpage is fairly simple: it’s got three arrows (one forward, two to the sides), a start button, and a stop button, along with instructions. Clicking the start button will begin a Skype conversation with my computer, after which you should share your screen; the NXT standing in front of my computer can then “see” the webpage with the arrows via your computer.

That’s where the cool part kicks in: when you click any one of the arrows or the stop button, the page changes to a different shade of gray. This shade of gray is picked up by the NXT, which turns it into a Bluetooth message for the other NXT on the car. The car then drives in the direction the user tells it to, while remaining within a fenced-off area where the webcam can see it.
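Here’s an illustrative ROBOTC sketch of the screen-reading NXT (not Leon’s actual code; the thresholds are invented and would need calibrating against the real shades of gray):

#pragma config(Sensor, S1, lightSensor, sensorLightInactive)

// Map the brightness of the on-screen gray patch to a drive command and
// relay it over Bluetooth to the car's NXT. All threshold values are guesses.
task main()
{
  while (true)
  {
    int shade = SensorValue[lightSensor];

    int command;
    if (shade < 20)       command = 0;   // darkest gray: stop
    else if (shade < 40)  command = 1;   // forward
    else if (shade < 60)  command = 2;   // left
    else                  command = 3;   // right

    sendMessage(command);                // Bluetooth mailbox message to the car
    wait1Msec(100);
  }
}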

So, until January the 18th, you can drive a LEGO Mindstorms NXT car from the comfort of your own home. To learn how and to find out more about this project, click the link below:

http://worldofmindstorms.com/2012/01/04/interactive-skype-controlled-mindstorms-nxt-car/

Written by DiMastero

January 10th, 2012 at 9:00 am

Posted in Cool projects,NXT

Facial recognition using an NXT and an iPhone


This is a robot that uses face recognition to follow a human around, using an iPhone in conjunction with an NXT. Take a look!

You can download the Xcode project and ROBOTC code here: http://code.google.com/p/follow-me-robot/

How it works

The iOS code uses iOS 5’s face detection algorithm to find the position of the face within the video frame. I then needed a way to communicate with the NXT robot and steer it. Since I didn’t want to go through the trouble of communicating with it over Bluetooth (and I don’t know how to do it!), I chose to communicate with the NXT using the Light Sensor that comes with it.

If I want the robot to go to the left, I dim the lower portion of the iPhone screen, and if I want it to go to the right, I increase its intensity. Also, when the phone does not see a face, I turn the lower portion of the screen black. This tells the robot to stop moving forward and spin in place until it finds a face.

In the ROBOTC code, I also make use of the sound sensor to start and stop the robot: a loud sound toggles between the two states.

The ROBOTC and iOS code is very simple.

ROBOTC code

(Code subject to change. Download the latest version of the code!)


#pragma config(Sensor, S1,     lightSensor,         sensorLightInactive)
#pragma config(Sensor, S2,     soundSensor,         sensorSoundDB)
#pragma config(Motor,  motorB,          mB,            tmotorNormal, PIDControl, encoder)
#pragma config(Motor,  motorC,          mC,            tmotorNormal, PIDControl, encoder)

task main()
{
  wait1Msec(50);   // wait 50 milliseconds for the light sensor to initialize

  float minLight, maxLight, d, a, c, v, alpha = 0.01, stopGo = 0.0;
  int l, sound, startMotors = 0, lostFace, faceFound = 0;

  a = 0.60;        // steering gain
  minLight = 9;    // darkest expected reading (screen black / no face)
  maxLight = 34;   // brightest expected reading
  lostFace = 5;    // readings at or below this mean no face is visible
  v = 20;          // forward speed

  c = (minLight + maxLight) / 2.0;   // center the light reading around zero

  while (true)
  {
    // a loud sound toggles the motors on and off
    sound = SensorValue[soundSensor];
    if (sound > 85)
    {
      startMotors++;
      startMotors %= 2;
      wait10Msec(50);   // debounce: ignore the sound sensor for half a second
    }

    // steering term: positive when the screen is bright (turn one way),
    // negative when dim (turn the other way)
    l = SensorValue[lightSensor];
    d = a * (l - c);

    // exponential smoothing of the "face visible" flag, so the robot ramps
    // to a stop instead of halting abruptly when the face is lost
    faceFound = (l > lostFace) ? 1 : 0;
    stopGo = alpha * faceFound + (1 - alpha) * stopGo;

    motor[motorB] = (-d + v * stopGo) * startMotors;
    motor[motorC] = (d + v * stopGo) * startMotors;
  }
}

Written by ramin

January 9th, 2012 at 8:58 am

Posted in Cool projects,NXT

Line tracking and book climbing NXT robot


Here’s a video that a ROBOTC user shared with us. The NXT robot is able to line track and also climb a book that sits along the path. Take a look:

 

Written by Vu Nguyen

December 8th, 2011 at 3:00 pm

Posted in Cool projects,NXT

LEGO Street View Car v2.0


Thanks to Mark over at www.mastincrosbie.com for creating this incredible project and providing the information. Also thanks to Xander for providing the community with ROBOTC drivers for the sensors mentioned here.

You might remember the original Lego Street View Car I built in April. It was very popular at the Google Zeitgeist event earlier this year.

I wanted to re-build the car to use only the Lego Mindstorms NXT motors. I was also keen to make it look more… car-like. The result, after 4 months of experimentation, is version 2.0 of the Lego Street View Car.

As you can see, this version of the car is styled to look realistic. I also decided to use my iPhone to capture images on the car. With iOS 5, the iPhone uploads any photos to PhotoStream, so I can access them directly in iPhoto.

The car uses the Dexter Industries dGPS sensor to record the current GPS coordinates.

The KML file that records the path taken by the car is transmitted using the Dexter Industries Wifi sensor once the car is within wireless network range.

Design details

The LEGO Street View Car is controlled manually using a second NXT acting as a Bluetooth remote. The remote control allows me to control the drive speed and steering of the car. I can also brake the car to stop it from colliding with obstacles. Finally, pressing a button on the remote triggers the camera to capture an image.

Every time an image is captured, the current latitude and longitude are recorded from the dGPS. The NXT creates a KML-format file in the flash filesystem, which is then uploaded from the NXT to a PC. Opening the KML file in Google Earth shows the path that the car drove, with placemarks for every picture taken along the way. Click on a placemark to see the picture.

For each GPS coordinate I create a KML Placemark entry that embeds descriptive HTML code using the CDATA tag. The image link in the HTML refers to the last image captured on disk.
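As a sketch of how such an entry might be written with ROBOTC's NXT file I/O (this is not Mark's actual code; the file name and size are assumptions):

// Write one Placemark entry to a KML file in the NXT's flash filesystem.
task main()
{
  TFileHandle hFile;
  TFileIOResult ioResult;
  string fileName = "lsvc.kml";   // assumed file name
  int fileSize = 4096;            // flash files must be preallocated

  OpenWrite(hFile, ioResult, fileName, fileSize);
  WriteText(hFile, ioResult, "<Placemark>");
  WriteText(hFile, ioResult, "<name>LSVC Snapshot 1</name>");
  WriteText(hFile, ioResult, "<Point><coordinates>-6.185952, 53.446190, 0</coordinates></Point>");
  WriteText(hFile, ioResult, "</Placemark>");
  Close(hFile, ioResult);
}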

The images are captured by triggering the camera on my iPhone. I use an app called SoundSnap which triggers the camera when a loud sound is heard by the phone. By placing the iPhone over the NXT speaker I can trigger the iPhone camera by playing a loud tone on the NXT. While this is not ideal (Bluetooth would be better) it does the job for now.
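The trigger itself is simple enough to sketch in ROBOTC; the frequency and duration here are guesses, not values from Mark's program:

// Play a loud tone so a sound-activated camera app (like SoundSnap) fires.
task main()
{
  nVolume = 4;               // maximum NXT speaker volume
  PlayTone(1000, 50);        // 1000 Hz for 0.5 s (duration is in 10 ms units)
  while (bSoundActive) {}    // wait for the tone to finish playing
}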

To get the photos from the iPhone I use the PhotoStream feature in iOS 5. I select the pictures in iPhoto and export them to my laptop. The iPhone will only upload photos when I am in range of a wireless network.

Finally the Dexter Industries Wifi sensor is used to wirelessly transmit the KML file to my laptop over the wireless network.


<Placemark>
  <name>LSVC Snapshot 1</name>
  <description><![CDATA[<img src='Images/IMG_1.jpg' width=640 height=480> ]]></description>
  <Point>
    <coordinates> -6.185952, 53.446190, 0</coordinates>
  </Point>
</Placemark>

<Placemark>
  <name>LSVC Snapshot 2</name>
  <description><![CDATA[<img src='Images/IMG_2.jpg' width=640 height=480> ]]></description>
  <Point>
    <coordinates> -6.185952, 53.446190, 0</coordinates>
  </Point>
</Placemark>

This snippet from the KML file gives you an idea of what each placemark should look like.

Once the car has finished driving, press the orange button on the NXT to save the KML file. This writes a <LineString> entry which records the actual path of the car. A line string is simply a list of coordinates that defines a path along the Earth’s surface in Google Earth. For example:


<Placemark>
  <name>LSVC Path</name>
  <description>LSVC Path</description>
  <styleUrl>#yellowLineGreenPoly</styleUrl>
  <LineString>
    <extrude>10</extrude>
    <tessellate>10</tessellate>
    <altitudeMode>clampToGround</altitudeMode>
    <coordinates>
      -6.185952, 53.446190, 0
      -6.185952, 53.446180, 0
    </coordinates>
  </LineString>
</Placemark>

This defines a path of two coordinates not far from where I live.

From the NXT to Google Earth

How do we get the pictures and the KML file from the NXT into Google Earth? First of all, we need to get all the data in one place. The KML file refers to the relative path of each image, so we can package the KML file and the images into a single directory.

An example of the output produced is shown below. In this test case I started indoors in my house and took a few pictures. As you can see the dGPS has trouble getting an accurate reading and so the pictures appear to be scattered around the map. I then drove the car outside and started to capture pictures as I drove. From Snapshot 10 onwards the images become more realistic based on where the car actually is.

Video

I shot some video of the car driving outside my house. It was a windy dull day, so the video is a little dark. The fun part is seeing the view from on-board the car!

More videos are coming soon…


Written by Vu Nguyen

November 14th, 2011 at 1:11 pm

Lego George the Giant Robot


[Thank you burf2000 from our forums for contributing this project!]


I present to you…

LEGO George the Giant Robot!

He moves, he dances, he can grab things… What CAN’T HE DO!?

This latest creation from burf2000 stands 5’7″ tall and is fully functional.

He is controlled via a PlayStation 2 controller: he can move about, rotate his upper body, move his arms and shoulders, and grab onto items. His head also rotates and moves up and down, and if you get too close, his eyes will rotate.

Video of LEGO George:

I asked burf2000 some questions about his robot:

What inspired you to build this robot?

“I have always loved robotics, and LEGO for me was a medium to build in. I built another large robot last year, but it was not so successful; that one was based on the T1 from Terminator 3. I wanted to keep things simple on this one due to size. It weighs around 20 kg. I also loved the Short Circuit films (Johnny 5).”

How long did it take to make?

“This one took around 3 months of odd evenings and days. We (my wife and I) just had a baby, so getting time has been quite hard. However, my wife is very supportive and knew I needed to build this for a show (http://www.greatwesternlegoshow.com/).”

What are your future plans with the robot?

“Glad you asked this. Currently I am improving certain parts which I am not happy with, like the shoulder joints, main bearing, and turning. Once they are done, I am going to build a second robot to keep him company. It’s going to be another large one, using more NXTs, and hopefully it will go round on its own. My aim is to get a whole display of large robots moving around and interacting with each other.”

Thank you, burf2000, for submitting LEGO George. We can’t wait to see his successor!

More Photos

LEGO George's neck (close-up)

The whole photo set can be found on burf2000’s Flickr page.

Written by Vu Nguyen

October 6th, 2011 at 1:04 pm

Posted in Cool projects,NXT

ROBOTC Advanced Training


The ROBOTC curriculum covers quite a bit of material ranging from basic movement to automatic thresholds and advanced remote control. This is plenty of material for the average robotics class. However, it is not enough for some ambitious teachers and students who have mastered the basics. For those individuals who strive to learn the ins and outs of ROBOTC, we offered a pilot course called “ROBOTC Advanced Training” in late July.

The focus of the class was on advanced programming concepts with ROBOTC. Trainees learned to make use of the NXT’s processing power and of third-party sensors that expand its capabilities. The class began with a review of the basic ROBOTC curriculum. It then moved into arrays, multi-tasking, custom user interfaces using the NXT LCD screen and buttons, and file input/output. The class also worked together to write a custom I²C sensor driver for the Mindsensors Acceleration sensor.
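For flavor, here is a bare-bones I²C polling sketch in the spirit of that exercise. It is not the class's driver: the device address is Mindsensors' usual default, but the register offset is a placeholder, so check the ACCL-Nx documentation before using it.

#pragma config(Sensor, S1, ACCL, sensorI2CCustom)

// Request two bytes from an I2C register and combine them into one reading.
task main()
{
  ubyte request[3];
  ubyte reply[2];

  request[0] = 2;      // number of bytes in this request
  request[1] = 0xA2;   // I2C device address (assumed Mindsensors default)
  request[2] = 0x45;   // register to read (placeholder offset)

  while (true)
  {
    sendI2CMsg(ACCL, &request[0], 2);          // ask the sensor for 2 reply bytes
    wait1Msec(20);                             // allow the transaction to finish
    readI2CReply(ACCL, &reply[0], 2);          // low byte first, then high byte
    int reading = reply[0] + (reply[1] << 8);  // assemble the 16-bit value
    nxtDisplayString(0, "reading: %d", reading);
    wait1Msec(100);
  }
}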

The capstone project for the course involved autonomous navigation in a grid world. The program allows the NXT to find the most efficient path to its goal while avoiding obstacles. The class learned the concept of a “wavefront algorithm,” which enables autonomous path planning in a world delineated by a grid field. The algorithm assumes that the robot uses only three movements: forward one block, right turn, and left turn. Under these assumptions, each grid block has four neighbors: north, south, east, and west of the current block.

The grid world (for our project it was a 10×5 grid) is represented in ROBOTC by a 2-Dimensional array of integers. Integer representations are as follows: robot = 99, goal = 2, obstacle = 1, empty space = 0. The wavefront begins at the goal and propagates outwards until all positions have a value other than zero. Each empty space neighbor of the goal is assigned a value of 3. Each empty space neighbor of the 3’s is assigned a value of 4. This pattern continues until there are no more empty spaces on the map. The robot then follows the most efficient path by moving to its neighbor with the lowest value until it reaches the goal.

It is very exciting to see autonomous path planning implemented in ROBOTC, because this is similar to the way full-scale autonomous vehicles work. Check out the video of the path planning in action and the full ROBOTC code below. Our future plans are to incorporate these lessons into a new curriculum that includes multi-robot communications. If this seems like the type of project you would like to bring to your classroom, check back throughout the year for updates, and in the spring for availability of next summer’s ROBOTC Advanced Class.

Written by Steve Comer


YouTube Direct Link 

Code for the first run of the program seen in the video:

Note that the only difference in the code for the second program is another obstacle in the 2D integer array.

//GLOBAL VARIABLES grid world dimensions
const int x_size = 10;
const int y_size = 5;

//GLOBAL ARRAY representation of grid world using a 2-Dimensional array
//0  = open space
//1  = barrier
//2  = goal
//99 = robot
int map[x_size][y_size] =
 {{0,0,0,0,0},
  {0,1,99,1,0},
  {0,1,1,1,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,2,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0}};

//FUNCTION move forward for a variable number of grid blocks
void moveForward(int blocks)
{
  //convert number of blocks to encoder counts
  //wheel circumference = 17.6 cm
  //one block = 23.7 cm
  int countsToTravel = (23.7/17.6)*(360)*blocks;

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION left point turn 90 degrees
void turnLeft90()
{
  //one wheel travels 8.6 cm for this 90 degree point turn
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = -50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION right point turn 90 degrees
void turnRight90()
{
  //one wheel travels 8.6 cm for this 90 degree point turn
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = -50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION print wavefront map to NXT screen
void PrintWavefrontMap()
{
  int printLine = y_size-1;
  for(int y = 0; y < y_size; y++)
  {
    string printRow = "";
    for(int x=0; x < x_size; x++)
    {
      if(map[x][y] == 99)
        printRow = printRow + "R ";
      else if(map[x][y] == 2)
        printRow = printRow + "G ";
      else if(map[x][y] == 1)
        printRow = printRow + "X ";
      else if(map[x][y] < 10)
        printRow = printRow + map[x][y] + " ";
      else if(map[x][y] == '*')
        printRow = printRow + "* ";
      else
        printRow = printRow + map[x][y];
    }
    nxtDisplayString(printLine, printRow);
    printLine--;
  }
}

//FUNCTION wavefront algorithm to find most efficient path to goal
void WavefrontSearch()
{
  int goal_x, goal_y;
  bool foundWave = true;
  int currentWave = 2; //Looking for goal first

  while(foundWave == true)
  {
    foundWave = false;
    for(int y=0; y < y_size; y++)
    {
      for(int x=0; x < x_size; x++)
      {
        if(map[x][y] == currentWave)
        {
          foundWave = true;
          goal_x = x;
          goal_y = y;

          if(goal_x > 0) //This code checks the array bounds heading WEST
            if(map[goal_x-1][goal_y] == 0)  //This code checks the WEST direction
              map[goal_x-1][goal_y] = currentWave + 1;

          if(goal_x < (x_size - 1)) //This code checks the array bounds heading EAST
            if(map[goal_x+1][goal_y] == 0)//This code checks the EAST direction
              map[goal_x+1][goal_y] = currentWave + 1;

          if(goal_y > 0)//This code checks the array bounds heading SOUTH
            if(map[goal_x][goal_y-1] == 0) //This code checks the SOUTH direction
              map[goal_x][goal_y-1] = currentWave + 1;

          if(goal_y < (y_size - 1))//This code checks the array bounds heading NORTH
            if(map[goal_x][goal_y+1] == 0) //This code checks the NORTH direction
              map[goal_x][goal_y+1] = currentWave + 1;
        }
      }
    }
    currentWave++;
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

//FUNCTION follow most efficient path to goal
//and update screen map as robot moves
void NavigateToGoal()
{
  //Store our Robots Current Position
  int robot_x, robot_y;

  //First - Find Goal and Target Locations
  for(int x=0; x < x_size; x++)
  {
    for(int y=0; y < y_size; y++)
    {
      if(map[x][y] == 99)
      {
        robot_x = x;
        robot_y = y;
      }
    }
  }

  //Found Goal and Target, start deciding our next path
  int current_x = robot_x;
  int current_y = robot_y;
  int current_facing = 0;
  int next_Direction = 0;
  int current_low = 99;

  while(current_low > 2)
  {
    current_low = 99; //Every time, reset to highest number (robot)
    next_Direction = current_facing;
    int Next_X = 0;
    int Next_Y = 0;

    //Check Array Bounds West
    if(current_x > 0)
      if(map[current_x-1][current_y] < current_low && map[current_x-1][current_y] != 1) //open and lower than current best?
      {
        current_low = map[current_x-1][current_y];  //Set next number
        next_Direction = 3;                         //Set Next Direction as West
        Next_X = current_x-1;
        Next_Y = current_y;
      }

    //Check Array Bounds East
    if(current_x < (x_size - 1))
      if(map[current_x+1][current_y] < current_low && map[current_x+1][current_y] != 1) //open and lower than current best?
      {
        current_low = map[current_x+1][current_y];  //Set next number
        next_Direction = 1;                         //Set Next Direction as East
        Next_X = current_x+1;
        Next_Y = current_y;
      }

    //Check Array Bounds South
    if(current_y > 0)
      if(map[current_x][current_y-1] < current_low && map[current_x][current_y-1] != 1) //open and lower than current best?
      {
        current_low = map[current_x][current_y-1];  //Set next number
        next_Direction = 2;                         //Set Next Direction as South
        Next_X = current_x;
        Next_Y = current_y-1;
      }

    //Check Array Bounds North
    if(current_y < (y_size - 1))
      if(map[current_x][current_y+1] < current_low && map[current_x][current_y+1] != 1) //open and lower than current best?
      {
        current_low = map[current_x][current_y+1];  //Set next number
        next_Direction = 0;                         //Set Next Direction as North
        Next_X = current_x;
        Next_Y = current_y+1;
      }

    //Okay - We know the number we're heading for, the direction and the coordinates.
    current_x = Next_X;
    current_y = Next_Y;
    map[current_x][current_y] = '*';

    //Track the robot's heading
    while(current_facing != next_Direction)
    {
      if (current_facing > next_Direction)
      {
        turnLeft90();
        current_facing--;
      }
      else if(current_facing < next_Direction)
      {
        turnRight90();
        current_facing++;
      }
    }
    moveForward(1);
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

task main()
{
  WavefrontSearch();	//Build map of route with wavefront algorithm
  NavigateToGoal();	//Follow most efficient path to goal
  wait1Msec(5000);	//Leave time to view the LCD screen
}

Written by Vu Nguyen

August 8th, 2011 at 9:22 am

Eric’s “Project Scout”


[Thanks to ericsmalls for posting this project!]

The concept

The robots are ready

Project Scout is a project that Eric has been working on for months. Originally, he wanted to combine obstacle avoidance with multi-robot communication.

The goal of Project Scout is to have one “scout” robot, outfitted with sensors, find its way out of a maze and then tell a second, “blind” robot, which has no sensors, how to solve the maze. The end result would be two robots finding their way out of a maze by communicating and working together.

The result

Here is the video of a successful run with two robots:

Proof of Concept

Project Scout hit several milestones along the way. Here’s one of the first videos of the project. Robot1 (on the left) chooses a random number greater than 720 encoder counts and sets that number as its encoder target. Robot1 then drives forward for that number of counts and, upon completion, sends its recorded encoder values to Robot2 (on the right). Finally, just as Robot1 did, Robot2 travels forward for the same number of encoder counts sent to it by Robot1. Thus both robots travel the same distance, which proves that robot-to-robot communication, as well as coordination of forward movement, is possible. A sketch of Robot1’s side of that handshake appears below.
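This is not Eric's code; the pseudo-random pick and motor ports are stand-ins:

// Drive a pseudo-random distance past 720 encoder counts, then report it
// over Bluetooth so Robot2 can repeat the same move.
task main()
{
  int target = 720 + (nSysTime % 720);   // stand-in for a random count > 720

  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = target;
  nMotorEncoderTarget[motorC] = target;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while (nMotorRunState[motorB] != runStateIdle) {}   // wait for the move to finish
  motor[motorB] = 0;
  motor[motorC] = 0;

  sendMessage(nMotorEncoder[motorB]);    // tell Robot2 how far to drive
}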

Continuing on…

Eric says, “But there’s still some work to be done. I am currently working on converting the communication code to use ROBOTC’s new multi-robot library and Dexter Industries’ NXTBee radios, which will add a lot more capability and versatility to Project Scout. In the future, I plan on adding an additional robot so I can have 3 robots solve the maze!”

Great project and keep up the great work!

Click here to visit the Project Scout page

Written by Vu Nguyen

August 1st, 2011 at 12:43 pm

Posted in Cool projects,NXT

ROBOTC Multi-Robot Communication


We all know that the LEGO MINDSTORMS NXT and ROBOTC are a powerful combination. Together they are able to perform advanced tasks such as PID auto-straightening, line tracking, and even thermal imaging. Imagine what would be possible if multiple NXTs could work together! Two heads are better than one, right?

Multi-robot communication is possible and it has already been implemented using ROBOTC. During a recent ROBOTC training session, the final day and a half focused on learning how to make use of the XBee wireless radio for communication between multiple robots.

The NXT is able to send and receive messages over a wireless network in the form of string-type data, using a few simple commands added to ROBOTC with the “XBeeTools.h” header file. The commands are quite user-friendly, even though multi-robot communication is typically a graduate-level concept.
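We won't reproduce the XBeeTools.h call names here, so the sketch below uses a stand-in stub for the transmit call; the point is the string-type payload described above, and every identifier should be treated as illustrative:

// Conceptual only: xbeeSendString() is a placeholder stub, not the real
// XBeeTools.h API; check the header for the actual send/receive calls.
void xbeeSendString(const string &sMsg)
{
  nxtDisplayString(0, "%s", sMsg);   // stand-in: display the payload instead of transmitting
}

task main()
{
  int myX = 3;   // broadcaster's field position (made-up values)
  int myY = 7;

  while (true)
  {
    string payload;
    StringFormat(payload, "POS %d %d", myX, myY);  // build the string-type message
    xbeeSendString(payload);                       // broadcast to the team
    wait1Msec(500);
  }
}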

Multi-robot communication is an advanced topic that users can explore after mastering a single robot. It is important to understand how to program a single robot. However, the future of robotics centers on robots working in teams to accomplish complex tasks. Areas of exploration include team based sports such as soccer and putting autonomous vehicles on our roads.

Check out the video of the challenge given in ROBOTC training, where six NXT robots cooperate to surround a single robot which broadcasts its position to the rest of the group.

Written by Steve Comer

July 8th, 2011 at 2:19 pm