Archive for the ‘Cool projects’ Category

Mindsensors RCX Multiplexer controlled via Android and ROBOTC

with one comment

[All work done by Burf, original link:]

We found another one of Burf’s projects on his blog. If you don’t know Burf, he was the creator of a previous Cool Project on our blog, LEGO George.

Here’s another amazing post from his work that utilizes the RCX Multiplexer and an Android phone!

His blog reads,


As you may be aware, I have been building a robot called Wheeler out of old parts (old grey pieces, RCX 9V motors, etc.). I was hoping to have it finished over the Christmas break, but I hit a small issue with driving the wheels under the new weight of the body. Anyway, what I did manage to get up and running is the top half of Wheeler and the controller, which is an Android phone (Dell Streak).

Mindsensors RCX Multiplexer

I was utterly impressed with the RCX Multiplexer and with how fast I was up and running using Xander’s driver suite (check BotBench). I wish there was a way to run the RCX Multiplexer off the NXT power supply, but that’s a small thing compared to how useful it is. I wish I had 3 more of them so that I could control 16 RCX motors!

Android NXT Remote Control

So, to work out how to control the NXT via Android, I stumbled across the NXT Remote Control project, which is free to download. It uses Lego’s Direct Commands to control the 3 motor ports on the NXT, which means it bypasses your own code and you have no control over it. However, what I managed to do is reduce it down to a very simple program that sends messages to the NXT which you can deal with in your own program. It sends messages that are compatible with ROBOTC’s MessageParam command, so you can send a message ID and 2 params to the NXT and handle them in ROBOTC any way you want. Code will be available soon once I have tidied it up.
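For the curious, the message drop that NXT Remote Control performs is just a small Bluetooth telegram. Here is a rough Python sketch of the MessageWrite packet layout, based on LEGO’s published NXT Bluetooth protocol rather than on Burf’s app; the "id,param1,param2" text format is only an assumed example of a message body.

```python
def message_write_telegram(mailbox: int, text: str) -> bytes:
    """Build an NXT 'MessageWrite' direct command, framed for Bluetooth."""
    payload = text.encode("ascii") + b"\x00"      # message data, null-terminated
    body = bytes([
        0x80,          # command type: direct command, no reply requested
        0x09,          # command: MessageWrite
        mailbox,       # inbox number (0-9) on the receiving NXT
        len(payload),  # message size, including the null terminator
    ]) + payload
    # over Bluetooth, every telegram is preceded by a 2-byte little-endian length
    return len(body).to_bytes(2, "little") + body
```

Writing `message_write_telegram(0, "1,20,30")` to the NXT’s Bluetooth serial channel would drop the string into mailbox 0, where the on-brick program can parse out the ID and the two params.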

Written by Vu Nguyen

January 20th, 2012 at 2:24 pm

Posted in Cool projects,NXT

Skype-Controlled Mindstorms NXT Car

without comments

First of all, let me introduce myself: I’m Leon (aka dimastero/dimasterooo), and I was recently invited to contribute to this blog. So, as my first post, I’d like to tell you about my new Skype-controlled LEGO Mindstorms NXT Car.

I’ve been creating websites for a while now, and I was trying to think of a way to combine that with Mindstorms NXT. This project is the result. The project’s webpage is fairly simple – it’s got three arrows (one forward, two to the sides), a start button, and a stop button, plus instructions. Clicking the start button will begin a Skype conversation with my computer, after which you should share your screen; the NXT standing in front of my computer can then “see” the webpage with the arrows via your computer.

That’s where the cool part kicks in – when you click any one of the arrows or the stop button, the page changes to a different shade of gray. That shade is picked up by the NXT, which turns it into a Bluetooth message for the other NXT on the car. The car then drives in the direction the user tells it to, while remaining within a fenced-off area where the webcam can see it.
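The shade-of-gray scheme is effectively a tiny one-way protocol. The actual gray values aren’t published in this post, so the bands in this little Python sketch are purely illustrative:

```python
# Hypothetical mapping from on-screen shade to drive command; the real page's
# gray values aren't given in the post, so these bands are invented.
COMMAND_BANDS = [
    (0,   39,  "stop"),     # near-black
    (40,  89,  "left"),
    (90,  139, "forward"),
    (140, 189, "right"),
]

def decode_shade(reading: int) -> str:
    """Turn a light-sensor reading of the screen into a drive command."""
    for low, high, command in COMMAND_BANDS:
        if low <= reading <= high:
            return command
    return "idle"  # reading outside every band: send nothing over Bluetooth
```

The NXT watching the screen would run something like this in a loop and forward each decoded command over Bluetooth to the NXT on the car.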

So, until January 18th, you can drive a LEGO Mindstorms NXT car from the comfort of your own home. To learn how, and to find out more about this project, click the link below:

Written by DiMastero

January 10th, 2012 at 9:00 am

Posted in Cool projects,NXT

Facial recognition using an NXT and an iPhone

with 2 comments

This is a robot that uses face recognition to follow a human around. It uses an iPhone in conjunction with an NXT. Take a look!

You can download the Xcode Project and ROBOTC code here:

How it works

The iOS code uses iOS 5’s face detection algorithm to find the position of the face within the video frame. I then needed a way to communicate with the NXT robot and steer it. Since I didn’t want to go through the trouble of communicating with it over Bluetooth (and I don’t know how to do that!), I chose to communicate with the NXT using the Light Sensor that comes with it.

If I want the robot to go to the left, I dim the lower portion of the iPhone screen; if I want it to go to the right, I increase its intensity. When the phone does not see a face, I turn the lower portion of the screen black, which tells the robot to stop moving forward and spin in place until it finds a face.

In the ROBOTC code, I also make use of the sound sensor to start and stop the robot.  A loud sound is used to toggle between start and stop.

The ROBOTC and iOS code is very simple.


(Code subject to change. Download the latest version of the code!)

#pragma config(Sensor, S1,     lightSensor,         sensorLightInactive)
#pragma config(Sensor, S2,     soundSensor,         sensorSoundDB)
#pragma config(Motor,  motorB,          mB,            tmotorNormal, PIDControl, encoder)
#pragma config(Motor,  motorC,          mC,            tmotorNormal, PIDControl, encoder)

task main()
{
  wait1Msec(50);  // wait 50 milliseconds for the light sensor to initialize

  float minLight, maxLight, d, a, c, v, alpha = 0.01, stopGo = 0.0;
  int l, sound, startMotors = 0, lostFace, faceFound = 0;

  a = 0.60;        // steering gain
  v = 30;          // base forward speed (tune to taste)
  minLight = 9;    // reading when the screen is fully dimmed
  maxLight = 34;   // reading when the screen is at full intensity
  lostFace = 5;    // readings below this mean no face is visible

  c = (minLight + maxLight) / 2.0;  // center reading: steer straight ahead

  while (true)
  {
    // a loud sound toggles the motors between start and stop
    sound = SensorValue[soundSensor];
    if (sound > 85)
    {
      startMotors = (startMotors + 1) % 2;
      wait1Msec(500);  // give the sound time to die down
    }

    l = SensorValue[lightSensor];
    d = a * (l - c);  // steering: dim screen -> turn left, bright -> turn right

    faceFound = (l > lostFace) ? 1 : 0;

    // low-pass filter so a single missed frame doesn't stop the robot
    stopGo = alpha * faceFound + (1 - alpha) * stopGo;

    motor[motorB] = (-d + v * stopGo) * startMotors;
    motor[motorC] = ( d + v * stopGo) * startMotors;
  }
}

Written by ramin

January 9th, 2012 at 8:58 am

Posted in Cool projects,NXT

Line tracking and book climbing NXT robot

without comments

Here’s a video that a ROBOTC user shared with us. The NXT robot is able to line track and also climb a book that sits along the path. Take a look:


Written by Vu Nguyen

December 8th, 2011 at 3:00 pm

Posted in Cool projects,NXT

LEGO Street View Car v2.0

with one comment

Thanks to Mark for creating this incredible project and providing the information. Thanks also to Xander for providing the community with drivers to use the sensors mentioned below in ROBOTC.

You might remember the original Lego Street View Car I built in April. It was very popular at the Google Zeitgeist event earlier this year.

I wanted to rebuild the car to use only the Lego Mindstorms NXT motors. I was also keen to make it look more… car-like. The result, after 4 months of experimentation, is version 2.0 of the Lego Street View Car.

As you can see this version of the car is styled to look realistic. I also decided to use my iPhone to capture images on the car. With iOS 5 the iPhone will upload any photos to PhotoStream so I can access them directly in iPhoto.

The car uses the Dexter Industries dGPS sensor to record the current GPS coordinates.

The KML file that records the path taken by the car is transmitted using the Dexter Industries Wifi sensor once the car is within wireless network range.

Design details

The LEGO Street View Car is controlled manually using a second NXT acting as a Bluetooth remote. The remote control allows me to control the drive speed and steering of the car. I can also brake the car to stop it from colliding with obstacles. Finally, pressing a button on the remote triggers the car to capture an image.

Every time an image is captured the current latitude and longitude are recorded from the dGPS. The NXT creates a KML format file in the flash filesystem which is then uploaded from the NXT to a PC. Opening the KML file in Google Earth shows the path that the car drove, and also has placemarks for every picture you took along the way. Click on the placemark to see the picture.

For each GPS coordinate I create a KML Placemark entry that embeds descriptive HTML code using the CDATA tag. The image link in the HTML refers to the last image captured on disk.
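As a sketch of what that generation step looks like, here is a small Python helper of my own (the car itself builds the file in ROBOTC on the NXT). Note that KML writes coordinates as longitude,latitude,altitude:

```python
def placemark(index: int, lon: float, lat: float) -> str:
    """Build one KML Placemark whose description embeds the captured image.

    Illustrative helper, not the car's actual code; KML coordinates are
    written in longitude,latitude,altitude order.
    """
    return (
        "<Placemark>\n"
        f"  <name>LSVC Snapshot {index}</name>\n"
        f"  <description><![CDATA[<img src='Images/IMG_{index}.jpg' "
        "width=640 height=480>]]></description>\n"
        "  <Point>\n"
        f"    <coordinates>{lon:.6f},{lat:.6f},0</coordinates>\n"
        "  </Point>\n"
        "</Placemark>\n"
    )
```

Calling `placemark(1, -6.185952, 53.446190)` produces an entry like the snippet shown further down.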

The images are captured by triggering the camera on my iPhone. I use an app called SoundSnap which triggers the camera when a loud sound is heard by the phone. By placing the iPhone over the NXT speaker I can trigger the iPhone camera by playing a loud tone on the NXT. While this is not ideal (Bluetooth would be better) it does the job for now.

To get the photos from the iPhone I use the PhotoStream feature in iOS 5. I select the pictures in iPhoto and export them to my laptop. The iPhone will only upload photos when I am in range of a wireless network.

Finally the Dexter Industries Wifi sensor is used to wirelessly transmit the KML file to my laptop over the wireless network.


<Placemark>
  <name>LSVC Snapshot 1</name>
  <description><![CDATA[<img src='Images/IMG_1.jpg' width=640 height=480>]]></description>
  <Point>
    <coordinates>-6.185952,53.446190,0</coordinates>
  </Point>
</Placemark>

<Placemark>
  <name>LSVC Snapshot 2</name>
  <description><![CDATA[<img src='Images/IMG_2.jpg' width=640 height=480>]]></description>
  <Point>
    <coordinates>-6.185952,53.446190,0</coordinates>
  </Point>
</Placemark>

The snippet from the KML file gives you an idea of what each placemark should look like.

Once the car has finished driving, press the orange button on the NXT to save the KML file. This writes a <LineString> entry which records the actual path of the car. A LineString is simply a list of coordinates that defines a path in Google Earth along the Earth’s surface. For example:


<Placemark>
  <name>LSVC Path</name>
  <description>LSVC Path</description>
  <LineString>
    <coordinates>
      -6.185952,53.446190,0
      -6.185952,53.446180,0
    </coordinates>
  </LineString>
</Placemark>

This defines a path of two coordinates not far from where I live.

From the NXT to Google Earth

How do we get the pictures and KML file from the NXT and into Google Earth? First of all we need to get all the data in one place. The KML file refers to the relative path of each image, so we can package the KML file and the images into a single directory.

An example of the output produced is shown below. In this test case I started indoors, in my house, and took a few pictures. As you can see, the dGPS has trouble getting an accurate reading indoors, so those pictures appear scattered around the map. I then drove the car outside and kept capturing pictures as I drove. From Snapshot 10 onwards, the placemarks line up with where the car actually was.


I shot some video of the car driving outside my house. It was a windy dull day, so the video is a little dark. The fun part is seeing the view from on-board the car!

More videos are coming soon…



Written by Vu Nguyen

November 14th, 2011 at 1:11 pm

Lego George the Giant Robot

with 3 comments

[Thank you burf2000 from our forums for contributing this project!]

LEGO George the Giant Robot


I present to you…

LEGO George the Giant Robot!

He moves, he dances, he can grab things… What CAN’T HE DO!?

This latest creation from burf2000 is a fully functional robot that stands 5 feet 7 inches tall.

He is controlled via a PlayStation 2 controller: he can move about, rotate his upper body, move his arms and shoulders, and grab onto items. His head also rotates and moves up and down, and if you get too close, his eyes will rotate.

Video of LEGO George:

I asked burf2000 some questions about his robot:

What inspired you to build this robot?

“I have always loved robotics, and Lego for me was a medium to build it in. I built another large robot last year, but it was not so successful; that one was based on the T-1 from Terminator 3. I wanted to keep things simple on this one due to size. It weighs around 20 kg. I also loved the Short Circuit films (Johnny 5).”

How long did it take to make?

“This one took around 3 months of odd evenings and days. My wife and I just had a baby, so finding time has been quite hard. However, my wife is very supportive and knew I needed to build this for a show.”

What are your future plans with the robot?

“Glad you asked this. Currently I am improving certain parts I am not happy with, like the shoulder joints, the main bearing, and the turning. Once they are done, I am going to build a second robot to keep him company. It’s going to be another large one, using more NXTs, and hopefully it will go around on its own. My aim is to get a whole display of large robots moving around and interacting with each other.”

Thank you, burf2000, for submitting LEGO George. We can’t wait to see his successor!

More Photos

LEGO George’s neck, close up

The whole photo set can be found on burf2000’s Flickr page

Written by Vu Nguyen

October 6th, 2011 at 1:04 pm

Posted in Cool projects,NXT

ROBOTC Advanced Training

without comments

The ROBOTC curriculum covers quite a bit of material ranging from basic movement to automatic thresholds and advanced remote control. This is plenty of material for the average robotics class. However, it is not enough for some ambitious teachers and students who have mastered the basics. For those individuals who strive to learn the ins and outs of ROBOTC, we offered a pilot course called “ROBOTC Advanced Training” in late July.

The focus of the class was advanced programming concepts in ROBOTC. Trainees learned to make use of the NXT’s processing power and of third-party sensors that expand its capabilities. The class began with a review of the basic ROBOTC curriculum, then moved into arrays, multitasking, custom user interfaces using the NXT LCD screen and buttons, and file input/output. The class also worked together to write a custom I²C sensor driver for the Mindsensors Acceleration sensor.

The capstone project for the course involved autonomous navigation in a grid world. The program allows the NXT to find the most efficient path to its goal while avoiding obstacles. The class learned the “wavefront algorithm”, which enables autonomous path planning in a world delineated by a grid field. The algorithm assumes that the robot uses only three movements: forward one block, right turn, and left turn. Under these assumptions, each grid block has four neighbors: north, south, east, and west of the current block.

The grid world (for our project it was a 10×5 grid) is represented in ROBOTC by a 2-Dimensional array of integers. Integer representations are as follows: robot = 99, goal = 2, obstacle = 1, empty space = 0. The wavefront begins at the goal and propagates outwards until all positions have a value other than zero. Each empty space neighbor of the goal is assigned a value of 3. Each empty space neighbor of the 3’s is assigned a value of 4. This pattern continues until there are no more empty spaces on the map. The robot then follows the most efficient path by moving to its neighbor with the lowest value until it reaches the goal.
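The fill-and-descend idea is easy to prototype on a desktop before moving it to the NXT. Below is a small Python sketch of the same scheme, using the cell values from the article (99 = robot, 2 = goal, 1 = obstacle, 0 = empty); the grid and function names are mine, not the class’s code:

```python
def wavefront(grid):
    """Propagate wave values outward from the goal (2) until no empty cell is left."""
    xs, ys = len(grid), len(grid[0])
    wave = 2           # start from the goal's own value
    changed = True
    while changed:
        changed = False
        for x in range(xs):
            for y in range(ys):
                if grid[x][y] == wave:
                    # label every empty 4-neighbor with the next wave value
                    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < xs and 0 <= ny < ys and grid[nx][ny] == 0:
                            grid[nx][ny] = wave + 1
                            changed = True
        wave += 1
    return grid

def path_to_goal(grid, start):
    """Greedily step to the lowest-valued open neighbor until the goal (2) is reached."""
    xs, ys = len(grid), len(grid[0])
    x, y = start
    path = [start]
    while grid[x][y] != 2:
        best = None
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < xs and 0 <= ny < ys and grid[nx][ny] != 1:
                if best is None or grid[nx][ny] < grid[best[0]][best[1]]:
                    best = (nx, ny)
        x, y = best
        path.append(best)
    return path
```

On a 3×3 test grid, `wavefront` fills every reachable cell with its distance-from-goal value, and `path_to_goal` walks the monotonically decreasing chain of cells down to the goal.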

It is very exciting to see autonomous path planning implemented in ROBOTC because this is similar to the way full scale autonomous vehicles work. Check out the video of the path planning in action and the full ROBOTC code below. Our future plans are to incorporate these lessons into a new curriculum including multi-robot communications. If this seems like the type of project you would like to bring to your classroom, check back throughout the year for updates and also in the spring for availability for next summer’s ROBOTC Advanced Class.

Written by: Steve Comer

YouTube Direct Link 

Code for the first run of the program seen in the video:

Note that the only difference in the code for the second program is another obstacle in the 2D integer array.

//GLOBAL VARIABLES grid world dimensions
const int x_size = 10;
const int y_size = 5;

//GLOBAL ARRAY representation of grid world using a 2-Dimensional array
//0  = open space
//1  = barrier
//2  = goal
//99 = robot
//(example layout -- substitute your own field)
int map[x_size][y_size] =
{
  {99, 0, 0, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 1, 1, 1, 0},
  { 0, 0, 0, 1, 0},
  { 0, 0, 0, 1, 0},
  { 0, 0, 0, 0, 0},
  { 0, 1, 1, 0, 0},
  { 0, 0, 0, 0, 0},
  { 0, 0, 0, 0, 2},
  { 0, 0, 0, 0, 0}
};

//FUNCTION move forward for a variable number of grid blocks
void moveForward(int blocks)
{
  //convert number of blocks to encoder counts
  //wheel circumference = 17.6 cm
  //one block = 23.7 cm
  int countsToTravel = (23.7/17.6)*(360)*blocks;

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION left point turn 90 degrees
void turnLeft90()
{
  //distance one wheel must travel for 90 degree point turn = 8.6 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = -50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION right point turn 90 degrees
void turnRight90()
{
  //distance one wheel must travel for 90 degree point turn = 8.6 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = -50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION print wavefront map to NXT screen
void PrintWavefrontMap()
{
  int printLine = y_size - 1;
  for(int y = 0; y < y_size; y++)
  {
    string printRow = "";
    for(int x = 0; x < x_size; x++)
    {
      if(map[x][y] == 99)
        printRow = printRow + "R ";
      else if(map[x][y] == 2)
        printRow = printRow + "G ";
      else if(map[x][y] == 1)
        printRow = printRow + "X ";
      else if(map[x][y] == '*')
        printRow = printRow + "* ";
      else if(map[x][y] < 10)
        printRow = printRow + map[x][y] + " ";
      else
        printRow = printRow + map[x][y];
    }
    nxtDisplayString(printLine, printRow);
    printLine--; //print the northernmost row at the top of the screen
  }
}

//FUNCTION wavefront algorithm to find most efficient path to goal
void WavefrontSearch()
{
  int goal_x, goal_y;
  bool foundWave = true;
  int currentWave = 2; //Looking for goal first

  while(foundWave == true)
  {
    foundWave = false;
    for(int y=0; y < y_size; y++)
    {
      for(int x=0; x < x_size; x++)
      {
        if(map[x][y] == currentWave)
        {
          foundWave = true;
          goal_x = x;
          goal_y = y;

          if(goal_x > 0) //This code checks the array bounds heading WEST
          {
            if(map[goal_x-1][goal_y] == 0)  //This code checks the WEST direction
              map[goal_x-1][goal_y] = currentWave + 1;
          }

          if(goal_x < (x_size - 1)) //This code checks the array bounds heading EAST
          {
            if(map[goal_x+1][goal_y] == 0) //This code checks the EAST direction
              map[goal_x+1][goal_y] = currentWave + 1;
          }

          if(goal_y > 0) //This code checks the array bounds heading SOUTH
          {
            if(map[goal_x][goal_y-1] == 0) //This code checks the SOUTH direction
              map[goal_x][goal_y-1] = currentWave + 1;
          }

          if(goal_y < (y_size - 1)) //This code checks the array bounds heading NORTH
          {
            if(map[goal_x][goal_y+1] == 0) //This code checks the NORTH direction
              map[goal_x][goal_y+1] = currentWave + 1;
          }
        }
      }
    }
    currentWave++; //Propagate the next wave value outward
  }
}

//FUNCTION follow most efficient path to goal
//and update screen map as robot moves
void NavigateToGoal()
{
  //Store our robot's current position
  int robot_x, robot_y;

  //First - find the robot's starting location
  for(int x=0; x < x_size; x++)
  {
    for(int y=0; y < y_size; y++)
    {
      if(map[x][y] == 99)
      {
        robot_x = x;
        robot_y = y;
      }
    }
  }

  //Found the robot - start deciding our next path
  int current_x = robot_x;
  int current_y = robot_y;
  int current_facing = 0;
  int next_Direction = 0;
  int current_low = 99;

  while(current_low > 2)
  {
    current_low = 99; //Every time, reset to highest number (robot)
    next_Direction = current_facing;
    int Next_X = 0;
    int Next_Y = 0;

    //Check Array Bounds West
    if(current_x > 0)
    {
      if(map[current_x-1][current_y] < current_low && map[current_x-1][current_y] != 1) //Is the space open?
      {
        current_low = map[current_x-1][current_y];  //Set next number
        next_Direction = 3; //Set Next Direction as West
        Next_X = current_x-1;
        Next_Y = current_y;
      }
    }

    //Check Array Bounds East
    if(current_x < (x_size - 1))
    {
      if(map[current_x+1][current_y] < current_low && map[current_x+1][current_y] != 1) //Is the space open?
      {
        current_low = map[current_x+1][current_y];  //Set next number
        next_Direction = 1; //Set Next Direction as East
        Next_X = current_x+1;
        Next_Y = current_y;
      }
    }

    //Check Array Bounds South
    if(current_y > 0)
    {
      if(map[current_x][current_y-1] < current_low && map[current_x][current_y-1] != 1) //Is the space open?
      {
        current_low = map[current_x][current_y-1];  //Set next number
        next_Direction = 2; //Set Next Direction as South
        Next_X = current_x;
        Next_Y = current_y-1;
      }
    }

    //Check Array Bounds North
    if(current_y < (y_size - 1))
    {
      if(map[current_x][current_y+1] < current_low && map[current_x][current_y+1] != 1) //Is the space open?
      {
        current_low = map[current_x][current_y+1];  //Set next number
        next_Direction = 0; //Set Next Direction as North
        Next_X = current_x;
        Next_Y = current_y+1;
      }
    }

    //Okay - we know the number we're heading for, the direction and the coordinates.
    current_x = Next_X;
    current_y = Next_Y;
    map[current_x][current_y] = '*'; //Mark the path taken on the map

    //Turn until the robot's heading matches the next direction
    while(current_facing != next_Direction)
    {
      if(current_facing > next_Direction)
      {
        turnLeft90();
        current_facing--;
      }
      else if(current_facing < next_Direction)
      {
        turnRight90();
        current_facing++;
      }
    }

    //Drive into the chosen block and refresh the on-screen map
    moveForward(1);
    PrintWavefrontMap();
  }
}

task main()
{
  WavefrontSearch();	//Build map of route with wavefront algorithm
  NavigateToGoal();	//Follow most efficient path to goal
  wait1Msec(5000);	//Leave time to view the LCD screen
}
Written by Vu Nguyen

August 8th, 2011 at 9:22 am

Eric’s “Project Scout”

with 6 comments

[Thanks to ericsmalls for posting this project!]

The concept

The robots are ready

Project Scout is a project that Eric has been working on for months. Originally, he wanted to combine obstacle avoidance with multi-robot communication.

The goal of Project Scout is to have one “scout” robot, outfitted with sensors, find its way out of a maze, and then tell a second, “blind” robot, not outfitted with sensors, how to solve the maze. The end result would be two robots  finding their way out of a maze by communicating and working together.

The result

Here is the video of a successful run with two robots:

Proof of Concept

Project Scout passed through several milestones. Here’s one of the first videos of the project. Robot1 (on the left) chooses a random number greater than 720 encoder clicks and sets that number as its encoder target. Robot1 then drives forward for that number of encoder clicks and, upon completion, sends its recorded encoder values to Robot2 (on the right). Finally, just as Robot1 did, Robot2 travels forward for the same number of encoder clicks sent to it by Robot1. Thus both robots travel the same distance, which proves that robot-to-robot communication, as well as coordinated forward movement, is possible.
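Stripped of the hardware, the proof-of-concept logic is tiny. Here is a toy Python re-creation (the real robots run ROBOTC and exchange the value over NXT messaging; the 1440-click upper bound is invented for the example):

```python
import random

# Toy re-creation of the run described above; the NXT-to-NXT message is
# reduced here to a plain function argument.
def robot1_run() -> int:
    """Pick a random encoder target above 720 clicks, drive it, and report it."""
    return random.randint(721, 1440)  # upper bound is an assumption

def robot2_run(clicks_from_robot1: int) -> int:
    """Replicate exactly the distance Robot1 reported."""
    return clicks_from_robot1
```

Whatever distance Robot1 happens to pick, Robot2 travels the same number of clicks, which is the whole point of the demonstration.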

Continuing on…

Eric says “But there’s still some work to be done. I am currently working on transferring the communication in the code to utilize ROBOTC’s new multi-robot library and Dexter Industries’ NXTBee radios, which will allow a lot more capabilities and add a lot of versatility to Project Scout. In the future, I plan on adding an additional robot so I can have 3 robots solve the maze!”

Great project and keep up the great work!

Click here to visit the Project Scout page

Written by Vu Nguyen

August 1st, 2011 at 12:43 pm

Posted in Cool projects,NXT

ROBOTC Multi-Robot Communication

with 8 comments

We all know that the LEGO MINDSTORMS NXT and ROBOTC are a powerful combination. Together they are able to perform advanced tasks such as PID auto-straightening, line tracking, and even thermal imaging. Imagine what would be possible if multiple NXTs could work together! Two heads are better than one, right?

Multi-robot communication is possible and it has already been implemented using ROBOTC. During a recent ROBOTC training session, the final day and a half focused on learning how to make use of the XBee wireless radio for communication between multiple robots.

The NXT is able to send and receive messages over a wireless network in the form of string-type data. There are a few simple commands added to ROBOTC with the “XBeeTools.h” header file. The commands are quite user friendly even though multi-robot communication is typically a graduate level concept.

Multi-robot communication is an advanced topic that users can explore after mastering a single robot, and it is important to understand how to program a single robot first. However, the future of robotics centers on robots working in teams to accomplish complex tasks. Areas of exploration include team-based sports such as soccer, and putting autonomous vehicles on our roads.

Check out the video of the challenge given in ROBOTC training, where six NXT robots cooperate to surround a single robot which broadcasts its position to the rest of the group.

Written by Steve Comer

July 8th, 2011 at 2:19 pm

Bionic NXTPod 3.0 by DiMastero

with one comment

[Thanks to DiMastero for submitting this project!]


Festo, founded in 1925, is a German engineering-driven company based in Esslingen am Neckar. Festo sells both pneumatic and electric actuators, and provides everything from assembly-line components to full automation solutions utilizing Festo and third-party components. It also has an R&D department of sorts, the Bionic Learning Network, where they’ve created some amazing projects, including SmartBird (“bird flight deciphered”), AquaJelly, Robotino XT and much more. [source]

They also created the Bionic Tripod 3.0, an arm-like robot based on four flexible rods actuated from below. By moving the actuators to different positions, the rods bend and move the adaptive gripper to any position quickly and energy efficiently.

“Festo – Bionic Tripod 3.0″ demonstration video

The tripod has been partially replicated before, but I’ve found no evidence of it being done entirely in Lego MindStorms. Cue the Bionic NXTPod 3.0:

[Image: Bionic NXTPod 3.0]

The NXTPod is built from the following components:
  • 2 Lego Mindstorms NXT intelligent bricks – one 1.0 and one 2.0
  • 5 Lego Mindstorms NXT motors
  • 4 Lego Mindstorms NXT touch sensors
  • 1 Lego Mindstorms NXT 1.0 light sensor
  • 1 Lego Power Functions (PF) LED light
  • 1 Lego pneumatic actuator, switch and pump

The robot itself consists of these parts:

  • 4 actuators
  • 4 flexible rods
  • the pneumatic grabber
  • the main structure
  • PF LEDs and a light sensor for communication

Mechanically, the NXTPod’s most important parts are the four actuators. Each is made up of a single NXT servo motor, which drives a worm gear along a four-part gear rack, moving a sledge up or down a 14-stud axle. An actuator can travel up to 19 rotations up or down, in about 11 seconds at the default speed of 75%.

[Image: initial design of one of the four linear actuators; it has been improved since]

The fifth motor performs a double function to accomplish a single task: it moves the pneumatic switch and works the pump, opening or closing the gripper.

[Image: the gripper and its motor, final design]

Each NXT takes care of two of the actuators, which are color-coded to make programming easier: the master controls the red and blue motors, while the slave takes care of the black and beige ones. The slave also controls the pneumatic gripper at the tripod’s top.

[Image: the four color-coded motors (red, blue, black, beige)]

To connect the NXTs, the master has an NXT-PF cable on motor port B to control the LEDs in front of the slave’s light sensor. The problem with this setup is that the master can’t get any feedback from the slave, so it has to take into account how long the slave needs for certain actions, to avoid overlapping commands.


Generally, the NXTs are set up in a master-slave configuration, where the master sends commands to the slave using LEDs and a light sensor, then waits for the slave to finish before sending a new task. This is how it works:

  1. The slave is started up by the user
  2. The master is started up by the user, and turns the LEDs on for a tenth of a second
  3. Both the master and the slave calibrate their motors by moving the actuators down until the touch sensors are pressed, once the light is turned off again
  4. Once it’s calibrated, the slave waits for the LEDs to turn on again, so it knows a command is coming
  5. The master calibrates and waits out the remainder of the eleven seconds to make sure the slave has calibrated as well, so it doesn’t send any commands before the slave is ready
  6. The master reads the block of code containing the positions for all four actuators and the gripper, and converts this into 12 binary bits
  7. The LEDs are turned on, and after a short wait, the master turns the lights on and off ten times a second, taking a total of 1300 ms, or 1.3 seconds, per full message
  8. When the slave receives the bits, it decodes them
  9. Both NXTs start simultaneously and go to their positions
  10. At the same time, the master calculates how many degrees the slave has to turn, and converts this into an approximate waiting time to, again, avoid overlapping commands
  11. Steps 6-10 repeat until the master has run through all of the blocks of code, after which it shuts down. The slave has to be turned off manually and must be restarted every time the master finishes, or it will misinterpret the calibration command
The robot is very easy to program, and the user only has to provide 6 lines of code per tripod position:

motor_blue_rotations = 1;
motor_red_rotations = 1;
motor_black_rotations = 10;
motor_beige_rotations = 10;
gripper_operation = 0;
wait_for_completion();
wait10mSec(100);

The program does the rest of the work, making sure the robot never overruns anything and calibrating as often as possible. You can download the ROBOTC code at my downloads page, over here. Below is a short demonstration video:

“Bionic NXTPod 3.0″ demonstration video

Completion date: 2011/06/12

Last updated: 2011/06/12

Disclaimer: This site is neither owned nor endorsed by Festo Group. The Bionic Learning Network, SmartBird, AquaJelly and Robotino are all copyrighted by Festo. The Bionic Tripod 3.0, on which this project is based, is also copyrighted by Festo.

Written by Vu Nguyen

June 17th, 2011 at 10:00 am

Posted in Cool projects,NXT