Archive for the ‘Cool projects’ Category

Eric’s “Project Scout”

with 6 comments

[Thanks to ericsmalls for posting this project!]

The concept

The robots are ready

Project Scout is a project that Eric has been working on for months. Originally, he wanted to combine obstacle avoidance with multi-robot communication.

The goal of Project Scout is to have one “scout” robot, outfitted with sensors, find its way out of a maze, and then tell a second, “blind” robot, not outfitted with sensors, how to solve the maze. The end result would be two robots finding their way out of a maze by communicating and working together.

The result

Here is the video of a successful run with two robots:

Proof of Concept

Project Scout reached several milestones along the way. Here’s one of the first videos of the project. Robot1 (on the left) chooses a random number greater than 720 encoder clicks and sets that number as its encoder target. Robot1 then drives forward for that number of encoder clicks (call it X), and upon completion sends its recorded encoder values to Robot2 (on the right). Finally, just as Robot1 did, Robot2 travels forward for the same X encoder clicks sent to it by Robot1. Thus both robots travel the same distance, which proves that robot-to-robot communication, as well as coordinated forward movement, is possible.

Continuing on…

Eric says “But there’s still some work to be done. I am currently working on transferring the communication in the code to utilize ROBOTC’s new multi-robot library and Dexter Industries’ NXTBee radios, which will allow a lot more capabilities and add a lot of versatility to Project Scout. In the future, I plan on adding an additional robot so I can have 3 robots solve the maze!”

Great project and keep up the great work!

Click here to visit the Project Scout page

Written by Vu Nguyen

August 1st, 2011 at 12:43 pm

Posted in Cool projects,NXT

ROBOTC Multi-Robot Communication

with 8 comments

We all know that the LEGO MINDSTORMS NXT and ROBOTC are a powerful combination. Together they are able to perform advanced tasks such as PID auto-straightening, line tracking, and even thermal imaging. Imagine what would be possible if multiple NXTs could work together! Two heads are better than one, right?

Multi-robot communication is possible and it has already been implemented using ROBOTC. During a recent ROBOTC training session, the final day and a half focused on learning how to make use of the XBee wireless radio for communication between multiple robots.

The NXT is able to send and receive messages over a wireless network in the form of string-type data. There are a few simple commands added to ROBOTC with the “XBeeTools.h” header file. The commands are quite user friendly even though multi-robot communication is typically a graduate level concept.

Multi-robot communication is an advanced topic that users can explore after mastering a single robot; it is important to first understand how to program one robot. However, the future of robotics centers on robots working in teams to accomplish complex tasks. Areas of exploration include team-based sports such as soccer and putting autonomous vehicles on our roads.

Check out the video of the challenge given in ROBOTC training, where six NXT robots cooperate to surround a single robot which broadcasts its position to the rest of the group.

Written by Steve Comer

July 8th, 2011 at 2:19 pm

Bionic NXTPod 3.0 by DiMastero

with one comment

[Thanks to DiMastero for submitting this project!]


Festo, founded in 1925, is a German engineering-driven company based in Esslingen am Neckar. Festo sells both pneumatic and electric actuators, and provides solutions ranging from assembly lines to full automation, utilizing Festo and third-party components. It also has an R&D department of sorts, the Bionic Learning Network, where they’ve created some amazing projects including SmartBird (“bird flight deciphered”), AquaJelly, Robotino XT and much more. [source]

They also created the Bionic Tripod 3.0, an arm-like robot based on four flexible rods actuated from below. By moving the actuators to different positions, the rods bend and move the adaptive gripper to any position quickly and energy efficiently.

“Festo – Bionic Tripod 3.0” demonstration video

The tripod has been partially replicated before, but I’ve found no evidence of it being done entirely out of Lego Mindstorms. Cue the Bionic NXTPod 3.0.

[Image: Bionic NXTPod 3.0]


The build uses the following components:

  • 2 Lego Mindstorms NXT intelligent bricks – one 1.0 and one 2.0
  • 5 Lego Mindstorms NXT motors
  • 4 Lego Mindstorms NXT touch sensors
  • 1 Lego Mindstorms NXT 1.0 light sensor
  • 1 Lego Power Functions (PF) LED light
  • 1 Lego pneumatic actuator, switch and pump

The robot itself consists of these parts:

  • 4 actuators
  • 4 flexible rods
  • the pneumatic grabber
  • the main structure
  • PF LEDs and a light sensor for communication

Mechanically, the NXTPod’s most important parts are the four actuators. Each is made up of a single NXT servo motor that drives a worm gear along a four-part gear rack, moving a sled up or down a 14-stud axle. It can travel up to 19 rotations up or down, in about 11 seconds at the default speed of 75%.

[Image: initial design of one of the four linear actuators; it has since been improved]

The last motor does double duty to perform a single task: it flips the pneumatic switch and drives the pump, opening or closing the gripper.

[Image: the gripper with its motor attached, final design]

Each NXT takes care of two of the actuators, which are color-coded to make programming easier – the master controls the red and blue motors, while the slave takes care of the black and beige ones. The slave also controls the pneumatic gripper at the tripod’s top.

[Image: the four color-coded actuators (red, blue, black, beige)]

To connect the NXTs, the master has an NXT-PF cable on motor port B that drives the LEDs in front of the slave’s light sensor. The drawback of this setup is that the master can’t get any feedback from the slave. Therefore, it has to account for the time the slave needs to perform each action, to avoid overlapping commands.


Generally, the NXTs are set up in a master-slave configuration, where the master sends commands to the slave using the LEDs and light sensor, then waits for the slave to finish before sending a new task. This is how it works:

  1. The slave is started up by the user
  2. The master is started up by the user, and turns the LEDs on for a tenth of a second
  3. Once the light is turned off again, both the master and slave calibrate their motors by moving the actuators down until the touch sensors are pressed
  4. Once it’s calibrated, the slave waits for the LEDs to turn on again, so it knows a command is coming
  5. The master calibrates and waits out the remainder of the eleven seconds to make sure the slave has calibrated as well, so it doesn’t send any commands before the slave is ready
  6. The master reads the block of code containing the positions for all four actuators and the gripper, and converts this into 12 binary bytes
  7. The LEDs are turned on, and after a short wait, the master flashes the lights on and off ten times a second, taking a total of 1300 ms (1.3 seconds) per full message
  8. When the slave receives the bytes, it decodes them
  9. Both of the NXTs start simultaneously, and go to their positions.
  10. At the same time, the master calculates how many degrees the slave has to turn, and converts it into the approximate waiting time to, again, avoid overlapping commands
  11. Steps 6-10 repeat until the master has run through all of the blocks of code, after which it shuts down. The slave has to be turned off manually; the slave must be restarted every time the master finishes, or it will interpret the calibration command incorrectly.

The robot is very easy to program; the user only has to provide a few lines of code per tripod position:

motor_blue_rotations  = 1;    //target position, blue actuator
motor_red_rotations   = 1;    //target position, red actuator
motor_black_rotations = 10;   //target position, black actuator
motor_beige_rotations = 10;   //target position, beige actuator
gripper_operation     = 0;    //gripper command
wait_for_completion();
wait10mSec(100);              //pause before the next position

The program does the rest of the work, making sure the robot never overruns anything and calibrating as much as possible. You can download the ROBOTC code at my downloads page. Below is a short demonstration video:

“Bionic NXTPod 3.0” demonstration video

Completion date: 2011/06/12

Last updated: 2011/06/12

Disclaimer: This site is neither owned nor endorsed by Festo Group. The Bionic Learning Network, SmartBird, AquaJelly and Robotino are all copyrighted by Festo. The Bionic Tripod 3.0, on which this project is based, is also copyrighted by Festo.

Written by Vu Nguyen

June 17th, 2011 at 10:00 am

Posted in Cool projects,NXT

Bring on the Heat: Thermal Imaging with the NXT

with 2 comments

[Image: Lookin’ hot!]

I built a pan-and-tilt rig for the Dexter Industries Thermal IR Sensor with a great deal of gearing down, to allow me to take a lot of measurements as the rig moved around. Initially I had it set for about 40×40 measurements, but those didn’t look that great and I wanted a bit more, so I reprogrammed it to spit out data at a resolution of about 90×80.

The data from the thermal sensor was streamed to the debug output console in ROBOTC, from which I copied and pasted it into an Excel worksheet.  I made some 3D graphs from the thermal data and it looked pretty cool.

[Images: Excel graph of the cold glass (left) and the candle flame (right)]

The left one is a cold glass and the right one is a candle.  I wasn’t really happy with the results of the graphs so I decided to quickly whip up a .Net app to read my CSV data and make some more traditional thermal images.  A few hours later, the results really did look very cool.

[Images: rendered thermal image of the cold glass (left) and the candle flame (right)]

Again, the left one is the cold glass and the right one is the candle.  Now that you have a thermal image, you can see the heat from the candle a lot more clearly. I made a quick video of the whole rig so you can get an idea.

A few days after the initial post about my thermal imaging system using the Thermal Infrared Sensor, I made some improvements to both the speed and the accuracy of the whole thing. I made the sensor sampling interval time-based rather than encoder-value-based, which proved much better at getting consistent sampling rates. I also doubled the horizontal motor speed so I would be more likely to still be awake by the time it was done taking an image.

The left image was made with the old system, the right one with the new system. It’s a lot less fuzzy, and there are no black gaps where a row had fewer samples than the maximum number of samples in a row.

[Images: thermal image from the old system (left) and the new system (right)]

Perhaps there are other ways to improve the program but I am quite happy with how this has turned out.

The driver and program will be part of the next Driver Suite version. You can download a preliminary driver and this program from here: [LINK].  The .Net program and CSV files can be downloaded here: [LINK]. You will need Visual Studio to compile it.  You can download a free (Express) version of C# from the Microsoft website.

Written by Xander Soldaat

June 16th, 2011 at 4:47 pm

Controlling the MINDS-i Lunar Rover with a VEX Cortex

with one comment

Article written by Steve Comer

Remote control cars are great for having fun. They can be driven off-road, taken off jumps, and raced, among other things. VEX robots are great for learning. They can be used to teach programming, math, problem solving, and other engineering skills. What do you get if you put them together?

Well, I can tell you. You get a rugged 4WD truck that is still tons of fun to drive around outside, but can also be used as a teaching tool.

Follow this link for more photos:

I started off with a MINDS-i Lunar Rover kit which is driven by a 7.2V DC motor and steered with a standard hobby servo. I removed the solar panel from the Rover and in its place put a VEX Cortex microcontroller and an LCD screen. On each side, I attached a VEX flashlight from the VEXplorer kit and I mounted an ultrasonic sensor to the front. It just so happens that VEX bolts and nuts fit quite easily into the beams of the MINDS-i rover.

I did all the programming in RobotC. See bottom of the page to view my RobotC code.

In order to control the stock motor and servo with the Cortex, I had to make a few modifications. I soldered the two wires to a 2-pin header, which I then connected to the Cortex with a VEX motor controller.

For the servo, I used three single male-to-male jumper cables.

The video demonstrates the rover in autonomous mode where it makes use of the ultrasonic sensor to avoid bumping into walls. Remote control is also demonstrated using the VEXnet controller over Wi-Fi.

This is just a small sampling of the possibilities with this type of combination platform. Don’t let my initial direction limit you. It would be great to see some new combination robots. Get out there and start building!

This is my RobotC Code for the behaviors seen in the video.


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
//  Author  : Steven Comer
//  Program : Rover drives straight until near an object, it then slows,
//            stops, then backs up and turns.
//  Updated : 8 June 2011 @ 10:20 AM
task main()
{
  //pause at start and turn on headlights
  wait1Msec(2000);
  motor[R] = -127;
  motor[L] = -127;

  while(true)
  {
    displayNextLCDNumber(SensorValue(sonar), 3);

    //+++++++++++++++++++++++++++DRIVE STRAIGHT++++++++++++++++++++++++++
    if( SensorValue(sonar) > 20 || SensorValue(sonar) == -1 )
    {
      motor[servo] = -2;                       //wheels straight
      motor[drive] = 50;
    }
    //+++++++++++++++++++++++++++SLOW DOWN+++++++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 20 && SensorValue(sonar) > 15 )
    {
      motor[drive] = SensorValue(sonar) + 25;  //power decreases
    }
    //+++++++++++++++++++++++++++STOP AND TURN+++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 15 )
    {
      motor[drive] = 0;                        //stop
      //back up and turn
      motor[servo] = random[50] + 60;          //random steering angle
      motor[drive] = -50;                      //reverse
      wait1Msec(1500);
    }
  }
}


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
//  Author  : Steven Comer
//  Program : Remote control rover with VEXnet controller
//  Notes   : Throttle is Ch 2 (right joystick)
//            Steering is Ch 4 (left joystick)
//            Headlights are Ch 5U/5D and 6U/6D (on/off)
//  Updated : 10 June 2011 @ 12:30 PM
task main()
{
  while(true)
  {
    //right headlight
    if(vexRT[Btn6U] == 1)
      motor[R] = -127;           //on
    if(vexRT[Btn6D] == 1)
      motor[R] = 0;              //off
    //left headlight
    if(vexRT[Btn5U] == 1)
      motor[L] = -127;           //on
    if(vexRT[Btn5D] == 1)
      motor[L] = 0;              //off
    //driving
    motor[servo] = vexRT[Ch4];   //steering
    motor[drive] = vexRT[Ch2];   //throttle
    //LCD screen
    displayNextLCDNumber(SensorValue(sonar), 3);
  }
}

Written by Jesse Flot

June 9th, 2011 at 3:31 pm

Posted in Cool projects,VEX


Mars Rover NXT/VEX Robot with Rocker-Bogie Suspension

with one comment

[Submitted by Fealves78 from the ROBOTC forums]

Fealves78 submitted an incredible looking Mars Rover robot using (eek!) a combination of both NXT and VEX parts.

Both the robot and the joystick are controlled by NXT bricks. The robot also uses 6 regular motors, 6 servos, a VEX camera, and a set of VEX lights.

Here is a new video of the robot being demonstrated at the National Space Foundation. The robot can go over rocks too!

Rocker-Bogie Suspension

[Image: Rocker-Bogie suspension (source)]

The Rocker-Bogie suspension is actually a popular setup: it was used for the Mars rovers (hence the robot’s name!) and is still favored by NASA for its Mars robots.

The suspension is called a “rocker” because of the rocking motion of the larger links in the system. The two sides of the chassis are connected via a differential, which allows each “rocker” to move up and down independently of the other. This lets the robot drive over uneven terrain as well as rocks.

The word “bogie” refers to the links that have a drive wheel at each end. Bogies were commonly used as load wheels in the tracks of army tanks, acting as idlers that distribute the load over the terrain. They were also quite common on the trailers of semi-trailer trucks.

[See more at the Rocker-Bogie page at Wikipedia]

Here’s a video of the robot running over a grassy area:

What inspired you to build the robot?

I am a graduate Computer Science student, and robotics is one of my interests. I teach robotics to kids in Colorado Springs through Trailblazer Elementary School. My students’ ages range from 6 to 11 years old, and this is their first year studying with me. We were inspired to build the Mars Rover robot by the Space Foundation (SF), which is located in Colorado Springs, CO. Through a grant with Boeing, the Space Foundation donated 2 NXT robotics kits to our school, and I myself gave the VEX kit to the students. Then the SF challenged us to build a demo robot using some of the materials they had provided, and the Mars Rover was our first big project.

How long did it take to build the robot?

The students spent about a week researching the design of the robot structure. It took 2 weeks to put it together and 2 more weeks to program the robot using ROBOTC. We also used the NXTSERVO-V2 board to control the robot’s 12 motors, 2 lights, and camera.

What are your future plans with the robot?

All the work that we are doing is volunteer work. We started with one teacher, one school (Trailblazer), and 16 kids in the beginning of 2011. Now, with the help of graduate students from Colorado Technical University (CTU), the IEEE chapter from that school, and help from companies like the Space Foundation and MITRE, we are expanding to 40 kids and 3 schools by the end of the year. We are also willing to help teachers from elementary, middle, and high schools who want to take robotics into the classroom as a means of bringing science to their students and motivating them toward STEM education. Most of the schools have neither the materials nor the budget to start a robotics club. We are surviving on small donations and volunteer work. If you or anyone you know is interested in helping, please let us know.

In the little time we have been working with these kids, both their regular teachers and their parents are noticing improvements in the kids’ interest in science and in their grades. For us, the CTU volunteers (students and IEEE members), this is a way to gain work experience and give back to the community.

Written by Vu Nguyen

June 6th, 2011 at 10:57 am

Posted in Cool projects,NXT,VEX

LEGO Quad Delta Robot System

without comments

[Many thanks to Shep for contributing this amazing project! Description and Source is all from Shep’s blog]

Years of development, months of building and programming.  Here it is.

YouTube Direct Link 

About the Lego Quad Delta Robot System.

This system uses four Lego parallel robots which are fed by two conveyor belts.  As items flow down the conveyor belt toward the robots, each item passes by a light/color sensor mounted on each conveyor.  When the item is detected, a signal is sent to the robots telling them information such as the color of the object, which belt the object is on and the position of the object on the belt.  The robot reaches out and grabs the item from the moving conveyor belt when each item gets close enough and moves it to a location based on the color of the item.

The cell is capable of picking and placing objects at a rate of 48 items per minute.  Each robot can move 12 items per minute, or one item every 5 seconds!


Delta robots, also known as parallel robots, are commercially available from several manufacturers.  They go by names such as ABB FlexPicker, Bosch Paloma D2, Fanuc M-1iA, Kawasaki YF03N, and Adept Quattro s650H.  They are known for moving small objects very quickly, usually at two hundred or more moves per minute.  Parallel robots are often used in industries such as the food industry, where the payload is small and light and the production rates are very high.  Many times a series of parallel robots is used to do things like assemble cookies, package small items, stack pancakes and much, much more.


Each robot operates independently.  The robots receive a signal from the master, which in this case is the NXT that controls the light sensors.  The signal contains information about the color, lane, and position of each object.  When the signal is received, the data is stored in a chronological array.  When the object gets close enough, the robot goes through a preprogrammed series of movements based on the information in the array.


At the beginning of each run, all three arms move slowly upward until they each hit a touch sensor.  After all three arms have reached the top they all move down together to a predetermined zero position and the encoders are reset.  At that point all the robots wait for the first signal which will be the master sending the belt speed signal.  The robots can automatically adjust movements such as where they pick up the objects based on the belt speed.

Immediately after the belt speed information has been received, each NXT brick will sound off in a timed sequence with their respective brick number.  This is an error checking technique.  If the operator doesn’t hear the full “ONE, TWO, THREE, FOUR, FIVE, SIX” there is a problem and the run should be terminated and restarted.


The signal is an eight-bit binary light signal that takes about 170 milliseconds to transmit.  The master NXT blinks the LEDs attached to each robot on and off at an interval of about 20 milliseconds per flash.  Each robot is equipped with a Lego light sensor that easily sees the short flashes.  The same signal is sent to all the NXT bricks, but data encoded in the signal determines which robot will move the item.  The robot’s NXT brick decodes the message and sends that information to a procedure that performs the appropriate movements.

The binary signal is converted to a three-digit number such as 132 or 243.  The first digit is the lane.  Possible values are 1 and 2, corresponding to conveyor 1 and conveyor 2 respectively.  The second digit is the robot number, and the possible values are 1 through 4, corresponding to each of the four robots.  The third digit is the color of the object.  The possible values are 1 through 6, i.e. BLACK=1, BLUE=2, GREEN=3, YELLOW=4, RED=5, WHITE=6.  The position of the brick is noted by the time at which the light signal is received.  The robots calculate the position of each object using the time the signal was received relative to the current time.  The belt moves at precisely 100 inches per minute, so the position of the item on the belt can be calculated precisely.

A few signals other than brick information and belt speed are programmed to be sent.  The master can send an emergency shut down message in which all robots immediately stop what they are doing, drop their bricks and go to their home position as well as stop the conveyors.  Signals can also be sent to make the robots dance, play sound files and music files concurrently.


The precise kinematics for the movements of the robots are dynamically calculated using detailed formulas that convert the Cartesian coordinates (x,y,z) of the location of the brick into the angles of the servo motors (theta1, theta2 and theta3) and vice versa.   This is the heart and soul of the robot.  Without precise calculations, this project would be nearly impossible.

As the gripper or “end effector” is moved around, it becomes necessary to calculate the best route for it to move.  The best route is usually a straight line.  This is done by locating the start point (x1, y1, z1) and the end point (x2, y2, z2) and then calculating a discrete number of points that lie on the line between the two points.  For each and every movement, the robot first creates an array for all the points in between and then moves nonstop from point to point to point through the array until it reaches the end point.

As the robot moves around, each motor’s speed is adjusted relative to the other motors’ speeds so that all three motors arrive at their target positions at the same time.  This makes all the movements very smooth, and the robot doesn’t shake too much.  The motor speeds are adjusted so that the robot moves as fast as possible.

Since the objects on the conveyors are moving at all times, the robot actually moves to the position where the object will be rather than where it currently is.  Also, when the robot grasps an object, it doesn’t lift it straight up, but up and slightly forward, so that any objects behind it on the conveyor belt won’t hit the object being moved.

It is possible for the robot to be overwhelmed by having too many objects to pick up.  Once an object goes past a limit point where it is too far to reach, it is removed from the queue and will not be picked up by any robot.

As the robots place items in the bins, the release point is shifted slightly so that the items won’t pile up.


The grippers are each driven using a single pneumatic cylinder.  The cylinder is cycled by a valve equipped with a medium PF motor connected to an IR receiver.  Each NXT is equipped with a HiTechnic IRLink sensor.  The NXT controls the gripper by sending a signal to the motor through the IRLink sensor.  The motor then rotates clockwise or counterclockwise for one quarter of a second to switch the pneumatic valve.  This is a very effective way of controlling Lego pneumatics with an NXT.


The air system must be robust because the pneumatic cylinders on the grippers move about 96 times a minute.  This requires a great deal of air.  The air compressor consists of six pumps (with the springs removed) turned by three XL PF motors.  The pressure is measured using a MindSensors Pressure sensor.  The pressure is kept between 10 and 13 psi to maintain good operational speed and gripping capacity.  The whole system will not start until air pressure is up to a minimum of 8 psi, and an audible alarm sounds if the pressure drops below 8 psi.  At this point, the operator can help the compressor by manually pumping up the system to the required pressure.

The three XL PF motors are powered using a 9V train controller, so that consistent power is delivered to the motors.  Air compressors tend to drain batteries very quickly, and using a train controller avoids that cost.

There are also six air tanks for storage, a manual pump, a pressure gauge, and a pressure release valve to purge the system of pressure.  The manual pump is primarily used to assist the compressor if it can’t keep up.

The compressor motors are turned on and off using a Lego servo motor and a PF switch.  As the pressure sensor senses the pressure going above or below the thresholds, the motor moves the switch back and forth to add air or turn off the compressor.


The conveyors are controlled by a dedicated NXT brick.  The timing and speed of the conveyors are critical so that the items will be positioned accurately.  The speed of the conveyors is governed by a proportional controller.  They were originally controlled using a PID controller, but it turned out that proportional control alone was adequate.  The speed of a conveyor can vary from zero up to two hundred inches per minute, but one hundred inches per minute works best for all the robots.

The NXT brick that controls the conveyors reads the same light signal information as all of the robots, but ignores most of the signals.

Each conveyor is ten feet long.


The light/color sensors mounted on the conveyors do double duty.  Their default mode is ambient light sensing, but they are frequently switched to color sensing.  A PF LED light is mounted opposite each light sensor to give a high baseline light reading.  When an item passes between the LED and the light sensor, a low light condition is detected and the sensor immediately switches to color-sensor mode.  This can be seen when the sensor briefly emits an RGB light as a brick passes in front of it.  As soon as the color is correctly read, the sensor immediately switches back to ambient light mode and waits for the next item.  Once the color is determined, the brick sends a signal to all of the slave bricks and an audible color sound is played.

There is a condition in which two bricks pass the two light sensors at the same time.  Since it is impossible to send two signals at once, the first item detected takes priority and the second brick’s signal is sent 400 milliseconds later.  A special signal tells the robot to adjust its position timing to account for the 400 ms delay when that brick comes up to be picked up.


The frame structure holding the robots is highly engineered.  The combination of the weight of all the robots as well as the constant movement is a considerable problem.  The main horizontal member is achieved by layering Technic bricks with plates.  This configuration is very strong and has very little sag.  Movement is also minimized, but not completely eliminated.

The two main posts in the middle carry most of the weight and do a great deal to stop the structure from moving while the robots are operating.  The four outside posts help, but are mostly for support.  The diagonal braces are quite small relative to the size of the other members, but actually do a great deal to stop movement.

All of the posts are made from standard Lego bricks with Technic beams attached around to lock them together.  The structure is completely tied together as one piece, but can be broken down into eight parts for transport.


I have a personal fascination with this type of robot.  I find the movements mesmerizing and extremely interesting. The movements of the actual robots are extremely fast and accurate and defy belief.  I especially like the fact that the location of the end effector can be precisely calculated from the angular location of the three servo motors positioned at one hundred and twenty degrees from each other.

This is not the first parallel robot that I have built.  My first delta robot was built in 2004 using the Mindstorms RCX and was very crude and not very useful.  After several more attempts, I finally found a design using the Mindstorms NXT system that worked well.  At that time I still hadn’t worked out the kinematics but I found a way to fake the movements by positioning the end effector by hand and reading the encoder values.  Then I used those values to create a series of movements that closely resembled an actual robot.

I have researched for about six years and built this project many times.  This project took about five months to build and program.  It was purely a labor of love for this robot.

I don’t know how to improve on the current design.  As you can tell if you have read this description of the robot, I have exhaustively researched and built to every goal I have.  Sadly, I believe that I have reached the limit of what can be built using only Lego building elements.

Written by Vu Nguyen

April 20th, 2011 at 4:39 pm

Posted in Cool projects,NXT

I2C on the VEX Cortex

with one comment

The VEX Cortex is a nice platform made by VEX Robotics. It is supported by two programming environments, one of which is ROBOTC. Much to my dismay, the master firmware does not support I2C, which is why ROBOTC does not support it either. I don’t really like it when someone tells me I can’t do something, so I went ahead and remedied the situation.

Mindsensors Magic Wand controlled by VEX Cortex

Motor MUX and Servo Controller controlled by VEX Cortex

I spent a few evenings writing and tinkering in ROBOTC on my own bit-banged I2C implementation, which, much to my surprise, worked very well.  I first tested it with the Mindsensors Magic Wand (above left) and later also with the Holit Data Systems Motor MUX and the Mindsensors NXT Servo Controller (above right).  Jesse Flot from the Robotics Academy was kind enough to send me some old VEX cables so I could splice two of them onto an NXT cable for I2C. I will post a HOWTO for that at a later date.
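Xander's actual code isn't shown here, but the heart of any bit-banged I2C master is just toggling two digital lines in the right order. The sketch below illustrates a bare-bones master transmit with hypothetical `set_sda`/`set_scl` callbacks standing in for the Cortex digital-port calls (these names are placeholders, not a real VEX or ROBOTC API); it omits open-drain modeling, clock stretching, timing delays, and ACK checking. A small simulated "logic analyzer" is included so the bit order can be exercised off-robot.

```c
#include <stdint.h>

/* Hypothetical pin callbacks standing in for platform GPIO calls. */
typedef struct {
    void (*set_sda)(int level);
    void (*set_scl)(int level);
} I2CPins;

/* START condition: SDA falls while SCL is high. */
static void i2c_start(const I2CPins *p) {
    p->set_sda(1); p->set_scl(1);
    p->set_sda(0); p->set_scl(0);
}

/* One data bit: present SDA, then pulse SCL. */
static void i2c_write_bit(const I2CPins *p, int bit) {
    p->set_sda(bit);
    p->set_scl(1);
    p->set_scl(0);
}

/* One byte, MSB first, then release SDA and clock once more so the
   slave can drive its ACK (which this sketch does not read back). */
static void i2c_write_byte(const I2CPins *p, uint8_t byte) {
    for (int i = 7; i >= 0; i--)
        i2c_write_bit(p, (byte >> i) & 1);
    p->set_sda(1);
    p->set_scl(1);
    p->set_scl(0);
}

/* STOP condition: SDA rises while SCL is high. */
static void i2c_stop(const I2CPins *p) {
    p->set_sda(0);
    p->set_scl(1);
    p->set_sda(1);
}

/* --- Off-robot simulation: sample SDA on every SCL rising edge,
   like a logic analyzer, so the protocol can be checked on a PC. --- */
static int sim_sda, sim_bits[64], sim_len;
static void sim_set_sda(int v) { sim_sda = v; }
static void sim_set_scl(int v) { if (v) sim_bits[sim_len++] = sim_sda; }
```

On real hardware the callbacks would also need to respect I2C's open-drain rule (drive low, release high) and insert delays to meet the slave's clock-rate limits.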

As you can see in the right picture, I was already contemplating controlling the omniwheeled robot with the Motor MUX and so I did.

The robot is remote controlled via VEXnet over WiFi (a totally awesome feature that I wish the NXT had). The short video was taken at the RobotMC meeting of 19 March 2011, which happened to coincide with an information day for the technical university where we hold our meetings.

The coolest part is that my driver suite ports almost transparently to the VEX Cortex platform once the NXT I2C subsystem functions are switched out for Cortex-specific ones. Some NXT dependencies still need to be removed and made more generic; I intend to work on that in the next few weeks. That would make a very wide range of new sensors available to the VEX Cortex platform.

Original article: [LINK]

Written by Xander Soldaat

March 19th, 2011 at 12:59 pm

NXT Robot: PID Line Follower

with one comment

DiMastero is at it again…

This time he has created a robot that does some very fast line following.

Watch it in action

YouTube Direct Link 


This line follower is equipped with three sensors: a light sensor (port 3), a magnetic field sensor (port 2), and an IR Link, the last two of which are made by HiTechnic. The light sensor serves the robot’s main purpose, line following, while the magnetic field sensor detects whether the robot needs to pause or keep going. The IR Link has no function; it’s just there to keep the whole thing symmetrical.

rear view

Zoom on the magnet and magnetic field sensor

side view

The robot moves on two independently driven motors connected to ports B and C; they form the follower’s back and sides. At the front, next to the light sensor, is a caster wheel.

Detail of caster and light sensor


The robot was programmed in ROBOTC and runs on PID control; the motors’ built-in PID is turned off. When the code starts, it takes the black (line) and white (background) light values and averages them to get an offset. It then sets the bias of the HiTechnic magnetic field sensor to 0 while the magnet is in front of it. That way, the sensor’s value will change when the magnet is (re)moved.
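The calibration and control step described above can be sketched in a few lines of C. This is a generic PID step, not DiMastero's actual code, and the gains are placeholder values; the returned correction would typically be added to one motor's power and subtracted from the other's.

```c
/* Placeholder gains, not DiMastero's actual tuning. */
#define KP 2.0f
#define KI 0.01f
#define KD 8.0f

typedef struct { float integral, prevError; } PIDState;

/* Calibration: the offset is the midpoint between the black (line)
   and white (background) readings. */
float calc_offset(float blackValue, float whiteValue) {
    return (blackValue + whiteValue) / 2.0f;
}

/* One PID update: error is the light reading minus the offset. The
   result steers the robot back toward the edge of the line. */
float pid_step(PIDState *s, float lightValue, float offset) {
    float error      = lightValue - offset;
    s->integral     += error;                 /* accumulated error   */
    float derivative = error - s->prevError;  /* rate of change      */
    s->prevError     = error;
    return KP * error + KI * s->integral + KD * derivative;
}
```

In the loop, the correction is applied as `motorB = base + u; motorC = base - u;` (or the reverse, depending on which side of the line is being followed).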

Magnet is out of sensor's reach

Next, after a short wait, the robot starts driving around the NXT test paper, guided by the PID controller. It keeps doing so until the magnet is removed, at which point it pauses the program and turns off the motors. Once the magnet is back in place, the robot continues, even if it has been moved somewhere else on the line. If the magnet stays away for too long (more than four seconds), the program shuts down.
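That pause-resume-abort behaviour is a small state machine. The sketch below is one plausible way to structure it, not the code from the downloads page; `magnetPresent` would come from comparing the magnetic field reading against the zeroed bias, and `nowMs` from the NXT's millisecond timer.

```c
typedef enum { RUNNING, PAUSED, SHUTDOWN } FollowerState;

#define ABORT_MS 4000L  /* magnet absent this long -> shut down */

/* One update of the magnet watchdog. pauseStartMs remembers when the
   magnet disappeared so the four-second abort can be timed. */
FollowerState magnet_update(FollowerState state, int magnetPresent,
                            long nowMs, long *pauseStartMs)
{
    switch (state) {
    case RUNNING:
        if (!magnetPresent) { *pauseStartMs = nowMs; return PAUSED; }
        return RUNNING;
    case PAUSED:
        if (magnetPresent) return RUNNING;  /* magnet back: resume */
        if (nowMs - *pauseStartMs > ABORT_MS) return SHUTDOWN;
        return PAUSED;
    default:
        return SHUTDOWN;
    }
}
```

The main loop would run the PID only in the `RUNNING` state, stop the motors in `PAUSED`, and exit the program on `SHUTDOWN`.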

You can download the latest version of the code on the downloads page or download version 2.1 (which was the latest version when this page was last updated) below.

Setup and Performance

To start the robot, place it above the middle of the black line you want to follow right after starting the program on the NXT. Then, when it bleeps, move the robot to the left of the line. Make sure you neither touch nor move the magnet.

After a split second, the robot will start to follow the line. To pause it, lift the “tail”, moving the magnet away. To get it back on the line, let go of the tail. To abort the entire program, hold the tail up for four seconds, or until you see the light sensor’s LED turn off (the sensor is in active mode, so its LED is on the whole time the program is running).

The robot follows the black line pretty quickly and smoothly.

Written by Vu Nguyen

March 18th, 2011 at 12:03 pm

Posted in Cool projects,NXT

Dancing VEX robot: Bear Bot [Team 4542]

with 2 comments

Thanks to magiccode from the forums for posting this!


“Our robotics team made a semi-humanoid dancing VEX robot with a holonomic drive in place of legs. It has full range of motion in both arms and two planes of motion in its head. It can bend at the waist, and strafe or turn in any direction.

It dances, plays the piano, and beats little kids up. He is an all-around entertainer. We only had about a day to program him, so bear with us… pun intended”


YouTube Direct Link 


Three “cool” things were done with ROBOTC:

  1. The robot mimics the movements of a human arm holding the VEXnet joystick or VEX accelerometer. This will not work in all directions if the joystick is being used, because the joystick lacks a z-axis, but it will work in all directions if the accelerometer is being used.
  2. There were too many motors to be controlled by one Cortex, so we linked two together by running a male-to-male PWM wire from the digital output port of one to the digital input port of the other.
  3. Programming was made easier by writing a function called moveServo(). The function accepts three parameters: the servo to move, the position it should move to, and the amount of time the move should take (it does not take changes in battery power into account).


moveServo(tMotor servoName, int posToMove, int timeToTake);
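The team didn't post the body of moveServo(), but a timed move like this is typically implemented by stepping the setpoint linearly from the current position to the target. The C sketch below uses hypothetical `setServo`/`getServo`/`delayMs` primitives in place of the ROBOTC servo API; to keep it runnable off-robot, the primitives just update an array and the delay is a no-op.

```c
#define STEP_MS 20           /* update interval for the interpolation */

static int servoPos[10];     /* simulated servo setpoints, one per port */

/* Hypothetical primitives standing in for the ROBOTC servo calls. */
static void setServo(int servo, int pos) { servoPos[servo] = pos; }
static int  getServo(int servo)          { return servoPos[servo]; }
static void delayMs(int ms)              { (void)ms; /* no-op off-robot */ }

/* Move `servo` to `posToMove` over roughly `timeToTake` ms by stepping
   the setpoint linearly. Open loop: battery level is not compensated,
   matching the caveat in the post. */
void moveServo(int servo, int posToMove, int timeToTake) {
    int start = getServo(servo);
    int steps = timeToTake / STEP_MS;
    for (int i = 1; i <= steps; i++) {
        setServo(servo, start + (posToMove - start) * i / steps);
        delayMs(STEP_MS);
    }
    setServo(servo, posToMove);   /* land exactly on the target */
}
```

Because the interpolation is purely time-based, a heavier load or a low battery just means the servo lags the moving setpoint; the function still finishes after `timeToTake` has elapsed.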

How it works

YouTube Direct Link 

Written by Vu Nguyen

March 1st, 2011 at 10:18 am