ROBOTC.net Blog  

ROBOTC News

Archive for the ‘Cool projects’ Category

LEGO Street View Car v2.0

with one comment

Thanks to Mark over at www.mastincrosbie.com for creating this incredible project and providing the information. Also thanks to Xander for providing the community with ROBOTC drivers for the sensors mentioned below.

You might remember the original Lego Street View Car I built in April. It was very popular at the Google Zeitgeist event earlier this year.

I wanted to re-build the car to only use the Lego Mindstorms NXT motors. I was also keen to make it look more… car-like. The result, after 4 months of experimentation, is version 2.0 of the Lego Street View Car.

As you can see this version of the car is styled to look realistic. I also decided to use my iPhone to capture images on the car. With iOS 5 the iPhone will upload any photos to PhotoStream so I can access them directly in iPhoto.

The car uses the Dexter Industries dGPS sensor to record the current GPS coordinates.

The KML file that records the path taken by the car is transmitted using the Dexter Industries Wifi sensor once the car is within wireless network range.

Design details

The LEGO Street View Car is controlled manually using a second NXT acting as a Bluetooth remote. The remote control allows me to control the drive speed and steering of the car. I can also brake the car to stop it from colliding with obstacles. Finally, pressing a button on the remote triggers the car to capture an image.

Every time an image is captured the current latitude and longitude are recorded from the dGPS. The NXT creates a KML format file in the flash filesystem which is then uploaded from the NXT to a PC. Opening the KML file in Google Earth shows the path that the car drove, and also has placemarks for every picture you took along the way. Click on the placemark to see the picture.

For each GPS coordinate I create a KML Placemark entry that embeds descriptive HTML code using the CDATA tag. The image link in the HTML refers to the last image captured on disk.
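
To give an idea of what the file-writing side can look like, here is a minimal sketch of writing one Placemark using ROBOTC's NXT file-access calls (OpenWrite, WriteText, Close). This is not Mark's actual code: the file name, the reserved file size, and the way the coordinates arrive as pre-formatted strings are assumptions for illustration.

// Minimal sketch (not the original LSVC code): write one Placemark to a KML
// file in the NXT flash filesystem. File name, file size and the pre-formatted
// string arguments are assumptions. ROBOTC strings on the NXT are short
// (roughly 20 characters), so the XML is written out in small pieces.
void writePlacemark(const string imageFile, const string lon, const string lat)
{
  TFileHandle   hFile;
  TFileIOResult ioResult;
  string        fileName = "lsvc.kml";   // assumed file name
  word          fileSize = 8000;         // space reserved for the file in flash

  OpenWrite(hFile, ioResult, fileName, fileSize);

  WriteText(hFile, ioResult, "<Placemark>");
  WriteText(hFile, ioResult, "<description>");
  WriteText(hFile, ioResult, "<![CDATA[");
  WriteText(hFile, ioResult, "<img src='Images/");
  WriteText(hFile, ioResult, imageFile);
  WriteText(hFile, ioResult, "'>]]>");
  WriteText(hFile, ioResult, "</description>");
  WriteText(hFile, ioResult, "<Point>");
  WriteText(hFile, ioResult, "<coordinates>");
  WriteText(hFile, ioResult, lon);
  WriteText(hFile, ioResult, ", ");
  WriteText(hFile, ioResult, lat);
  WriteText(hFile, ioResult, ", 0");
  WriteText(hFile, ioResult, "</coordinates>");
  WriteText(hFile, ioResult, "</Point>");
  WriteText(hFile, ioResult, "</Placemark>");

  Close(hFile, ioResult);
}

task main()
{
  // example call using the coordinates from the snippet further down;
  // the real program keeps the file open and appends one entry per snapshot
  writePlacemark("IMG_1.jpg", "-6.185952", "53.446190");
}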

The images are captured by triggering the camera on my iPhone. I use an app called SoundSnap which triggers the camera when a loud sound is heard by the phone. By placing the iPhone over the NXT speaker I can trigger the iPhone camera by playing a loud tone on the NXT. While this is not ideal (Bluetooth would be better) it does the job for now.
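
The trigger itself is just a tone; something along the lines of the sketch below would do it (the frequency, duration, and settle delay here are guesses, not the values Mark used).

// Hypothetical camera trigger: play a loud tone for SoundSnap to hear.
// PlayTone's duration is in 10 ms ticks, so 50 means half a second.
void triggerCamera()
{
  PlayTone(880, 50);    // assumed frequency and duration
  wait1Msec(1500);      // assumed delay to let the iPhone take and save the photo
}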

To get the photos from the iPhone I use the PhotoStream feature in iOS 5. I select the pictures in iPhoto and export them to my laptop. The iPhone will only upload photos when I am in range of a wireless network.

Finally the Dexter Industries Wifi sensor is used to wirelessly transmit the KML file to my laptop over the wireless network.


<Placemark>
  <name>LSVC Snapshot 1</name>
  <description><![CDATA[<img src='Images/IMG_1.jpg' width=640 height=480> ]]></description>
  <Point>
    <coordinates>-6.185952, 53.446190, 0</coordinates>
  </Point>
</Placemark>

<Placemark>
  <name>LSVC Snapshot 2</name>
  <description><![CDATA[<img src='Images/IMG_2.jpg' width=640 height=480> ]]></description>
  <Point>
    <coordinates>-6.185952, 53.446190, 0</coordinates>
  </Point>
</Placemark>

The snippet from the KML file gives you an idea of what each placemark should look like.

Once the car has finished driving, press the orange button on the NXT to save the KML file. This writes a <LineString> entry which records the actual path of the car. A path string is simply a list of coordinates that define a path in Google Earth along the Earth’s surface. For example:


<Placemark>
  <name>LSVC Path</name>
  <description>LSVC Path</description>
  <styleUrl>#yellowLineGreenPoly</styleUrl>
  <LineString>
    <extrude>10</extrude>
    <tessellate>10</tessellate>
    <altitudeMode>clampToGround</altitudeMode>
    <coordinates>
      -6.185952, 53.446190, 0
      -6.185952, 53.446180, 0
    </coordinates>
  </LineString>
</Placemark>

This defines a path between two coordinates not far from where I live.

From the NXT to Google Earth

How do we get the pictures and KML file from the NXT and into Google Earth? First of all we need to get all the data in one place. The KML file refers to the relative path of each image, so we can package the KML file and the images into a single directory.

An example of the output produced is shown below. In this test case I started indoors in my house and took a few pictures. As you can see the dGPS has trouble getting an accurate reading and so the pictures appear to be scattered around the map. I then drove the car outside and started to capture pictures as I drove. From Snapshot 10 onwards the images become more realistic based on where the car actually is.

Video

I shot some video of the car driving outside my house. It was a windy dull day, so the video is a little dark. The fun part is seeing the view from on-board the car!

More videos are coming soon…

Photos


Written by Vu Nguyen

November 14th, 2011 at 1:11 pm

Lego George the Giant Robot

with 3 comments

[Thank you burf2000 from our forums for contributing this project!]

LEGO George the Giant Robot

I present to you…

LEGO George the Giant Robot!

He moves, he dances, he can grab things… What CAN’T HE DO!?

This latest creation from burf2000 is a fully functional robot standing 5 feet 7 inches tall.

He is controlled via a PlayStation 2 controller: he can move about, rotate his upper body, move his arms and shoulders, and grab onto items. His head also rotates and moves up and down, and if you get too close, his eyes will rotate.

Video of LEGO George:

I asked burf2000 some questions about his robot:

What inspired you to build this robot?

“I have always loved robotics and so Lego for me was a medium to build it in, I built another large robot last year but was not so successful. That one was based off T1 from Terminator 3. I wanted to keep things simple on this one due to size. It weights around 20KG. I also loved the Short-circuit films (johnny 5).”

How long did it take to make?

“This one took around 3 months of the odd evenings and days, We just had a baby (my wife) so getting time has been quite hard. However my wife is very supportive and knew I needed to build this for a show. (http://www.greatwesternlegoshow.com/).”

What are your future plans with the robot?

“Glad you asked this, currently I am improving certain parts of this which I am not happy with like shoulder joints, main bearing and turning. Once they are done, I am going to build a second robot to keep him company. Its going to be another large one, using more NXT’s and hopefully will go round on his own. My aim is to get a whole display of large robots moving around and interacting with each other.”

I thank you, burf2000, for submitting LEGO George. We can’t wait to see his successor!

More Photos

LEGO George's neck (close-up)

The whole photo set can be found on burf2000's Flickr page

Written by Vu Nguyen

October 6th, 2011 at 1:04 pm

Posted in Cool projects,NXT

ROBOTC Advanced Training

without comments

The ROBOTC curriculum covers quite a bit of material ranging from basic movement to automatic thresholds and advanced remote control. This is plenty of material for the average robotics class. However, it is not enough for some ambitious teachers and students who have mastered the basics. For those individuals who strive to learn the ins and outs of ROBOTC, we offered a pilot course called “ROBOTC Advanced Training” in late July.

The focus of the class is on advanced programming concepts with ROBOTC. Trainees learn to make use of the NXT’s processing power and third-party sensors which expand its capabilities. The class began with a review of the basic ROBOTC curriculum. It then moved into arrays, multi-tasking, custom user interfaces using the NXT LCD screen and buttons, and file input/output. The class worked together to write a custom I²C sensor driver for the Mindsensors Acceleration Sensor.

The capstone project for the course involves autonomous navigation on a grid world. The program allows the NXT to find the most efficient path to its goal while avoiding obstacles. The class learned the concept of a “wavefront algorithm”, which enabled autonomous path planning in a world delineated by a grid field. The algorithm assumes that the robot will only use three movements: forward for one block, right turn and left turn. Based on these assumptions, each grid block has four neighbors. They are north, south, east and west of the current block.

The grid world (for our project it was a 10×5 grid) is represented in ROBOTC by a 2-Dimensional array of integers. Integer representations are as follows: robot = 99, goal = 2, obstacle = 1, empty space = 0. The wavefront begins at the goal and propagates outwards until all positions have a value other than zero. Each empty space neighbor of the goal is assigned a value of 3. Each empty space neighbor of the 3’s is assigned a value of 4. This pattern continues until there are no more empty spaces on the map. The robot then follows the most efficient path by moving to its neighbor with the lowest value until it reaches the goal.

It is very exciting to see autonomous path planning implemented in ROBOTC because this is similar to the way full scale autonomous vehicles work. Check out the video of the path planning in action and the full ROBOTC code below. Our future plans are to incorporate these lessons into a new curriculum including multi-robot communications. If this seems like the type of project you would like to bring to your classroom, check back throughout the year for updates and also in the spring for availability for next summer’s ROBOTC Advanced Class.

Written by: Steve Comer


YouTube Direct Link 

Code for the first run of the program seen in the video:

Note that the only difference in the code for the second program is another obstacle in the 2D integer array.

//GLOBAL VARIABLES grid world dimensions
const int x_size = 10;
const int y_size = 5;

//GLOBAL ARRAY representation of grid world using a 2-Dimensional array
//0  = open space
//1  = barrier
//2  = goal
//99 = robot
int map[x_size][y_size] =
 {{0,0,0,0,0},
  {0,1,99,1,0},
  {0,1,1,1,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0},
  {0,0,2,0,0},
  {0,0,0,0,0},
  {0,0,0,0,0}};

//FUNCTION move forward for a variable number of grid blocks
void moveForward(int blocks)
{
  //convert number of blocks to encoder counts
  //wheel circumference = 17.6 cm
  //one block = 23.7 cm
  int countsToTravel = (23.7/17.6)*(360)*blocks;

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION left point turn 90 degrees
void turnLeft90()
{
  //distance one wheel must travel for 90 degree point turn = 10.68 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = 50;
  motor[motorC] = -50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION right point turn 90 degrees
void turnRight90()
{
  //distance one wheel must travel for 90 degree point turn = 10.68 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;
  motor[motorB] = -50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle) {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION print wavefront map to NXT screen
void PrintWavefrontMap()
{
  int printLine = y_size-1;
  for(int y = 0; y < y_size; y++)
  {
    string printRow = "";
    for(int x=0; x < x_size; x++)
    {
      if(map[x][y] == 99)
        printRow = printRow + "R ";
      else if(map[x][y] == 2)
        printRow = printRow + "G ";
      else if(map[x][y] == 1)
        printRow = printRow + "X ";
      else if(map[x][y] < 10)
        printRow = printRow + map[x][y] + " ";
      else if(map[x][y] == '*')
        printRow = printRow + "* ";
      else
        printRow = printRow + map[x][y];
    }
    nxtDisplayString(printLine, printRow);
    printLine--;
  }
}

//FUNCTION wavefront algorithm to find most efficient path to goal
void WavefrontSearch()
{
  int goal_x, goal_y;
  bool foundWave = true;
  int currentWave = 2; //Looking for goal first

  while(foundWave == true)
  {
    foundWave = false;
    for(int y=0; y < y_size; y++)
    {
      for(int x=0; x < x_size; x++)
      {
        if(map[x][y] == currentWave)
        {
          foundWave = true;
          goal_x = x;
          goal_y = y;

          if(goal_x > 0) //This code checks the array bounds heading WEST
            if(map[goal_x-1][goal_y] == 0)  //This code checks the WEST direction
              map[goal_x-1][goal_y] = currentWave + 1;

          if(goal_x < (x_size - 1)) //This code checks the array bounds heading EAST
            if(map[goal_x+1][goal_y] == 0)//This code checks the EAST direction
              map[goal_x+1][goal_y] = currentWave + 1;

          if(goal_y > 0)//This code checks the array bounds heading SOUTH
            if(map[goal_x][goal_y-1] == 0) //This code checks the SOUTH direction
              map[goal_x][goal_y-1] = currentWave + 1;

          if(goal_y < (y_size - 1))//This code checks the array bounds heading NORTH
            if(map[goal_x][goal_y+1] == 0) //This code checks the NORTH direction
              map[goal_x][goal_y+1] = currentWave + 1;
        }
      }
    }
    currentWave++;
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

//FUNCTION follow most efficient path to goal
//and update screen map as robot moves
void NavigateToGoal()
{
  //Store our Robots Current Position
  int robot_x, robot_y;

  //First - Find Goal and Target Locations
  for(int x=0; x < x_size; x++)
  {
    for(int y=0; y < y_size; y++)
    {
      if(map[x][y] == 99)
      {
        robot_x = x;
        robot_y = y;
      }
    }
  }

  //Found Goal and Target, start deciding our next path
  int current_x = robot_x;
  int current_y = robot_y;
  int current_facing = 0;
  int next_Direction = 0;
  int current_low = 99;

  while(current_low > 2)
  {
    current_low = 99; //Every time, reset to highest number (robot)
    next_Direction = current_facing;
    int Next_X = 0;
    int Next_Y = 0;

    //Check Array Bounds West
    if(current_x > 0)
    {
      if(map[current_x-1][current_y] < current_low && map[current_x-1][current_y] != 1) //Is current space occupied?
      {
        current_low = map[current_x-1][current_y];  //Set next number
        next_Direction = 3; //Set Next Direction as West
        Next_X = current_x-1;
        Next_Y = current_y;
      }
    }

    //Check Array Bounds East
    if(current_x < (x_size - 1))
    {
      if(map[current_x+1][current_y] < current_low && map[current_x+1][current_y] != 1) //Is current space occupied?
      {
        current_low = map[current_x+1][current_y];  //Set next number
        next_Direction = 1; //Set Next Direction as East
        Next_X = current_x+1;
        Next_Y = current_y;
      }
    }

    //Check Array Bounds South
    if(current_y > 0)
    {
      if(map[current_x][current_y-1] < current_low && map[current_x][current_y-1] != 1)
      {
        current_low = map[current_x][current_y-1];  //Set next number
        next_Direction = 2; //Set Next Direction as South
        Next_X = current_x;
        Next_Y = current_y-1;
      }
    }

    //Check Array Bounds North
    if(current_y < (y_size - 1))
    {
      if(map[current_x][current_y+1] < current_low && map[current_x][current_y+1] != 1) //Is current space occupied?
      {
        current_low = map[current_x][current_y+1];  //Set next number
        next_Direction = 0; //Set Next Direction as North
        Next_X = current_x;
        Next_Y = current_y+1;
      }
    }

    //Okay - We know the number we're heading for, the direction and the coordinates.
    current_x = Next_X;
    current_y = Next_Y;
    map[current_x][current_y] = '*';

    //Track the robot's heading
    while(current_facing != next_Direction)
    {
      if (current_facing > next_Direction)
      {
        turnLeft90();
        current_facing--;
      }
      else if(current_facing < next_Direction)
      {
        turnRight90();
        current_facing++;
      }
    }
    moveForward(1);
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

task main()
{
  WavefrontSearch();	//Build map of route with wavefront algorithm
  NavigateToGoal();	//Follow most efficient path to goal
  wait1Msec(5000);	//Leave time to view the LCD screen
}

Written by Vu Nguyen

August 8th, 2011 at 9:22 am

Eric’s “Project Scout”

with 6 comments

[Thanks to ericsmalls for posting this project!]

The concept

The robots are ready

Project Scout is a project that Eric has been working on for months. Originally, he wanted to combine obstacle avoidance with multi-robot communication.

The goal of Project Scout is to have one “scout” robot, outfitted with sensors, find its way out of a maze, and then tell a second, “blind” robot, not outfitted with sensors, how to solve the maze. The end result would be two robots  finding their way out of a maze by communicating and working together.

The result

Here is the video of a successful run with two robots:

Proof of Concept

Project Scout hit several milestones along the way. Here’s one of the first videos of the project. Robot1 (on the left) chooses a random number greater than 720 encoder clicks and sets that number as its encoder target. Robot1 then drives forward for that many encoder clicks and, upon completion, sends its recorded encoder value to Robot2 (on the right). Finally, just as Robot1 did, Robot2 travels forward for the same number of encoder clicks sent to it by Robot1. Thus both robots travel the same distance, which proves that robot-to-robot communication, as well as coordinated forward movement, is possible.
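
To make the idea concrete, here is a rough sketch of that proof of concept using ROBOTC's built-in NXT Bluetooth mailbox calls (sendMessage, message, ClearMessage). It is not Eric's code: the motor ports and speeds are assumptions, and the Bluetooth link between the two bricks is assumed to be already established.

// ROBOT 1 (sender) - sketch only, not Eric's actual program
task main()
{
  int target = 720 + random[720];     // random distance greater than 720 counts

  nMotorEncoder[motorB] = 0;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorEncoder[motorB] < target) {}
  motor[motorB] = 0;
  motor[motorC] = 0;

  sendMessage(nMotorEncoder[motorB]); // tell Robot2 how far we actually went
}

And the matching receiver, running on Robot2:

// ROBOT 2 (receiver) - sketch only
task main()
{
  while(message == 0)                 // wait for Robot1's message to arrive
    wait1Msec(10);
  int target = message;               // encoder counts sent by Robot1
  ClearMessage();

  nMotorEncoder[motorB] = 0;
  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorEncoder[motorB] < target) {}
  motor[motorB] = 0;
  motor[motorC] = 0;
}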

Continuing on…

Eric says “But there’s still some work to be done. I am currently working on transferring the communication in the code to utilize ROBOTC’s new multi-robot library and Dexter Industries’ NXTBee radios, which will allow a lot more capabilities and add a lot of versatility to Project Scout. In the future, I plan on adding an additional robot so I can have 3 robots solve the maze!”

Great project and keep up the great work!

Click here to visit the Project Scout page

Written by Vu Nguyen

August 1st, 2011 at 12:43 pm

Posted in Cool projects,NXT

ROBOTC Multi-Robot Communication

with 8 comments

We all know that the LEGO MINDSTORMS NXT and ROBOTC are a powerful combination. Together they are able to perform advanced tasks such as PID auto-straightening, line tracking, and even thermal imaging. Imagine what would be possible if multiple NXTs could work together! Two heads are better than one, right?

Multi-robot communication is possible and it has already been implemented using ROBOTC. During a recent ROBOTC training session, the final day and a half focused on learning how to make use of the XBee wireless radio for communication between multiple robots.

The NXT is able to send and receive messages over a wireless network in the form of string-type data. There are a few simple commands added to ROBOTC with the “XBeeTools.h” header file. The commands are quite user friendly even though multi-robot communication is typically a graduate level concept.

Multi-robot communication is an advanced topic that users can explore after mastering a single robot. It is important to understand how to program a single robot. However, the future of robotics centers on robots working in teams to accomplish complex tasks. Areas of exploration include team based sports such as soccer and putting autonomous vehicles on our roads.

Check out the video of the challenge given in ROBOTC training, where six NXT robots cooperate to surround a single robot which broadcasts its position to the rest of the group.

Written by Steve Comer

July 8th, 2011 at 2:19 pm

Bionic NXTPod 3.0 by DiMastero

with one comment

[Thanks to DiMastero for submitting this project!]

Introduction

Festo, founded in 1925, is a German engineering-driven company based in Esslingen am Neckar. Festo sells both pneumatic and electric actuators, and provides everything from individual assembly-line solutions to full automation solutions using Festo and third-party components. It also has a kind of R&D department, the Bionic Learning Network, where they’ve created some amazing projects including SmartBird (“bird flight deciphered”), AquaJelly, Robotino XT and much more. [source]

They also created the Bionic Tripod 3.0, an arm-like robot based on four flexible rods actuated from below. By moving the actuators to different positions, the rods bend and move the adaptive gripper to any position quickly and energy efficiently.

“Festo – Bionic Tripod 3.0” demonstration video

The tripod has been partially replicated before, but I’ve found no evidence of it being done entirely with Lego Mindstorms. Cue the Bionic NXTPod 3.0.

The Bionic NXTPod 3.0

Hardware

  • 2 Lego Mindstorms NXT intelligent bricks – one 1.0 and one 2.0
  • 5 Lego Mindstorms NXT motors
  • 4 Lego Mindstorms NXT touch sensors
  • 1 Lego Mindstorms NXT 1.0 light sensor
  • 1 Lego Power Functions (PF) LED light
  • 1 Lego pneumatic actuator, switch and pump

The robot itself consists of these parts:

  • 4 actuators
  • 4 flexible rods
  • the pneumatic grabber
  • the main structure
  • PF LEDs and a light sensor for communication

Mechanically, the NXTPod’s most important parts are the four actuators. Each is made up of a single NXT servo motor, which spins a worm wheel along a four-part gear rack, moving a sledge up or down a 14-stud axle. It can move up to 19 rotations up or down, in about 11 seconds at the default speed of 75%.

Initial design of one of the four linear actuators (it has since been improved)

The last motor serves a double function in order to perform a single task: it moves the pneumatic switch and drives the pump, opening or closing the gripper.

The gripper and its motor (final design)

Each NXT takes care of two of the actuators, which are color coded to make programming easier – the master controls the red and blue motors, while the slave takes care of the black and beige ones. The slave also controls the pneumatic gripper at the tripod’s top.

The four color-coded actuator motors (red, blue, black, beige)

To connect the NXTs, the master has an NXT-PF cable connected to motor port B to control the LEDs in front of the slave’s light sensor. The problem with this setup is that the master can’t get any feedback from the slave. Therefore, it has to account for the time the slave takes to perform certain actions in order to avoid overlapping commands.

Programming

Generally, the NXTs are set up in a master-slave configuration, where the master sends commands to the slave using LEDs and a light sensor and then waits for the slave to finish to send a new task. This is how it works:

  1. The slave is started up by the user
  2. The master is started up by the user, and turns the LEDs on for a tenth of a second
  3. Once the light is turned off again, both the master and the slave calibrate their motors by moving the actuators down until the touch sensors are pressed
  4. Once it’s calibrated, the slave waits for the LEDs to turn on again, so it knows a command is coming
  5. The master calibrates and waits out the remainder of the eleven seconds to make sure the slave has calibrated as well, so it doesn’t send any commands before the slave is ready
  6. The master reads the block of code containing the positions for all four actuators and the gripper, and converts this into 12 binary bytes
  7. The LEDs are turned on, and after a short wait, the master turns the lights on and off ten times a second, taking a total of 1300 mSec (1.3 seconds) per full message (a sketch of this light-signal scheme follows the list)
  8. When the slave receives the bytes, it decodes them
  9. Both of the NXTs start simultaneously, and go to their positions.
  10. At the same time, the master calculates how many degrees the slave has to turn, and converts it into the approximate waiting time to, again, avoid overlapping commands
  11. Steps 6-10 repeat until the master has run through all of the blocks of code, after which it shuts down. The slave has to be turned off manually; the slave must be restarted every time the master finishes, or it will interpret the calibration command incorrectly.
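
Purely to illustrate the light-signal idea referenced in step 7, here is a bit-banged sketch in ROBOTC. This is not DiMastero's code: the bit count, the light-sensor threshold, and the timing are assumptions; only the pattern of driving the PF LEDs from a motor port and polling a light sensor matches the description above.

// Sketch of the one-way light link. The master powers the PF LEDs through
// motor port B; the slave watches its light sensor. Threshold, timing and the
// number of bits are illustrative assumptions.
#pragma config(Sensor, S1, lightIn, sensorLightInactive)

// MASTER side: one start flash, then one bit every 100 ms (ten per second)
void sendBits(int value, int nBits)
{
  motor[motorB] = 100;                 // start flash: a message is coming
  wait1Msec(100);
  motor[motorB] = 0;
  wait1Msec(100);

  for(int i = nBits - 1; i >= 0; i--)
  {
    if((value >> i) & 1)
      motor[motorB] = 100;             // LED on  = 1
    else
      motor[motorB] = 0;               // LED off = 0
    wait1Msec(100);
  }
  motor[motorB] = 0;
}

// SLAVE side: wait for the start flash, then sample mid-way through each bit
int receiveBits(int nBits)
{
  const int threshold = 45;            // assumed light-sensor threshold

  while(SensorValue[lightIn] < threshold) {}   // wait for the start flash
  wait1Msec(250);                      // land in the middle of bit 0

  int value = 0;
  for(int i = 0; i < nBits; i++)
  {
    value = value << 1;
    if(SensorValue[lightIn] > threshold)
      value = value | 1;
    wait1Msec(100);
  }
  return value;
}

task main()
{
  sendBits(53, 8);                     // master example: send an 8-bit command
}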

The robot is very easy to program; the user only has to provide a few lines of code per tripod position:

motor_blue_rotations = 1;
motor_red_rotations = 1;
motor_black_rotations = 10;
motor_beige_rotations = 10;
gripper_operation = 0;
wait_for_completion();
wait10mSec(100);

The program does the rest of the work, also making sure the robot doesn’t ever overrun anything, and calibrates as much as possible. You can download the RobotC code at my downloads page, over here. Below is a short demonstration video:

“Bionic NXTPod 3.0” demonstration video

Completion date: 2011/06/12

Last updated: 2011/06/12

Disclaimer: This site is neither owned nor endorsed by Festo Group. The Bionic Learning Network, SmartBird, AquaJelly and Robotino are all copyrighted by Festo. The Bionic Tripod 3.0, on which this project is based, is also copyrighted by Festo.

Written by Vu Nguyen

June 17th, 2011 at 10:00 am

Posted in Cool projects,NXT

Bring on the Heat: Thermal Imaging with the NXT

with 2 comments

Lookin' Hot!

I built a pan and tilt rig for the Dexter Industries Thermal IR Sensor with a great deal of gearing down to allow me to take a lot of measurements as the rig moved around. Initially I had it set for about 40×40 measurements but those didn’t look that great and I wanted a bit more. I reprogrammed it and made it spit out data at a resolution of about 90×80.

The data from the thermal sensor was streamed to the debug output console in ROBOTC, from which I copied and pasted it into an Excel worksheet.  I made some 3D graphs from the thermal data and it looked pretty cool.

Excel graph for cold glass Excel graph for candle flame

The left one is a cold glass and the right one is a candle.  I wasn’t really happy with the results of the graphs so I decided to quickly whip up a .Net app to read my CSV data and make some more traditional thermal images.  A few hours later, the results really did look very cool.

Thermal image for cold glass Thermal image for candle flame

Again, the left one is the cold glass and the right one is the candle.  Now that you have a thermal image, you can see the heat from the candle a lot more clearly. I made a quick video of the whole rig so you can get an idea.

A few days after the initial post about my thermal imaging system using the Thermal Infrared Sensor, I made some improvements with both the speed and accuracy of the whole thing. I made the sensor sampling interval time based, rather than encoder value based. This proved to be a lot better at getting consistent sampling rates. I also doubled the horizontal motor speed so I would be more likely to be still awake by the time it was done taking an image.
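
The switch from encoder-based to time-based sampling boils down to a timer loop like the sketch below. The 50 ms interval, the sweep range, and the readThermalC() stub are placeholders (the real program reads the Thermal Infrared Sensor through Xander's driver); only the timing pattern is the point.

// Sketch of time-based sampling: take a reading at a fixed interval while the
// pan motor sweeps, instead of sampling at fixed encoder values.
float readThermalC()
{
  return 0.0;   // placeholder for the actual Thermal Infrared Sensor driver call
}

task main()
{
  const int sampleIntervalMs = 50;     // assumed sampling interval

  motor[motorA] = 20;                  // slow horizontal sweep (assumed speed)
  ClearTimer(T1);

  while(nMotorEncoder[motorA] < 3600)  // assumed sweep range of the pan axis
  {
    if(time1[T1] >= sampleIntervalMs)
    {
      ClearTimer(T1);
      // stream the reading to the debug console, to be copied into Excel later
      writeDebugStream("%.1f,", readThermalC());
    }
  }
  motor[motorA] = 0;
  writeDebugStreamLine("");            // end the row of samples
}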

The left image was made with the old system, the right one with the new system. It’s a lot less fuzzy and there are no black gaps where the number of samples in a row was fewer than the maximum.


Perhaps there are other ways to improve the program but I am quite happy with how this has turned out.

The driver and program will be part of the next Driver Suite version. You can download a preliminary driver and this program from here: [LINK].  The .Net program and CSV files can be downloaded here: [LINK]. You will need Visual Studio to compile it.  You can download a free (Express) version of C# from the Microsoft website.

Written by Xander Soldaat

June 16th, 2011 at 4:47 pm

Controlling the MINDS-i Lunar Rover with a VEX Cortex

with one comment

Article written by Steve Comer

Remote control cars are great for having fun. They can be driven off-road, taken off jumps, and raced among other things. VEX robots are great for learning. They can be used to teach programming, math, problem solving, and other engineering skills. What do you get if you put them together??

Well, I can tell you. You get a rugged 4WD truck that is still tons of fun to drive around outside, but can also be used as a teaching tool.

Follow this link for more photos: http://s1081.photobucket.com/albums/j353/comeste10/VEX%20Rover%20Extras/

I started off with a MINDS-i Lunar Rover kit which is driven by a 7.2V DC motor and steered with a standard hobby servo. I removed the solar panel from the Rover and in its place put a VEX Cortex microcontroller and an LCD screen. On each side, I attached a VEX flashlight from the VEXplorer kit and I mounted an ultrasonic sensor to the front. It just so happens that VEX bolts and nuts fit quite easily into the beams of the MINDS-i rover.

I did all the programming in RobotC. See bottom of the page to view my RobotC code.

In order to control the stock motor and servo with the Cortex, I had to make a few modifications. I soldered the two wires to a 2-pin header, which I then connected to the Cortex with a VEX motor controller.

For the servo, I used three single male-to-male jumper cables.

The video demonstrates the rover in autonomous mode where it makes use of the ultrasonic sensor to avoid bumping into walls. Remote control is also demonstrated using the VEXnet controller over Wi-Fi.

This is just a small sampling of the possibilities with this type of combination platform. Don’t let my initial direction limit you. It would be great to see some new combination robots. Get out there and start building!

This is my RobotC Code for the behaviors seen in the video.

AUTONOMOUS MODE:


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
/////////////////////////////////////////////////////////////////////////
//  Author  : Steven Comer
//  Program : Rover drives straight until near an object, it then slows,
//            stops, then backs up and turns.
//  Updated : 8 June 2011 @ 10:20 AM
/////////////////////////////////////////////////////////////////////////
task main()
{
  //pause at start and turn on headlights
  wait1Msec(2000);
  motor[R] = -127;
  motor[L] = -127;

  while(true)
  {
    clearLCDLine(0);
    displayNextLCDNumber(SensorValue(sonar), 3);

    //++++++++++++++++++++++++++++++CLEAR++++++++++++++++++++++++++++++++
    if( SensorValue(sonar) > 20 || SensorValue(sonar) == -1 )
    {
      motor[servo] = -2;
      motor[drive] = 50;
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //+++++++++++++++++++++++++++++APPROACH++++++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 20 && SensorValue(sonar) > 15 )
    {
      motor[drive] = SensorValue(sonar) + 25; //power decreases
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //+++++++++++++++++++++++++++STOP AND TURN+++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 15 )
    {
      //stop
      motor[drive] = 0;
      wait1Msec(500);

      //back and turn
      motor[servo] = random[50] + 60;   //random degree
      motor[drive] = -50;
      wait1Msec(1000);
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  }
}

REMOTE CONTROL MODE:


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
/////////////////////////////////////////////////////////////////////////
//  Author  : Steven Comer
//  Program : Remote control rover with VEXnet controller
//  Notes   : Throttle is Ch 2 (right joystick)
//            Steering is Ch 4 (left joystick)
//            Headlights are Ch 5U/5D and 6U/6D (on/off)
//  Updated : 10 June 2011 @ 12:30 PM
/////////////////////////////////////////////////////////////////////////
task main()
{
  while(true)
  {
    //right headlight
    if(vexRT[Btn6U] == 1)
      motor[R] = -127;
    if(vexRT[Btn6D] == 1)
      motor[R] = 0;

    //left headlight
    if(vexRT[Btn5U] == 1)
      motor[L] = -127;
    if(vexRT[Btn5D] == 1)
      motor[L] = 0;

    //driving
    motor[servo] = vexRT[Ch4];   //steering
    motor[drive] = vexRT[Ch2];   //throttle

    //LCD screen
    displayLCDCenteredString(0,"VEX");
    displayLCDCenteredString(1,"ROBOTICS");
  }
}

Written by Jesse Flot

June 9th, 2011 at 3:31 pm

Posted in Cool projects,VEX


Mars Rover NXT/VEX Robot with Rocker-Bogie Suspension

with one comment

[Submitted by Fealves78 from the ROBOTC forums]

Fealves78 submitted an incredible looking Mars Rover robot using (eek!) a combination of both NXT and VEX parts.

Both the robot and the joystick are controlled by NXT bricks. The robot also uses 6 regular motors, 6 servos, a VEX Camera, and a set of Vex Lights.

Here is a new video of the robot being demonstrated at the National Space Foundation. The robot can go over rocks too!

Rocker-Bogie Suspension

Rocker-Bogie Suspension (Source http://en.wikipedia.org/wiki/File:Rocker-bogie.jpg)

The rocker-bogie suspension is actually a popular setup and was used on the Mars rovers (hence the robot’s name!). It is still favored by NASA for its Mars robots.

The “rocker” part of the name comes from the rocking action of the larger links in the system. The two sides of the chassis are connected via a differential, which allows each “rocker” to move up and down independently of the other. Thus, this system allows the robot to drive over uneven terrain as well as rocks.

The word “Bogie” actually refers to the links that have a drive wheel at each end. Bogies were commonly used as load wheels in the tracks of army tanks as idlers distributing the load over the terrain. Bogies were also quite commonly used on the trailers of semi trailer trucks.

[See more at the Rocker-Bogie page at Wikipedia]

Here’s a video of the robot running over a grassy area:

What inspired you to build the robot?

I am a graduate Computer Science student and robotics is one of my interests. I teach robotics to kids in Colorado Springs through Trailblazer Elementary School. My students’ ages range from 6 to 11 years old, and this is the first year that they have been studying with me. We were inspired to build the Mars Rover robot by the Space Foundation (SF), which is located in Colorado Springs, CO. Through a grant with Boeing, the Space Foundation donated 2 NXT robotics kits to our school, and I myself provided the Vex kit for the students. The SF then challenged us to build a demo robot using some of the materials they had provided, and the Mars Rover was our first big project.

How long did it take to build the robot?

The students spent about a week researching the design of the robot structure. It took 2 weeks to put it together and 2 more weeks to program the robot using ROBOTC. We also used the NXTSERVO-V2 from Mindsensors.com to control the robot’s 12 motors, 2 lights, and camera.

What are your future plans with the robot?

All the work that we are doing is volunteer work. We started with one teacher, one school (Trailblazer), and 16 kids in the beginning of 2011. Now, with the help of graduate students from Colorado Technical University (CTU), the IEEE chapter from that school, and help from companies like the Space Foundation and MITRE, we are expanding to 40 kids and 3 schools by the end of the year. We are also willing to help teachers from elementary, middle, and high schools who want to take robotics into the classroom as a means to facilitate science for their students and to motivate them towards STEM education. Most of the schools have neither the materials nor the budget to start a robotics club. We are surviving with small donations and volunteer work. If you or anyone is interested in helping, please let us know.

In the little time we have been working with these kids, both their regular teachers and their parents are noticing improvement in the kids’ interest in science and in their grades. For us, the CTU volunteers (students and IEEE members), this is a way to gain work experience and give back to the community.

Written by Vu Nguyen

June 6th, 2011 at 10:57 am

Posted in Cool projects,NXT,VEX

LEGO Quad Delta Robot System

without comments

[Many thanks to Shep for contributing this amazing project! The description and source are all from Shep's blog]

Years of development, months of building and programming.  Here it is.


YouTube Direct Link 

About the Lego Quad Delta Robot System.

This system uses four Lego parallel robots which are fed by two conveyor belts.  As items flow down the conveyor belt toward the robots, each item passes by a light/color sensor mounted on each conveyor.  When the item is detected, a signal is sent to the robots telling them information such as the color of the object, which belt the object is on and the position of the object on the belt.  The robot reaches out and grabs the item from the moving conveyor belt when each item gets close enough and moves it to a location based on the color of the item.

The cell is capable of picking and placing objects at a rate of 48 items per minute.  Each robot can move 12 items per minute, or it can move an item in 5 seconds!

DELTA ROBOTS

Delta robots, also known as parallel robots, are commercially available from several manufacturers.  They go by names such as ABB Flexpicker, Bosch Paloma D2, Fanuc M-1iA, Kawasaki YF03N, and Adept Quattro s650H.  They are known for moving small objects very quickly, usually at two hundred or more moves per minute.  Parallel robots are often used in many industries such as the food industry where the payload is small and light and the production rates are very high.  Many times a series of parallel robots are used to do things like assemble cookies, package small items, stack pancakes and much, much more.

THE ROBOTS

Each robot operates independently.  The robots receive a signal from the master, which in this case is the NXT that controls the light sensors.  The signal contains information about the color, lane, and position of each object.  When the signal is received, the data is stored in a chronological array.  When the object gets close enough, the robot goes through a preprogrammed series of movements based on the information in the array.

STARTING UP

At the beginning of each run, all three arms move slowly upward until they each hit a touch sensor.  After all three arms have reached the top they all move down together to a predetermined zero position and the encoders are reset.  At that point all the robots wait for the first signal which will be the master sending the belt speed signal.  The robots can automatically adjust movements such as where they pick up the objects based on the belt speed.
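
That homing routine is a classic pattern; for a single arm it reduces to something like the sketch below. The port names, speeds, and the offset back down to the zero position are assumptions, not Shep's values.

// Sketch of homing one arm: drive up slowly until its touch sensor is pressed,
// then move back down a fixed offset and reset the encoder there.
#pragma config(Sensor, S1, armTouch, sensorTouch)

void homeArm()
{
  motor[motorA] = 20;                       // move slowly upward
  while(SensorValue[armTouch] == 0) {}      // until the touch sensor is pressed
  motor[motorA] = 0;

  nMotorEncoder[motorA] = 0;
  motor[motorA] = -20;                      // move back down to the zero position
  while(nMotorEncoder[motorA] > -200) {}    // assumed offset in encoder counts
  motor[motorA] = 0;

  nMotorEncoder[motorA] = 0;                // encoders are reset at the zero position
}

task main()
{
  homeArm();
}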

Immediately after the belt speed information has been received, each NXT brick will sound off in a timed sequence with their respective brick number.  This is an error checking technique.  If the operator doesn’t hear the full “ONE, TWO, THREE, FOUR, FIVE, SIX” there is a problem and the run should be terminated and restarted.

THE SIGNAL

The signal is an eight-bit binary light signal that takes about 170 milliseconds to transmit.  The master NXT blinks the LEDs that are attached to each robot on and off at an interval of about 20 milliseconds per flash.  Each robot is equipped with a Lego light sensor that easily sees the short flashes.  The same signal is sent to all the NXT bricks, but data encoded in the signal determines which robot will move the item.  Each robot’s NXT brick decodes the message and sends that information to a procedure that does the appropriate movements.

The binary signal is converted to a three-digit number such as 132 or 243.  The first digit is the lane; possible values are 1 and 2, corresponding to conveyor 1 and conveyor 2 respectively.  The second digit is the robot number; the possible values are 1 through 4, corresponding to each of the four robots.  The third digit is the color of the object; the possible values are 1 through 6, i.e. BLACK=1, BLUE=2, GREEN=3, YELLOW=4, RED=5, WHITE=6.  The position of the brick is noted by the time that the light signal is received.  The robots calculate the position of each object by using the time when the signal was received relative to the current, dynamic time.  The belt moves precisely at 100 inches per minute, so based on this, the position of the item on the belt can be precisely calculated.
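
Splitting that three-digit command back into its parts is plain integer arithmetic; a sketch (the function and variable names are illustrative, not Shep's):

// Decode a three-digit command such as 132 or 243 into lane, robot and color.
void decodeCommand(int command, int &lane, int &robotNum, int &color)
{
  lane     = command / 100;        // 1 or 2 : which conveyor
  robotNum = (command / 10) % 10;  // 1 - 4  : which robot picks up the item
  color    = command % 10;         // 1 - 6  : BLACK through WHITE as listed above
}

task main()
{
  int lane, robotNum, color;
  decodeCommand(243, lane, robotNum, color);          // conveyor 2, robot 4, GREEN
  nxtDisplayTextLine(0, "lane %d robot %d", lane, robotNum);
  nxtDisplayTextLine(1, "color %d", color);
}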

A few signals other than brick information and belt speed are programmed to be sent.  The master can send an emergency shut down message in which all robots immediately stop what they are doing, drop their bricks and go to their home position as well as stop the conveyors.  Signals can also be sent to make the robots dance, play sound files and music files concurrently.

THE MOVEMENTS

The precise kinematics for the movements of the robots are dynamically calculated using detailed formulas that convert the Cartesian coordinates (x,y,z) of the location of the brick into the angles of the servo motors (theta1, theta2 and theta3) and vice versa.   This is the heart and soul of the robot.  Without precise calculations, this project would be nearly impossible.

As the gripper or “end effector” is moved around, it becomes necessary to calculate the best route for it to move.  The best route is usually a straight line.  This is done by locating the start point (x1, y1, z1) and the end point (x2, y2, z2) and then calculating a discrete number of points that lie on the line between the two points.  For each and every movement, the robot first creates an array for all the points in between and then moves nonstop from point to point to point through the array until it reaches the end point.
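
The interpolation step is easy to show on its own. The sketch below only fills the array of intermediate points; converting each (x, y, z) into (theta1, theta2, theta3) is left as a comment, since those inverse-kinematics formulas depend on the robot's exact geometry and are not reproduced here.

// Sketch: build an array of points on the straight line from (x1,y1,z1) to
// (x2,y2,z2). The number of steps and the coordinates used below are
// illustrative values, not measurements from the actual robot.
const int nSteps = 10;
float pathX[11];   // nSteps + 1 points
float pathY[11];
float pathZ[11];

void buildLinePath(float x1, float y1, float z1, float x2, float y2, float z2)
{
  for(int i = 0; i <= nSteps; i++)
  {
    float t = i * 1.0 / nSteps;        // 0.0 at the start point, 1.0 at the end
    pathX[i] = x1 + t * (x2 - x1);
    pathY[i] = y1 + t * (y2 - y1);
    pathZ[i] = z1 + t * (z2 - z1);
  }
}

task main()
{
  // example: plan a straight move between two made-up points
  buildLinePath(-5.0, 10.0, -20.0, 8.0, 4.0, -18.0);

  for(int i = 0; i <= nSteps; i++)
  {
    // here the real program converts (pathX[i], pathY[i], pathZ[i]) into motor
    // angles with the inverse kinematics and commands the three motors
  }
}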

As the robot moves around, each motor’s speed is adjusted relative to the other motors’ speeds so that all three motors arrive at their target position at the same time.  This makes all the movements very smooth and the robot doesn’t shake too much.  The motor speeds are adjusted so that the robot moves as fast as possible.

Since the objects on the conveyors are moving at all times, the robot actually moves to a position where the object will be rather than where the object currently is.  Also, when the robot grasps an object, it doesn’t lift it straight up, but up and slightly forward, so that any objects behind it on the conveyor belt won’t hit the object that is being moved.

It is possible for the robot to be overwhelmed by having too many objects to pick up.  Once an object goes past a limit point where it is too far to reach, it is removed from the queue and will not be picked up by any robot.

As the robots place items in the bins, the release point is shifted slightly so that the items won’t pile up.

THE GRIPPERS

The grippers are each driven using a single pneumatic cylinder.  The cylinder is cycled by a valve equipped with a medium PF motor connected to an IR receiver.  Each NXT is equipped with a HiTechnic IRLink sensor.  The NXT controls the gripper by sending a signal to the motor through the IRLink sensor.  The motor then rotates clockwise or counterclockwise for one quarter of a second to switch the pneumatic valve.  This is a very effective way of controlling Lego pneumatics with a NXT.

THE AIR SYSTEM

The air system must be robust because the pneumatic cylinders on the grippers move about 96 times a minute.  This requires a great deal of air.  The air compressor consists of six pumps (with the springs removed) turned by three XL PF motors.  The pressure is measured using a MindSensors Pressure sensor.  The pressure is kept between 10 and 13 psi to maintain good operational speed and gripping capacity.  The whole system will not start until air pressure is up to a minimum of 8 psi, and an audible alarm sounds if the pressure drops below 8 psi.  At this point, the operator can help the compressor by manually pumping up the system to the required pressure.

The three XL-PF motors are powered using a 9V train controller.  This is done so that consistent power is transmitted to the motors.  Air compressors tend to use batteries very quickly, and using a train controller avoids that cost.

There are also six air tanks for storage, a manual pump, a pressure gauge, and a pressure release valve to purge the system of pressure.  The manual pump is primarily used to assist the compressor if it can’t keep up.

The compressor motors are turned on and off using a Lego servo motor and a PF switch.  As the pressure sensor senses the pressure going above or below the thresholds, the motor moves the switch back and forth to add air or turn off the compressor.
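
That on/off control is a simple bang-bang loop. In the sketch below the pressure reading is hidden behind a placeholder, because the exact Mindsensors pressure sensor call depends on the driver used; the 10 and 13 psi thresholds and the 8 psi alarm come from the description above, while the switch movements are assumptions.

// Sketch of the compressor control loop. readPressurePSI() is a placeholder
// for the Mindsensors pressure sensor driver call; switch throws are assumed.
float readPressurePSI()
{
  return 11.0;   // placeholder - the real value comes from the pressure sensor
}

task main()
{
  bool compressorOn = false;

  while(true)
  {
    float psi = readPressurePSI();

    if(psi < 10.0 && !compressorOn)
    {
      motor[motorA] = 30;            // flip the PF switch one way: compressor on
      wait1Msec(250);
      motor[motorA] = 0;
      compressorOn = true;
    }
    else if(psi > 13.0 && compressorOn)
    {
      motor[motorA] = -30;           // flip the switch back: compressor off
      wait1Msec(250);
      motor[motorA] = 0;
      compressorOn = false;
    }

    if(psi < 8.0)
      PlaySound(soundLowBuzz);       // audible alarm below 8 psi

    wait1Msec(200);
  }
}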

THE CONVEYORS

The conveyors are controlled by a dedicated NXT brick.  The timing and speed of the conveyors is critical so that the items will be positioned accurately.  The speed of the conveyors is governed by a proportional controller.  They were originally controlled using a PID controller, but it turned out that proportional control was adequate.  The speed of the conveyor can vary from zero inches per minute up to two hundred inches per minute, but one hundred inches per minute is best for all the robots.
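
A proportional speed controller of the kind described only takes a few lines. In this sketch the belt speed is estimated from encoder counts over a fixed window; the target of 100 inches per minute comes from the description, while the counts-per-inch factor and the gain are made-up values.

// Sketch of proportional control of the conveyor speed. The conversion factor
// and gain are illustrative assumptions, not the values used in the real cell.
task main()
{
  const float targetInPerMin = 100.0;
  const float countsPerInch  = 40.0;   // assumed encoder counts per inch of belt
  const float kP             = 0.8;    // assumed proportional gain
  int power = 40;                      // starting guess at motor power

  while(true)
  {
    // measure belt speed over a 500 ms window
    nMotorEncoder[motorA] = 0;
    wait1Msec(500);
    float inches   = nMotorEncoder[motorA] / countsPerInch;
    float inPerMin = inches * 120.0;   // 500 ms windows -> inches per minute

    float error = targetInPerMin - inPerMin;
    power = power + kP * error;        // proportional correction

    if(power > 100) power = 100;       // keep the command in range
    if(power < 0)   power = 0;

    motor[motorA] = power;
  }
}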

The NXT brick that controls the conveyors reads the same light signal information as all of the robots, but ignores most of the signals.

Each conveyor is ten feet long.

LIGHT CURTAIN/COLOR READER

The light/color sensors mounted on the conveyors do double duty.  Their default mode is as an ambient light sensor, but they are frequently switched into color-sensing mode.  A PF LED light is mounted opposite each light sensor so that a high light level is normally detected.  When an item passes between the LED and the light sensor, a low light condition is detected and the sensor immediately switches mode to a color sensor.  This can be seen when the sensor briefly emits an RGB light as a brick passes in front of it.  As soon as the color is correctly read, it immediately switches back to ambient light mode and waits for the next item.  When the color is determined, the brick then sends a signal to all of the slave bricks and an audible color sound is played.
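
In ROBOTC terms that mode flip is a SetSensorType call. The sketch below shows the idea only: the sensor type constants and the ambient-light threshold are my recollection of the NXT 2.0 color sensor support and should be treated as assumptions, and the signal broadcast is reduced to a comment.

// Sketch of the ambient-light / color-sensor mode flip on one conveyor sensor.
// Type constants and the threshold are assumptions; the broadcast is omitted.
#pragma config(Sensor, S1, beltSensor, sensorCOLORNONE)   // ambient light mode

task main()
{
  const int blockedThreshold = 30;     // assumed "item in front of the LED" level

  while(true)
  {
    if(SensorValue[beltSensor] < blockedThreshold)       // item blocks the PF LED
    {
      SetSensorType(beltSensor, sensorCOLORFULL);        // switch to color mode
      wait1Msec(50);                                     // let the reading settle
      int color = SensorValue[beltSensor];               // 1..6 = BLACK..WHITE

      // ...build the three-digit command from the lane, target robot and color,
      // then flash it to the slave bricks as described above...

      SetSensorType(beltSensor, sensorCOLORNONE);        // back to ambient mode
      wait1Msec(50);
    }
  }
}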

There is a condition when two bricks pass by both light sensors at the same time.  It is impossible to send two signals at the same time, so the first item to be detected takes priority and the second brick signal is sent 400 milliseconds later.  A special signal is sent to tell the robot to adjust the position timing to account for the 400 ms delay when the brick comes to be picked up.

THE STRUCTURE

The frame structure holding the robots is highly engineered.  The combination of the weight of all the robots as well as the constant movement is a considerable problem.  The main horizontal member is achieved by layering Technic bricks with plates.  This configuration is very strong and has very little sag.  Movement is also minimized, but not completely eliminated.

The two main posts in the middle carry most of the weight and do a great deal to stop the structure from moving while the robots are operating.  The four outside posts help, but are mostly for support.  The diagonal braces are quite small relative to the size of the other members, but actually do a great deal to stop movement.

All of the posts are made from standard Lego bricks with Technic beams attached around to lock them together.  The structure is completely tied together as one piece, but can be broken down into eight parts for transport.

DEVELOPMENT

I have a personal fascination with this type of robot.  I find the movements mesmerizing and extremely interesting. The movements of the actual robots are extremely fast and accurate and defy belief.  I especially like the fact that the location of the end effector can be precisely calculated from the angular location of the three servo motors positioned at one hundred and twenty degrees from each other.

This is not the first parallel robot that I have built.  My first delta robot was built in 2004 using the Mindstorms RCX and was very crude and not very useful.  After several more attempts, I finally found a design using the Mindstorms NXT system that worked well.  At that time I still hadn’t worked out the kinematics but I found a way to fake the movements by positioning the end effector by hand and reading the encoder values.  Then I used those values to create a series of movements that closely resembled an actual robot.

I have researched for about six years and built this project many times.  This project took about five months to build and program.  It was purely a labor of love for this robot.

I don’t know how to improve on the current design.  As you can tell if you have read this description of the robot, I have exhaustively researched and built to every goal I have.  Sadly, I believe that I have reached the limit of what can be built using only Lego building elements.

Written by Vu Nguyen

April 20th, 2011 at 4:39 pm

Posted in Cool projects,NXT