## Archive for August, 2011

## New VEX Robot Building Instructions Available!

Tired of the same old robot models? Look no further! We’re proud to present a new set of classroom-ready robot building instructions.

First up, we have the new and improved Squarebot 4.0:

Squarebot 4.0 features:

- 1 VEX Cortex
- 2 Shaft Encoders
- 1 Limit Switch
- 1 Bumper Switch
- 1 Ambient Light Sensor
- 1 Ultrasonic Rangefinder
- 1 Potentiometer
- 1 VEX LCD Screen
- 2 Driving Motors
- 1 Arm Motor

And second, we’ve augmented our popular Swervebot model with an arm!

The Swervebot has a more slender wheelbase and rear caster, making it more suitable for line tracking and fitting in small spaces. It also features:

- 1 VEX Cortex
- 2 Shaft Encoders
- 2 Bumper Switches
- 3 Line Tracking Sensors
- 1 Ultrasonic Rangefinder
- 1 Potentiometer
- 1 VEX LCD Screen
- 2 Driving Motors
- 1 Arm Motor

Note that having all of the sensors isn’t required to take advantage of these great robot models. Build them using the parts that you have – or just use the instructions for your own inspiration.

These models along with many others can always be found at the Robotics Academy site.

## ROBOTC Advanced Training

The ROBOTC curriculum covers quite a bit of material ranging from basic movement to automatic thresholds and advanced remote control. This is plenty of material for the average robotics class. However, it is not enough for some ambitious teachers and students who have mastered the basics. For those individuals who strive to learn the ins and outs of ROBOTC, we offered a pilot course called “ROBOTC Advanced Training” in late July.

The focus of the class was on advanced programming concepts in ROBOTC. Trainees learned to make use of the NXT's processing power and of third-party sensors that expand its capabilities. The class began with a review of the basic ROBOTC curriculum, then moved into arrays, multitasking, custom user interfaces built with the NXT LCD screen and buttons, and file input/output. The class also worked together to write a custom I²C sensor driver for the Mindsensors Acceleration Sensor.

The capstone project for the course was autonomous navigation in a grid world: a program that lets the NXT find the most efficient path to its goal while avoiding obstacles. The class learned the "wavefront algorithm", which enables autonomous path planning in a world laid out as a grid. The algorithm assumes the robot uses only three movements: forward one block, 90-degree right turn, and 90-degree left turn. Under these assumptions, each grid block has four neighbors: north, south, east, and west of the current block.
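To make the movement model concrete, here is a small Python sketch of the heading bookkeeping it implies. The heading numbering (0 = north, 1 = east, 2 = south, 3 = west) follows the ROBOTC listing later in this post; the minimal-turn shortcut is my own refinement, since the posted code simply turns one 90-degree step at a time.

```python
# Headings as in the ROBOTC listing: 0 = north, 1 = east, 2 = south, 3 = west.
# Every grid move is built from three primitives: left turn, right turn, forward.

def turns_to_face(current, target):
    """Signed number of 90-degree turns (positive = right) that rotates
    `current` into `target` with the fewest turns."""
    diff = (target - current) % 4
    return diff - 4 if diff == 3 else diff  # three rights == one left

def step(position, heading):
    """Advance one block in the given heading; (x, y) with y growing northward."""
    x, y = position
    dx, dy = [(0, 1), (1, 0), (0, -1), (-1, 0)][heading]
    return (x + dx, y + dy)
```

For example, `turns_to_face(0, 3)` returns `-1` (a single left turn from north to west), where turning one step at a time would take three right turns.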

The grid world (a 10×5 grid in our project) is represented in ROBOTC by a two-dimensional array of integers: robot = 99, goal = 2, obstacle = 1, empty space = 0. The wavefront begins at the goal and propagates outward until every position holds a value other than zero: each empty neighbor of the goal is assigned a value of 3, each empty neighbor of a 3 is assigned a value of 4, and so on until no empty spaces remain on the map. The robot then follows the most efficient path by repeatedly moving to the neighbor with the lowest value until it reaches the goal.
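The fill-then-descend idea is easy to prototype off-robot. Here is a minimal Python sketch of the same two phases, using the article's cell codes (0 = empty, 1 = obstacle, 2 = goal, 99 = robot); the function names are my own, not from the ROBOTC listing:

```python
# Wavefront path planning on a 2-D grid of integers.
# Cell codes follow the article: 0 = empty, 1 = obstacle, 2 = goal, 99 = robot.

def wavefront_fill(grid):
    """Propagate wave values outward from the goal until no empty cells remain."""
    xs, ys = len(grid), len(grid[0])
    wave, changed = 2, True
    while changed:
        changed = False
        for x in range(xs):
            for y in range(ys):
                if grid[x][y] == wave:
                    for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                        if 0 <= nx < xs and 0 <= ny < ys and grid[nx][ny] == 0:
                            grid[nx][ny] = wave + 1  # next wave ring
                            changed = True
        wave += 1
    return grid

def path_to_goal(grid, start):
    """Greedily step to the lowest-valued passable neighbor until the goal (2)."""
    xs, ys = len(grid), len(grid[0])
    x, y = start
    path = [start]
    while grid[x][y] != 2:
        best = None
        for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
            if 0 <= nx < xs and 0 <= ny < ys and grid[nx][ny] != 1:
                if best is None or grid[nx][ny] < grid[best[0]][best[1]]:
                    best = (nx, ny)
        x, y = best
        path.append(best)
    return path
```

On a 3×3 grid with the robot at one corner, an obstacle in the center, and the goal at the opposite corner, `path_to_goal` walks around the obstacle in four moves, exactly the behavior the ROBOTC capstone program produces on the NXT.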

It is very exciting to see autonomous path planning implemented in ROBOTC, because this is similar to how full-scale autonomous vehicles work. Check out the video of the path planning in action and the full ROBOTC code below. We plan to incorporate these lessons into a new curriculum that includes multi-robot communications. If this seems like the type of project you would like to bring to your classroom, check back throughout the year for updates, and in the spring for availability of next summer's ROBOTC Advanced Class.

Written by: Steve Comer

Code for the first run of the program seen in the video:

Note that the only difference in the code for the second program is another obstacle in the 2D integer array.

```c
//GLOBAL VARIABLES grid world dimensions
const int x_size = 10;
const int y_size = 5;

//GLOBAL ARRAY representation of grid world using a 2-Dimensional array
//0  = open space
//1  = barrier
//2  = goal
//99 = robot
int map[x_size][y_size] =
  {{0,0,0,0,0},
   {0,1,99,1,0},
   {0,1,1,1,0},
   {0,0,0,0,0},
   {0,0,0,0,0},
   {0,0,0,0,0},
   {0,0,0,0,0},
   {0,0,2,0,0},
   {0,0,0,0,0},
   {0,0,0,0,0}};

//FUNCTION move forward for a variable number of grid blocks
void moveForward(int blocks)
{
  //convert number of blocks to encoder counts
  //wheel circumference = 17.6 cm
  //one block = 23.7 cm
  int countsToTravel = (23.7/17.6)*(360)*blocks;

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;

  motor[motorB] = 50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle)
  {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION left point turn 90 degrees
void turnLeft90()
{
  //distance one wheel must travel for 90 degree point turn = 10.68 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;

  motor[motorB] = 50;
  motor[motorC] = -50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle)
  {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION right point turn 90 degrees
void turnRight90()
{
  //distance one wheel must travel for 90 degree point turn = 10.68 cm
  //wheel circumference = 17.6 cm
  int countsToTravel = (8.6/17.6)*(360);

  //encoder target for countsToTravel
  nMotorEncoder[motorB] = 0;
  nMotorEncoder[motorC] = 0;
  nMotorEncoderTarget[motorB] = countsToTravel;
  nMotorEncoderTarget[motorC] = countsToTravel;

  motor[motorB] = -50;
  motor[motorC] = 50;
  while(nMotorRunState[motorB] != runStateIdle && nMotorRunState[motorC] != runStateIdle)
  {}

  //stop for half second at end of movement
  motor[motorB] = 0;
  motor[motorC] = 0;
  wait1Msec(500);
}

//FUNCTION print wavefront map to NXT screen
void PrintWavefrontMap()
{
  int printLine = y_size-1;
  for(int y = 0; y < y_size; y++)
  {
    string printRow = "";
    for(int x = 0; x < x_size; x++)
    {
      if(map[x][y] == 99)
        printRow = printRow + "R ";
      else if(map[x][y] == 2)
        printRow = printRow + "G ";
      else if(map[x][y] == 1)
        printRow = printRow + "X ";
      else if(map[x][y] < 10)
        printRow = printRow + map[x][y] + " ";
      else if(map[x][y] == '*')
        printRow = printRow + "* ";
      else
        printRow = printRow + map[x][y];
    }
    nxtDisplayString(printLine, printRow);
    printLine--;
  }
}

//FUNCTION wavefront algorithm to find most efficient path to goal
void WavefrontSearch()
{
  int goal_x, goal_y;
  bool foundWave = true;
  int currentWave = 2; //Looking for goal first

  while(foundWave == true)
  {
    foundWave = false;
    for(int y = 0; y < y_size; y++)
    {
      for(int x = 0; x < x_size; x++)
      {
        if(map[x][y] == currentWave)
        {
          foundWave = true;
          goal_x = x;
          goal_y = y;

          if(goal_x > 0)                  //This code checks the array bounds heading WEST
            if(map[goal_x-1][goal_y] == 0)  //This code checks the WEST direction
              map[goal_x-1][goal_y] = currentWave + 1;

          if(goal_x < (x_size - 1))       //This code checks the array bounds heading EAST
            if(map[goal_x+1][goal_y] == 0)  //This code checks the EAST direction
              map[goal_x+1][goal_y] = currentWave + 1;

          if(goal_y > 0)                  //This code checks the array bounds heading SOUTH
            if(map[goal_x][goal_y-1] == 0)  //This code checks the SOUTH direction
              map[goal_x][goal_y-1] = currentWave + 1;

          if(goal_y < (y_size - 1))       //This code checks the array bounds heading NORTH
            if(map[goal_x][goal_y+1] == 0)  //This code checks the NORTH direction
              map[goal_x][goal_y+1] = currentWave + 1;
        }
      }
    }
    currentWave++;
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

//FUNCTION follow most efficient path to goal
//and update screen map as robot moves
void NavigateToGoal()
{
  //Store our Robot's Current Position
  int robot_x, robot_y;

  //First - Find Goal and Target Locations
  for(int x = 0; x < x_size; x++)
  {
    for(int y = 0; y < y_size; y++)
    {
      if(map[x][y] == 99)
      {
        robot_x = x;
        robot_y = y;
      }
    }
  }

  //Found Goal and Target, start deciding our next path
  int current_x = robot_x;
  int current_y = robot_y;
  int current_facing = 0;
  int next_Direction = 0;
  int current_low = 99;

  while(current_low > 2)
  {
    current_low = 99; //Every time, reset to highest number (robot)
    next_Direction = current_facing;
    int Next_X = 0;
    int Next_Y = 0;

    //Check Array Bounds West
    if(current_x > 0)
      if(map[current_x-1][current_y] < current_low && map[current_x-1][current_y] != 1) //Is current space occupied?
      {
        current_low = map[current_x-1][current_y]; //Set next number
        next_Direction = 3;                        //Set Next Direction as West
        Next_X = current_x-1;
        Next_Y = current_y;
      }

    //Check Array Bounds East
    if(current_x < (x_size - 1))
      if(map[current_x+1][current_y] < current_low && map[current_x+1][current_y] != 1) //Is current space occupied?
      {
        current_low = map[current_x+1][current_y]; //Set next number
        next_Direction = 1;                        //Set Next Direction as East
        Next_X = current_x+1;
        Next_Y = current_y;
      }

    //Check Array Bounds South
    if(current_y > 0)
      if(map[current_x][current_y-1] < current_low && map[current_x][current_y-1] != 1)
      {
        current_low = map[current_x][current_y-1]; //Set next number
        next_Direction = 2;                        //Set Next Direction as South
        Next_X = current_x;
        Next_Y = current_y-1;
      }

    //Check Array Bounds North
    if(current_y < (y_size - 1))
      if(map[current_x][current_y+1] < current_low && map[current_x][current_y+1] != 1) //Is current space occupied?
      {
        current_low = map[current_x][current_y+1]; //Set next number
        next_Direction = 0;                        //Set Next Direction as North
        Next_X = current_x;
        Next_Y = current_y+1;
      }

    //Okay - We know the number we're heading for, the direction and the coordinates.
    current_x = Next_X;
    current_y = Next_Y;
    map[current_x][current_y] = '*';

    //Track the robot's heading
    while(current_facing != next_Direction)
    {
      if(current_facing > next_Direction)
      {
        turnLeft90();
        current_facing--;
      }
      else if(current_facing < next_Direction)
      {
        turnRight90();
        current_facing++;
      }
    }

    moveForward(1);
    PrintWavefrontMap();
    wait1Msec(500);
  }
}

task main()
{
  WavefrontSearch(); //Build map of route with wavefront algorithm
  NavigateToGoal();  //Follow most efficient path to goal
  wait1Msec(5000);   //Leave time to view the LCD screen
}
```

## Eric’s “Project Scout”

*[Thanks to ericsmalls for posting this project!]*

## The concept

Eric has been working on Project Scout for months. Originally, he wanted to combine obstacle avoidance with multi-robot communication.

The goal of Project Scout is to have one “scout” robot, outfitted with sensors, find its way out of a maze, and then tell a second, “blind” robot, not outfitted with sensors, how to solve the maze. The end result would be two robots finding their way out of a maze by communicating and working together.

## The result

Here is the video of a successful run with two robots:

## Proof of Concept

Project Scout passed several milestones along the way. Here's one of the first videos of the project. Robot1 (on the left) chooses a random number of encoder counts greater than 720 and sets it as its encoder target. Robot1 then drives forward for that many encoder counts and, upon completion, sends the recorded encoder value to Robot2 (on the right). Robot2 then travels forward for the same number of encoder counts. Both robots travel the same distance, which demonstrates that robot-to-robot communication and coordinated forward movement are possible.
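The handshake above can be sketched in a few lines of Python. The constants come from the grid-world listing earlier in this post (360 encoder counts per wheel revolution, 17.6 cm wheel circumference); the function names and the upper bound on the random target are my own, and the real robots exchange the value over a Bluetooth/NXTBee message rather than a shared variable:

```python
import random

COUNTS_PER_REV = 360           # NXT motor encoder resolution
WHEEL_CIRCUMFERENCE_CM = 17.6  # wheel size from the grid-world listing above

def pick_target(minimum=720):
    """Robot1: choose a random encoder target greater than the minimum."""
    return random.randint(minimum + 1, 4 * minimum)

def counts_to_cm(counts):
    """Convert encoder counts to distance traveled in centimeters."""
    return counts / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_CM

target = pick_target()        # Robot1 drives this many counts, then transmits it
distance = counts_to_cm(target)  # Robot2 drives the same counts, so the same distance
```

For instance, a target of 720 counts is exactly two wheel revolutions, or 35.2 cm of travel for each robot.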

## Continuing on…

Eric says “But there’s still some work to be done. I am currently working on transferring the communication in the code to utilize ROBOTC’s new multi-robot library and Dexter Industries’ NXTBee radios, which will allow a lot more capabilities and add a lot of versatility to Project Scout. In the future, I plan on adding an additional robot so I can have 3 robots solve the maze!”

Great project, and keep up the good work!