ROBOTC.net Blog  

ROBOTC News

Archive for the ‘Cool projects’ Category

Bring on the Heat: Thermal Imaging with the NXT

with 2 comments

I built a pan-and-tilt rig for the Dexter Industries Thermal IR Sensor with a great deal of gearing down, allowing me to take a lot of measurements as the rig moved around. Initially I had it set for about 40×40 measurements, but those didn’t look that great and I wanted a bit more. I reprogrammed it and made it spit out data at a resolution of about 90×80.

The data from the thermal sensor was streamed to the debug output console in ROBOTC, from which I copied and pasted it into an Excel worksheet.  I made some 3D graphs from the thermal data and they looked pretty cool.
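
For anyone curious what the streaming side might look like, here is a minimal ROBOTC sketch of the idea: write one comma-separated line per sample to the debug stream so it can be pasted straight into Excel. The readThermalC() call is a hypothetical stand-in for the actual Dexter Industries driver function, and the resolution values are just the ones mentioned above.

// Minimal sketch (not the original program): stream one CSV line per sample
// to the ROBOTC debug console so it can be pasted straight into Excel.
// readThermalC() is a hypothetical placeholder for the Thermal IR driver call.
int readThermalC()
{
  return 20;   // stand-in value; the real driver reads the TIR sensor over I2C
}

task main()
{
  for (int row = 0; row < 80; row++)
  {
    for (int col = 0; col < 90; col++)
    {
      // ...step the pan/tilt rig to the next position here...
      int tempC = readThermalC();
      writeDebugStreamLine("%d,%d,%d", col, row, tempC);   // column, row, temperature
    }
  }
}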

[Images: Excel graph for cold glass; Excel graph for candle flame]

The left one is a cold glass and the right one is a candle.  I wasn’t really happy with the results of the graphs, so I decided to quickly whip up a .NET app to read my CSV data and make some more traditional thermal images.  A few hours later, the results really did look very cool.

[Images: thermal image for cold glass; thermal image for candle flame]

Again, the left one is the cold glass and the right one is the candle.  Now that you have a thermal image, you can see the heat from the candle a lot more clearly. I made a quick video of the whole rig so you can get an idea.

A few days after the initial post about my thermal imaging system using the Thermal Infrared Sensor, I made some improvements to both the speed and accuracy of the whole thing. I made the sensor sampling interval time-based rather than encoder-based, which proved a lot better at getting consistent sampling rates. I also doubled the horizontal motor speed so I would be more likely to still be awake by the time it was done taking an image.
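
A rough sketch of what timer-based sampling looks like in ROBOTC, assuming a single sweep motor; the motor name, sample interval, and sweep length are illustrative, not the original values.

// Sketch only: take a sample every SAMPLE_MS milliseconds while the rig sweeps,
// instead of waiting for specific encoder counts.
#define SAMPLE_MS 50   // assumed interval, not the original value

task main()
{
  motor[motorA] = 20;                    // start the (assumed) horizontal sweep motor
  ClearTimer(T1);
  while (nMotorEncoder[motorA] < 3600)   // assumed sweep length in encoder ticks
  {
    if (time1[T1] >= SAMPLE_MS)
    {
      ClearTimer(T1);
      // take a thermal reading here and stream it out
    }
  }
  motor[motorA] = 0;
}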

The left image was made with the old system, the right one with the new system. It’s a lot less fuzzy, and there are no black gaps where the number of samples in a row fell short of the maximum.

[Images: thermal image from the old system (left) and the new system (right)]

Perhaps there are other ways to improve the program but I am quite happy with how this has turned out.

The driver and program will be part of the next Driver Suite version. You can download a preliminary driver and this program from here: [LINK].  The .NET program and CSV files can be downloaded here: [LINK]. You will need Visual Studio to compile it; you can download a free (Express) version of Visual C# from the Microsoft website.

Written by Xander Soldaat

June 16th, 2011 at 4:47 pm

Controlling the MINDS-i Lunar Rover with a VEX Cortex

with one comment

Article written by Steve Comer

Remote control cars are great for having fun. They can be driven off-road, taken off jumps, and raced, among other things. VEX robots are great for learning. They can be used to teach programming, math, problem solving, and other engineering skills. What do you get if you put them together?

Well, I can tell you. You get a rugged 4WD truck that is still tons of fun to drive around outside, but can also be used as a teaching tool.

Follow this link for more photos: http://s1081.photobucket.com/albums/j353/comeste10/VEX%20Rover%20Extras/

I started off with a MINDS-i Lunar Rover kit which is driven by a 7.2V DC motor and steered with a standard hobby servo. I removed the solar panel from the Rover and in its place put a VEX Cortex microcontroller and an LCD screen. On each side, I attached a VEX flashlight from the VEXplorer kit and I mounted an ultrasonic sensor to the front. It just so happens that VEX bolts and nuts fit quite easily into the beams of the MINDS-i rover.

I did all the programming in ROBOTC. See the bottom of the page to view my ROBOTC code.

In order to control the stock motor and servo with the Cortex, I had to make a few modifications. For the motor, I soldered its two wires to a 2-pin header, which I then connected to the Cortex with a VEX motor controller.

For the servo, I used three single male-to-male jumper cables.

The video demonstrates the rover in autonomous mode where it makes use of the ultrasonic sensor to avoid bumping into walls. Remote control is also demonstrated using the VEXnet controller over Wi-Fi.

This is just a small sampling of the possibilities with this kind of combined platform. Don’t let my initial direction limit you. It would be great to see some new combination robots. Get out there and start building!

This is my ROBOTC code for the behaviors seen in the video.

AUTONOMOUS MODE:


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
/////////////////////////////////////////////////////////////////////////
//  Author  : Steven Comer
//  Program : Rover drives straight until near an object; it then slows,
//            stops, then backs up and turns.
//  Updated : 8 June 2011 @ 10:20 AM
/////////////////////////////////////////////////////////////////////////
task main()
{
  //pause at start and turn on headlights
  wait1Msec(2000);
  motor[R] = -127;
  motor[L] = -127;

  while(true)
  {
    clearLCDLine(0);
    displayNextLCDNumber(SensorValue(sonar), 3);

    //++++++++++++++++++++++++++++++CLEAR++++++++++++++++++++++++++++++++
    if( SensorValue(sonar) > 20 || SensorValue(sonar) == -1 )
    {
      motor[servo] = -2;
      motor[drive] = 50;
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //+++++++++++++++++++++++++++++APPROACH++++++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 20 && SensorValue(sonar) > 15 )
    {
      motor[drive] = SensorValue(sonar) + 25;   //power decreases with distance
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    //+++++++++++++++++++++++++++STOP AND TURN+++++++++++++++++++++++++++
    else if( SensorValue(sonar) <= 15 )
    {
      //stop
      motor[drive] = 0;
      wait1Msec(500);
      //back up and turn a random amount
      motor[servo] = random[50] + 60;   //random steering angle
      motor[drive] = -50;
      wait1Msec(1000);
    }
    //+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  }
}

REMOTE CONTROL MODE:


#pragma config(UART_Usage, UART2, VEX_2x16_LCD)
#pragma config(Sensor, dgtl1,  sonar,               sensorSONAR_inch)
#pragma config(Motor,  port1,           L,             tmotorServoStandard, openLoop)
#pragma config(Motor,  port2,           servo,         tmotorNormal, openLoop)
#pragma config(Motor,  port3,           drive,         tmotorNormal, openLoop)
#pragma config(Motor,  port10,          R,             tmotorNormal, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//
/////////////////////////////////////////////////////////////////////////
//  Author  : Steven Comer
//  Program : Remote control rover with VEXnet controller
//  Notes   : Throttle is Ch 2 (right joystick)
//            Steering is Ch 4 (left joystick)
//            Headlights are Ch 5U/5D and 6U/6D (on/off)
//  Updated : 10 June 2011 @ 12:30 PM
/////////////////////////////////////////////////////////////////////////
task main ()
{
  while(true)
  {
    //right headlight
    if(vexRT[Btn6U] == 1)
      motor[R] = -127;
    if(vexRT[Btn6D] == 1)
      motor[R] = 0;

    //left headlight
    if(vexRT[Btn5U] == 1)
      motor[L] = -127;
    if(vexRT[Btn5D] == 1)
      motor[L] = 0;

    //driving
    motor[servo] = vexRT[Ch4];   //steering
    motor[drive] = vexRT[Ch2];   //throttle

    //LCD screen
    displayLCDCenteredString(0,"VEX");
    displayLCDCenteredString(1,"ROBOTICS");
  }
}

Written by Jesse Flot

June 9th, 2011 at 3:31 pm

Posted in Cool projects,VEX


Mars Rover NXT/VEX Robot with Rocker-Bogie Suspension

with one comment

[Submitted by Fealves78 from the ROBOTC forums]

Fealves78 submitted an incredible looking Mars Rover robot using (eek!) a combination of both NXT and VEX parts.

Both the robot and the joystick are controlled by NXT bricks. The robot also uses 6 regular motors, 6 servos, a VEX camera, and a set of VEX lights.

Here is a new video of the robot being demonstrated at the National Space Foundation. The robot can go over rocks too!

Rocker-Bogie Suspension

Rocker-Bogie Suspension (Source http://en.wikipedia.org/wiki/File:Rocker-bogie.jpg)

The rocker-bogie suspension is actually a popular setup: it was used on the Mars rovers (hence the robot’s name!) and is still favored by NASA for its Mars rovers.

The suspension is called a “rocker” because of the rocking motion of the larger links in the system. The two sides of the chassis are connected via a differential, which allows each rocker to move up and down independently of the other. This lets the robot drive over uneven terrain as well as rocks.

The word “bogie” refers to the links that have a drive wheel at each end. Bogies were commonly used as load wheels in the tracks of army tanks, acting as idlers that distribute the load over the terrain. They were also quite commonly used on the trailers of semi-trailer trucks.

[See more at the Rocker-Bogie page at Wikipedia]

Here’s a video of the robot running over a grassy area:

What inspired you to build the robot?

I am a graduate Computer Science student, and robotics is one of my interests. I teach robotics to kids in Colorado Springs through Trailblazer Elementary School. My students’ ages range from 6 to 11 years old, and this is their first year studying with me. We were inspired to build the Mars Rover robot by the Space Foundation (SF), which is located in Colorado Springs, CO. Through a grant with Boeing, the Space Foundation donated 2 NXT robotics kits to our school, and I donated the VEX kit for the students myself. The SF then challenged us to build a demo robot using some of the materials they had provided, and the Mars Rover was our first big project.

How long did it take to build the robot?

The students spent about a week researching the design of the robot structure. It took 2 weeks to put it together and 2 more weeks to program the robot using ROBOTC. We also used the NXTSERVO-V2 from Mindsensors.com to control the robot’s 12 motors, 2 lights, and camera.

What are your future plans with the robot?

All the work that we are doing is volunteer work. We started with one teacher, one school (Trailblazer), and 16 kids at the beginning of 2011. Now, with the help of graduate students from Colorado Technical University (CTU), the IEEE chapter from that school, and help from companies like the Space Foundation and MITRE, we are expanding to 40 kids and 3 schools by the end of the year. We are also willing to help teachers from elementary, middle, and high schools who want to bring robotics into the classroom as a means of making science accessible to their students and motivating them toward STEM education. Most of the schools have neither the materials nor the budget to start a robotics club. We are surviving on small donations and volunteer work. If you or anyone you know is interested in helping, please let us know.

In the little time we have been working with these kids, both their regular teachers and their parents are noticing an improvement in the kids’ interest in science and in their grades. For us, the CTU volunteers (students and IEEE members), this is a way to gain work experience and give back to the community.

Written by Vu Nguyen

June 6th, 2011 at 10:57 am

Posted in Cool projects,NXT,VEX

LEGO Quad Delta Robot System

without comments

[Many thanks to Shep for contributing this amazing project! The description and source are all from Shep's blog]

Years of development, months of building and programming.  Here it is.


YouTube Direct Link 

About the Lego Quad Delta Robot System.

This system uses four Lego parallel robots which are fed by two conveyor belts.  As items flow down the conveyor belt toward the robots, each item passes by a light/color sensor mounted on each conveyor.  When the item is detected, a signal is sent to the robots telling them information such as the color of the object, which belt the object is on, and the position of the object on the belt.  When an item gets close enough, a robot reaches out, grabs it from the moving conveyor belt, and moves it to a location based on its color.

The cell is capable of picking and placing objects at a rate of 48 items per minute.  Each robot can move 12 items per minute, or one item every 5 seconds!

DELTA ROBOTS

Delta robots, also known as parallel robots, are commercially available from several manufacturers.  They go by names such as ABB FlexPicker, Bosch Paloma D2, Fanuc M-1iA, Kawasaki YF03N, and Adept Quattro s650H.  They are known for moving small objects very quickly, usually at two hundred or more moves per minute.  Parallel robots are often used in industries such as the food industry, where the payload is small and light and the production rates are very high.  Many times a series of parallel robots is used to do things like assemble cookies, package small items, stack pancakes, and much, much more.

THE ROBOTS

Each robot operates independently.  The robots receive a signal from the master, which in this case is the NXT that controls the light sensors.  The signal contains information about the color, lane, and position of each object.  When the signal is received, the data is stored in a chronological array.  When the object gets close enough, the robot goes through a preprogrammed series of movements based on the information in the array.

STARTING UP

At the beginning of each run, all three arms move slowly upward until they each hit a touch sensor.  After all three arms have reached the top, they all move down together to a predetermined zero position and the encoders are reset.  At that point all the robots wait for the first signal, which will be the master sending the belt speed.  The robots can automatically adjust movements, such as where they pick up the objects, based on the belt speed.
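
A minimal sketch of that homing sequence for a single arm, assuming an NXT touch sensor named armTop and a motor named armMotor; the port assignments, speeds, and zero offset are placeholders, not Shep's values.

// Sketch only: home one arm against its touch sensor, then move to a
// predetermined zero position and reset the encoder.
#pragma config(Sensor, S1,     armTop,   sensorTouch)
#pragma config(Motor,  motorA, armMotor, tmotorNormal, openLoop, encoder)

task main()
{
  // Drive slowly upward until the touch sensor is pressed
  motor[armMotor] = 20;
  while (SensorValue[armTop] == 0)
    wait1Msec(1);
  motor[armMotor] = 0;

  // Move back down a fixed distance to the zero position (value is illustrative)
  nMotorEncoder[armMotor] = 0;
  motor[armMotor] = -20;
  while (nMotorEncoder[armMotor] > -180)
    wait1Msec(1);
  motor[armMotor] = 0;

  // This is now the zero reference for all further moves
  nMotorEncoder[armMotor] = 0;
}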

Immediately after the belt speed information has been received, each NXT brick sounds off in a timed sequence with its respective brick number.  This is an error-checking technique: if the operator doesn’t hear the full “ONE, TWO, THREE, FOUR, FIVE, SIX”, there is a problem and the run should be terminated and restarted.

THE SIGNAL

The signal is an eight-bit binary light signal that takes about 170 milliseconds to transmit.  The master NXT blinks the LEDs attached to each robot on and off at an interval of about 20 milliseconds per flash.  Each robot is equipped with a Lego light sensor that easily sees the short flashes.  The same signal is sent to all the NXT bricks, but data encoded in the signal determines which robot will move the item.  Each robot’s NXT brick decodes the message and sends that information to a procedure that performs the appropriate movements.

The binary signal is converted to a three-digit number such as 132 or 243.  The first digit is the lane; possible values are 1 and 2, corresponding to conveyor 1 and conveyor 2 respectively.  The second digit is the robot number; possible values are 1 through 4, corresponding to each of the four robots.  The third digit is the color of the object; possible values are 1 through 6, i.e. BLACK=1, BLUE=2, GREEN=3, YELLOW=4, RED=5, WHITE=6.  The position of the brick is noted by the time at which the light signal is received.  The robots calculate the position of each object using the time the signal was received relative to the current time.  The belt moves at precisely 100 inches per minute, so the position of the item on the belt can be precisely calculated.
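
As an illustration of that encoding (a sketch, not Shep's code; the queue size, belt-speed constant, and timer usage are assumptions), decoding the three-digit number and tracking position could look like this:

// Sketch only: decode a three-digit signal such as 132 or 243 and remember
// when it arrived so the object's position on the belt can be computed later.
#define BELT_INCHES_PER_MIN 100.0
#define QUEUE_SIZE          20

int  queueColor[QUEUE_SIZE];
long queueTimeMs[QUEUE_SIZE];
int  queueCount = 0;

void handleSignal(int signal, int myLane, int myRobot)
{
  int lane  = signal / 100;        // 1 or 2: which conveyor
  int robot = (signal / 10) % 10;  // 1..4: which robot should pick it up
  int color = signal % 10;         // 1..6: BLACK..WHITE

  if (lane == myLane && robot == myRobot && queueCount < QUEUE_SIZE)
  {
    queueColor[queueCount]  = color;
    queueTimeMs[queueCount] = nPgmTime;   // timestamp of the light signal
    queueCount++;
  }
}

// Distance (in inches) the object has travelled since its signal arrived
float distanceTravelled(int index)
{
  long elapsedMs = nPgmTime - queueTimeMs[index];
  return (BELT_INCHES_PER_MIN / 60000.0) * elapsedMs;
}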

A few signals other than brick information and belt speed are programmed as well.  The master can send an emergency shutdown message, in which case all robots immediately stop what they are doing, drop their bricks, and return to their home positions, and the conveyors stop.  Signals can also be sent to make the robots dance and play sound and music files concurrently.

THE MOVEMENTS

The precise kinematics for the robots’ movements are dynamically calculated using detailed formulas that convert the Cartesian coordinates (x, y, z) of the brick’s location into the angles of the servo motors (theta1, theta2, and theta3), and vice versa.  This is the heart and soul of the robot; without precise calculations, this project would be nearly impossible.

As the gripper or “end effector” is moved around, it becomes necessary to calculate the best route for it to travel, which is usually a straight line.  This is done by taking the start point (x1, y1, z1) and the end point (x2, y2, z2) and calculating a discrete number of points that lie on the line between them.  For every movement, the robot first builds an array of all the intermediate points and then moves nonstop from point to point through the array until it reaches the end point.

As the robot moves, each motor’s speed is adjusted relative to the other motors’ speeds so that all three motors arrive at their target positions at the same time.  This makes the movements very smooth, and the robot doesn’t shake too much.  The motor speeds are scaled so that the robot moves as fast as possible.
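
The two ideas above, linear interpolation between waypoints and scaling each motor's speed to its share of the move, can be sketched roughly like this. The inverse-kinematics call angleForJoint() is a hypothetical placeholder for the real formulas, and all constants are illustrative:

// Sketch only: interpolate a straight line from (x1,y1,z1) to (x2,y2,z2) and,
// for each intermediate point, give every motor a speed proportional to how
// far it still has to go, so all three arms finish each segment together.
#define LINE_STEPS 20
#define MAX_SPEED  80

float currentAngle[3];   // joint angles after the last commanded point
float deltaAngle[3];

// Hypothetical stand-in for the real inverse kinematics (x,y,z -> joint angle)
float angleForJoint(int joint, float x, float y, float z)
{
  return 0.0;
}

void moveLine(float x1, float y1, float z1, float x2, float y2, float z2)
{
  for (int i = 1; i <= LINE_STEPS; i++)
  {
    float t = (float)i / LINE_STEPS;
    float x = x1 + (x2 - x1) * t;
    float y = y1 + (y2 - y1) * t;
    float z = z1 + (z2 - z1) * t;

    float maxDelta = 0.0;
    for (int j = 0; j < 3; j++)
    {
      float target = angleForJoint(j, x, y, z);
      deltaAngle[j] = target - currentAngle[j];
      if (deltaAngle[j] < 0.0)
        deltaAngle[j] = -deltaAngle[j];
      if (deltaAngle[j] > maxDelta)
        maxDelta = deltaAngle[j];
      currentAngle[j] = target;
    }

    // Each motor runs at a speed proportional to its share of this segment
    for (int j = 0; j < 3; j++)
    {
      int speed = 0;
      if (maxDelta > 0.0)
        speed = (int)(MAX_SPEED * deltaAngle[j] / maxDelta);
      // ...command joint j toward its new angle at 'speed' here...
    }
  }
}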

Since the objects on the conveyors are moving at all times, the robot actually moves to a position where the object will be rather than where it currently is.  Also, when the robot grasps an object, it doesn’t lift it straight up but up and slightly forward, so that any objects behind it on the conveyor belt won’t hit the object being moved.
It is possible for the robot to be overwhelmed by having too many objects to pick up.  Once an object goes past a limit point where it is too far to reach, it is removed from the queue and will not be picked up by any robot.

As the robots place items in the bins, the release point is shifted slightly so that the items won’t pile up.

THE GRIPPERS

The grippers are each driven by a single pneumatic cylinder.  The cylinder is cycled by a valve fitted with a medium PF motor connected to an IR receiver.  Each NXT is equipped with a HiTechnic IRLink sensor, and the NXT controls the gripper by sending a command to the motor through the IRLink.  The motor then rotates clockwise or counterclockwise for a quarter of a second to switch the pneumatic valve.  This is a very effective way of controlling Lego pneumatics with an NXT.

THE AIR SYSTEM

The air system must be robust because the pneumatic cylinders on the grippers cycle about 96 times a minute, which requires a great deal of air.  The air compressor consists of six pumps (with the springs removed) turned by three XL PF motors.  The pressure is measured using a Mindsensors pressure sensor and kept between 10 and 13 psi to maintain good operational speed and gripping capacity.  The whole system will not start until the air pressure reaches a minimum of 8 psi, and an audible alarm sounds if the pressure drops below 8 psi.  At that point, the operator can help the compressor by manually pumping the system up to the required pressure.

The three XL PF motors are powered using a 9V train controller so that consistent power is delivered to the motors.  Air compressors tend to drain batteries very quickly, and using a train controller avoids that cost.

There are also six air tanks for storage, a manual pump, a pressure gauge, and a pressure release valve to purge the system of pressure.  The manual pump is primarily used to assist the compressor if it can’t keep up.

The compressor motors are turned on and off using a Lego servo motor and a PF switch.  As the pressure sensor detects the pressure going above or below the thresholds, the motor flips the switch back and forth to add air or turn the compressor off.
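
A minimal sketch of that on/off (bang-bang) pressure control, assuming a placeholder readPressurePsi() stands in for the Mindsensors pressure sensor driver call and that a short nudge of an NXT motor flips the PF switch; all names and values are illustrative:

// Sketch only: keep the pressure between LOW_PSI and HIGH_PSI by flipping a
// PF switch with an NXT motor. readPressurePsi() is a placeholder for the
// real Mindsensors pressure sensor driver call.
#define LOW_PSI   10
#define HIGH_PSI  13
#define MIN_PSI    8

int readPressurePsi()
{
  return 11;   // stand-in value
}

void flipSwitch(bool turnOn)
{
  // Nudge the switch lever one way or the other (direction and time are assumptions)
  if (turnOn)
    motor[motorB] = 30;
  else
    motor[motorB] = -30;
  wait1Msec(300);
  motor[motorB] = 0;
}

task main()
{
  bool compressorOn = false;
  while (true)
  {
    int psi = readPressurePsi();
    if (psi < LOW_PSI && !compressorOn)
    {
      flipSwitch(true);
      compressorOn = true;
    }
    else if (psi > HIGH_PSI && compressorOn)
    {
      flipSwitch(false);
      compressorOn = false;
    }
    if (psi < MIN_PSI)
      PlayTone(880, 50);   // audible low-pressure alarm
    wait1Msec(200);
  }
}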

THE CONVEYORS

The conveyors are controlled by a dedicated NXT brick.  The timing and speed of the conveyors are critical so that the items are positioned accurately.  The speed of the conveyors is governed by a proportional controller; they were originally controlled using a PID controller, but it turned out that proportional control was adequate.  The conveyor speed can vary from zero up to two hundred inches per minute, but one hundred inches per minute works best for all the robots.
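
A sketch of what a proportional speed controller for one conveyor might look like in ROBOTC; the gain, the ticks-per-inch constant, the base power, and the motor name are assumptions, not Shep's values:

// Sketch only: hold one conveyor at a target speed (inches per minute) with a
// proportional controller driven from the motor encoder.
#define TICKS_PER_INCH  50.0   // assumed encoder ticks per inch of belt travel
#define KP               0.8   // assumed proportional gain
#define BASE_POWER      40     // assumed nominal power near the target speed
#define TARGET_IPM     100.0   // target belt speed

task main()
{
  long lastTicks = nMotorEncoder[motorA];

  while (true)
  {
    wait1Msec(100);   // control period: 0.1 s

    long ticks = nMotorEncoder[motorA];
    // Speed over the last period, converted to inches per minute
    float measuredIpm = ((ticks - lastTicks) / TICKS_PER_INCH) * 600.0;
    lastTicks = ticks;

    // Proportional correction around the nominal power
    float error = TARGET_IPM - measuredIpm;
    int power = BASE_POWER + (int)(KP * error);
    if (power > 100) power = 100;
    if (power < 0)   power = 0;

    motor[motorA] = power;
  }
}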

The NXT brick that controls the conveyors reads the same light signal information as all of the robots, but ignores most of the signals.

Each conveyor is ten feet long.

LIGHT CURTAIN/COLOR READER

The light/color sensors mounted on the conveyors do double duty.  Their default mode is ambient light sensing, but they are frequently switched to color sensing.  A PF LED light is mounted opposite each light sensor to give a high detected light value.  When an item passes between the LED and the light sensor, a low-light condition is detected and the sensor immediately switches into color mode; this can be seen when the sensor briefly emits an RGB light as a brick passes in front of it.  As soon as the color is read, the sensor immediately switches back to ambient light mode and waits for the next item.  When the color is determined, the brick sends a signal to all of the slave bricks and an audible color sound is played.
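
A rough ROBOTC sketch of that mode switching for one NXT 2.0 color sensor; the threshold, the settling delay, and the exact sensor-type constants are assumptions and may differ between ROBOTC versions:

// Sketch only: run the NXT 2.0 color sensor as an ambient light curtain and
// switch it to full-color mode only while a brick blocks the LED beam.
#pragma config(Sensor, S1, curtain, sensorCOLORNONE)   // ambient (no LED) mode

#define DARK_THRESHOLD 30   // assumed value for "beam blocked"

task main()
{
  while (true)
  {
    if (SensorValue[curtain] < DARK_THRESHOLD)     // something is in the beam
    {
      SensorType[curtain] = sensorCOLORFULL;       // switch to color mode
      wait1Msec(50);                               // let the mode change settle
      int color = SensorValue[curtain];            // 1..6 color code
      SensorType[curtain] = sensorCOLORNONE;       // back to the light curtain

      // ...encode 'color' into the light signal and send it to the robots...
    }
    wait1Msec(5);
  }
}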

There is a condition where two bricks pass by both light sensors at the same time.  It is impossible to send two signals at once, so the first item detected takes priority and the second brick’s signal is sent 400 milliseconds later.  A special signal tells the robot to adjust its position timing to account for the 400 ms delay when that brick comes up to be picked up.

THE STRUCTURE

The frame structure holding the robots is highly engineered.  The combined weight of all the robots, together with their constant movement, is a considerable problem.  The main horizontal member is built by layering Technic bricks with plates.  This configuration is very strong and has very little sag.  Movement is also minimized, but not completely eliminated.

The two main posts in the middle carry most of the weight and do a great deal to stop the structure from moving while the robots are operating.  The four outside posts help, but are mostly for support.  The diagonal braces are quite small relative to the size of the other members, but actually do a great deal to stop movement.

All of the posts are made from standard Lego bricks with Technic beams attached around to lock them together.  The structure is completely tied together as one piece, but can be broken down into eight parts for transport.

DEVELOPMENT

I have a personal fascination with this type of robot.  I find the movements mesmerizing and extremely interesting. The movements of the actual robots are extremely fast and accurate and defy belief.  I especially like the fact that the location of the end effector can be precisely calculated from the angular location of the three servo motors positioned at one hundred and twenty degrees from each other.

This is not the first parallel robot that I have built.  My first delta robot was built in 2004 using the Mindstorms RCX and was very crude and not very useful.  After several more attempts, I finally found a design using the Mindstorms NXT system that worked well.  At that time I still hadn’t worked out the kinematics but I found a way to fake the movements by positioning the end effector by hand and reading the encoder values.  Then I used those values to create a series of movements that closely resembled an actual robot.

I have researched for about six years and built this project many times.  This project took about five months to build and program.  It was purely a labor of love for this robot.

I don’t know how to improve on the current design.  As you can tell if you have read this description of the robot, I have exhaustively researched and built toward every goal I set.  Sadly, I believe that I have reached the limit of what can be built using only Lego building elements.

Written by Vu Nguyen

April 20th, 2011 at 4:39 pm

Posted in Cool projects,NXT

I2C on the VEX Cortex

with one comment

The VEX Cortex is a nice platform made by VEX Robotics. It is supported by two programming environments, one of which is ROBOTC. Much to my dismay, the master firmware does not support I2C, which is why ROBOTC does not support it either. I don’t really like it when someone tells me I can’t do something, so I went ahead and remedied the situation.

[Images: Mindsensors Magic Wand controlled by VEX Cortex; Motor MUX and Servo Controller controlled by VEX Cortex]

I spent a few evenings writing and tinkering in ROBOTC on my own bit-banged I2C implementation, which, much to my surprise, worked very well.  First I tested it with the Mindsensors Magic Wand (above left) and later also with the Holit Data Systems Motor MUX and the Mindsensors NXT Servo Controller (above right).  Jesse Flot from the Robotics Academy was kind enough to send me some old VEX cables so I could splice two of them into an NXT cable for I2C. I will post a HOWTO for that at a later date.
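
To give a flavor of what bit-banging I2C on two Cortex digital pins involves, here is a simplified sketch, not Xander's driver: the pin configuration is assumed, the ACK clock is issued but never read back, and the timing is far slower than a real implementation would use.

// Sketch only: clock one byte out over two digital pins, MSB first.
// Assumes dgtl1/dgtl2 are configured as digital outputs for SCL and SDA.
#pragma config(Sensor, dgtl1, SCL, sensorDigitalOut)
#pragma config(Sensor, dgtl2, SDA, sensorDigitalOut)

void i2cStart()
{
  // Start condition: SDA falls while SCL is high
  SensorValue[SDA] = 1;
  SensorValue[SCL] = 1;
  wait1Msec(1);
  SensorValue[SDA] = 0;
  wait1Msec(1);
  SensorValue[SCL] = 0;
}

void i2cWriteByte(ubyte b)
{
  for (int i = 7; i >= 0; i--)
  {
    SensorValue[SDA] = (b >> i) & 1;   // present the bit while SCL is low
    SensorValue[SCL] = 1;              // clock it in
    wait1Msec(1);
    SensorValue[SCL] = 0;
  }
  // Ninth clock for the slave's ACK (not read back in this sketch)
  SensorValue[SCL] = 1;
  wait1Msec(1);
  SensorValue[SCL] = 0;
}

void i2cStop()
{
  // Stop condition: SDA rises while SCL is high
  SensorValue[SDA] = 0;
  SensorValue[SCL] = 1;
  wait1Msec(1);
  SensorValue[SDA] = 1;
}

A real implementation also needs to read the ACK bit back (switching the SDA pin to an input) and run with much shorter delays than the 1 ms waits used here.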

As you can see in the picture on the right, I was already contemplating controlling the omni-wheeled robot with the Motor MUX, and so I did.

The robot is remote-controlled via VEXnet over Wi-Fi (a totally awesome feature that I wish the NXT had). The short video was taken at the RobotMC meeting of 19 March 2011, which happened to coincide with an information day at the technical university where we hold our meetings.

The coolest part is that my driver suite is almost completely portable to the VEX Cortex platform once the NXT I2C subsystem functions are switched out for the Cortex-specific ones. Some NXT dependencies do need to be removed and made more generic; I intend to work on that in the next few weeks.  That would make a very wide range of new sensors available to the VEX Cortex platform.

Original article: [LINK]

Written by Xander Soldaat

March 19th, 2011 at 12:59 pm

NXT Robot: PID Line Follower

with one comment

DiMastero is at it again…

This time he has created a robot that does some very fast line following.

Watch it in action


YouTube Direct Link 

Hardware

This line follower is equipped with three sensors: a light sensor (port 3), a magnetic field sensor (port 2), and an IRLink, the last two of which are made by HiTechnic. The light sensor is used for the robot’s main purpose, line following, while the magnetic field sensor detects whether the robot needs to pause or keep going. The IRLink doesn’t have any function; it’s just there to keep the whole thing symmetrical.

[Images: rear view; close-up of the magnet and magnetic field sensor; side view]

The robot moves using two independently driven motors connected to ports B and C, which form the follower’s back and sides. At the front, next to the light sensor, is a caster wheel.

Detail of caster and light sensor

Programming

The robot was programmed in ROBOTC and runs on PID control; the motors’ built-in PID is off. When the code starts, it takes the black (line) and white (background) light values and averages them to get an offset. It then sets the bias of the HiTechnic magnetic field sensor to 0 while the magnet is in front of it, so that the sensor’s value will change when the magnet is (re)moved.
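
For readers new to PID line following, here is a minimal sketch of the control loop described above; the gains, ports, calibration values, and base speed are placeholders rather than DiMastero's values, and the pause/abort logic tied to the magnetic field sensor is left out.

// Sketch only: PID line follower on the light sensor, steering two motors
// around an offset computed from the black and white readings.
#pragma config(Sensor, S3,     light,  sensorLightActive)
#pragma config(Motor,  motorB, leftM,  tmotorNormal, openLoop, encoder)
#pragma config(Motor,  motorC, rightM, tmotorNormal, openLoop, encoder)

#define KP 1.2     // assumed gains
#define KI 0.02
#define KD 6.0
#define BASE_SPEED 50

task main()
{
  // Offset between line and background (assumed calibration values)
  int black = 30, white = 60;
  float offset = (black + white) / 2.0;

  float integral = 0.0, lastError = 0.0;

  while (true)
  {
    float error = SensorValue[light] - offset;
    integral += error;
    float derivative = error - lastError;
    lastError = error;

    int turn = (int)(KP * error + KI * integral + KD * derivative);

    motor[leftM]  = BASE_SPEED + turn;
    motor[rightM] = BASE_SPEED - turn;

    wait1Msec(10);
  }
}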

Magnet is out of sensor's reach

Next, after a short wait, it starts driving around the NXT test paper, guided by the PID controller. It keeps doing so until the magnet is removed, in which case it pauses the program and turns off the motors. Once the magnet is back in place, the robot keeps going, even if it has been moved somewhere else on the line. If the magnet stays away for too long (more than four seconds), the program shuts down.

You can download the latest version of the code on the downloads page or download version 2.1 (which was the latest version when this page was last updated) below.

Setup and Performance

To start the robot, place it above the middle of the black line you want to follow right after starting the program on the NXT. Then, when it bleeps, move the robot to the left of the line. Make sure you neither touch nor move the magnet.

After a split second, the robot will start to follow the line. To pause it, lift the “tail”, moving the magnet. To get it back on line, let go of the tail. To abort the entire program, hold the tail for four seconds, or until you see the light sensor turn off (it’s in active mode, so the LED will be on all the time when it’s operative).

The robot follows the black line pretty quickly and smoothly.

Written by Vu Nguyen

March 18th, 2011 at 12:03 pm

Posted in Cool projects,NXT

Dancing VEX robot: Bear Bot [Team 4542]

with 2 comments

Thanks to magiccode from the forums for posting this!

Overview

Our robotics team made a semi-humanoid dancing VEX robot with a holonomic drive in place of legs. It has full range of motion in both arms and two planes of motion in its head. It can bend at the waist, and strafe or turn in any direction.

It dances, plays the piano, and beats little kids up. He is an all-around entertainer. We only had about a day to program him, so bear with us… pun intended.

Video


YouTube Direct Link 

Description

There were 3 “cool” things that were done with ROBOTC:

  1. The robot mimics the movements of a human arm holding the VEXnet joystick or VEX accelerometer. This will not work in all directions if the joystick is being used, because the joystick lacks a z-axis, but it will work in all directions with the accelerometer.
  2. There were too many motors to be controlled by one Cortex, so we linked two together by running a male-to-male PWM wire from the digital output port of one to the digital input port of the other.
  3. Programming was made easier by writing a function called moveServo(). The function accepts 3 parameters: the servo to move, the position to which it should move, and the amount of time the move should take (it does not account for changes in battery power). A sketch of one possible implementation follows the signature below.

Code:

moveServo(tMotor servoName, int posToMove, int timeToTake);
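
Magiccode didn't post the body of moveServo(), so here is a hedged sketch of one way such a function could work on the Cortex: step the commanded servo position from its last value to the target over the requested time. The bookkeeping of the "last" position and the step interval are assumptions.

// Sketch only: ease a VEX servo from its last commanded position to a new one
// over roughly timeToTake milliseconds (open-loop; no battery compensation).
int lastPos[10];   // last commanded position per motor port (assumed bookkeeping)

void moveServo(tMotor servoName, int posToMove, int timeToTake)
{
  int start = lastPos[servoName];
  int steps = abs(posToMove - start);
  if (steps == 0)
  {
    motor[servoName] = posToMove;
    return;
  }

  int delayPerStep = timeToTake / steps;   // ms to wait between 1-unit steps
  int direction = 1;
  if (posToMove < start)
    direction = -1;

  for (int i = 1; i <= steps; i++)
  {
    motor[servoName] = start + i * direction;
    wait1Msec(delayPerStep);
  }
  lastPos[servoName] = posToMove;
}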

How it works


YouTube Direct Link 

Written by Vu Nguyen

March 1st, 2011 at 10:18 am

Classic game, Pong, on the NXT

with 4 comments

DiMastero, our recently discovered ROBOTC master, has done it again. He has re-created the classic game Pong on the NXT.

See it in action:


YouTube Direct Link 

How it works:

From DiMastero:

It uses my previous bouncing-ball program, combined with a player-controlled bat, different levels, random difficulty increases, and a point display. The ball starts off at a random position within the playing field, to avoid patterns, after which it bounces off every wall (or bat) it meets at a 90° angle. The bat is controlled by turning the wheel connected to motor A, and it’s stopped whenever it tries to exit the allowed area. Once the ball gets one pixel away from the bat, it compares its own position to the bat’s and bounces away if the positions line up, increasing your point count by one for each hit. If they don’t, the game freezes and the famous “you lose” appears on the display, after which the program is aborted.

The game has different levels, too; the level you’re playing at increases by one whenever your points go above 150% of the last level you passed (if the last level was ten, you’d need to get above 15, then 23, then 35, etc.), so it takes longer as you get better. Each time the level increases, the NXT randomly picks one of the following to make your life more difficult (see the sketch after this list):

  • Increasing the ball’s speed, by decreasing the waiting time at the end of the main loop
  • Decreasing the bat’s speed, by increasing the amount of encoder ticks it takes to move one pixel
  • Making the bat smaller, by four pixels
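
A sketch of how that random pick could look in ROBOTC; the variable names and adjustment amounts are illustrative, not taken from DiMastero's program:

// Sketch only: each time the level goes up, pick one of three penalties at random.
int ballWaitMs    = 100;  // delay at the end of the main loop (ball speed)
int ticksPerPixel = 10;   // encoder ticks needed to move the bat one pixel
int batSizePixels = 16;   // bat height in pixels

void increaseDifficulty()
{
  int pick = random[2];              // 0, 1, or 2
  if (pick == 0 && ballWaitMs > 10)
    ballWaitMs -= 10;                // faster ball
  else if (pick == 1)
    ticksPerPixel += 2;              // slower bat
  else if (batSizePixels > 4)
    batSizePixels -= 4;              // smaller bat
}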

Photos:

Pong Screenshot

Increased Difficulty

Written by Vu Nguyen

January 21st, 2011 at 8:44 am

Posted in Cool projects,NXT

NXT Color sorter using only ONE motor

without comments

All credit goes to ivanseidel of our ROBOTC forums. Great job!

ivanseidel noticed a few projects that used 2, 3, 4, 5, or even 6 motor assemblies to sort LEGO pieces into separate containers. He then wondered, “Where are the one-motor sorters?” So he went to work.

Below you will find a video of ivanseidel’s amazing creation, the one motor color sorter:


YouTube Direct Link 

On the actual coding algorithm, ivanseidel says, “The code is also a new thing that adapts to the colors very easily, like a fuzzy algorithm.”

Code:

Below is the code for the main program:

#pragma config(Sensor, S1,     L,                   sensorLightActive)
#pragma config(Sensor, S4,     T,                   sensorTouch)
#pragma config(Motor,  motorA,          M1,            tmotorNormal, openLoop, encoder)

//Purpose: color sorter with LEGO Mindstorms NXT
//Code by Ivan Seidel
//http://techlego.blogspot.com/

//Please just keep my credits!

#include "MV-lib.c"

#define SpeedNormal           56
#define DifMaxLight           50
#define RotateAngleFirst      200
#define RotateAngleMultiple   180
#define RotateAngleBack       150
#define RotateAngleAfterT     0
#define MaxColors             7

int ArrayLights[10];
int ColorsFound = 0;
int LightValue;

//Reads the light sensor value and checks whether it is within range of a value already stored in the array
int IsInRange(int CheckValue)
{
  int i = 0, found = -1;
  while (i < ColorsFound)
  {
    if (CheckValue <= ArrayLights[i] + DifMaxLight && CheckValue >= ArrayLights[i] - DifMaxLight && ArrayLights[i] != 0)
    {
      found = i;
      break;
    }
    i++;
  }
  return found;
}

//Stores a newly seen light value as a new color
void InsertValue(int InsertValue)
{
  ArrayLights[ColorsFound] = InsertValue;
  ColorsFound++;
}

task main()
{
  int KeepRuning = true;
  while (KeepRuning)
  {
    //Reset all values and return to the start position (run until the touch sensor is pressed)
    while (SensorValue[T] == 0)
    {
      motor[M1] = SpeedNormal;
    }
    MV_StopMotors();
    MV_Vira(-SpeedNormal, RotateAngleAfterT, M1);
    wait10Msec(50);

    //Start the main sorting routine: average several raw light readings
    int samples = 10, LightValue = 0;
    for (int y = 0; y < samples; y++)
    {
      LightValue += SensorRaw[L];
    }
    LightValue = LightValue / samples;

    MV_Vira(SpeedNormal / 1.5, RotateAngleFirst, M1);

    bool looping = true;
    while (looping)
    {
      int foundColorPosition = IsInRange(LightValue);
      if (foundColorPosition != -1)
      {
        //Known color: beep its index, rotate to the matching bin, then back up
        for (int i = 0; i < foundColorPosition + 1; i++)
        {
          PlayTone(500, 5);
          wait10Msec(10);
        }
        wait10Msec(50);
        MV_Vira(SpeedNormal, (foundColorPosition + 1) * RotateAngleMultiple, M1);
        MV_Vira(-SpeedNormal * 1.6, RotateAngleBack, M1);
        wait10Msec(50);
        looping = false;
      }
      else
      {
        //Unknown color: remember it as a new color
        InsertValue(LightValue);
      }
    }
  }
}

Below is the code for MV-lib.c:

//----------------------------------------------//
//----------------------------------------------//
//Library containing the motor movement functions//
//----------------------------------------------//
//----------------------------------------------//
#define MV_EMotor    motorA
#define MV_DMotor    motorC
#define MV_GarraMotor     motorB
#define MV_TimeLimit 70 //7 seconds (time100 counts in 100 ms units)

void MV_StopMotors()
{
  motor[MV_EMotor]     = 0;
  motor[MV_DMotor]     = 0;
  motor[MV_GarraMotor] = 0;
}

//Drive straight (optionally with steering), with no distance limit
void MV_Reto_Unlimited(int forca, int steering = 0)
{
  motor[MV_EMotor] = forca + steering;
  motor[MV_DMotor] = forca - steering;
}

//Spin in place, with no limit
void MV_Spin_Unlimited(int forca)
{
  motor[MV_EMotor] = forca;
  motor[MV_DMotor] = -forca;
}

//Turn a single motor ('lado') by 'degree' encoder counts, or for 'tempo' time units
void MV_Vira(int forca, int degree, byte lado, int tempo = 0)
{
  time100[T2] = 0;
  if (degree > 0)
  {
    nMotorEncoder[lado] = 0;
    while (nMotorEncoder[lado] < degree && nMotorEncoder[lado] > -degree && time100[T2] < MV_TimeLimit)
    {
      motor[lado] = forca;
    }
    MV_StopMotors();
  }
  else if (tempo > 0)
  {
    time100[T1] = 0;
    while (time100[T1] < tempo)
    {
      motor[lado] = forca;
    }
    MV_StopMotors();
  }
}

//Spin both drive motors by 'degree' encoder counts, or for 'tempo' time units
void MV_Spin(int forca, int degree, int tempo = 0)
{
  time100[T2] = 0;
  nMotorEncoder[MV_EMotor] = 0;
  nMotorEncoder[MV_DMotor] = 0;
  if (degree > 0)
  {
    while (nMotorEncoder[MV_EMotor] < degree && nMotorEncoder[MV_EMotor] > -degree && time100[T2] < MV_TimeLimit)
    {
      motor[MV_EMotor] = forca;
      motor[MV_DMotor] = -forca;
    }
    MV_StopMotors();
  }
  else if (tempo > 0)
  {
    time100[T1] = 0;
    while (time100[T1] < tempo)
    {
      motor[MV_EMotor] = forca;
      motor[MV_DMotor] = -forca;
    }
    MV_StopMotors();
  }
}

Written by Vu Nguyen

January 19th, 2011 at 8:33 am

Posted in Cool projects,NXT

CORTEX-Controlled Roomba robot

without comments

All credit goes to Paul Bouchier for creating this amazing robot!

Roomba Robot

I’m sure you’ve already heard about Roomba robots. You know, those great vacuum-replacing household cleaning robots that autonomously go around your house terrorizing your pets while sucking up all of the dirt on your carpets, and even your non-carpeted floors too!

Paul was able to use a Cortex to take control of a Roomba himself. He achieved this with, ahem, ROBOTC!!!! (of course).

You can take a look at the video below to learn more about Paul’s creation:


YouTube Direct Link 

Photos

RoombaVEX

RoombaVEX 2

Written by Vu Nguyen

January 7th, 2011 at 9:44 am