Re: Autonomous robot's navigation
>> Can you explain to us how this could work, on a LEGO NXT platform, or a VEX or any of the platforms discussed on this forum?
First, connect your robot platform to AVM Navigator using the control variables:
Use the NV_TURRET_BALANCE variable for camera turning:
NV_TURRET_BALANCE - indicates the amount of turn, in degrees.
This value ranges from -100 to 100, with zero meaning straight ahead.
Use the NV_L_MOTOR and NV_R_MOTOR variables for motor control; they range
from -100 to 100 ("-100" - full power backwards,
"100" - full power forwards, "0" - motor off).
You can also use the alternative control variables
(motors range from 0 to 255, with 128 being neutral):
NV_L_MOTOR_128, NV_R_MOTOR_128 - motor control
NV_TURRET_128 - control of camera turning
NV_TURRET_INV_128 - inverted control of camera turning
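If your code works in one range and your platform expects the other, the two forms map onto each other linearly around the neutral point. A minimal sketch of the conversion; the function names are illustrative and not part of AVM Navigator or RoboRealm:

```python
def to_128_range(value: int) -> int:
    """Map a -100..100 control value to the 0..255 form (128 = neutral)."""
    value = max(-100, min(100, value))          # clamp to the documented range
    if value < 0:
        return round(128 + value * 128 / 100)   # -100 -> 0 (full backwards)
    return round(128 + value * 127 / 100)       # 100 -> 255 (full forwards)

def from_128_range(value_128: int) -> int:
    """Inverse mapping: a 0..255 value back to -100..100."""
    value_128 = max(0, min(255, value_128))
    if value_128 < 128:
        return round((value_128 - 128) * 100 / 128)
    return round((value_128 - 128) * 100 / 127)
```

The scale factors differ on each side of neutral because 128 sits 128 steps above 0 but only 127 steps below 255.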
For the connection, use the "Lego NXT"
or "Vex Controller"
modules of the RoboRealm package.
You can find more information in this topic
or on the AVM Navigator help page.
If you want to reproduce my experiments with AVM Navigator on your robot platform, then let's try to do that step by step in this thread.

>> What camera are you using?
In my experiments I use a Logitech HD Webcam C270.

>> Describe how your algorithms work
In our case, visual navigation for the robot is just a sequence of images with associated coordinates that were memorized inside the AVM tree. The navigation map is essentially a set of data (X, Y coordinates and azimuth) associated with images inside the AVM tree. We can think of the navigation map as an array of pairs [image -> X, Y and azimuth], because the tree data structure is needed only for fast image search. The AVM algorithm can recognize an image that has been scaled, and this scaling is also taken into account when the actual location coordinates are calculated.
Let's call the pair [image -> X, Y and azimuth] a location association.
So, each location association is shown on the navigation map of the AVM Navigator dialog window as a yellow strip with a small red point in the middle. You can also see the location association marks in the camera view as thin red rectangles in the center of the screen.
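Conceptually, the map described above is just a plain list of such pairs; the tree only makes the image lookup fast. A sketch of that idea, where the field names are illustrative and not the actual AVM internals:

```python
from dataclasses import dataclass

@dataclass
class LocationAssociation:
    """One [image -> X, Y and azimuth] pair from the navigation map."""
    image_key: bytes   # the memorized image patch (stand-in for the AVM tree entry)
    x: float           # position relative to the start position
    y: float
    azimuth: float     # heading, in degrees

# The AVM tree only accelerates image search; conceptually the navigation
# map is just this list of associations.
navigation_map: list[LocationAssociation] = [
    LocationAssociation(b"patch-1", x=0.0, y=0.0, azimuth=0.0),
    LocationAssociation(b"patch-2", x=1.5, y=0.2, azimuth=10.0),
]
```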
And so, when you point to a target position in "Navigation by map" mode, the navigator builds a route from the current position to the target point as a chain of waypoints. The navigator then chooses the nearest waypoint and starts moving towards it. If the current robot heading does not match the direction to the actual waypoint, the navigator tries to turn the robot body in the correct direction. When the actual waypoint is reached, the navigator takes the direction to the next nearest waypoint, and so on, until the target position is reached.

Odometry / localization
The robot sets marks (it writes the central part of the screen image, with associated data, to the AVM tree). The marker data (inside AVM) contains the horizon angle (azimuth), the path length from the start, and the X, Y position (relative to the start position). The marker data is derived from mark tracking (horizontal shift for azimuth, and change of mark scaling for path length measurement). Generalizing over all recognized marks in the input image gives the actual azimuth and path length. If we have the motion direction, the path length from the previous position, and the x, y coordinates of the previous position, then we can calculate the coordinates of the current position. This information is written to the new mark (inside AVM) when it is created, and so forth.
The set of marks inside the AVM gives a map of locations (the robot sees the marks and recognizes its location).
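The position update described above is plain dead reckoning. A minimal sketch, assuming azimuth is measured in degrees from the X axis (the exact angle conventions used by AVM Navigator are not documented, so treat this as an illustration only):

```python
import math

def next_position(x_prev: float, y_prev: float,
                  azimuth_deg: float, path_length: float) -> tuple[float, float]:
    """Advance the previous (x, y) by path_length along the current heading."""
    heading = math.radians(azimuth_deg)
    x = x_prev + path_length * math.cos(heading)
    y = y_prev + path_length * math.sin(heading)
    return x, y
```

Each new mark then stores the result, so the error of each step accumulates along the path, which is why recognizing previously memorized marks matters for keeping the estimate anchored.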
You can also find a short description on Wikipedia.

>> So that I can implement them and share my results
To navigate this way, you should develop your own image recognition algorithm with a similarly low False Acceptance Rate (about 0.01%). The AVM algorithm memorized and recognized about a thousand unique images for successful navigation in the video above.
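For reference, the False Acceptance Rate is simply the fraction of images that should not match but are accepted anyway. A quick sketch of how you might measure it for your own recognizer (the recognizer here is a hypothetical stand-in, not the AVM algorithm):

```python
def false_acceptance_rate(recognizer, impostor_images) -> float:
    """Fraction of non-matching images that the recognizer wrongly accepts."""
    false_accepts = sum(1 for img in impostor_images if recognizer(img))
    return false_accepts / len(impostor_images)

# Example with a dummy recognizer that wrongly accepts 1 image in 10000,
# i.e. the 0.01 % FAR cited above:
accepts = [False] * 9999 + [True]
far = false_acceptance_rate(lambda i: accepts[i], range(10000))
print(f"{far:.2%}")  # prints 0.01%
```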
Unfortunately, I do not provide source code or detailed documentation of the AVM algorithm within the AVM Navigator project.
However, I recently found an open source algorithm that uses a template principle like AVM:
-= BiGG Algorithm =-
Documentation
Source code
I hope this information helps you with your project.