Autonomous robot's navigation

Author:  EDV [ Tue Sep 28, 2010 7:31 am ]
Post subject:  Autonomous robot's navigation

Since October 2007 I have been developing a new object recognition algorithm, "Associative Video Memory" (AVM).

The AVM algorithm uses a principle of multilevel decomposition of recognition matrices; it is robust against camera noise, scales well, and is simple and quick to train.

Now I want to introduce my experiments with robot navigation based on visual landmark beacons: "Follow me" and "Walking by gates".

Follow Me

Walking from p2 to p1 and back

I implemented both algorithms in the Navigator plugin for use within the RoboRealm software.
So you can now review my experiments with the AVM Navigator.

The Navigator module has two basic algorithms:

-= Follow me =-
The navigation algorithm attempts to align the positions of the tower and the
body of the robot with the center of the first recognized object in the tracking
list; if the object is far away the robot comes nearer, and if it is too close
the robot rolls back.
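The rule above can be sketched as one control function. This is a hypothetical Python sketch, not the module's actual code: the function name, the normalized size input, and the thresholds are my assumptions.

```python
def follow_me(obj_cx, frame_w, obj_size, near=0.5, far=0.2):
    """Sketch of the 'Follow me' rule: steer toward the object's
    horizontal center; approach if the object looks small (far away),
    roll back if it looks large (too close)."""
    # Steering command in -100..100, zero when the object is centered.
    turn = int(100 * (obj_cx - frame_w / 2) / (frame_w / 2))
    if obj_size < far:        # small on screen -> far away -> come nearer
        speed = 50
    elif obj_size > near:     # large on screen -> too close -> roll back
        speed = -50
    else:
        speed = 0             # comfortable distance -> hold position
    return turn, speed
```

For example, an object centered in a 640-pixel-wide frame at a comfortable size yields no turn and no drive command.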

-= Walking by gates =-
The gate data contains weights for the seven routes, indicating the importance of each gateway for each route. A "horizon" indicator was added at the bottom of the screen; it shows the direction in which to adjust the robot's motion to continue along the route. A gate field is painted blue if the gate does not participate in the current route (weight 0), while warmer colors (up to yellow) show the gradation of the gate's "importance" in the current route.

* The route training procedure
To train a route, indicate the actual route (button "Walking by way") in "Nova gate" mode and then drive the robot manually along the route (the gates will be installed automatically). At the end of the route, click the "Set checkpoint" button; the robot will then turn several times on one spot and mark its current location as a checkpoint.

So, if the robot is walking by gates and suddenly sees an object that it can recognize, the robot will navigate by the "Follow me" algorithm.

If the robot can't recognize anything (gate or object), it will turn around on the spot,
searching (it may twitch from time to time in a random way).

For more information see also thread: "Autonomous robot's navigation" at Trossen Robotics.

Author:  Aswin [ Wed Sep 29, 2010 9:43 am ]
Post subject:  Re: Autonomous robot's navigation

This is extremely cool!!!

It seems that I'm your candidate to test this out on an NXT. I have a wireless cam, a RoboRealm license, and I know how to communicate from a PC to an NXT over BT. I'm also very much interested in autonomous robots and path finding.

Does it recognise 3D objects (a chair in the middle of a room, for example), or do objects have to have a more or less stable 2D representation?

Author:  EDV [ Wed Sep 29, 2010 10:01 am ]
Post subject:  Re: Autonomous robot's navigation

You can train the AVM algorithm (object recognition mode) on a 3D object from different angles, and when AVM later sees the object at one of those angles, it will be recognized.

Author:  EDV [ Wed Sep 29, 2010 10:15 am ]
Post subject:  Re: Autonomous robot's navigation

See also, for example:

Target Training

Author:  EDV [ Wed May 04, 2011 3:49 am ]
Post subject:  Re: Autonomous robot's navigation

AVM Navigator v0.7 is now released, and you can download it from the RoboRealm website.
The new version adds two modes: "Marker mode" and "Navigate by map".

Marker mode


Marker mode builds a navigation map automatically by space marking. You just manually lead the robot along some path and repeat it several times for good map detail.

Navigation by map


In this mode you point to the target position on the navigation map; the robot then plans a path (maze solving) from its current location to the target position (big green circle) and begins automatically walking to it.


For external control of "Navigate by map" mode, new module variables have been added:

NV_LOCATION_X - current location X coordinate;
NV_LOCATION_Y - current location Y coordinate;
NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians);

Target position at the navigation map
NV_IN_TRG_POS_X - target position X coordinate;
NV_IN_TRG_POS_Y - target position Y coordinate;

NV_IN_SUBMIT_POS - submits the target position (the value should be changed from 0 to 1 to trigger the action).
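Driving this mode from an external script then amounts to writing the target coordinates and pulsing the submit flag. A minimal sketch, assuming a plain dictionary stands in for RoboRealm's variable store:

```python
def submit_target(variables, x, y):
    """Write a target position and pulse NV_IN_SUBMIT_POS from 0 to 1
    so the module latches the new target. The dict here stands in for
    RoboRealm's variable store; that substitution is an assumption."""
    variables["NV_IN_TRG_POS_X"] = x
    variables["NV_IN_TRG_POS_Y"] = y
    variables["NV_IN_SUBMIT_POS"] = 0   # make sure a rising edge occurs
    variables["NV_IN_SUBMIT_POS"] = 1   # 0 -> 1 triggers the action
    return variables
```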


Quake 3 Odometry Test

Navigation by map

Visual Landmark Navigation

Author:  EDV [ Sat Jun 04, 2011 2:14 pm ]
Post subject:  Re: Autonomous robot's navigation

Quake 3 Mod


Don't have a robot just yet? Then click here to view the manual that explains how to set up RoboRealm
with the AVM module to control movement and process images from the Quake first-person video game.
This allows you to work with visual odometry techniques without needing a robot!

The additional software needed for this integration can be downloaded here.

Author:  EDV [ Mon Jun 06, 2011 3:23 pm ]
Post subject:  Re: Autonomous robot's navigation

Is it possible to play with a virtual robot in "Navigation by map" mode?



Just look into the documentation and download the "AVM Quake 3 mod" installation.

Author:  EDV [ Tue Aug 09, 2011 4:07 pm ]
Post subject:  Re: Autonomous robot's navigation

I have made a new plugin for RoboRealm:


Digital Video Recording system (DVR)

You can use the "DVR Client-server" package as a video surveillance system in which parametric data
(such as VR_VIDEO_ACTIVITY) from different video cameras helps you search for the video fragment
you are looking for.

You can use the "DVR Client-server" package as a powerful instrument for debugging your video processing
and control algorithms: it provides access to the values of your algorithm variables that were archived
during recording.

Technical Details

- a ring video/parametric archive with a duration of 1-12 months;

- a configurable database record (for parametric data) with a maximal length of 190 bytes;

- writing of parameters to the database with 250 ms discretization;

- the DVR Client can work simultaneously with four databases, which can be located on remote computers.


Author:  EDV [ Mon Oct 03, 2011 1:44 pm ]
Post subject:  Re: Autonomous robot's navigation

Mel wrote:
Hey EDV!
I finally got my hands on a Roomba robot that I could try with the Nav programs. I went through all of the items and tutorials. When I placed the robot in the NAV mode, it moved. All others did not move unless I used the arrows to train them. The Nav by map mode showed the progress, but I could not get it to move by clicking the left mouse button; when I clicked, nothing happened. I would like to make that work. Can it work on its own, or do I have to train it in one of the other modes?

I prepared a simple video tutorial, "Route training and navigation by map":


See more details about tuning the "Marker mode" and "Navigation by map" modes.



Author:  EDV [ Tue May 08, 2012 11:53 am ]
Post subject:  Re: Autonomous robot's navigation

AVM Navigator help page was updated! :wink:

Author:  miki [ Tue May 08, 2012 1:20 pm ]
Post subject:  Re: Autonomous robot's navigation

Wow, great job!
I just discovered your website and there is a lot to read ... and learn!! :-)

Author:  EDV [ Sun Jul 29, 2012 11:09 am ]
Post subject:  Re: Autonomous robot's navigation

Here is a rather difficult route that was passed by the robot
with the help of AVM Navigator (route training and passing):

Autonomous navigation view from outside:

Author:  Spiked3 [ Sat Aug 04, 2012 9:34 pm ]
Post subject:  Re: Autonomous robot's navigation

I know we almost had this conversation once before, but can we try again?

Can you explain to us how this could work, on a LEGO NXT platform, or a VEX or any of the platforms discussed on this forum?

What camera are you using? How can I achieve your results, preferably using RobotC? If I cannot do this using RobotC, that's fine, just say so.

If the project doesn't fit the hardware platforms being discussed here, can you at least, without pointing me to another web site, describe how your algorithms work, so that I can implement them and share my results in RobotC?

If you can only point me to another web site, please don't - you already have.

Author:  EDV [ Sun Aug 12, 2012 12:37 pm ]
Post subject:  Re: Autonomous robot's navigation

>> Can you explain to us how this could work, on a LEGO NXT platform, or a VEX or any of the platforms discussed on this forum?

First, you should connect your robot platform to AVM Navigator with the help of its control variables:

Use the variable NV_TURRET_BALANCE for camera turning:

NV_TURRET_BALANCE - indicates the turn amount in degrees.
This value ranges from -100 to 100, with forward being zero.

Use the NV_L_MOTOR and NV_R_MOTOR variables for motion control; they range
from -100 to 100 ("-100" - full power backwards,
"100" - full power forwards, "0" - motor off).

You can also use alternative control variables
(motors range from 0 to 255 with 128 being neutral):

NV_L_MOTOR_128, NV_R_MOTOR_128 - motors control
NV_TURRET_128 - control of camera turning
NV_TURRET_INV_128 - inversed control of camera turning
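Converting a command between the two scales is a plain linear mapping. A sketch, assuming the endpoints line up as -100 -> 0, 0 -> 128, 100 -> 255 (the exact rounding the module applies is an assumption):

```python
def to_128_range(value):
    """Map a -100..100 motor/turret command to the alternative
    0..255 scale with 128 as neutral."""
    return round((value + 100) * 255 / 200)
```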

For the connection, use the "Lego NXT" or "Vex Controller" modules of the RoboRealm package.

You can find more information in this topic or on the AVM Navigator help page.

If you want to try my experiments with AVM Navigator on your robot platform, then let's do that step by step in this thread.

>> What camera are you using?

In my experiments I use a Logitech HD Webcam C270.

>> Describe how your algorithms work

In our case, visual navigation for the robot is just a sequence of images with associated coordinates, memorized inside the AVM tree. The navigation map as a whole is the set of data (X, Y coordinates and azimuth) associated with images inside the AVM tree. We can imagine the navigation map as an array of pairs [image -> X, Y and azimuth], because the tree data structure is needed only for fast image searching. The AVM algorithm can recognize an image that has been scaled, and this scaling is also taken into consideration when the actual location coordinates are calculated.

Let's call the pair [image -> X, Y and azimuth] a location association.

Each location association is indicated on the navigation map of the AVM Navigator dialog window as a yellow strip with a small red point in the middle. You can also see location association marks in the camera view as thin red rectangles in the center of the screen.

When you point to a target position in "Navigation by map" mode, the navigator builds a route from the current position to the target point as a chain of waypoints. The navigator then chooses the nearest waypoint and starts moving in its direction. If the robot's current direction does not correspond to the direction to the actual waypoint, the navigator tries to turn the robot body toward the correct direction. When the actual waypoint is reached, the navigator takes the direction to the next nearest waypoint, and so on until the target position is achieved.
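The turn decision at each waypoint reduces to a bearing computation. A sketch under assumed names and axis conventions (the module's internal math is not published):

```python
import math

def heading_error(robot_x, robot_y, robot_angle, wp_x, wp_y):
    """Bearing to the next waypoint minus the robot's current heading
    (radians), wrapped to [-pi, pi]; a nonzero result means the robot
    body should turn before driving forward."""
    bearing = math.atan2(wp_y - robot_y, wp_x - robot_x)
    error = bearing - robot_angle
    # atan2 of (sin, cos) wraps the difference back into [-pi, pi]
    return math.atan2(math.sin(error), math.cos(error))
```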

* Odometry / localization

The robot sets marks (it writes the central part of the screen image, with associated data, to the AVM tree). The marker data (inside AVM) contains the horizon angle (azimuth), the path length from the start, and the X, Y position (relative to the start position). The marker data is derived from mark tracking: horizontal shift gives the azimuth, and the change in mark scaling gives the path length measurement. Generalizing over all the recognized mark data in the input image gives the actual azimuth and path length. If we have the motion direction, the path length travelled from the previous position, and the X, Y coordinates of the previous position, then we can calculate the coordinates of the current position. This information is written into the new mark (inside AVM) when it is created, and so forth.

The set of marks inside the AVM gives a map of locations (the robot sees the marks and recognizes its location).
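The dead-reckoning step described above can be written out as one position update. This is a sketch; the axis convention, with the azimuth measured in radians from the X axis, is my assumption:

```python
import math

def odometry_step(x, y, azimuth, path_delta):
    """Given the previous position, the current azimuth (radians), and
    the path length travelled since the last mark, compute the
    coordinates stored with the next mark."""
    return (x + path_delta * math.cos(azimuth),
            y + path_delta * math.sin(azimuth))
```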

You can also find a short description on Wikipedia.

>> So that I can implement them and share my results

To navigate in this way, you would need to develop your own image recognition algorithm with a similarly low False Acceptance Rate (about 0.01%). The AVM algorithm memorized and recognized about a thousand unique images for the successful navigation in the video above.

Unfortunately, I do not provide source code or detailed documentation of the AVM algorithm within the AVM Navigator project.

However, I recently found an open source algorithm that uses a template principle similar to AVM:

-= BiGG – Algorithm =-


Source code

I hope that this information could help you in your project.

Author:  EDV [ Mon Oct 08, 2012 5:49 am ]
Post subject:  Re: Autonomous robot's navigation

AVM Navigator v0.7.4.2 update


- The indication drawing was moved to the ::Annotate method

- A 3D marker of the robot's target position was added to the camera view


See here for all the other changes.
