The AVENUE Project
Fig.1: A 3-D model of a building
AVENUE stands for Autonomous Vehicle for Exploration and
Navigation in Urban Environments. The project targets the
automation of the urban site modeling process. The main goal is to
build not only realistic-looking but also geometrically accurate
and photometrically correct models of complex outdoor urban
environments. These environments are typified by large 3-D structures
that encompass a wide range of geometric shapes and a very large scope
of photometric properties.
The models are needed in a variety of applications, such as city
planning, urban design, historical preservation and archaeology, fire
and police planning, military applications, virtual and augmented
reality, geographic information systems and many others. Currently,
such models are typically created by hand, which is extremely slow and
error prone. AVENUE addresses these problems by building a mobile
system that will autonomously navigate around a site and create a
model with minimum human interaction, if any.
Video: AVENUE in action
The task of the mobile robot is to go to desired locations and acquire
requested 3-D scans and images of selected buildings. The locations
are determined by the sensor planning (a.k.a. view planning)
system and are used by the path planning system to generate reliable
trajectories which the robot then follows. When the robot arrives at
the target location, it uses the sensors to acquire the scans and
images and forwards them to the modeling system. The modeling system
registers and incorporates the new data into the existing partial
model of the site (which in the beginning could be empty). After
that, the view planning system decides upon the next best data
acquisition location and the above steps repeat. The process starts
from a certain location and gradually expands the area it covers until
a complete model of the site is obtained.
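The acquisition cycle described above can be sketched in code. This is a hypothetical illustration, not the project's actual software: the names (`SiteModel`, `ViewPlanner`, `build_site_model`) and the trivial "pop the next candidate" planner are stand-ins for the real view planning, path planning, and registration components.

```python
"""Toy sketch of the iterative site-modeling loop: plan a view,
drive there, scan, register, repeat until the site is covered.
All class and function names are illustrative, not AVENUE's API."""


class SiteModel:
    """Partial 3-D site model; here just the accumulated scans."""

    def __init__(self):
        self.scans = []

    def register(self, scan):
        # AVENUE registers new range + image data against the existing
        # partial model; this stand-in simply accumulates the scans.
        self.scans.append(scan)


class ViewPlanner:
    """Chooses the next best acquisition location from those remaining."""

    def __init__(self, candidate_locations):
        self.remaining = list(candidate_locations)

    def next_view(self, model):
        # Real view planning scores candidates against the partial model;
        # this stand-in just takes the next unvisited candidate.
        return self.remaining.pop(0) if self.remaining else None


def build_site_model(planner, acquire):
    """Expand the partial model until no informative viewpoints remain."""
    model = SiteModel()
    while (target := planner.next_view(model)) is not None:
        # Path planning and motion control would drive the robot to
        # `target` here before the sensors fire.
        model.register(acquire(target))
    return model


if __name__ == "__main__":
    planner = ViewPlanner([(0, 0), (10, 0), (10, 10)])
    model = build_site_model(planner, acquire=lambda loc: {"at": loc})
    print(len(model.scans))  # 3 scans incorporated
```

The loop terminates when the planner returns no target, mirroring the page's description of gradually expanding coverage until the model is complete.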
The entire task is complex and requires the solution of a number of subproblems:
- the creation of complete 3-D models of buildings and other large structures
- the fusion of range and image data
- the automated planning of new viewpoints
- the automated acquisition of range and image data
The modeling and view planning aspects have been addressed in the
work of Ioannis Stamos, a former member of our group.
The problem of the automated data acquisition is further decomposed into:
- a mobile platform to physically transport and operate the acquisition devices
- a software architecture for the mobile system
- localization components to determine the position and orientation of the platform
- a motion control component to control the motion of the platform
- path planning (by Paul Blaer)
- a user interface (by Ethan Gold)
Fig.2: Our mobile platform
The robot that we use is an ATRV-2 model manufactured by Real World
Interface, Inc. (now iRobot).
It has a maximum payload of 100 kg, and we are
trying to make good use of that. To the twelve sonars that come
with the robot we have added numerous additional sensors and periphery:
- a pan-tilt unit holding a color CCD camera for navigation and localization
- a Cyrax laser range scanner with extremely high scan quality and a
100 m operating range.
- a GPS receiver working in a carrier-phase differential
(also known as real-time kinematic) mode. The base station is
installed on the roof of one of the tallest buildings on our campus.
It provides differential corrections via a radio link and over the network.
- an integrated HMR-3000 module consisting of a digital compass and
a 2-axis tilt sensor
- an 11Mb/s IEEE 802.11b wireless network for continuous connectivity
with remote hosts during autonomous operations. Numerous base stations
are being installed on campus to extend the range of network connectivity.
The robot and all devices above are controlled by an on-board dual Pentium
III 500 MHz machine with 512 MB RAM running Linux.
Fig.3: Software Architecture
We have designed a distributed object-oriented software architecture
that facilitates the coordination of the various components of the
system. It is based on Mobility
-- a robot integration
software framework developed by iRobot -- and makes heavy use of distributed objects.
The main building blocks are concurrently executing distributed
software components. Components can communicate (via IPC) with one
another within the same process, across processes and even across
physical hosts. Components performing related tasks are grouped into
servers. A server is a multi-threaded program that handles an entire
aspect of the system, such as navigation control or robot interfacing.
Each server has a well-defined interface that allows clients to send
commands, check its status or obtain data.
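The server pattern described above can be illustrated with a small sketch. The real system is built on iRobot's Mobility framework, so nothing here reflects its actual API; this is a generic, assumed shape for a multi-threaded server exposing the command / status / data interface the text describes.

```python
"""Generic sketch of the server pattern described above: a server runs
its work concurrently and exposes a small interface through which
clients send commands, check status, and obtain data. Names and the
string-based command format are illustrative assumptions, not the
Mobility framework's actual API."""

import queue
import threading


class Server:
    def __init__(self, name):
        self.name = name
        self.commands = queue.Queue()  # clients enqueue commands here
        self.status = "idle"
        self.data = None
        self._thread = threading.Thread(target=self._run, daemon=True)

    # --- client-facing interface ---
    def send(self, cmd):
        self.commands.put(cmd)

    def get_status(self):
        return self.status

    def get_data(self):
        return self.data

    # --- internal worker loop, one thread per server ---
    def _run(self):
        while True:
            cmd = self.commands.get()
            if cmd == "stop":
                break
            self.status = "busy"
            self.data = f"{self.name} handled {cmd}"  # stand-in for real work
            self.status = "idle"

    def start(self):
        self._thread.start()

    def join(self):
        self.send("stop")
        self._thread.join()


if __name__ == "__main__":
    nav = Server("NavServer")
    nav.start()
    nav.send("goto 12.0 34.0")
    nav.join()
    print(nav.get_data())  # NavServer handled goto 12.0 34.0
```

In the real architecture the components inside a server also talk to components in other processes and on other hosts via IPC; a queue within one process is the simplest stand-in for that channel.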
The hardware is accessed and controlled by seven servers. A
designated server, called NavServer, builds on top of the
hardware servers and provides localization and motion control services
as well as a higher-level interface to the robot from remote hosts.
Components that are too computationally intensive (e.g. the modeling
components) or that require user interaction (e.g. the user interface)
reside on remote hosts and communicate with the robot over the wireless
network.
Fig.4: Open space localization results
Fig.5: Visual localization results
Our localization system employs two methods.
The first method uses odometry, the digital compass/pan-tilt module
and the global positioning sensor for localization in open space. An
extended Kalman filter integrates the sensor data and keeps track of
the uncertainty associated with it. When the global positioning data
is reliable, the method is used as the only localization method. When
the data deteriorates, the method detects the increased uncertainty
and seeks additional data by invoking the second localization method.
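The uncertainty-gated behavior of the first method can be shown with a one-dimensional toy. The real system runs a full extended Kalman filter over the robot's pose; the scalar state, noise values, and threshold below are made-up illustrations of the same idea: prediction grows the variance, a measurement update shrinks it, and a variance above threshold triggers the fallback.

```python
"""One-dimensional toy of uncertainty-gated sensor fusion. The real
system is a full extended Kalman filter over pose; the scalar state,
noise values, and threshold here are illustrative assumptions."""


class Fuser1D:
    def __init__(self, x0, p0):
        self.x = x0  # position estimate
        self.p = p0  # estimate variance (uncertainty)

    def predict(self, odom_dx, odom_var):
        # Odometry moves the estimate and always adds uncertainty.
        self.x += odom_dx
        self.p += odom_var

    def update(self, z, z_var):
        # Standard Kalman update: blend measurement by relative variance.
        k = self.p / (self.p + z_var)
        self.x += k * (z - self.x)
        self.p *= 1.0 - k

    def needs_visual_fix(self, threshold=4.0):
        # When variance grows past the threshold (e.g. degraded GPS),
        # fall back to the second, visual localization method.
        return self.p > threshold


if __name__ == "__main__":
    f = Fuser1D(x0=0.0, p0=1.0)
    for _ in range(5):                # GPS outage: dead reckoning only
        f.predict(odom_dx=1.0, odom_var=1.0)
    print(f.needs_visual_fix())      # True: invoke visual localization
    f.update(z=5.2, z_var=0.25)      # a precise fix shrinks uncertainty
    print(f.needs_visual_fix())      # False: back to open-space mode
```

Dead reckoning alone drives the variance up monotonically, so without a reliable absolute fix the filter is guaranteed to eventually request the second method.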
The second method, called visual localization, is based on camera pose
estimation. It is computationally heavier, but is invoked only when
needed. When invoked, it stops the robot, chooses a nearby
building to use and takes an image of it. The pose estimation is done
by matching linear features in the image with a simple and compact
model of the building. A database of the models is stored on the
on-board computer. No environmental modifications are required.
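The building-selection step can be sketched as a nearest-neighbor query against the on-board database. The database layout, building names, and centroid coordinates below are invented for illustration; the real selection would also account for visibility and the current pose uncertainty.

```python
"""Illustrative sketch of choosing which nearby building to image for
visual localization: pick the model whose footprint centroid is closest
to the current pose estimate. The database contents are hypothetical."""

import math

# Hypothetical on-board database: building id -> footprint centroid (x, y)
BUILDING_DB = {
    "library": (40.0, 10.0),
    "chapel": (5.0, 60.0),
    "hall": (12.0, 18.0),
}


def choose_building(pose_xy, db=BUILDING_DB):
    """Return the id of the building nearest the robot's pose estimate."""
    return min(db, key=lambda b: math.dist(pose_xy, db[b]))


if __name__ == "__main__":
    print(choose_building((10.0, 15.0)))  # hall
```

Once a building is chosen, the actual pose estimate comes from matching linear image features against that building's stored model, which this sketch does not attempt.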
Publications
Localization Methods for a Mobile Robot in Urban Environments
IEEE Transactions on Robotics, Vol.20, No.5, October 2004, pp.851-864.
(with Peter K. Allen)
Design, Implementation and Localization of a Mobile Robot for
Urban Site Modeling
PhD Thesis, Computer Science Department, Columbia University, New York, NY.
(Advisor: Prof. Peter K. Allen)
Vision for Mobile Robot Localization in Urban Environments
In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems
(IROS'02), Lausanne, Switzerland, October 2002, pp.472-477.
(with Peter K. Allen)
AVENUE: Automated Site Modeling in Urban Environments
In Proc. of the 3rd Int. Conf. on 3-D Digital Imaging and Modeling (3DIM'01),
Quebec City, Canada, May 2001, pp.357-364.
(with Peter Allen, Ioannis Stamos, Ethan Gold and Paul Blaer)
Design, Architecture and Control of a Mobile Site-Modeling Robot
In Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA'00),
San Francisco, California, April 2000, pp.3266-3271.
(with Peter K. Allen, Ethan Gold and Paul Blaer)