Interface | Description
---|---
RobotAction | Represents an action a robot can choose. |
RobotAgent | An Agent in the RoombaEnvironment. |
RobotPercept | A percept that a RobotAgentProgram can receive. |
RobotPositionPercept | A percept indicating to your agent program the location of the robot in the room. |
RobotRoomRectanglePercept | A percept that your robot may receive, indicating the size and position of the room in a 2-d plane. |
RobotSuccessPercept | A percept indicating whether the last attempted action was successful or not, and if not, why. |
Class | Description
---|---
OfflineMapSolver | Given a complete map and a starting location, finds the length of the shortest possible route to clean all reachable squares. |
RoombaAction | Represents available actions for a RoombaAgent. |
RoombaAgent | This class represents a happy little RoombaVac robot. |
RoombaDirt | A spot of dirt. |
RoombaEnvironment | The environment in which a robo-vac agent lives, spends its days, and dies. |
RoombaEnvironmentObserver | An EnvironmentObserver customized for the RoombaEnvironment. |
RoombaGridCell | GUI class for the cells within a Roomba room. |
RoombaPositionPercept | A percept indicating to your agent program the location of the robot in the room. |
RoombaRapidRunner | Runs repeated rounds of the Roomba environment, noting the number of steps the robot needed to finish (if it finished), the competitive ratio, the amount of time used per step, and the averages of those figures at the end. |
RoombaRoomRectanglePercept | A percept indicating the size and position of the room in a 2-d plane. |
RoombaRunner | Runs a single round of the Roomba environment, loading the agent program from the specified package, and loading the map from the specified text file. |
RoombaSuccessPercept | Represents a success-or-not percept received by the Roomba robot from its environment. |
Enum | Description
---|---
RobotSuccessPercept.ReasonForFailure | Indicates the reason for the failure of an attempted move. |
Assignment 1: The Robot Vacuum environment.
In this environment, your agent is a Roomba vacuum cleaner whose task is to clean a room.
The environment is a 2-dimensional grid, which you may assume to be rectangular. Each square in the grid is initially either dirty or occupied by an obstacle.
Your robot will begin in a random location in the grid. Your robot's agent program, which you will be responsible for implementing, will initially receive percepts indicating the size of the room, and the starting location of the robot. Afterwards, it will only receive percepts indicating whether it bumped into an obstacle or a wall.
You should think carefully about how to represent a state in this environment: what constitutes a goal state? How can you analyze your AgentState implementation in order to recognize when you've reached a goal state? What kind of heuristic function could you implement to analyze a state and estimate the cost from that state to a goal state?
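To make that concrete, here is one rough sketch of a state representation. Everything in it -- the class name, the fields, the use of java.awt.Point -- is purely illustrative and not part of the supplied code; the point is only that the goal test falls straight out of the representation:

    import java.awt.Point;
    import java.util.HashSet;
    import java.util.Set;

    /**
     * Hypothetical example of a search state: the robot's current square,
     * plus the set of reachable squares it still believes are dirty.
     */
    public class ExampleVacuumState {
        private final Point position;            // robot's current square
        private final Set<Point> dirtyRemaining; // reachable squares not yet cleaned

        public ExampleVacuumState(Point position, Set<Point> dirtyRemaining) {
            this.position = position;
            this.dirtyRemaining = new HashSet<Point>(dirtyRemaining);
        }

        /** A goal state is simply one in which no reachable dirt remains. */
        public boolean isGoal() {
            return dirtyRemaining.isEmpty();
        }
    }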
Once you've followed the directions in the overview page for setting up your Java development environment, the setup for this assignment is easy. You should be able to just decompress the assignment file, run an ant compile, and get started.
The first thing you'll want to do is rename the edu.columbia.yourUNI package to, well, your actual UNI. Using my UNI (awh2101) as an example, I'd first rename the source directory, replacing yourUNI with awh2101.
You might choose to do the find-and-replace manually (yuck!) or with a visual editor. I'm a fan of Perl one-liners, so I'd do the replacement in one fell swoop.
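However you do the renaming, the end result should be that the package declaration at the top of each source file names your UNI instead of the placeholder -- using my UNI as the example:

    // before the rename
    package edu.columbia.yourUNI;

    // after the rename
    package edu.columbia.awh2101;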
Now you can try your hand at compiling the code!
Platform | Build environment | Instructions
---|---|---
Unix, Mac, Windows | Command-line tools | To clean the project for a fresh start, type ant clean. To compile, type ant compile. To run, type ant run -Duni=yourUNI.
 | Eclipse IDE | Create a new project from an ant buildfile, selecting the build.xml file. What else? Your feedback is appreciated.
The ant run task accepts a variety of parameters to customize the robot environment. You may supply these parameters in any combination.
Now you can get started on the coding! Please remember to:
Document your code in Javadoc format. Essentially, this just means writing comments in the form:
    /**
     * Single-sentence summary.
     *
     * <p> Longer explanation, which can contain HTML markup. </p>
     */

before each class and method.
The documentation for each of your agent program classes (RoombaAgentProgram and GreedyAgentProgram) should explain what algorithm is used, how it was implemented, the reasoning behind your heuristic function (if any), and any other special features of your program.
Use, if you want, the supplied interfaces to explicitly specify the structure of your program. You do not need to use the supplied interfaces -- as long as your code is clearly documented, and you explain how your code corresponds to the algorithms we've discussed in class and read in the text, you'll be fine. But feel free to use them as a guide, to help keep your program structure clean and neat.
For example, see the interfaces OnlineAgentProgram, OnlineDFSAgentProgram, OnlineSearchProblem, OnlineEvaluationFunction, OnlineHeuristicFunction, etc. A rough sketch of what a heuristic function might look like follows below.
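As promised, here is a purely illustrative sketch of a heuristic. The class and method names are made up for this example (this is not the supplied OnlineHeuristicFunction interface, whose exact signature may differ), and whether the estimate is admissible under your cost model is something you should argue for yourself in your write-up. It also shows a class-level Javadoc comment in the format described above:

    import java.awt.Point;
    import java.util.Set;

    /**
     * Hypothetical heuristic sketch for the vacuum world.
     *
     * <p> Estimates the remaining cost as the Manhattan distance to the
     * nearest dirty square, plus one step for each additional dirty square
     * still to be cleaned. </p>
     */
    public class ExampleRemainingDirtHeuristic {

        public double estimateCostToGo(Point position, Set<Point> dirtyRemaining) {
            if (dirtyRemaining.isEmpty()) {
                return 0.0; // already at a goal state
            }
            int nearest = Integer.MAX_VALUE;
            for (Point dirt : dirtyRemaining) {
                int distance = Math.abs(dirt.x - position.x)
                             + Math.abs(dirt.y - position.y);
                nearest = Math.min(nearest, distance);
            }
            // Walk to the nearest dirty square, then spend at least one
            // step on each of the remaining ones.
            return nearest + (dirtyRemaining.size() - 1);
        }
    }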
Before the deadline (11am on September 26th), prepare your submission as follows:
Make sure I can read your extensive and clear documentation about your agent programs, the strategies you employed, and any additional classes you created (e.g. implementations of the HeuristicFunction interface).
Test that your documentation is properly generated.
Place a Written.txt, Written.html, or Written.pdf file with your answers to the written questions into your yourUNI/ source directory.