CSCI 470 Homework 4 (Spring 2012)

Assigned Date: Tuesday, Apr. 10, 2012
Due Date: Thursday, Apr. 19, 2012
Due Time: 10:50am

Last modified on April 16, 2012, at 03:08 PM

This is a pair-programming assignment (i.e., you may work with one partner). You may discuss the assignment only with your partner or the instructor/grader.


This assignment's objectives are:

  • To understand knowledge representation schemes and related issues.
  • To gain experience in programming with a programming language used in AI applications.
  • To develop working knowledge, in terms of understanding the theory and being able to design/implement working applications, of at least one AI area.
    • To develop and apply knowledge on State Space Search.
      • To understand the importance of developing an effective representation of state.
      • To learn different search strategies and select the most effective for a given problem.
      • To evaluate via simulation different search strategies.
  • To develop an understanding of AI's impact on today's society.
    • To explore AI techniques used in computer games.
    • To explore AI techniques used in real-life robotics applications (e.g., nuclear disaster assessment and recovery, automobile navigation, etc.)


Download and install the OpenNERO environment. OpenNERO is an open-source software platform designed for research and education in Artificial Intelligence. It runs on all major platforms (Windows, Mac OS, Linux).

Replace the Resources subdirectory with this cleaner, simpler version.

Develop a smart Roomba agent (call it SmartRomba.py). It should use one (or more) of the search strategies covered in class (or in our AIMA book) to maximize its effectiveness (cleaning behavior or fitness).

  • Note: Realize this could be an "AI" in your favorite computer game going after different targets; a real robot assisting in the cleaning of a nuclear plant after a natural disaster; or an automated vehicle navigating I-26.
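As a starting point, the target-selection search can be sketched independently of the OpenNERO API. The sketch below runs a uniform-cost search where a state is (position, remaining crumbs), expand() generates a successor for each remaining crumb, and path cost is Manhattan distance. All names (plan_route, expand) and the grid/distance model are illustrative assumptions, not the actual OpenNERO interface.

```python
import heapq
from itertools import count

def distance(a, b):
    # Manhattan distance on a grid (an assumption about the environment)
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def expand(state):
    """Generate successor states: move to any remaining crumb.
    A state is (position, frozenset_of_remaining_crumbs)."""
    pos, remaining = state
    for crumb in remaining:
        yield crumb, (crumb, remaining - {crumb})

def plan_route(start, crumbs):
    """Uniform-cost search for the cheapest tour collecting every crumb.
    Returns (total_distance, ordered list of crumbs to visit)."""
    tie = count()  # tie-breaker so heapq never compares states directly
    frontier = [(0, next(tie), (start, frozenset(crumbs)), [])]
    best = {}
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        pos, remaining = state
        if not remaining:          # goal (childless) state: nothing left to clean
            return cost, path
        if best.get(state, float("inf")) <= cost:
            continue               # already reached this state more cheaply
        best[state] = cost
        for crumb, succ in expand(state):
            step = cost + distance(pos, crumb)
            heapq.heappush(frontier, (step, next(tie), succ, path + [crumb]))
    return 0, []

# toy example: three crumbs, optimal total distance is 11
print(plan_route((0, 0), [(2, 0), (5, 0), (2, 3)]))
```

A real SmartRomba would also weigh crumb rewards into the path cost (see the questions below), and would replan as the environment changes.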

Compare your agent to the following agents:

  • Random agent (call it Random.py). This agent moves randomly through the environment.
  • Original Roomba agent (call it HillClimbing.py). This agent aims for the closest crumb (avoiding getting stuck as much as possible).
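The random agent's decision rule amounts to a uniform choice over an action set. The action set and function names below are illustrative assumptions; the real agent must act through OpenNERO's action interface.

```python
import random

# Candidate moves a Roomba-style agent might take each step
# (this action set is an assumption; OpenNERO's actual actions differ).
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def random_step(position, rng=random):
    """Pick a move uniformly at random and return the new position."""
    dx, dy = rng.choice(list(MOVES.values()))
    return position[0] + dx, position[1] + dy

print(random_step((0, 0)))
```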


  1. Your work should be done mainly within the Roomba source files (i.e., you are developing intelligent agents). It is OK to have auxiliary files (e.g., for search strategies), but do this carefully, and only if it will result in cleaner (easier to maintain) code.
  2. It is OK to modify the environment (module.py) and other files, but only minimally and only if absolutely necessary. Again, you are developing intelligent agents (and the agent intelligence should be encapsulated inside the Roomba files).
  3. Answer the following questions. (Hint: Do this first; it will save you an enormous amount of time.)
    1. What type of environment are you using (i.e., fully vs. partially observable, deterministic vs. stochastic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, single-agent vs. multi-agent)? Justify.
    2. What type of agents do you have (e.g., simple reflex agents, reflex agents with state, goal-based agents, utility-based agents, learning agents). Justify.
    3. What is stored in each state?
    4. What is the start state?
    5. What is a goal (childless) state? (Hint: In this problem, some goal states are "better" than others. Your search algorithm should aim to find a good-enough (if not the best) goal state.)
    6. How do you generate new states (i.e., what do transitions (arcs, edges) represent in your state space)? (Hint: New states are generated by expand() - this answer helps you build expand().)
    7. What is the path cost? (Hint: Be careful here - perhaps you want to take into account both distance and rewards; i.e., the most costly path should represent traveling a long distance for small rewards.)
    8. Which search algorithm(s) are you using? Why?
    9. What is your search termination condition?
  4. Currently, the environment creates a list of crumbs (see sense_crumbs() in module.py). You may create an array of clusters of crumbs. One possibility (of many) is to compress (i.e., reduce the resolution of) the original list of crumbs. Perhaps you may pass both the compressed and original lists to each agent. Whatever you do, aim for efficiency and conceptual clarity. Both are equally important.
    • One extreme example of compressing the original list is as follows:
      • Realizing that the original list is actually an n×m map (of the environment) depicting crumb positions and rewards, divide it into four quadrants.
      • Create a 2x2 map (i.e., a list of 4 items, one for each quadrant), where:
        • each item contains (x, y, exists?, reward), where
          • x, y are the cluster's (average) coordinates,
          • exists? is 0 or 1 (is this cluster still available?), and
          • reward is the cumulative reward (i.e., the sum of all crumb rewards in this quadrant).
  5. Plot the agent's fitness (e.g., using plot_server.py, or Excel). Submit an additional document with the graph(s) (e.g., an Excel spreadsheet, or a Word document). It should be obvious the graph(s) originated from data in nero_log.txt (i.e., include data in the document).
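The quadrant compression described in step 4 can be sketched as follows. The (x, y, reward) crumb format and the function name are assumptions for illustration; the actual list format comes from sense_crumbs() in module.py.

```python
def compress_crumbs(crumbs, width, height):
    """Compress an n x m crumb map into a 2x2 list of clusters.
    crumbs: list of (x, y, reward) tuples (format is an assumption).
    Returns four (x, y, exists, reward) items, one per quadrant:
      x, y   -- average coordinates of the quadrant's crumbs,
      exists -- 1 if the quadrant still holds crumbs, else 0,
      reward -- sum of all crumb rewards in the quadrant."""
    quadrants = [[] for _ in range(4)]
    for x, y, reward in crumbs:
        col = 1 if x >= width / 2 else 0
        row = 1 if y >= height / 2 else 0
        quadrants[2 * row + col].append((x, y, reward))
    clusters = []
    for members in quadrants:
        if members:
            avg_x = sum(x for x, _, _ in members) / len(members)
            avg_y = sum(y for _, y, _ in members) / len(members)
            total = sum(r for _, _, r in members)
            clusters.append((avg_x, avg_y, 1, total))
        else:
            clusters.append((0, 0, 0, 0))
    return clusters

print(compress_crumbs([(1, 1, 5), (2, 2, 5), (8, 8, 10)], 10, 10))
```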


Bonus
  1. Make your agent (become) aware that some crumbs may be unreachable (e.g., under a chair). For this to count as bonus, the agent should realize it on its own (as opposed to, say, modifying the environment to exclude crumbs under chairs from the sensor data).
  2. Create a multi-agent strategy. Your SmartAgent class (stored in SmartRomba.py) may use class variable(s) to communicate information with other SmartAgent instances. Hint: See John Zelle's Teaching Computer Science with Python notes on how a class may remember how many instances of it have been created. You can use this approach to have agents communicate (broadcast?) information to the group (or one another). This will allow you to design a global (collaborative or competitive) strategy, in addition to an individual agent's strategy. Enjoy!
  3. Add a HumanAgent.py which is controlled through keyboard input (see the OpenNERO documentation on how to do this).
  4. Display the score of the agents on the graphical user interface (GUI). Hint: See how the current GUI (e.g., Add Robots, Pause, Resume) is created and add to it.
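The class-variable communication idea in bonus item 2 can be sketched like this. The names and the claim protocol are illustrative assumptions, not the actual OpenNERO agent interface.

```python
class SmartAgent:
    """Sketch of agent-to-agent communication through class variables
    (the instance-counting idea from Zelle's notes, extended to a
    shared set of claimed targets)."""
    num_agents = 0     # shared by every instance
    claimed = set()    # targets already claimed by some agent

    def __init__(self):
        SmartAgent.num_agents += 1
        self.agent_id = SmartAgent.num_agents

    def claim(self, target):
        """Claim a target so teammates skip it; False if already taken."""
        if target in SmartAgent.claimed:
            return False
        SmartAgent.claimed.add(target)
        return True

a, b = SmartAgent(), SmartAgent()
print(SmartAgent.num_agents)             # 2: both instances see the shared counter
print(a.claim((3, 4)), b.claim((3, 4)))  # True False
```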


Follow the Golden Rule of Style: "A program should be as easy for a human being to read and understand as it is for a computer to execute." [1]

In general, you should comment any variable, obscure statement, block of code, etc. you create.

Also, you should comment why something is being done, e.g.,

numStudents += 1   # we have processed one more student

as opposed to how it is done, e.g.,

numStudents += 1   # increment numStudents by one

Finally, your Python code (any .py file you create or modify) should include opening comments as follows.

(NOTE: Angle brackets below signify information that needs to be filled out. Remove the angle brackets and the enclosed instructions from your submission!)

#   Author:     <Your Name(s)>
#   Email:      <Your email address(es)>
#   Class:      CSCI 470, Section 1
#   Assignment: HMWK<number>
#   Due Date:   <The assignment's due date>
#   Certification of Authenticity <remove one of the following>:     
#     I certify that this lab is entirely my own work.
#      I certify that this lab is my own work, but I received
#      some assistance from:  <Name(s)>
#   TASK:   <Provide a simple, yet complete description of the task being
#           performed by this program. It may be several sentences long.>
#   INPUT:  <Describe the input to this program.  Be thorough.>
#   OUTPUT: <Describe the output of this program.  Be thorough.>


Submit your assignment on OAKS by the due date. Your submission must include:

  1. A Resources.zip file. This will contain all your files for this assignment (including Random.py, HillClimbing.py, and SmartRomba.py). Hint: I should be able to unzip this, place it in my OpenNERO folder, and run your work (without any changes on my part).
  2. A README.txt file, which describes your experience: answers to the above questions, what changes you made (specifically), what you learned, any ideas you have, and anything that caused you difficulty. Make it easy to read and to the point. Avoid verbosity.
    • Also reference the plot results in your description (see below).
  3. An Excel spreadsheet or Word document with one or more plots of the agents' fitness (e.g., using plot_server.py, or Excel). Also include the data from nero_log.txt.


Your grade will be based on the elegance of your design, the quality of your documentation, and the accuracy and simplicity of your implementation, as well as on how well you followed the above instructions. Aim for simplicity and elegance. Design, design, design... before you implement.


References
  1. Cooper, D. and Clancy, M. (1985) "Oh! Pascal", 2nd ed., W.W. Norton & Company, New York, p. 42.

Relevant Quote

"Any amount of work can be done in any amount of time... only the quality varies." ~Joao Meidanis