Math 176 - Advanced Data Structures
Programming Assignment #4
"Make Like a Search Engine"

(Please watch for updates to the programming assignment.)

New Due Date: Thursday, November 30, 4:00PM. (A 16-hour extension due to problems with telnetting into ieng9.)

This assignment is covered by the usual Academic Integrity guidelines for programming assignments.

For this assignment, you will write the core part of a program which performs searches on text documents. This will include much of the core functionality of search engines like Melvyl or web search engines. 

Your program will read in several thousand text files, then create an inverted list of all the (non-common) words that appear in the documents. Your program will then read, from the terminal input, search queries which consist of two words (implicitly combined by "AND"); it will locate documents that contain one or both words, rank the documents by how well they match the search query, and/or print extracts from the text documents where the two words appear close together. You will be supplied with code that reads the files, parses the words, discards common words, and provides an easy way to print extracts from the files.  Your program must fully read the documents and create the inverted lists before reading search queries from the terminal.  This is because search engines typically preprocess the documents once (perhaps in a time-consuming, expensive process) and then are able to quickly reply to a wide variety of queries.

The programming assignment is split into four stages or parts. It is strongly suggested that you implement the program stage-by-stage, completing one stage before attempting the next. It is recommended, although not required, that you do the stages in the order listed below:

  1. Given two input words, list out the document numbers of the documents which contain both input words.
  2. Given two input words, compute a "rank" or "score" for each document corresponding to how well the document meets the search criteria. The scoring will give extra importance to documents that contain the two words in close proximity. Report the scores of the ten highest ranked documents.
  3. Given two input words, do the work of stage 2, plus print excerpts from the five top-ranked documents. These excerpts will be places where both input words appear in close proximity.
  4. Do all of the above, and allow prefix searches. Thus, one or both of the search words could be suffixed with a "*", indicating that the word is a prefix string that matches any word with that prefix.

For informational purposes and for comparison with your own program, you are provided with the .class files for a sample implementation of the programming assignment. This is the program MainHw4Demo, which can be found in the directory ../public/ProgHomework4. You are provided with source code for some of the helper classes and for a skeletal program which shows how to read words from files with the "WordGrabber" class. The skeletal program also contains code for reading in search queries and search commands. In addition, there is javadoc documentation provided for the WordGrabber class and the FileIterator class, which are used by MainHw4.

The Demo Program and the Functionality Required in Your Programming Solution
    You should start by trying out the MainHw4Demo program to understand the programming assignment, and to see how your program should act. In general, your program should mimic the behavior of MainHw4Demo very closely since, for grading, your program's output will be compared with the output of MainHw4Demo (compared by hand, not by computer).

Step 1: Copy all the .class and .java files from ../public/ProgHomework4. Do not try to copy the directory TextFiles, as it is much too large (about 120 Mb of disk space, though less than half of that is actual data).

Step 2: Run the java program MainHw4Demo. When it asks you for a directory, enter MidiData. This is a collection of 82 files, Aesop's fables in fact. The program will then ask you to enter two search words: for your first trial, try entering "lion hunters". This means we wish to search for documents which contain both the words "lion" and "hunters". (Please be sure to use "hunters", not "hunter"!)
      Then enter the command option "d". This means that we are looking for the documents that contain both of the search words. The program will tell you that documents numbers 10 and 48 contain both words.
      Without quitting the program, continue to step #3.

Step 3: Use the same two search words again, but now use command option "r" ("r" is for "rank"). The program lists the 10 highest ranked documents with the words "lion" or "hunters" in them. You will see that document 10 is the highest ranked: it has ten occurrences of the word "lion", two occurrences of the word "hunters", and four pairs of the words "lion" and "hunters" which are close together.

The definition of an occurrence of a pair of words being close together is that the starting positions of the two occurrences of the words are less than 144 symbols apart. (The number 144 is rather arbitrary, but you must use the same convention in your program.) We do not allow pairs that occur across document boundaries.

The formula for the ranking of a document is as follows: let a be the number of occurrences of the first search word in a given document, let b be the number of occurrences of the second search word, and finally let p be the number of occurrences of pairs of the two words which are less than 144 symbols apart from each other. Then the rank or score of the document is defined to equal

    rank = (1 + a + 10p) * (1 + b + 10p)

(This formula is not fully optimal and was obtained with a little trial and error.) For example, in the search you just performed, document 10 is the highest ranked: it has 10 occurrences of "lion" and 2 of "hunters", with 4 close pairs. It has a ranking of 51*43 = 2193. Note that the two occurrences of "hunters" form four close pairs with occurrences of "lion". Thus there is duplication or overlap in the counting of close occurrences.
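As a sketch, the rank computation is a one-liner. The class and method names below are mine, not the handout's; the formula (1 + a + 10p)(1 + b + 10p) is the one that reproduces the worked example above, where a=10, b=2, p=4 gives 51 * 43 = 2193.

```java
// Sketch of the document ranking formula (names are illustrative, not from the handout).
public class Rank {
    // a = occurrences of the first word, b = of the second word,
    // p = close pairs (starting positions less than 144 symbols apart).
    static long rank(int a, int b, int p) {
        return (long) (1 + a + 10 * p) * (1 + b + 10 * p);
    }

    public static void main(String[] args) {
        // Document 10 from the MidiData example: 10 "lion", 2 "hunters", 4 close pairs.
        System.out.println(Rank.rank(10, 2, 4)); // prints 2193
    }
}
```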
    Continue to step 4, without quitting the program.

Step 4: Use the same two search words, but now choose command option "x" to print out extracts. Extracts are excerpts from files where the two search words appear close to each other in the documents. You should see two extracts from each of documents 10 and 48. Extracts are printed from up to five files, at most two extracts per document.
    Continue to the next step without quitting the program.

Step 5: A limited wild card search capability is provided: by suffixing a word with "*" you can search for words that start with a given prefix. For example, try entering "lion hunt*" as the two search words. This will search for places where "lion" and any word beginning with "hunt" occur. In this case, if you choose the "x" option, you will see that the word "hunting" also occurs a number of times near the word "lion".

One thing to notice on printing extracts is that sometimes the extracts overlap badly. For instance, if the text "Lion ... Lion ... hunting" occurs, as in document #48, the two excerpts will very much duplicate each other. In a commercial search engine, this kind of thing would need to be avoided, but for our purposes, it is not a problem that needs to be fixed.

The MidiData files (and other data files) are accessible for you to read, and you may want to look at them to see the kind of text files you are searching.

The Skeletal MainHw4 Program
    A fair amount of material and software is available in the directory ../public/ProgHomework4.
   You should look at the program closely, as much of the code in the program will be used in your program or adapted for use in your program. Start by looking at the routine readWordsFromFiles. This routine uses a WordGrabber object to open files one at a time and to extract alphabetic words one at a time from the files. (The source code for WordGrabber is available if you wish to examine it.) It first creates a new WordGrabber wg with a root directory and a file of common words. The root directory is recursively searched by the WordGrabber for files whose filenames end with .txt. When readWordsFromFiles calls wg.startNextFile(), the next file is opened to be ready for reading. (startNextFile returns either the file number or -1 if no more files are left. The file numbers will be sequential except in the case of read errors.) readWordsFromFiles then calls wg.posNextWord() and wg.nextWord() to get the starting position and the text of the next word in the file. wg.posNextWord returns -1 when there is no next word to read. readWordsFromFiles then prints out the file number, the position of the word, and the word. Common words, such as "the" and "there", and any word of three or fewer letters, are suppressed by the word grabber so as to reduce the amount of work your program has to do. The file of common words is the second parameter to the WordGrabber constructor.

Next examine the main part of MainHw4. This consists of a loop that reads in two search words and a command 'd', 'r' or 'x'.  This loop calls two methods to parse the input lines.  You can probably use this code as is, or with minor modifications.

prettyPrint is useful for breaking a long line up into pieces of at most 80 characters for printing. For example, I use this in my demo for the 'd' option, by making a long String of document numbers separated by spaces and then calling prettyPrint to print the string out. I also use this in the demo program to print the file extracts for the 'x' command. This is done with another helper routine, printExtractWithTwoWords; it takes as parameters a file number and the positions of two words in the file, and prints out a two- or three-line extract from the file containing the two words.  There is a method of WordGrabber, called getFileInfo, which returns information about the file, often including its title and author.

printFrequentWords is some old code that I used to create the list of common words. This is left in as an illustration of how to use the Java HashMap class. (See below for more on useful Java classes.)  What the printFrequentWords routine does, is form a HashMap, where the keys are distinct words, and the values are the frequency counts of the words.  Each time a new word is read, it is looked up in the HashMap and the corresponding count is incremented.  If it is not found, then the word is added as a key of the HashMap, with a value of 1.
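The HashMap counting idea just described can be sketched as follows. This is a minimal stand-alone version, not the course's actual printFrequentWords code; the class and method names are mine.

```java
import java.util.HashMap;

// Sketch of the frequency-counting technique: keys are distinct words,
// values are how many times each word has been seen.
public class WordCounts {
    static HashMap<String, Integer> countWords(String[] words) {
        HashMap<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            Integer c = counts.get(w);             // look the word up
            counts.put(w, c == null ? 1 : c + 1);  // not found: add with value 1; found: increment
        }
        return counts;
    }
}
```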

What to do for the programming assignment:
You will want to start by writing code that reads in the files one word at a time and creates inverted lists containing information about where each word appears in the files.  You should probably start with the following kind of structure: create a HashMap where the keys are the words as read from the files, and the values are lists or arrays of word occurrence information.  (You can make them ArrayLists, at least to start.)  Each list will consist of a sequence of integers

<f1,  p1,  f2,   p2,  f3,  p3,  ...,  fn,   pn>

This indicates that the word occurs n times: the first occurrence was at character position p1 in file number f1, the second occurrence at position p2 in file number f2, etc.  As you read in words from the files, look up each word in the HashMap.  If the word is not in the HashMap, make a new list <f1, p1> and add the new key and list pair into the HashMap; alternatively, if the word is already in the HashMap, append fn+1, pn+1 to its list.
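The structure just described can be sketched as below. In the real program, the (word, file, position) triples come from the WordGrabber read loop; here they are supplied directly so the sketch is self-contained, and the class name InvertedIndex is mine.

```java
import java.util.ArrayList;
import java.util.HashMap;

// Sketch of the inverted-list structure: word -> <f1, p1, f2, p2, ..., fn, pn>.
public class InvertedIndex {
    final HashMap<String, ArrayList<Integer>> index = new HashMap<>();

    // Call once per word occurrence, in reading order, so each list stays
    // sorted by (file number, position).
    void addOccurrence(String word, int fileNum, int pos) {
        ArrayList<Integer> list = index.get(word);
        if (list == null) {            // word seen for the first time: new list
            list = new ArrayList<>();
            index.put(list == null ? word : word, list);
        }
        list.add(fileNum);             // append fn+1, pn+1
        list.add(pos);
    }

    public static void main(String[] args) {
        InvertedIndex idx = new InvertedIndex();
        idx.addOccurrence("lion", 10, 5);
        idx.addOccurrence("lion", 10, 200);
        idx.addOccurrence("lion", 48, 17);
        System.out.println(idx.index.get("lion")); // prints [10, 5, 10, 200, 48, 17]
    }
}
```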

Once you are able to create these lists, you can then start coding the functionality of the 'd', the 'r' and the 'x' commands.  (It is strongly suggested you implement them in this order, one at a time.)  The basic technique is that the two search words correspond to two inverted lists. You then walk through the inverted lists, incrementing your position in the lists one at a time, always incrementing the list in which the next position is the earliest. For the 'd' option, you need only keep track of which document numbers appear in both lists.  For debugging purposes, you may find it handy to write a routine that prints out the lists of occurrences of the two search words.
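A minimal sketch of this walk for the 'd' command is below, assuming both lists are sorted by file number (which they are, if built in reading order). The class name is mine; for 'd' only the file numbers matter, so the comparison can ignore positions.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the 'd' command: walk two inverted lists <f1, p1, f2, p2, ...>
// in step, always advancing the list whose next occurrence comes earliest,
// and collect the file numbers that appear in both lists.
public class DocsWithBoth {
    static Set<Integer> docsContainingBoth(List<Integer> listA, List<Integer> listB) {
        Set<Integer> result = new LinkedHashSet<>();
        int i = 0, j = 0;                  // indices of the current (file, pos) pair
        while (i < listA.size() && j < listB.size()) {
            int fa = listA.get(i), fb = listB.get(j);
            if (fa == fb) {
                result.add(fa);            // this document contains both words
                i += 2;                    // advance one list; the set removes duplicates
            } else if (fa < fb) {
                i += 2;                    // list A's next occurrence is earliest
            } else {
                j += 2;
            }
        }
        return result;
    }
}
```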

For the 'r' command, you must keep track of how many times each word occurs in each document, and how often pairs of the two words occur less than 144 symbols apart from each other.  When calculating the distance between occurrences, you should use the starting position of each word (otherwise, you will differ from the way the demo program calculates the numbers of pairs of close occurrences).
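The close-pair counting rule can be sketched as follows. This straightforward version compares every pair of starting positions within one document (your actual program can get the same count more efficiently during the list walk); the class name is mine, and note that, as in the worked example, one occurrence can belong to several pairs.

```java
// Sketch of the close-pair count p for a single document: every pair of
// occurrences (one of each word) whose starting positions are less than
// 144 symbols apart counts.
public class ClosePairs {
    static final int CLOSE = 144;   // the handout's (arbitrary) closeness threshold

    static int countClosePairs(int[] posA, int[] posB) {
        int pairs = 0;
        for (int pa : posA)
            for (int pb : posB)
                if (Math.abs(pa - pb) < CLOSE)
                    pairs++;
        return pairs;
    }
}
```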

For the 'x' command, you collect the same information as for the 'r' command, but you also remember the two closest pairs of close occurrences in each document.

For wild card (prefix) matches, you will need to find the lists for each word that matches the prefix test, and walk through them appropriately.
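One way to find the lists matching a prefix is to keep (or copy) the index into a TreeMap, one of the classes suggested below, since all words sharing a prefix form a contiguous key range. This is a sketch under my own assumptions, not the demo's method: the class name is mine, and the '\uffff' sentinel (a character that sorts after any letter) is simply one convenient way to mark the end of the range.

```java
import java.util.ArrayList;
import java.util.TreeMap;

// Sketch of prefix ("hunt*") matching against a sorted word index.
public class PrefixMatch {
    static ArrayList<String> wordsWithPrefix(TreeMap<String, ?> index, String prefix) {
        // subMap's range is [from, to), so prefix + '\uffff' excludes
        // everything that does not start with the prefix.
        return new ArrayList<>(index.subMap(prefix, prefix + '\uffff').keySet());
    }
}
```

Each matching word's inverted list can then be walked just as in the single-word case.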

List incrementing algorithms. As said above, you walk through the inverted lists, incrementing your position in the lists one at a time, always incrementing the list in which the next position is the earliest.    There is a more detailed description available.

(I earlier suggested a more complicated way of walking through the two lists, in the first version of this posted assignment.  I now suggest you use the way above instead, and I will change the MainHw4Demo program to use the method suggested above.  Functionally, the two methods are almost always equivalent, but they do differ slightly in the way close pairs are detected.)

Debugging: You should start with a small number of files for testing purposes.   This is why I supplied the MicroData, MiniData, and MidiData test sets.  Change the value of textRoot in MainHw4 to control which files are read.  You can also limit the maximum number of files read (to as low as 1 if you wish) using the variable maxNumFiles, as shown by example in the readWordsFromFiles sample method.

For debugging the creation of your inverted lists, you may want to write a simple routine that prints the contents of one or more lists to the terminal.

Algorithms/Code Design
There are many possible ways to write your code.  But here I will outline how I wrote the code.  You may follow my outline, or create your own algorithms if you wish.  Quite possibly someone will improve on what I suggest.

Java Resources.  You will want to use any of Sun's built-in Java data structures you can.  The most useful can be found in the java.util.* library.  These include the HashSet, HashMap, TreeMap, and TreeSet classes.  They also include the ArrayList class, a resizable array (very similar to the Vector class, which you may wish to use instead).  The class Arrays includes helpful routines for sorting and for binary searching.
    Documentation for these classes can be found online.  Source code is available for these classes on ieng9, as previously announced.

Turn in: Turn in two items: a README file and your single Java source file.  Any helper classes for your code must be made inner classes, so that only one source file is needed.

  1. Your program must be set up to read from the MegaData files. If there is a limit on how many files can be read, please include in your code a stopping condition and stop reading files when the limit is reached.  Also, document the limit in your README file.

  2. The README file should explain how much of the above you successfully implemented.  It should also explain how many files your program runs on.  It should explain the general idea of how you implemented the code, and particularly any significant differences between the implementation described above and your actual implementation.  You should also include some sample search words that your program works well with.  If your program fails our tests, we can still check your suggested sample words for functionality.

The bundleP4 program will be made available to turn in your program and README file.

Grading standards:  The inclusion of the wild card feature was a last minute decision, and should be considered at least partly as an extra credit item.  If you do everything except the wild card features, then this qualifies as at least an "A minus" grade.  (Quite possibly I will be more generous than this.)  Approximate partial standards are:   'd' command worth about 35 points, 'r' command worth about 35 points, 'x' command worth about 25 points, style worth about 10 points, wild card functionality worth about 15 points.   Total points: 120 points.