hldig

Synopsis

hldig [options] [start_url_file]

Description

Hldig retrieves HTML documents over the HTTP or HTTPS protocol and gathers information from them that can later be used to search those documents. This program is also referred to as the search robot.

Options

-a
Use alternate work files. Tells hldig to append .work to database file names (for example, db.docdb becomes db.docdb.work), causing a second copy of the database to be built. This allows the original files to be used by htsearch during the indexing run. When used without the -i flag for an update dig, hldig will update any existing .work copies of the databases.
-c configfile
Use configfile instead of the default configuration file.
-h maxhops
Restrict the dig to documents that are at most maxhops links away from the starting document.
-i
Initial. Do not use any old databases; this is accomplished by erasing them first.
-m url_file
Minimal. Index only the URLs listed in url_file and no others. A file name of "-" reads from STDIN. See also the start_url_file argument.
-s
Print statistics about the dig after completion.
-t
Create an ASCII version of the document database. This database is easy to parse with other programs, so information can be extracted from it for purposes other than searching; one could, for example, gather some interesting statistics from it. A minimal parsing sketch follows the field table below.

Each line in the file starts with the document ID, followed by a tab-separated list of fieldname:value pairs. The fields always appear in the order listed below:

fieldname   value
u           URL
t           Title
a           State (0 = normal, 1 = not found, 2 = not indexed, 3 = obsolete)
m           Last modification time as reported by the server
s           Size in bytes
H           Excerpt
h           Meta description
l           Time of last retrieval
L           Count of the links in the document (outgoing links)
b           Count of the links to the document (incoming links or backlinks)
c           Hop count of this document
g           Signature of the document, used for duplicate detection
e           E-mail address to use for a notification message from hlnotify
n           Date on which to send a notification e-mail message
S           Subject for a notification e-mail message
d           The text of links pointing to this document (e.g. <a href="docURL">description</a>)
A           Anchors in the document (i.e. <A NAME=...>)
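
As a rough illustration, here is a minimal Python sketch that splits one such line into its document ID and fields, under the format described above (a document ID, then tab-separated fieldname:value pairs); the input file name is a placeholder:

    #!/usr/bin/env python3
    # Minimal sketch: parse lines of hldig's ASCII document database.
    # Assumes the format described above: "<docid>\t<name>:<value>\t..."
    # The file name "docdb.txt" is a placeholder, not a real default.

    def parse_line(line):
        parts = line.rstrip("\n").split("\t")
        docid, fields = parts[0], {}
        for part in parts[1:]:
            # Split on the first colon only; values may contain colons.
            name, _, value = part.partition(":")
            fields[name] = value
        return docid, fields

    # latin-1 avoids decode errors; the actual encoding may differ.
    with open("docdb.txt", encoding="latin-1") as f:
        for line in f:
            docid, fields = parse_line(line)
            print(docid, fields.get("u", ""), fields.get("t", ""))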
-u username:password
Tells hldig to send the supplied username and password with each HTTP request. The credentials are encoded using the 'Basic' authentication scheme. There must be a colon (:) between the username and password; a sketch of the encoding is shown below.
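For reference, the 'Basic' scheme simply base64-encodes the username:password pair into an Authorization header. The Python sketch below shows the encoding; the credentials are placeholders:

    #!/usr/bin/env python3
    # Sketch: the HTTP 'Basic' encoding applied to -u credentials.
    # "admin:secret" is a placeholder username:password pair.
    import base64

    credentials = "admin:secret"
    token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
    print("Authorization: Basic " + token)
    # Prints: Authorization: Basic YWRtaW46c2VjcmV0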
-v
Verbose mode. This increases the verbosity of the program; using more than two is probably only useful for debugging. The default verbose mode (a single -v) gives a nice progress report while digging. The report can be a bit cryptic, so here is a brief explanation. A line is shown for each URL, with three numbers before the URL and some symbols after it. The first number is the number of documents parsed so far, the second is the DocID for this document, and the third is the hop count of the document (the number of hops from one of the start_url documents). After the URL, it shows a "*" for a link in the document that was already visited, a "+" for a new link that was just queued, and a "-" for a link that was rejected for any of a number of reasons. To find out what those reasons are, run hldig with at least three -v options, i.e. -vvv. If there are no "*", "+" or "-" symbols after the URL, it does not mean the document was not parsed or was empty, only that no links to other documents were found within it. With more verbose output, these symbols get interspersed in several lines of debugging output. An illustrative progress line is shown below.
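For example, a progress line might look like this (illustrative values, not actual output): the 250th document parsed, DocID 312, two hops from a start URL, with one already-visited link, two newly queued links, and one rejected link:

    250 312 2 http://example.com/docs/index.html *++-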
start_url_file
A file containing a list of URLs to start indexing from, or "-" for STDIN. This augments the default start_url and overrides a file supplied with -m url_file. An example invocation is shown below.
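As an illustrative invocation, using only the options documented above (the configuration path and URL file are placeholders), an initial dig with statistics and a progress report might be run as:

    hldig -i -s -v -c /path/to/config.conf urls.txt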

Files

CONFIG_DIR/htdig.conf
The default configuration file.
DATABASE_DIR/db.docdb
Stores data about each document (title, URL, etc.).
DATABASE_DIR/db.words.db, DATABASE_DIR/db.words.db_weakcmpr
Record which documents each word occurs in.
DATABASE_DIR/db.excerpts
Stores start of each document to show context of matches.

See Also

hlmerge, htsearch, Configuration file format, and A Standard for Robot Exclusion.