This FAQ is compiled by the hl://Dig developers and the most recent version is available at <FAQ.html>. Questions (and answers!) are greatly appreciated; the primary methods of communication are the mailing lists described in questions 1.4 and 1.5 below.
Note: 2018-01-04 - This document is very out-of-date and will take a very long time to review and update.
1.1. Is hl://Dig an internet-wide search engine?No, hl://Dig is a system for indexing and searching a finite (not necessarily small) set of sites or an intranet. It is not meant to replace any of the many internet-wide search engines.
1.2. Can I index the internet with hl://Dig?No, as above, hl://Dig is not meant as an internet-wide search engine. While there is theoretically nothing to stop you from indexing as much as you wish, practical considerations (e.g. time, disk space, memory, etc.) will limit this.
1.3. What's the difference between htdig and hl://Dig?The complete hl://Dig package consists of several programs, one of which is called "htdig." This program performs the "digging" or indexing of the web pages. Of course an index doesn't do you much good without a program to sort it, search through it, etc.
1.4. I sent mail to Andrew or Geoff or Gilles, but I never got a response!Andrew no longer does much work on hl://Dig. He has started a company, called Contigo Software and is quite busy with that. To contact any of the current developers, send mail to <htdig-dev>. This list is intended primarily for the discussion of current and future development of the software.
Geoff and Gilles are currently the maintainers of hl://Dig, but they are both volunteers. So while they do read all the e-mail they receive, they may not respond immediately. Questions about hl://Dig in general, and especially questions or requests for help in configuring the software, should be posted to the <htdig-general> mailing list. When posting a followup to a message on the list, you should use the "reply to all" or "group reply" feature of your mail program, to make sure the mailing list address is included in the reply, rather than replying only to the author of the message. See also question 1.16 and the mailing list page.
1.5. I sent a question to the mailing list but I never got a response!Development of hl://Dig is done by volunteers. Since we all have other jobs, it may take a while before someone gets back to you. Please be patient and don't hound the volunteers with direct or repeated requests. If you don't get a response after 3 or 4 days, then a reminder may help. See also question 1.16.
1.6. I have a great idea/patch for hl://Dig!Great! Development of hl://Dig continues through suggestions and improvements from users. If you have an idea (or even better, a patch), please send it to the hl://Dig mailing list so others can use it. For suggestions on how to submit patches, please check the Guidelines for Patch Submissions. If you'd like to make a feature request, you can do so through the hl://Dig bug database.
1.7. Is hl://Dig Y2K compliant?hl://Dig should be Y2K compliant since it never stores dates as two-digit years. Under hl://Dig's copyright (GPL), there is no warranty whatsoever as permitted by law. If you would like an iron-clad, legally-binding guarantee, feel free to check the source code itself. Versions prior to 3.1.2 did have a problem with the parsing of the Last-Modified header returned by the HTTP server, which caused incorrect dates to be stored for documents modified after February 28, 2000 (yes, it didn't recognize 2000 as a leap year). Versions prior to 3.1.5 didn't correctly handle servers that return two-digit years in the Last-Modified header, for years after 99. These problems are fixed in the current release. If you discover something else, please let us know!
1.8. I think I found a bug. What should I do?Well, there are probably bugs out there. You have two options for bug-reporting. You can either mail the hl://Dig mailing list at <htdig-general@lists.sourceforge.net> or better yet, report it to the bug database, which ensures it won't become lost amongst all of the other mail on the list. Please try to include as much information as possible, including the version of hl://Dig (see question 5.33), the OS, and anything else that might be helpful. Often, running the programs with one "-v" or more (e.g. "-vvv") gives useful debugging information. If you are unsure whether the problem is a bug or a configuration problem, you should discuss the problem on <htdig-general> (after carefully reading the FAQ and searching the mail archive and patch archive, of course) to sort out what it is. The mailing list has a wider audience, so you're more likely to get help with configuration problems there than by reporting them to the bug database.
Whether reporting problems to the bug database or mailing list, we cannot stress enough the importance of always indicating which version of hl://Dig you are running. See question 5.33. There are still a lot of users, ISPs and software distributors using older versions, and there have been a lot of bug fixes and new features added in recent versions. Knowing which version you're running is absolutely essential in helping to find a solution. If you're unsure if your version is current, or what fixes and features have been added in more recent versions, please see the release notes. See also question 2.1.
1.9. Does hl://Dig support phrase or near matching?Phrase searching has been added for the 3.2 release, which is currently in the beta phase (3.2.0b6 as of this writing). Near or proximity matching will probably be added in a future beta.
1.10. What are the practical and/or theoretical limits of hl://Dig?The code itself doesn't put any real limit on the number of pages. There are several sites in the hundreds of thousands of pages. As for practical limits, it depends a lot on how many pages you plan on indexing. Some operating systems limit files to 2 GB in size, which can become a problem with a large database. There are also slightly different limits to each of the programs. Right now hlmerge performs a sort on the words indexed. Most sort programs use a fair amount of RAM and temporary disk space as they assemble the sorted list. The htdig program stores a fair amount of information about the URLs it visits, in part to only index a page once. This takes a fair amount of RAM. With cheap RAM, it never hurts to throw more memory at indexing larger sites. In a pinch, swap will work, but it obviously really slows things down.
The 3.2 development code helps with many of these limitations. In particular, it generates the databases on the fly, which means you don't have to sort them before searching. Additionally, the new databases are compressed significantly, making them usually around 50% the size of those in previous versions.
1.12. Can I use hl://Dig on a commercial website?Sure! The GNU Library General Public License (LGPL) has no restrictions on use. So you are free to use hl://Dig however you want on your website, personal files, etc. The license only restricts distribution. So if you're planning on a commercial software product that includes hl://Dig, you will have to provide source code including any modifications upon request.
1.13. Why do you use a non-free product to index PDF files?We don't. You can use the "acroread" program to index PDF files, but this is no longer recommended. Initially this program was the only reliable way to extract data from PDF files. However, the xpdf package is a reliable, free software package for indexing and viewing PDF files. See question 4.9 for details on using xpdf to index PDF files. We do not advocate using acroread any longer because it is a proprietary product. Additionally it is no longer reliable at extracting data.
1.14. Why do you have all those SourceForge logos on your website?SourceForge is a new service for open source software. You can host your project on SourceForge servers and use many of their services like bug-tracking and the like. The hl://Dig project currently uses SourceForge for a mirror of the main website at htdig.sourceforge.net as well as a mirror of hl://Dig releases and contributed work.
1.15. My question isn't answered here. Where should I go for help?Before you go anywhere else, think of other ways of phrasing your question. Many times people have questions that are very similar to other FAQ entries, and while we try to phrase the entries to match the most common questions, we obviously can't get them all! The next place to check is the documentation itself. In particular, take a look at the list of configuration attributes, particularly the list by name and by program. There are a lot of them, but chances are there's something that might fit your needs. You should also take a close look at all of htsearch's documentation, especially the section "HTML form" which describes all the CGI input parameters available for controlling the search, including limiting the search to certain subdirectories. You can find the answer yourself to almost all "how can I..." questions by exploring what the various configuration attributes and search form input parameters can do. Also have a look at our collection of Contributed Guides for help on things like HTML forms and CGI, tutorials on installing, configuring, using, and internationalizing hl://Dig, as well as using PHP with htsearch.
Finally, if you've exhausted all the online documentation, there's the htdig-general mailing list. There are hundreds of users subscribed and chances are good that someone has had a similar problem before or can suggest a solution.
1.16. Why do the developers get annoyed when I e-mail questions directly to them rather than the mailing list?The htdig-general mailing list exists for dealing with questions about the software, its installation, configuration, and problems with it. E-mailing the developers directly circumvents this forum and its benefits. Most annoyingly, it puts the onus on an individual to answer, even if that individual is not the best or most qualified person to answer. This is not a one-man show. It also circumvents the archiving mechanism of the mailing list, so not only do subscribers not see these private messages and replies, but future users who may run into the exact same problems won't see them. Remember that the developers are all volunteers, and they don't work for free for your benefit alone. They volunteer for the benefit of the whole hl://Dig user community, so don't expect extra support from them outside of that community. See also questions 1.4 and 1.5.
Note also that when you reply to a message on the list, you should make sure the reply gets on the list as well, provided your reply is still on-topic. See question 1.17 below.
1.17. Why do replies to messages on the mailing list only go to the sender and not to the list?The simple answer is that, unlike some mailing lists, the lists on SourceForge don't force replies back on the list. This is actually a good thing, because you can reply to the sender directly if you want to, or you can use your mail program's "reply to all" capability (sometimes called "group reply") to reply to the mailing list as well. It does mean you have to think before you post a reply, but some would argue that this is a good thing too. There are some compelling reasons to try to keep on-topic discussions on the list, though (see questions 1.16 and 1.4 above).
The technical answer is SourceForge's policy on Reply-To: munging, where you'll find all the gory details about the pros and cons of the two common ways of setting up a mailing list, and why SourceForge turns off Reply-To munging. It so happens that the hl://Dig maintainers agree with SourceForge's policy on this, so even if we did have a say in the matter, we wouldn't change it. Counterarguments to this policy are therefore rather moot, and it would be better not to waste any more mailing list bandwidth debating them. (We've heard all the arguments anyway.)
1.18. Can I use hl://Dig to index and search an SQL database?You can if your database has a web-based front end that can be "spidered" by hl://Dig. The requirement is that every search result must resolve to a unique URL which can be accessed via HTTP. The htdig program uses these URLs, which you feed it via the start_url attribute, to fetch and index each page of information. The search results will then give a list of URLs for all pages that match the search terms. If you don't have such a front end to your database, or the search results must be given as something other than URLs, then hl://Dig is probably not the best way of dealing with this problem: you may be better off using an SQL query engine that works directly on your own database, rather than building a separate hl://Dig database for searching.
Ted Stresen-Reuter had the following tips: "In my case, because I like htdig's ability to rank results (and that ranking can be modified), I created an index page that simply walks through each record and indexes each record (with next and previous links so the spider can read all the records). And then I do one other thing: I make the <title> tag start with the unique ID of each record. Then, when I'm parsing the search results, I do a lookup on the database using the title tag as the key."
2.1. What is the most recent version of hl://Dig?The latest version is 3.1.6 as of this writing. A beta version of the 3.2 code, 3.2.0b6, is also available, for those who wish to test it. You can find out about the latest version by reading the release notes.
Note that if you're running any version older than 3.1.5 (including 3.2.0b1) on a public web site, you should upgrade immediately, as older versions have a rather serious security hole which is explained in detail in this advisory which was sent to the Bugtraq mailing list. Another slightly less serious, but still troubling security hole exists in 3.1.5 and older (including 3.2.0b3 and older), so you should upgrade if you're running one of these. You can view details on this vulnerability from the bugtraq mailing list. If you're unsure of which version you're running, see question 5.33.
2.2. Are there binary distributions of hl://Dig?We're trying to get consistent binary distributions for popular platforms. Contributed binary releases will go in the contributed binaries section and contributions should be mentioned to the htdig-general mailing list.
Anyone who would like to make consistent binary distributions of hl://Dig should at least sign up for the htdig-announce mailing list.
2.3. Are there mirror sites for hl://Dig?Yes, see our mirrors listing. If you'd like to mirror the site, please see the mirroring guide.
2.4. Is hl://Dig available by ftp?Yes. You can find the current versions and several older versions at various <mirror sites> as well as the other locations mentioned in the download page.
2.5. Are patches around to upgrade between versions?Most versions are also distributed as a patch to the previous version's source code. The most recent exception to this was version 3.1.0b1. Since this version switched from the GDBM database to DB2, the new database package needed to be shipped with the distribution. This made the potential patch almost as large as the regular distribution. Update patches resumed with version 3.1.0b2. You can also find archives of patches submitted to the htdig mailing lists, to fix specific bugs or add features, at Joe Jah's htdig-patches ftp site.
2.6. Is there a Windows 95/98/2000/NT version of hl://Dig?The hl://Dig package can be built on the Win32 platform when using the Cygwin package. For details, see the contributed guide, Idiot's Guide to Installing hl://Dig on Win32.
As of the 3.2.0b5 beta release, there is also native Win32 support, thanks to Neal Richter. (Installation docs will be written soon...)
2.7. Where can I find the documentation for my version of hl://Dig?The documentation for the most recent stable release is always posted at www.htdig.org. The documentation for the latest beta release can be found at http://www.htdig.org/dev/htdig-3.2/. In all releases, the documentation is included in the htdoc subdirectory of the source distribution, so you always have access to the documentation for your current version.
3.1. When compiling, I get errors about libstdc++ or -lstdc++.This usually indicates that either libstdc++ is not installed or is installed incorrectly. To get libstdc++ or any other GNU tool, check ftp://ftp.gnu.org/gnu/. Note that the most recent versions of gcc come with libstdc++ included and are available from http://gcc.gnu.org/
3.2. I get an error about -lgThis is due to a bug in the Makefile.config.in of version 3.1.0b1. Remove all flags "-ggdb" in Makefile.config.in. Then type "./config.status" to rebuild the Makefiles and recompile. This bug is fixed in version 3.1.0b2.
3.3. I'm compiling on Digital Unix and I get messages about "unresolved" and "db_open."Answer contributed by George Adams <learningapache@my-dejanews.com>
What you're seeing are problems related to the Berkeley DB library. htdig needs a fairly modern version of db, which is why it ships with one that works (see that -L../db-2.4.14/dist line? That's where htdig's db library is). The solution is to modify the c++ command so it explicitly references the correct libdb.a. You can do this by replacing the "-ldb" directive in the c++ command with "../db-2.4.14/dist/libdb.a". This problem has been resolved as of version 3.1.0.
Answer contributed by Laura Wingerd <laura@perforce.com>
I got a clean build of htdig-3.1.2 on FreeBSD 2.2.8 by taking -D_THREAD_SAFE out of CPPFLAGS, and setting LIBS to null, in db/dist/configure.
Answer contributed by Adam Rice <adam@newsquest.co.uk>
The problem is that the Solaris loader can't find the library. The best thing to do is set the LD_RUN_PATH environment variable during compile to the directory where libstdc++.so.2.8.1.1 lives. This tells the linker to search that directory at runtime.
Note that LD_RUN_PATH is not to be confused with LD_LIBRARY_PATH. The latter is parsed at run-time, while LD_RUN_PATH essentially compiles in a library path into the executable, so that it doesn't need a LD_LIBRARY_PATH setting to find its libraries. This allows you to avoid all the complexities of setting an environment variable for a CGI program run from the server. If all else fails, you can always run your programs from wrapper shell scripts that set the LD_LIBRARY_PATH environment variable appropriately.
Note also that while this answer is specific to Solaris, it may work for other OSes too, so you may want to give it a try. However, not all versions of the ld program on all OSes support the LD_RUN_PATH environment variable, even if these systems support shared libraries. Try "man ld" on your system to find out the best way of setting the runtime search path for shared libraries. If ld doesn't support LD_RUN_PATH, but does support the -R option, you can add one or more of these options to LIBDIRS in Makefile.config before running make on a 3.1.x release. (For a 3.2 beta release, you can add these options to the LDFLAGS environment variable before you run ./configure.)
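For example, with a Bourne-style shell the variable might be set before building, roughly like this (the library directory shown is only a guess; use whatever directory actually holds libstdc++.so on your system):

LD_RUN_PATH=/usr/local/lib
export LD_RUN_PATH
make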
It is not entirely clear why these problems occur, though they seem to only happen when older compilers are used. Several people have reported that the problems go away when using the latest version of gcc.
3.8. I'm compiling with gcc 3.2 and getting all sorts of warnings/errors about ostream and such.With versions before 3.2.0b5, you should use the following command to configure the hl://Dig package so it can be built with gcc 3.2:
CXXFLAGS=-Wno-deprecated CPPFLAGS=-Wno-deprecated ./configure
4.1. Why won't hl://Dig index my site?There are a variety of reasons hl://Dig won't index a site. To get to the bottom of things, it's advisable to turn on some debugging output from the htdig program. When running from the command-line, try "-vvv" in addition to any other flags. This will add debugging output, including the responses from the server.
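For example, a debugging run from the command line might look something like this (the configuration file path is only illustrative; use whatever your installation or rundig script normally passes to htdig):

/usr/local/bin/htdig -vvv -c /usr/local/etc/htdig/htdig.conf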
See also questions 5.25, 5.27, 5.16 and 5.18.
4.2. How can I change the output format of htsearch?Answer contributed by: Malki Cymbalista <Malki.Cymbalista@weizmann.ac.il>
You can change the output format of htsearch by creating different header, footer and result files that specify how you want the output to look. You then create a configuration file that specifies which files to use. In the html document that links to the search, you specify which configuration file to use.
So the configuration file would have the lines:
search_results_header: ${common_dir}/ccheader.html
search_results_footer: ${common_dir}/ccfooter.html
template_map: Long long builtin-long \
              Short short builtin-short \
              Default default ${common_dir}/ccresult.html
template_name: Default
You would also put into the configuration file any other lines from the default configuration file that apply to htsearch.
The files ${common_dir}/ccheader.html and ${common_dir}/ccfooter.html and ${common_dir}/ccresult.html would be tailored to give the output in the desired format.
Assuming your configuration file is called cc.conf, the html file that links to the search has to set the config parameter equal to cc. The following line would do it:
<input type="hidden" name="config" value="cc">
Note: Don't just add the line above to your search form without checking if there isn't already a similar line giving the config attribute a different value. The sample search.html form that comes with the package includes a line like this already, giving "config" the default value of "htdig". If it's there, modify it instead of adding another definition. The config input parameter doesn't need to be hidden either, and you may want to define it as a pull-down list to select different databases (see question 4.4).
4.3. How do I index pages that start with '~'?hl://Dig should request and index pages whose URLs contain '~' just as any other web browser would. If you are having problems with this, check your server log files to see what file the server is attempting to return.
4.4. Can I use multiple databases?Yes, though you may find it easier to have one larger database and use restrict or exclude fields on searches. To use multiple databases, you will need a config file for each database. Then each file will set the database_dir or database_base attribute to change the name of the databases. The config file is selected by the config input field in the search form.
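For example, you might have one configuration file per collection, each pointing at its own database directory (the file names, paths and URLs below are made up for illustration):

# docs.conf
database_dir: /var/lib/htdig/docs
start_url: http://www.example.com/docs/

# intranet.conf
database_dir: /var/lib/htdig/intranet
start_url: http://intranet.example.com/

The search form for the first collection would then contain <input type="hidden" name="config" value="docs">.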
See also questions 4.2 and 4.20.
4.5. Can I merge several databases into one?As of version 3.1.0, you can do this with the -m option to hlmerge.
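For example, assuming the two illustrative configuration files from question 4.4, a command along these lines should fold the second database into the first (check the hlmerge documentation for the exact semantics in your version):

hlmerge -c docs.conf -m intranet.conf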
4.6. Wow, hl://Dig eats up a lot of disk space. How can I cut down?There are several ways to cut down on disk space. One is not to use the "-a" option, which creates work copies of the databases. Naturally this essentially doubles the disk usage. If you don't need to index and search at the same time, you can ignore this flag.
If you are running 3.2.0b5 or higher and don't have compression turned on, then turning that on will also save considerable space.
Changing configuration variables can also help cut down on disk usage. Decreasing max_head_length and max_meta_description_length will cut down on the size of the excerpts stored (in fact, if you don't have use_meta_description set, you can set max_meta_description_length to 0!).
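For instance, a configuration that trims the stored excerpts might contain lines like these (the values are only examples, and the zero setting assumes use_meta_description is not set, as noted above):

max_head_length: 5000
max_meta_description_length: 0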
If you are running 3.2.0b6 or higher, you can turn off store_phrases. This cuts the database size by about 60%, at the expense of severely limiting the effectiveness of phrase searches. It also reduces digging time slightly.
Other techniques include removing the db.wordlist file and adding more words to the bad_words file.
The University of Leipzig has published word lists containing the 100, 1000 and 10000 most often used words in English, German, French and Dutch. No copyrights or restrictions seem to be applied to the downloadable files. These can be very handy when putting together a bad_words file. Thanks to Peter Asemann for this tip.
4.7. Can I use SSI or other CGIs in my htsearch results?Not really. Apache will not parse CGI output for SSI statements (see the Apache FAQ), so the htsearch CGI does not understand SSI markup and cannot include other CGIs. However, it is possible to do it the other way round: you can have the htsearch results included in your dynamic page.
The Apache project has mentioned that this will be a feature added to the Apache 2.0 version, currently in development.
The easiest approach in the meantime is using SSI with the help of the script_name configuration file attribute. See the contrib/scriptname directory for a small example using SSI. For CGI and PHP, you need a "wrapper" script to do that. For Perl script examples, see the files in contrib/ewswrap. The PHP guide (see contributed guides) not only describes a wrapper script for PHP, but also offers a step-by-step tutorial on the basics of hl://Dig and is well worth reading. For other alternatives, see question 4.11.
4.8. How do I index Word, PostScript, or other non-HTML documents?This must be done with an external parser or converter. A sample of such an external converter is the contrib/doc2html/doc2html.pl Perl script. It will parse Word, PostScript, PDF and other documents, when used with the appropriate document to text converters. It uses catdoc to parse Word documents, and ps2ascii to parse PostScript files. The comments in the Perl script and accompanying documentation indicate where you can obtain these converters.
Versions of htdig before 3.1.4 don't support external converters, so you have to use an external parser script such as contrib/parse_doc.pl (or better yet, upgrade htdig if you can). External converter scripts are simpler to write and maintain than a full external parser, as they just convert input documents to text/plain or text/html, and pass that back to htdig to be parsed. Parsing is more consistent across document types with external converters, because the final work is done by htdig's internal parsers. External parser scripts tend to be hacks that don't recognize a lot of the parsing attributes in your htdig.conf, so they have to be hacked some more when you change your attributes.
The most recent versions of parse_doc.pl, conv_doc.pl and the doc2html package are available on our web site. See below for an example of doc2html.pl, or see the comments in conv_doc.pl and parse_doc.pl, or the documentation for doc2html, for examples of their usage. For help with troubleshooting, see questions 5.37 and 5.39.
4.9. How do I index PDF files?This too can be done with an external parser or converter, in combination with the pdftotext program that is part of the xpdf 0.90 package. A sample of such a converter is the doc2html.pl Perl script. It uses pdftotext to parse PDF documents, then processes the text into external parser records. The most recent version of doc2html.pl is available on our web site.
For example, you could put this in your configuration file:
external_parsers: application/msword->text/html /usr/local/bin/doc2html.pl \
                  application/postscript->text/html /usr/local/bin/doc2html.pl \
                  application/pdf->text/html /usr/local/bin/doc2html.pl
You would also need to configure the script to indicate where all of the document to text converters are installed. See the DETAILS file that comes with doc2html for more information.
Versions of htdig before 3.1.4 don't support external converters, so you have to use an external parser script such as contrib/parse_doc.pl (or better yet, upgrade htdig if you can). See question 4.8 above.
Whichever method you use, whether this external parser or converter, or acroread with the pdf_parser attribute, to successfully index PDF files be sure to set the max_doc_size attribute to a value larger than the size of your largest PDF file. PDF documents cannot be parsed if they are truncated.
This also raises the questions of why two different methods of indexing PDFs are supported, and which method is preferred. The built-in PDF support, which uses acroread to convert the PDF to PostScript, was the first method which was provided. It had a few problems with it: acroread is not open source, it is not supported on all systems on which hl://Dig can run, and for some PDFs, the PostScript that acroread generated was very difficult to parse into indexable text. Also, the built-in PDF support expected PDF documents to use the same character encoding as is defined in your current locale, which isn't always the case. The external converters, which use pdftotext, were developed to overcome these problems. xpdf 0.90 is free software, and its pdftotext utility works very well as an indexing tool. It also converts various PDF encodings to the Latin 1 set. It is the opinion of the developers that this is the preferred method. However, some users still prefer to stick with acroread, as it works well for them, and is a little easier to set up if you've already installed Acrobat.
Also, pdftotext still has some difficulty handling text in landscape orientation, even with its new -raw option in 0.90, so if you need to index such text in PDFs, you may still get better results with acroread. The pdf_parser attribute has been removed from the 3.2 beta releases of htdig, so to use acroread with htdig 3.2.0b5 or other 3.2 betas, use the acroconv.pl external converter script from our web site.
See also question 5.2 below and question 1.13 above. See questions 5.37 and 5.39 for troubleshooting tips.
4.10. How do I index documents in other languages?The first and most important thing you must do, to allow hl://Dig to properly support international characters, is to define the correct locale for the language and country you wish to support. This is done by setting the locale attribute (see question 5.8). The next step is to configure hl://Dig to use dictionary and affix files for the language of your choice. These can be the same dictionary and affix files as are used by the ispell software. A collection of these is available from Geoff Kuenning's International Ispell Dictionaries page, and we're slowly building a collection of word lists on our web site.
For example, if you install German dictionaries in common/german, you could use these lines in your configuration file:
locale: de_DE
lang_dir: ${common_dir}/german
bad_word_list: ${lang_dir}/bad_words
endings_affix_file: ${lang_dir}/german.aff
endings_dictionary: ${lang_dir}/german.0
endings_root2word_db: ${lang_dir}/root2word.db
endings_word2root_db: ${lang_dir}/word2root.db
You can build the endings database with "hlfuzzy endings". (This command may actually take days to complete, for releases older than 3.1.2. Current releases use faster regular expression matching, which will speed this up by a few orders of magnitude.) Note that the "*.0" files are not part of the ispell dictionary distributions, but are easily made by concatenating the partial dictionaries and sorting to remove duplicates (e.g. "cat * | sort | uniq > lang.0" in most cases). You will also need to redefine the synonyms file if you wish to use the synonyms search algorithm. This file is not included with most of the dictionaries, nor is the bad_words file.
If you put all the language-specific dictionaries and configuration files in separate directories, and set all the attribute definitions accordingly in each search config file to access the appropriate files, you can have a multilingual setup where the user selects the language by selecting the "config" input parameter value. In addition to the attributes given in the example above, you may also want custom settings for these language-specific attributes: date_format, iso_8601, method_names, no_excerpt_text, no_next_page_text, no_prev_page_text, nothing_found_file, page_list_header, prev_page_text, search_results_wrapper (or search_results_header and search_results_footer), sort_names, synonym_db, synonym_dictionary, syntax_error_file, template_map, and of course database_dir or database_base if you maintain multiple databases for sites of different languages. You could also change the definition of common_dir, rather than making up a lang_dir attribute as above, as many language-specific files are defined relative to the common_dir setting.
If you're running version 3.1.6 of hl://Dig, you may also be interested in the accents fuzzy match algorithm in the search_algorithm attribute, which lets you treat accented and unaccented letters as equivalent in words. Note that if you use the accents algorithm, you need to rebuild the accents database each time you update your word database, using "hlfuzzy accents". This command isn't in the default rundig script, so you may want to add it there. The accents fuzzy match algorithm is also in the 3.2 beta releases. There are also the boolean_keywords and boolean_syntax_errors attributes in 3.1.6 for changing other language-specific messages in htsearch.
Current versions of hl://Dig only support 8-bit characters, so languages such as Chinese and Japanese, which require 16-bit characters, are not currently supported.
Didier Lebrun has written a guide for configuring htdig to support French, entitled Comment installer et configurer HlDig pour la langue française. His "kit de francisation" is also available on our web site.
See also question 4.2 for tips on customizing htsearch, and question 4.6 for tips on where to find bad_words files.
4.11. How do I get rotating banner ads in search results?While htsearch doesn't currently provide a means of doing SSI on its output, or calling other CGI scripts, it does have the capability of using environment variables in templates.
The easiest way to get rotating banners in htsearch is to replace htsearch with a wrapper script that sets an environment variable to the banner content, or whatever dynamically generated content you want. Your script can then call the real htsearch to do the work. The wrapper script can be written as a shell script, or in Perl, C, C++, or whatever you like. You'd then need to reference that environment variable in header.html (or wrapper.html if that's what you're using), to indicate where the dynamic content should be placed.
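A minimal sketch of such a wrapper in shell, assuming the real binary has been renamed htsearch.real and that your header.html refers to a variable named BANNER_HTML (both names are invented for this example):

#!/bin/sh
# Wrapper CGI: put the banner markup into an environment variable,
# then hand control to the real htsearch, leaving QUERY_STRING and
# any POST data untouched.
BANNER_HTML='<img src="/ads/banner1.gif" alt="banner">'
export BANNER_HTML
exec /usr/local/lib/cgi-bin/htsearch.real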
If the dynamic content is generated by a CGI script, your new wrapper script which calls this CGI would then have to strip out the parts that you don't want embedded in the output (headers, some tags) so that only the relevant content gets put into the environment variable you want. You'd also have to make sure this CGI script doesn't grab the POST data or get confused by the QUERY_STRING contents intended for htsearch. Your script should not take anything out of, or add anything to, the QUERY_STRING environment variable.
An alternative approach is to have a cron job that periodically regenerates a different header.html or wrapper.html with the new banner ad, or changes a link to a different pre-generated header.html or wrapper.html file. For other alternatives, see question 4.7.
4.12. How do I index numbers in documents?By default, htdig doesn't treat numbers without letters as words, so it doesn't index them. To change this behavior, you must set the allow_numbers attribute to true, and rebuild your index from scratch using rundig or htdig with the -i option, so that bare numbers get added to the index.
4.13. How can I call htsearch from a hypertext link, rather than from a search form?If you change the search.html form to use the GET method rather than POST, you can see the URLs complete with all the arguments that htsearch needs for a query. Here is an example:
http://www.grommetsRus.com/cgi-bin/htsearch?config=htdig&restrict=&exclude=&method=and&format=builtin-long&words=grapple+grommets
which can actually be simplified to:
http://www.grommetsRus.com/cgi-bin/htsearch?method=and&words=grapple+grommets
with the current defaults. The "&" character acts as a separator for the input parameters, while the "+" character acts as a space character within an input parameter.
In versions 3.1.5 or 3.2.0b2, or later, you can use a semicolon character ";" as a parameter separator, rather than "&", for HTML 4.0 compliance. Most non-alphanumeric characters should be hex-encoded following the convention for URL encoding (e.g. "%" becomes "%25", "+" becomes "%2B", etc). Any htsearch input parameter that you'd use in a search form can be added to the URL in this way.
This can be embedded into an <a href="..."> tag.
See also question 5.21.
4.14. How do I restrict searches to the meta keywords fields of my documents?First of all, you do not do this by using the "keywords" field in the search form. This seems to be a frequent cause of confusion. The "keywords" input parameter to htsearch has absolutely nothing to do with searching meta keywords fields. It actually predates the addition of meta keyword support in 3.1.x. A better choice of name for the parameter would have been "requiredwords", because that's what it really means - a list of words that are all required to be found somewhere in the document, in addition to the words the user specifies in the search form.
As of 3.2.0b5, the most direct way to search for a particular meta keyword is to specify the word as "keyword:<word>". Similarly, "title:", "heading:", and "author:" restrict searches to the respective fields. To search for words in the body of the text, use "text:".
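For example, using the sample site from question 4.13, a query such as the following should match pages whose meta keywords include "grommets" and whose title contains "catalog", combined according to the search method in effect (the exact behaviour depends on your 3.2 beta version):

keyword:grommets title:catalog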
To restrict all search terms to meta keywords only, you can set all factors other than keywords_factor to 0, and for 3.1.x, you must then reindex your documents. In the 3.2 betas, you can change factors at search time without needing to reindex. As of 3.2.0b5, it is possible to restrict the search in the query itself. Note that changing the scoring factors in this way will only alter the scoring of search results, and shift the low or zero scores to the end of the results when sorting by score (as is done by default). For versions before 3.2.0b5, the results with scores of zero aren't actually removed from the search results.
4.15. Can I use meta tags to prevent htdig from indexing certain files?Yes, in each HTML file you want to exclude, add the following between the <HEAD> and </HEAD> tags:
<META NAME="robots" CONTENT="noindex, follow">
Doing so will allow htdig to still follow links to other documents, but will prevent this document from being put into the index itself. You can also use "nofollow" to prevent following of links. See the section on Recognized META information for more details. For documents produced automatically by MhonArc, you can have that line inserted automatically by putting it in the MhonArc resource file, in the sections IDXPGBEGIN and TIDXPGBEGIN.
You can also use the noindex_start and noindex_end attributes to define one set of tags which will mark sections to be stripped out of documents, so they don't get indexed, or you can mark sections with the non-DTD <noindex> and </noindex> tags. The noindex_start and noindex_end attributes can also be used to suppress in-line JavaScript code that wasn't properly enclosed in HTML comment tags (see question 4.26). In 3.1.6, you can also put a section between <noindex follow> and </noindex> tags to turn off indexing of text but still allow htdig to follow links.
If you require much more elaborate schemes for avoiding indexing certain parts of your HTML files, especially if you don't have control over these files and can't add tags to them, you can set up htdig's external_parsers attribute with an external converter that will preprocess the HTML before it's parsed and indexed by htdig. Examples of this are the unhypermail.sh script in our contributed parsers and the ungeoify.sh script in our contributed scripts. By preprocessing the HTML, you can strip out parts you don't want, or you can add or change tags wherever they're needed, if you're willing to put in the effort to learn awk/sed/perl enough to do the job.
4.16. How do I get htsearch to use the star image in a different directory than the default /htdig?You must set either the image_url_prefix attribute, or both star_blank and star_image in your htdig.conf, to refer to the URL path for these files. You should also set this URL path similarly in common/header.html and common/wrapper.html, as they also refer to the star.gif file. If you want to relocate other graphics, such as the buttons or the hl://Dig logo, you should change all references to these in htdig.conf and common/*.html.
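For example, if you have copied the images to an /images/htdig directory on your web server (a path chosen purely for illustration), your htdig.conf might contain:

image_url_prefix: /images/htdig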
4.17. How do I get htdig or htsearch to rewrite URLs in the search results?This can be done by using the url_part_aliases configuration file attribute. You have to set up different configuration files for htdig and htsearch, to define a different setting of this attribute for each one.
A large number of users insist on ignoring that last point and try to make do with just one definition, either for htdig or htsearch, or sometimes for both. This seems to stem from a fundamental misunderstanding of how this attribute works, so perhaps a clarification is needed. The url_part_aliases attribute uses a two stage process. In the first stage, htdig encodes the URLs as they go into the database, by using the pairs in url_part_aliases going from left to right. In the second stage, htsearch decodes the encoded URLs taken from the database, by using the pairs in url_part_aliases going from right to left. If you have the same value for url_part_aliases in htdig and htsearch, you end up with the same URLs in the end. If you modify the first string (the from string) in the pairs listed in url_part_aliases for htsearch, then when htsearch decodes the URLs it ends up rewriting part of them.
While you might think that if you don't use url_part_aliases in htdig, then you can use it in htsearch to alter unencoded URLs, the reality is that if you don't encode parts of URLs using url_part_aliases, they still get encoded automatically by the common_url_parts attribute. This helps to reduce the size of your databases. So, trying to use url_part_aliases only in htsearch doesn't work because there are no unencoded URLs in the database, so the right hand strings in the pairs you define won't match anything.
You also can't put two different definitions of the url_part_aliases attribute in a single configuration file, as some users have attempted. When you define an attribute twice, the second definition merely overrides the first. Pay close attention to the description and examples for url_part_aliases. You must put one definition of this attribute in your configuration file for htdig, hlmerge (or hlpurge) and hlnotify, and a different definition of it in your configuration file for htsearch.
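As a sketch (the host names and the alias are made up), the configuration file used by htdig, hlmerge and hlnotify might contain

url_part_aliases: http://intranet.example.com/docs/ *docs

while the configuration file used by htsearch contains

url_part_aliases: http://www.example.com/documentation/ *docs

With this pair of definitions, htdig encodes the internal prefix as *docs when storing URLs, and htsearch decodes *docs back into the public prefix, so the search results show the rewritten public URLs.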
4.18. What are all the options in htdig.conf, and are there others?In hl://Dig's terminology, the settings in its configuration files are called configuration attributes, to distinguish them from command line options, CGI input parameters and template variables. There are many, many attributes that can be set to control almost all aspects of indexing, searching, customization of output and internationalization. All attributes have a built-in default setting, and only a subset of these appear in the sample htdig.conf file. See the documentation for all default values for attributes not overridden in the configuration file, and for help on using any of them. See also question 1.15.
4.19. How do I get more than 10 pages of 10 search results from htsearch?There are two attributes that control the number of matches per page and the total number of pages. The number of matches per page can be set in your configuration file, using the matches_per_page attribute, or in your search form, using the matchesperpage input parameter.
The number of pages is controlled by the maximum_pages attribute in your search configuration file. The current default for maximum_pages is 10 because the hl://Dig package comes with 10 images, with numbers 1 through 10, for use as page list buttons. If we increased the limit, we'd have to field a whole lot more questions from users irate because only the first 10 buttons are graphics, and the rest are text. If you want more than 10 pages of results, change maximum_pages, but you may also want to set the page_number_text and no_page_number_text attributes in your search configuration file to nothing, or remove them, to use text rather than images for the links to other pages.
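For example, to allow up to 25 pages of 20 results each, with plain text links to the extra pages, the search configuration file might contain something like this (the values are only illustrative):

matches_per_page: 20
maximum_pages: 25
page_number_text:
no_page_number_text: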
In versions of htsearch before 3.1.4, maximum_pages limited only the number of page list buttons, and not the actual number of pages. This was changed because there was no means of limiting the total number of pages, but this ended up frustrating users who wanted the ability to have more pages than buttons. In 3.2.0b3 and 3.1.6 we introduced a maximum_page_buttons attribute for this purpose.
4.20. How do I restrict a search to only certain subdirectories or documents?That depends on whether you want to protect certain parts of your site from prying eyes, or just limit the scope of search results to certain relevant areas. For the latter, you just need to set the restrict or exclude input parameter in the search form. This can be done using hidden input fields containing preset values, text input fields, select lists, radio buttons or checkboxes, as you see fit. If you use select lists, you can propagate the choices to select lists in the follow-up search forms using the build_select_lists configuration attribute.
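For example, a select list for the restrict input parameter might look like this (the URLs are placeholders for your own site's areas):

<select name="restrict">
<option value="">Entire site</option>
<option value="http://www.example.com/docs/">Documentation</option>
<option value="http://www.example.com/news/">News</option>
</select>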
The University at Albany has a good description of how to use the restrict or exclude input parameters: Constructing a local search using hl://Dig Search forms. To include a hex encoded character (such as a %20 for a space) in a restrict or exclude string, the '%' must again be encoded. For example, to match a filename containing a space, the URL must contain %20, and so the CGI parameter passed to htsearch must contain %2520. The %25 encodes the '%'. (Note that this is only necessary for CGI input parameters, not for the corresponding configuration attributes in your htdig.conf file, as attributes aren't subjected to the same hex decoding step as parameters are.)
See also question 4.4.
If you wish to keep secure and non-secure areas on your site separate, and avoid having unauthorized users seeing documents from secure areas in their search results, that takes a bit more effort. You certainly can't rely on the restrict and exclude parameters, or even the config parameter, as any parameter in a search form can also be overridden by the user in a URL with CGI parameters. The safest option would be to host the secure and non-secure areas on separate servers with independent installations of htsearch, each with its own hl://Dig database, but that is often too costly or impractical an option. The next best thing is to host them on the same site, but make sure that everything is very clearly separated to prevent any leakage of secure data. You should maintain separate databases for the secure and public areas of your site, by setting up different htdig configuration files for each area. Use different settings of the start_url, limit_urls_to and database_dir configuration attributes, and possibly even different common_dir settings as well. Make sure your database_dir, and even your common_dir, are not in any directories accessible from the web server. Run htdig and hlmerge (or rundig) with each separate configuration file, to build your two databases.
The tricky part is to make sure your htsearch program is secure. You don't want to use the same htsearch for the secure and public sites, because otherwise the public site could access the configuration for the secure database, making its data publicly accessible. You must either compile two separate versions of htsearch, with different settings of the CONFIG_DIR make variable, or you must make a simple wrapper script for htsearch that overrides the compiled-in CONFIG_DIR setting by a different setting of the CONFIG_DIR environment variable. Make sure the CONFIG_DIR for the secure area is not a subdirectory of the CONFIG_DIR for the public area. In this way, you can maintain separate directories of config files for the public and secure sites, so that the secure config files are not accessible from the public htsearch.
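A minimal sketch of such a wrapper (all paths are illustrative):

#!/bin/sh
# htssearch: htsearch wrapper for the protected area.
# Point htsearch at the directory holding the secure configuration files.
CONFIG_DIR=/usr/local/etc/htdig-secure
export CONFIG_DIR
exec /usr/local/lib/htdig/htsearch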
Put the htsearch binary or wrapper script for the secure site in a different ScriptAlias'ed cgi-bin directory than the public one, and protect the secure cgi-bin with a .htaccess file or in your server configuration. Alternatively, you can put the secure program, let's call it htssearch, in the same cgi-bin, but protect that one CGI program in your server configuration, e.g.:
<Location /cgi-bin/htssearch>
    AuthType Basic
    AuthName ....
    AuthUserFile ...
    AuthGroupFile ...
    <Limit GET POST>
        require group foo
    </Limit>
</Location>
This describes the setup for an Apache server. You'd need to work out an equivalent configuration for your server if you're not running Apache.
4.21. How can I allow people to search while the index is updating?Answer contributed by Avi Rappoport <avirr@searchtools.com>
If you have enough disk space for two copies of the index database, use -a with the htdig and hlmerge processes. This will make use of a copy of the index database with the extension ".work", and update the copy instead of the originals. This way, htsearch can use those originals while the update is going on. When it's done, you can move the .work versions to replace the originals, and htsearch will use them. The current rundig script will do this for you if you supply the -a flag to it. However, rundig builds the database from scratch each time you run it. If you want to update an alternate copy of the database, see the contributed rundig.sh script.
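A rough sketch of the swap step, assuming the databases live in /var/lib/htdig and all carry a db. prefix (adjust the path and pattern to your own database_dir and file names):

cd /var/lib/htdig
for f in db.*.work; do
    mv "$f" "${f%.work}"
done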
4.22. How can I get htdig to ignore the robots.txt file or meta robots tags?You can't, and you shouldn't. The Standard for Robot Exclusion exists for a very good reason, and any well behaved indexing engine or spider should conform to it. If you have a problem with a robots.txt file, you really should take it up with the site's webmaster. If they don't have a problem with you indexing their site, they shouldn't mind setting up a User-agent entry in their robots.txt file with a name you both agree on. The user agent setting that htdig uses for matching entries in robots.txt can be changed via the robotstxt_name attribute in your config file.
If you have a problem with a robots meta tag in a document (see question 4.15) you should take it up with the author or maintainer of that page. These tags are an all or nothing deal, as they can't be set up to allow some engines and disallow others. If htdig encounters them, it has to give the page's creator the benefit of the doubt and honour them. If exceptions to the rule are wanted, this should be done with a robots.txt file rather than a meta tag.
4.23. How can I get htdig not to index some directories, but still follow links?You can simply add the directory name to your robots.txt file or to the exclude_urls attribute in your configuration, but that will exclude all files under that directory. If you want the files in that directory to be indexed, you have a couple of options. You can add an index.html file to the directory, that will include a robots meta tag (see question 4.15) to prevent indexing, and will contain links to all your files in this directory. The drawback of this is that you must maintain the index.html file yourself, as it won't be automatically updated as new files are added to the directory.
The other technique you can use, if you want the directory index to be made by the web server, is to get the server to insert the robots meta tag into the index page it generates. In Apache, this is done using the HeaderName and IndexOptions directives in the directory's .htaccess file. For example:
HeaderName .htrobots
IndexOptions FancyIndexing SuppressHTMLPreamble
and in the .htrobots file:
<HTML><head>
<META NAME="robots" CONTENT="noindex, follow">
<title>Index of /this/dir</title>
</head>
If you don't mind getting just one copy of each directory, but want to suppress the multiple copies generated by Apache's FancyIndexing option, you can either turn off FancyIndexing or you can add "?D=A ?D=D ?M=A ?M=D ?N=A ?N=D ?S=A ?S=D" to the bad_querystr attribute (without the quotes) to suppress the alternately sorted views of the directory. For Apache 2.x, you'd use "C=D C=M C=N C=S O=A O=D" instead in your bad_querystr setting.
4.24. How can I get rid of duplicates in search results?This depends on the cause of the duplicate documents. htdig does keep track of the URLs it visits, so it never puts the same URL more than once in the database. So, if you have duplicate documents in your search results, it's because the same document appears under different URLs. Sometimes the URLs vary only slightly, and in subtle ways, so you may have to look hard to find out what the variation is. Here are some common reasons, each requiring a different solution.
4.25. How are the scores for search results calculated, and can I change them?The scores are calculated mostly by htdig at indexing time, with some tweaking done by htsearch at search time. There are a number of configuration attributes, all called <something>_factor, which can control the scoring calculations. In addition, the location of words within the document has an effect on score, as word scores are also multiplied by a varying location factor somewhere in between 1000 for words near the start and 1 for words near the end of the document. As yet, there is no way to change this factor. For any of the scoring factors you can configure, and which are used by htdig, you will have to reindex your documents so the new factors take effect. The default values for these scoring factors, as well as information about whether they're used by htdig or htsearch, are all listed in the configuration attributes documentation. Malcolm Austen has written some notes on page scores for 3.1.x which you may find helpful.
Note that the above applies to the 3.1.x releases, while in the 3.2 beta releases, all scores are calculated at search time with no weight being put on the location of words within the document.
4.26. How can I get htdig not to index JavaScript code or CSS?The HTML parser in htdig recognizes and parses only HTML, which is all there should be within an HTML file. If your HTML files contain in-line JavaScript code or Cascading Style Sheets (CSS), these in-line codes, which are clearly not HTML, should be enclosed within an HTML comment tag so they are hidden from view from the HTML parser, or for that matter from any web client that is not JavaScript-aware or CSS-aware. See Behind the Scenes with JavaScript for a description of the technique, which applies equally well to in-line style sheets. If fixing up all non-HTML compliant JavaScript or CSS code in your HTML files is not an option, then see question 4.15 for an alternate technique.
The HTML parser in htdig 3.1.6 tries skipping over bare in-line JavaScript code in HTML, unlike previous versions, but a small bug in the parser causes it to be thrown off by a "<" sign in the JavaScript, and it may then miss the closing </script> tag. This can be fixed by applying this patch.
This usually has to do with the default document size limit. If you set max_doc_size in your config file to something large enough to read in the entire directory index (try 100000 for 100K) this should fix this problem. Of course this will require more memory to read the larger file. Don't set it to a value larger than the amount of memory you have, and never more than about 2 billion, the maximum value of a 32-bit integer. If htdig is missing entire directories, see question 5.25.
5.2. I can't index PDF files.As above, this usually has to do with the default document size. What happens is hl://Dig will read in part of a PDF file and try to index it. This usually fails. Try setting max_doc_size in your config file to a larger value than the size of your largest PDF file. Don't go overboard, though, as you don't want to overflow a 32-bit integer (about 2 billion), and you don't want to allocate much more memory than you need to store the largest document.
There is a bug in Adobe Acrobat Reader version 4, in its handling of the -pairs option, which causes a segmentation violation when using it with htdig 3.1.2 or earlier. There is a workaround for this as of version 3.1.3 - you must remove the -pairs option from your pdf_parser definition, if it's there. However, acroread version 4 is still very unstable (on Linux, anyway) so it is not recommended as a PDF parser. An alternative is to use an external converter with the xpdf 0.90 package installed on your system, as described in question 4.9 above.
5.3. When I run "rundig," I get a message about "DATABASE_DIR" not being found.This is due to a bug in the Makefile.in file in version 3.1.0b1. The easiest fix is to edit the rundig file and change the line "TMPDIR=@DATABASE_DIR@" to set TMPDIR to a directory with a large amount of temporary disk space for hlmerge. This bug is fixed in version 3.1.0b2.
5.4. When I run hlmerge, it stops with an "out of diskspace" message.This means that hlmerge has run out of temporary disk space for sorting. Either in your "rundig" script (if you run hlmerge through that) or before you run hlmerge, set the variable TMPDIR to a temp directory with lots of space.
5.5. I have problems running rundig from cron under Linux.This problem commonly occurs on Red Hat Linux 5.0 and 5.1, because of a bug in vixie-cron. It causes hlmerge to fail with a "Word sort failed" error. It's fixed in Red Hat 5.2. You can install vixie-cron-3.0.1-26.{arch}.rpm from a 5.2 distribution to fix the problem on 5.0 or 5.1. A quick fix for the problem is to change the first line of rundig to "#!/bin/ash" which will run the script through the ash shell, but this doesn't solve the underlying problem.
5.6. When I run hlmerge, it stops with an "Unexpected file type" message.Often this is because the databases are corrupt. Try removing them and rebuilding. If this doesn't work, some have found that the solution for question 3.2 works for this as well. This should be fixed in the 3.1.x releases and later.
5.7. When I run htsearch, I get lots of Internal Server Errors (#500).If you are running under Solaris, see 3.6.
The solution for Solaris may also work for other OSes that use shared
libraries in non-standard locations, so refer to question 3.6 if
you suspect a shared library problem. In any case, check your web
server error logs to see the cause of the internal server errors.
If it's not a problem with shared libraries, there's a good chance
that the error logs will still contain useful error messages that
will help you figure out what the problem is.
See also questions 5.13 and
5.23.
5.8. Why doesn't htdig index words with accented or other 8-bit characters correctly?Most of the time, this is caused by either not setting or incorrectly setting the locale attribute. The default locale for most systems is the "portable" locale, which strips everything down to standard ASCII. Most systems expect something like "locale: en_US" or "locale: fr_FR". Locale files are often found in /usr/share/locale, or indicated by the $LANGUAGE environment variable. See also question 4.10.
Setting the locale correctly seems to be a frequent source of frustration for hl://Dig users, so here are a few pointers which some have found useful. First of all, if you don't have any luck with the settings of the locale attribute that you try, make sure you use a locale that is defined on your system. As mentioned above, these are usually installed in /usr/share/locale, so look there for a directory named for the locale you want to use. If you don't find it, but find something close, try that locale name. Note that the locale may not have to be specific to the language you're indexing, as long as it uses the same character set. E.g. most western European languages use the ISO-8859-1 Latin 1 character set, so on most systems the locales for all these languages define the same character types table and can be used interchangeably. Some systems, however, define only the accented letters used for a given language, so "your mileage may vary." The important thing is that the directory for your locale definition must have a file named LC_CTYPE in it. For example, on many Linux distributions, a language-specific locale like fr won't contain this file, but country-specific locales like fr_FR or fr_CA will. If you don't find any appropriate locales installed on your system, try obtaining and installing the locale definition files from your OS distribution. Also, once you've set your locale, you need to reindex all your documents in order for the locale to take effect in the word database. This means rerunning the "rundig" script, or running "htdig -i" and hlmerge (or hlpurge in the 3.2 betas).
Note also that some UNIX systems and libc5-based Linux systems just don't have a working implementation of locales, so you may not be able to get locales working at all on certain systems. The testlocale.c program on our web site can let you see the LC_CTYPE tables for any locale, to aid in finding one that works. Carefully follow the directions in the program's comments to know how to use it and what to look for in its output.
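As a quick check (assuming a glibc-based Linux system, and using fr_FR purely as an example), you can look for an installed locale and its LC_CTYPE file like this:
locale -a | grep fr_FR
ls /usr/share/locale/fr_FR/LC_CTYPE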
5.9. When I run hlmerge, it stops with a "Word sort failed" message.There are three common causes of this. First of all, the sort program may be running out of temporary file space. Fix this by freeing up some space where sort puts its temporary files, or change the setting of the TMPDIR environment variable to a directory on a volume with more space. A second common problem is on systems with a BSD version of the sort program (such as FreeBSD or NetBSD). This program uses the -T option as a record separator rather than an alternate temporary directory. On these systems, you must remove the TMPDIR environment variable from rundig, or change the code in hlmerge/words.cc not to use the -T option. A third cause is the cron program on Red Hat Linux 5.0 or 5.1. (See question 5.5 above.)
5.10. When htsearch has a lot of matches, it runs extremely slowly.When you run htsearch with no customization, on a large database, and it gets a lot of hits, it tends to take a long time to process those hits. Some users with large databases have reported much higher performance, for searches that yield lots of hits, by setting the backlink_factor attribute in htdig.conf to 0, and sorting by score. The scores calculated this way aren't quite as good, but htsearch can process hits much faster when it doesn't need to look up the db.docdb record for each hit, just to get the backlink count, date or title, either for scoring or for sorting. This affects versions 3.1.0b3 and up. In version 3.2, currently under development, the databases will be structured differently, so it should perform searches more quickly.
In version 3.1.6, the date range selection code also slows down htsearch for the same reason. Unfortunately, a small bug crept into the code so that even if you don't set any of the date range input parameters (startyear, endyear, etc.), and you set backlink_factor and date_factor to 0, htsearch still looks at the date in the db.docdb record for each hit. You can avoid this either by setting startyear to 1969 and endyear to 2038 in your config file, or by applying this patch.
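Putting the suggestions above together, the relevant htdig.conf settings would be:
backlink_factor: 0
date_factor: 0
startyear: 1969
endyear: 2038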
5.11. When I run htsearch, it gives me a count of matches, but doesn't list the matching documents.This most commonly happens when you run htsearch while the database is currently being rebuilt or updated by htdig. If htdig and hlmerge have run to completion, and the problem still occurs, this is usually an indication of a corrupted database. If it's finding matches, it's because it found the matching words in db.words.db. However, it isn't finding the document records themselves in db.docdb, which would suggest that either db.docdb, or db.docs.index (which maps document IDs used in db.words.db to URLs used to look up records in db.docdb), is incomplete or messed up. You'll likely need to rebuild your database from scratch if it's corrupted. Older versions of hl://Dig were susceptible to database corruption of this sort. Versions 3.1.2 and later are much more stable.
Another possible cause of this problem is unreadable result template files. If you define external template files via the template_map attribute, rather than using the builtin-short or builtin-long templates, and the file names are incorrect or the files do not have read permission for the user ID under which htsearch runs, then htsearch won't be able to display the results. Also, all directories leading up to these template files must be searchable (i.e. executable) by htsearch, or it won't be able to open the files. This is the opposite problem of that described in question 5.36. If htsearch displays nothing at all, you may have both problems.
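If you suspect a permissions problem, something like the following can confirm and fix it (the paths are illustrative; substitute the directory where your template files actually live, and the user ID your web server runs as):
ls -l /usr/local/share/htdig/templates
chmod a+r /usr/local/share/htdig/templates/*.html
chmod a+x /usr/local/share/htdig /usr/local/share/htdig/templates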
5.12. I can't seem to index documents with names like left_index.html with htdig.There is a bug in the implementation of the remove_default_doc attribute in htdig versions 3.1.0, 3.1.1 and 3.1.2, which causes it to match more than it should. The default value for this attribute is "index.html", so any URL in which the filename ends with this string (rather than matches it entirely) will have the filename stripped off. This is fixed in version 3.1.3.
5.13. I get Premature End of Script Headers errors when running htsearch.This happens when htsearch dies before putting out a
"Content-Type" header. If you are running Apache under Solaris,
or another system that may be using shared libraries in non-standard
locations,
first try the solution described in question 3.6.
If that doesn't work, or you're running on another system, try
running "htsearch -vvv" directly from the command line to see where
and why it's failing. It should prompt you for the search words,
as well as the format.
If it works from the command line, but not from the web
server, it's almost certainly a web server configuration problem.
Check your web server's error log for any information related to
htsearch's failure. One increasingly common problem is Apache
configurations which expect all CGI scripts to be Perl,
rather than binary executables or other scripts, so they use
"perl-handler" rather than "cgi-handler".
See also questions 5.7,
5.14 and 5.23.
5.14. Why do htsearch or other hl://Dig programs crash with a segmentation fault or dump core?Despite a great deal of debugging of these programs, we haven't been able to completely eliminate all such problems on all platforms. If you're running htsearch or hlfuzzy on a BSDI system, a common cause of core dumps is a conflict between the GNU regex code bundled with htdig 3.1.2 and later, and the BSD C or C++ library. The solution, as summarized by Joe Jah, is to use the BSD library's own rx code instead, using version 3.1.6 or newer.
This solution may work on some other platforms as well (we haven't heard one way or the other), but will definitely not work on others. For instance, on libc5-based Linux systems, the bundled regex code works fine by default, but using libc5's regex code causes core dumps.
Users of Cobalt Raq or Qube servers have complained of segmentation faults in htdig. Apparently this is due to problems in their C++ libraries, which are fixed in their experimental compiler and libraries. The following commands should install the packages you need:
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/binutils-2.8.1-3C1.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/egcs-1.0.2-9.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/egcs-c++-1.0.2-9.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/egcs-g77-1.0.2-9.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/egcs-objc-1.0.2-9.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/libstdc++-2.8.0-9.mips.rpm
rpm -Uvh ftp://ftp.cobaltnet.com/pub/experimental/libstdc++-devel-2.8.0-9.mips.rpm
rpm -Uvh --force ftp://ftp.cobaltnet.com/pub/products/current/RPMS/gcc-2.7.2-C2.mips.rpm
You may have to remove the libg++ package, if you have it installed, before installing libstdc++, because of conflicts between these packages. Be sure to do a "make clean" before a "make", to remove any object files compiled with the old compiler and headers.
For other causes of segmentation faults, or in other programs, getting a stack backtrace after the fault can be useful in narrowing down the problem. E.g.: try "gdb /path/to/htsearch /path/to/core", then enter the command "bt". You can also try running the program directly under the debugger, rather than attempting a post-mortem analysis of the core dump. Options to the program can be given on gdb's "run" command, and after the program is suspended on fault, you can use the "bt" command. This may give you enough information to find and fix the problem yourself, or at least it may help others on the htdig mailing list to point out what to do next.
5.15. Why does htdig 3.1.3 mangle URL parameters that contain bare "&" characters?This is a known bug in 3.1.3, and is fixed with this patch. You can apply the patch by changing into the main source directory for htdig-3.1.3 and running the command "patch -p0 < /path/to/HTML.cc.0". This is also fixed as of version 3.1.4.
5.16. When I run hlmerge, it stops with an "Unable to open word list file '.../db.wordlist'" message.The most common cause of this error is that htdig did not manage to index any documents, and so it did not create a word list. You should repeat the htdig or rundig command with the -vvv option to see where and why it is failing. See question 4.1.
5.17. When using Netscape, htsearch always returns the "No match" page.Check your search form. Chances are there is a hidden input
field with no value defined. For example, one user had
<input type=hidden name=restrict>
in his search form, instead of
<input type=hidden name=restrict value="">
The problem is that Netscape sets the missing value to a default of " "
(two spaces), rather than an empty string. For the restrict parameter,
this is a problem, because htsearch won't likely find any URLs with two
spaces in them. Other input parameters may similarly pose a problem.
Another possibility, if you're running 3.2.0b1 or 3.2.0b2, is that you need to make the db.words.db_weakcmpr file writeable by the user ID under which the web server runs. This is a bug, and is fixed in the 3.2.0b5 beta.
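Assuming your databases live in /var/lib/htdig and the web server runs as user nobody (both assumptions; adjust for your installation), the permissions could be fixed with:
chown nobody /var/lib/htdig/db.words.db_weakcmpr
chmod u+w /var/lib/htdig/db.words.db_weakcmpr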
5.18. Why doesn't htdig follow links to other pages in JavaScript code?There probably isn't any indexing tool in existence that follows JavaScript links, because doing so would require the indexer to interpret the JavaScript and trigger the events that generate those links. Realistically, it would take a full JavaScript parser to figure out all the possible URLs that the code could generate, something that's well beyond the means of any search engine. You have a few options:
5.19. When I run a search, the browser shows the contents of the htsearch binary instead of search results.This means your server is returning the contents of the htsearch binary rather than executing it as a CGI program. Common causes of this are:
By default, Apache is usually configured with one cgi-bin directory as ScriptAlias, so all your CGI programs must go in there, or have a .cgi suffix on them. Your configuration may differ, however.
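For reference, a typical ScriptAlias line in httpd.conf looks something like this (the paths are illustrative); htsearch must either be installed under such a directory or be covered by a handler for .cgi files:
ScriptAlias /cgi-bin/ "/usr/local/apache/cgi-bin/"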
5.20. Why are the betas of 3.2 so slow at indexing?As the release notes for these versions suggest, they are somewhat unoptimized and are made available for testing. Since the 3.2 code indexes all locations of words to support phrase searching and other advanced methods, this additional data slows down the indexer. To compensate, the code has a cache configured by the wordlist_cache_size attribute. As of this writing, the word database code slows down considerably when the cache fills up, so setting the cache as large as possible provides a considerable performance improvement. Development is in progress to improve cache performance. For 3.2.0b6 and higher, see also the store_phrases attribute, which can turn off support for phrase searching to improve indexing speed.
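A sketch of the relevant settings in htdig.conf (the cache size shown is only an example; make it as large as your memory allows):
wordlist_cache_size: 10000000
store_phrases: false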
5.21. Why does htsearch use ";" instead of "&" to separate URL parameters for the page buttons?In versions 3.1.5 and 3.2.0b2, and later, htsearch was
changed to use a semicolon character ";" as a parameter
separator for page button URLs, rather than "&", for HTML
4.0 compliance. It now allows both the "&" and the ";" as
separators for input parameters, because the CGI specification
still uses the "&". This change may cause some PHP or CGI
wrapper scripts to stop working, but these scripts should be
similarly changed to recognize both separator characters.
For the definitive reference on this issue, please refer to
section B.2.2 of W3C's HTML 4.0 Specification,
Ampersands in URI attribute values. We're all a little
tired of arguing about it. If you don't like the standard, you
can change the Display::createURL() code yourself to ignore it.
See also question 4.13.
If you want to try working within the new standard, you may find it helpful to know that recent versions of CGI.pm will allow either the ampersand or semicolon as a parameter separator, which should fix any Perl scripts that use this library. In PHP, you can simply set the following in your php.ini file to allow either separator:
arg_separator.input = ";&"5.22. Why does htsearch show the "&" character as "&" in search results?
In version 3.1.5, htsearch was fixed to properly re-encode the characters &, <, >, and " into SGML entities when displaying excerpts. However, the default value for the translate_amp, translate_lt_gt and translate_quot attributes is still false, so htdig doesn't decode these entities in your documents when it builds the index, and htsearch then re-encodes the leading "&" of each stored entity. If you set these three attributes to true in your htdig.conf and reindex, the problem will go away.
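The settings in question, for a 3.1.5 htdig.conf, are:
translate_amp: true
translate_lt_gt: true
translate_quot: true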
In the 3.2 betas there was a bug in the HTML parser that caused it to fail when attempting to translate the "&amp;" entity. This has been fixed in 3.2.0b3. The translate_* attributes are gone as of 3.2.0b2.
5.23. I get Internal Server or Unrecognized character errors when running htsearch.An increasingly common problem is Apache configurations
which expect all CGI scripts to be Perl, rather than binary
executables or other scripts, so they use "perl-handler"
rather than "cgi-handler". The fix is to create a separate
directory for non-Perl CGI scripts, and define it as such in
your httpd.conf file. You should define it the same way as your
existing cgi-bin directory, but use "cgi-handler" instead of
"perl-handler". In any case, you should check your web server's
error log for any information related to htsearch's failure.
See also questions 5.7,
5.14 and 5.13.
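For example, assuming your hl://Dig CGI programs are installed in /usr/local/htdig/cgi-bin (an illustrative path), a separate ScriptAlias outside the mod_perl handler might look like the line below, so htsearch is invoked as /htdig-bin/htsearch while your Perl scripts stay in the existing cgi-bin:
ScriptAlias /htdig-bin/ "/usr/local/htdig/cgi-bin/"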
5.24. Why does an attribute still have a value after I remove it from my configuration file?All configuration file attributes have compiled-in default values. Taking an attribute out of the file is not the same thing as setting it to an empty string, a 0, or a value of false. See question 4.18.
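For example, to clear a list-valued attribute such as exclude_urls (used here purely as an illustration), set it explicitly to an empty value rather than deleting the line:
exclude_urls: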
5.25. When I run htdig on my site, it misses entire directories.First of all, htdig doesn't look at directories itself. It is a spider, and it follows hypertext links in HTML documents. If htdig seems to be missing some documents or entire directory sub-trees of your site, it is most likely because there are no HTML links to these documents or directories. (See also question 5.18.) If htdig does not come across at least one hypertext link to a document or directory, and it's not explicitly listed in the start_url attribute, then this document or directory is essentially hidden from view to htdig, or to any web browser or spider for that matter. You can only get htdig to index directories, without providing your own files with links to the contents of these directories, by using your web server's automatic index generation feature. In Apache, this is done with the mod_autoindex module, which is usually compiled-in by default, and is enabled with the "Indexes" option for a given directory hierarchy. For example, you can put these directives in your Apache configuration:
<Directory "/path/to/your/document/root">
    Options Indexes FollowSymLinks Includes ExecCGI
</Directory>
This will cause Apache to automatically generate an index for any directory that does not have an index.html or other "DirectoryIndex" file in it. Other web servers will have similar features, which you should look for in your server documentation.
As an alternative to relying on the web server's autoindex
feature, you can compose a list of all the unreachable
documents, or write a program to do so, and feed that list as
part of htdig's start_url
attribute. Here is an example of a simple shell script to make
a file of URLs you can use with a configuration entry like
start_url: `/path/to/your/file`
find /path/to/your/document/root -type f -name \*.html -print | \
  sed -e 's|/path/to/your/document/root/|http://www.yourdomain.com/|' > \
  /path/to/your/file
Other reasons why htdig might be missing portions of your site might be that they fall out of the bounds specified by the limit_urls_to attribute (which takes on the value of start_url by default), they are explicitly excluded using the exclude_urls attribute, or they are disallowed by a robots.txt file (see the htdig documentation for notes about robot exclusion) or by a robots meta tag (see question 4.15). If htdig seems to be missing the last part of a large directory or document, see question 5.1. For reasons why htdig may be rejecting some links to parts of your site, see question 5.27.
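By way of illustration, the bounding attributes might look like this (the URL and patterns are placeholders for your own site):
start_url: http://www.yourdomain.com/
limit_urls_to: ${start_url}
exclude_urls: /cgi-bin/ .cgi /private/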
5.26. What do all the numbers and symbols in the htdig -v output mean?Output from htdig -v typically looks like this:
23000:35506:2:http://xxx.yyy.zz/index.html: ***-+****--++***+ size = 4056
The first number is the number of documents parsed so far, the second is the DocID for this document, and the third is the hop count of the document (number of hops from one of the start_url documents). After the URL, it shows a "*" for a link in the document that it already visited (or at least queued for retrieval), a "+" for a new link it just queued, and a "-" for a link it rejected for any of a number of reasons. To find out what those reasons are, you need to run htdig with at least 3 "v" options, i.e. -vvv. If there are no "*", "+" or "-" symbols after the URL, it doesn't mean the document was not parsed or was empty, but only that no links to other documents were found within it.
5.27. Why is htdig rejecting some of the links in my documents?When htdig parses documents and finds hypertext links to other documents (hrefs), it may reject them for any of several reasons. To find out what those reasons are, you need to run htdig with at least 3 "v" options, i.e. -vvv. Here are the meanings of some of the messages you might see at this verbosity level.
Another possibility, if none of the error messages above appear for some of the links you think htdig should be accepting, is that htdig isn't even finding the links at all. First, make sure you're not making false assumptions about how htdig finds these. It only reads links in HTML code, and not JavaScript, and it doesn't read directories unless the HTTP server is feeding it directory listings. You will need to take a close look at the htdig -vvv (or -vvvv) output to see what htdig is finding, in and around the areas where the desired links are supposed to be found in your HTML code, to see if it's actually finding them. See also question 5.25.
5.28. When I run htdig or hlmerge, I get a "DB2 problem...: missing or empty key value specified" message.The most common cause of this error is that htdig or hlmerge rejected every document that would have been put in the database, leaving an empty database. You need to find out the reasons for the rejection of these documents. See questions 4.1, 5.25 and 5.27.
5.29. When I run htdig on my site, it seems to go on and on without ending.There are some things that can cause htdig to run on without ending, especially when indexing dynamic content (ASP, PHP, SSI or CGI pages). This usually involves htdig getting caught in an infinite virtual hierarchy. A sure sign of this is if the current size of your database is much larger than the total size of the site you are indexing, or if in the verbose output of htdig (see question 4.1) you see the same URLs come up again and again with only subtle variations. In any case, you must figure out the reason htdig keeps revisiting the same documents using different URLs, as explained in question 4.24, and set your exclude_urls and bad_querystr attributes appropriately to stop htdig from going down those paths.
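As a hedged illustration (the patterns are placeholders; derive the real ones from your own -vvv output), the loop-breaking attributes might be set like this:
exclude_urls: /cgi-bin/ /calendar/
bad_querystr: PHPSESSID= sort= order=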
5.30. Why does htsearch no longer recognize the -c option when run from the web server?This was a security hole in 3.1.5 and older, and 3.2.0b3 and older releases of hl://Dig. (See question 2.1.) There's a compile-time macro you can set in htsearch.cc to disable this security fix, but that's a bad idea because it reopens the hole. This should only be done as a last recourse, when all other avenues fail. The -c option was only intended for testing htsearch from the command line, and not for use when calling htsearch on the web server. Unfortunately, far too many users have needlessly latched onto this option for CGI scripts. The preferred ways of specifying the config file are as follows, in order of preference:
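One of the preferred methods is to select the configuration file through the config input parameter in your search form, which names a file in hl://Dig's CONFIG_DIR; the value "mysite" below is hypothetical and refers to a mysite.conf in that directory:
<input type=hidden name=config value="mysite">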
There are a few fairly common reasons why this might happen:
The most common causes of this error are:
5.33. How do I find out which version of hl://Dig I'm running?You should always check which version of hl://Dig you're running before you report any problems, or even if you suspect a problem. You can find out the version number of an installed hl://Dig package by running the command:
htdig -\? | head
(or use "more" if you don't have a "head" command). The
full version number appears on the third line of output,
after "This program is part of hl://Dig", and it should also
include the snapshot date if you're running a pre-release
snapshot. Always include this full version number with any
bug report or problem report on a mailing list. You can save
yourself and others a lot of grief by being certain of which
version you're running, especially if you've installed more than
one. If you're running hl://Dig from an RPM package, you should
also report the package version and release number, which you
can determine with the command "rpm -q htdig", and mention
where you obtained the package. This will alert us to the
idiosyncrasies and/or patches in a particular RPM
package. Also, if you've applied any patches yourself (see
question 2.5) please mention which ones.
See also question 1.8, on reporting bugs
or configuration problems.
5.34. When indexing PDF files, I get an error message from pdftotext about a damaged PDF file.This message comes from the pdftotext utility, when a PDF file has been truncated. Find the largest PDF file on the site you're indexing, and set max_doc_size to at least that size (see question 5.2). If you need to track down which PDF is causing the error, try running "htdig -i -v > log.txt 2>&1" so you can see which URL is being indexed when the error occurs. The output redirects in that command combine stdout (where htdig's output goes) and stderr (where pdftotext's error messages go) into one output stream. If you're using acroread to index PDF files, the error message for a truncated PDF file is simply "Could not repair file." It's also possible to get errors like this from PDF files that are smaller than max_doc_size, if they're already truncated or corrupted on the server.
5.35. When running htdig on Mandrake Linux, I get "host not found" and "no server running" errors.The default htdig.conf configuration in Mandrake's RPM package of htdig very stupidly enables the local_urls_only attribute by default, which means you can only index a limited set of files on the local server. Anything else, where htdig would normally fall back to using HTTP, will fail. To make matters worse, they put a very misleading comment above that attribute setting, which throws users off track. This attribute is useful in certain circumstances where you never want htdig to fall back to HTTP, but enabling it by default was a very bad judgement call on Mandrake's part.
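The fix is to override that default in your htdig.conf, or comment out the line in Mandrake's packaged configuration:
local_urls_only: false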
5.36. When I run htsearch, it gives me the list of matching documents, but no header or footer.The header and footer typically contain the followup search form, an indication of the total number of matches, and buttons to other pages of matches if the results don't fit on one page. If these don't show up, it could be that in attempting to customize these (see question 4.2), you removed them or rendered them unusable. Even if you didn't customize them, make sure you installed the search_results_header and search_results_footer files (or the search_results_wrapper file) in the correct location (where you told hl://Dig they'd be when you configured prior to compiling). Also make sure they have read permission for the user ID under which htsearch runs, and all directories leading up to these template files are searchable (i.e. executable) by htsearch, or it won't be able to open the files.
This is the opposite problem of that described in question 5.11. If htsearch displays nothing at all, you may have both problems, or you may have no matches (or a boolean query syntax error) while the nothing_found_file or syntax_error_file is missing or unreadable.
5.37. When I index files with doc2html.pl, it fails with the "UNABLE to convert" error.This is an indication that doc2html.pl wasn't configured properly. Carefully follow all the directions for installation in the DETAILS file that comes with the script. In addition to installing doc2html.pl, you must:
doc2html.pl /full/path/to/sample/filename.pdf "application/pdf" url
You should repeat a similar set of steps to configure and test doc2html.pl for other document types, such as Word, RTF and Excel. See also questions 4.8, 4.9 and 5.39.
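Once doc2html.pl works from the command line, it is hooked into htdig with the external_parsers attribute, along these lines (the install path is an assumption; list one entry per content type you want converted):
external_parsers: application/pdf->text/html /usr/local/bin/doc2html.pl \
                  application/msword->text/html /usr/local/bin/doc2html.pl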
5.38. Why do my searches find search terms in pathnames, or how do I prevent matching filenames?htdig doesn't normally add the URL components to the index itself, but when you index a directory where the filenames are used as link description text (such as an automatic DirectoryIndex created by Apache's mod_autoindex), these link descriptions get indexed, carrying the weight assigned to them by the description_factor attribute. Thus, a search for a filename will match this link description, and the file will show up in search results. To avoid that, make sure your DirectoryIndexes don't get indexed, as detailed in question 4.23.
Conversely, there is no way to force htdig to index URL components so that a search for a file name will yield a match on that file, unless you index an HTML file (or several) containing links to all the files you want, where the link description text does contain the full URL or the pathname components you want.
5.39. I set up an external parser but I still can't index Word/Excel/PowerPoint/PDF documents.You probably need to carefully re-read and follow questions 4.8, 4.9, 5.25 and 5.27. When you can't index documents with an external parser or converter, there are three main stages, or points of failure, that you need to check. You need to figure out at which of the three stages the process is failing, and focus on that stage to get to the bottom of why it's not working there. Run htdig with anywhere from 1 to 4 -v options to get the debugging output you need to see where it's failing and why. This may be an iterative process, if htdig is failing at more than one stage: you might fix one problem only to run into another.