#
# Sample Webalizer configuration file
# Copyright 1997-2000 by Bradford L. Barrett (brad@mrunix.net)
#
# Distributed under the GNU General Public License.  See the
# files "Copyright" and "COPYING" provided with the webalizer
# distribution for additional information.
#
# This is a sample configuration file for the Webalizer (ver 2.01).
# Lines starting with pound signs '#' are comment lines and are
# ignored.  Blank lines are skipped as well.  Other lines are treated
# as configuration lines, and have the form "ConfigOption Value" where
# ConfigOption is a valid configuration keyword, and Value is the value
# to assign to that configuration option.  Invalid keywords/values are
# ignored, with appropriate warnings being displayed.  There must be
# at least one space or tab between the keyword and its value.
#
# As of version 0.98, The Webalizer will look for a 'default' configuration
# file named "webalizer.conf" in the current directory, and if not found
# there, will look for "/etc/webalizer.conf".
#
# -------------------------------------------------------------------------
# Ultimate webalizer.conf file Project.
# http://www.scottkriebel.com/webalizer/
# 02-17-04 Version 1.0  - Initial release
# 02-28-04 Version 1.01 - Added IIS Worm grouping.
# -------------------------------------------------------------------------

# -------------------------------------------------------------------------
# Acknowledgements
#
# http://www.realjosh.com/old/webalizer.conf
# http://www.tnl.net/blog/entry/Webalizer.conf_hacking
# -------------------------------------------------------------------------

# Incremental processing allows multiple partial log files to be used
# instead of one huge one.  Useful for large sites that have to rotate
# their log files more than once a month.  The Webalizer will save its
# internal state before exiting, and restore it the next time it is run,
# in order to continue processing where it left off.  This mode also
# causes The Webalizer to scan for and ignore duplicate records (records
# already processed by a previous run).  See the README file for
# additional information.  The value may be 'yes' or 'no', with a default
# of 'no'.  The file 'webalizer.current' is used to store the current
# state data, and is located in the output directory of the program
# (unless changed with the IncrementalName option below).  Please read at
# least the section on Incremental processing in the README file before
# you enable this option.

#Incremental no

# IncrementalName allows you to specify the filename for saving the
# incremental data in.  It is similar to the HistoryName option, where
# the name is relative to the specified output directory, unless an
# absolute filename is specified.  The default is a file named
# "webalizer.current" kept in the normal output directory.  If you don't
# set "Incremental" to 'yes' then this option has no meaning.

#IncrementalName webalizer.current

# ReportTitle is the text to display as the title.  The hostname
# (unless blank) is appended to the end of this string (separated with
# a space) to generate the final full title string.
# Default is (for English) "Usage Statistics for".

ReportTitle Statistics for

# HTMLExtension allows you to specify the filename extension to use
# for generated HTML pages.  Normally this defaults to "html", but it
# can be changed for sites that need it (such as PHP embedded pages).

#HTMLExtension html

# PageType lets you tell the Webalizer what types of URL's you
# consider a 'page'.
# Most people consider html and cgi documents as pages, while images
# and audio files are not.  If no types are specified, defaults will
# be used ('htm*', 'cgi' and HTMLExtension if different for web logs,
# 'txt' for ftp logs).

PageType shtml
PageType exe
PageType pdf
PageType doc
PageType zip
PageType py
PageType asp
PageType jsp
PageType htm*
PageType cgi
PageType php*
PageType phtml
PageType pl

# UseHTTPS should be used if the analysis is being run on a
# secure server, and links to urls should use 'https://' instead
# of the default 'http://'.  If you need this, set it to 'yes'.
# Default is 'no'.  This only changes the behaviour of the 'Top
# URL's' table.

#UseHTTPS no

# HTMLPre defines HTML code to insert at the very beginning of the
# file.  Default is the DOCTYPE line shown below.  Max line length
# is 80 characters, so use multiple HTMLPre lines if you need more.

#HTMLPre <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">

# HTMLHead defines HTML code to insert within the <HEAD></HEAD>
# block, immediately after the <TITLE> line.  Maximum line length
# is 80 characters, so use multiple lines if needed.

#HTMLHead <META NAME="author" CONTENT="The Webalizer">

# HTMLBody defines the HTML code to be inserted, starting with the
# <BODY> tag.  If not specified, the default is shown below.  If
# used, you MUST include your own <BODY> tag as the first line.
# Maximum line length is 80 characters; use multiple lines if needed.

#HTMLBody <BODY BGCOLOR="#E8E8E8" TEXT="#000000" LINK="#0000FF" VLINK="#FF0000">

# HTMLPost defines the HTML code to insert immediately before the
# first <HR> in the document, which is just after the title and
# "summary period"-"Generated on:" lines.  If anything, this should
# be used to clean up in case an image was inserted with HTMLBody.
# As with HTMLHead, you can define as many of these as you want and
# they will be inserted in the output stream in order of appearance.
# Max string size is 80 characters.  Use multiple lines if you need to.

#HTMLPost <BR CLEAR="all">

# HTMLTail defines the HTML code to insert at the bottom of each
# HTML document, usually to include a link back to your home
# page or insert a small graphic.  It is inserted as a table
# data element (i.e.: <TD> your code here </TD>) and is right
# aligned with the page.  Max string size is 80 characters.

#HTMLTail <IMG SRC="msfree.png" ALT="100% Micro$oft free!">

# HTMLEnd defines the HTML code to add at the very end of the
# generated files.  It defaults to what is shown below.  If
# used, you MUST specify the </BODY> and </HTML> closing tags
# as the last lines.  Max string length is 80 characters.

#HTMLEnd </BODY></HTML>

# The Quiet option suppresses output messages... Useful when run
# as a cron job to prevent bogus e-mails.  Values can be either
# "yes" or "no".  Default is "no".  Note: this does not suppress
# warnings and errors (which are printed to stderr).

#Quiet no

# ReallyQuiet will suppress all messages including errors and
# warnings.  Values can be 'yes' or 'no' with 'no' being the
# default.  If 'yes' is used here, it cannot be overridden from
# the command line, so use with caution.  A value of 'no' has
# no effect.

ReallyQuiet yes

# TimeMe allows you to force the display of timing information
# at the end of processing.  A value of 'yes' will force the
# timing information to be displayed.  A value of 'no' has no
# effect.

#TimeMe no

# GMTTime allows reports to show GMT (UTC) time instead of local
# time.  Default is to display the time the report was generated
# in the timezone of the local machine, such as EDT or PST.  This
# keyword allows you to have times displayed in UTC instead.
# Use only if you really have a good reason, since it will probably
# throw off the reporting periods by however many hours your local
# time zone is off of GMT.

#GMTTime no

# Debug prints additional information for error messages.  This
# will cause webalizer to dump bad records/fields instead of just
# telling you it found a bad one.  As usual, the value can be
# either "yes" or "no".  The default is "no".  It shouldn't be
# needed unless you start getting a lot of Warning or Error
# messages and want to see why.  (Note: warning and error messages
# are printed to stderr, not stdout like normal messages.)

#Debug no

# FoldSeqErr forces the Webalizer to ignore sequence errors.
# This is useful for Netscape and other web servers that cache
# the writing of log records and do not guarantee that they
# will be in chronological order.  The use of the FoldSeqErr
# option will cause out-of-sequence log records to be treated
# as if they had the same time stamp as the last valid record.
# Default is to ignore out-of-sequence log records.

#FoldSeqErr no

# VisitTimeout allows you to set the default timeout for a visit
# (sometimes called a 'session').  The default is 30 minutes,
# which should be fine for most sites.
# Visits are determined by looking at the time of the current
# request, and the time of the last request from the site.  If
# the time difference is greater than the VisitTimeout value, it
# is considered a new visit, and visit totals are incremented.
# Value is the number of seconds to timeout (default=1800=30min).

#VisitTimeout 1800

# IgnoreHist shouldn't be used in a config file, but it is here
# just because it might be useful in certain situations.  If the
# history file is ignored, the main "index.html" file will only
# report on the current log file's contents.  Useful only when you
# want to reproduce the reports from scratch.  USE WITH CAUTION!
# Valid values are "yes" or "no".  Default is "no".

#IgnoreHist no

# CountryGraph allows the usage by country graph to be disabled.
# Values can be 'yes' or 'no', default is 'yes'.

#CountryGraph yes

# DailyGraph and DailyStats allow the daily statistics graph
# and statistics table to be disabled (not displayed).  Values
# may be "yes" or "no".  Default is "yes".

#DailyGraph yes
#DailyStats yes

# HourlyGraph and HourlyStats allow the hourly statistics graph
# and statistics table to be disabled (not displayed).  Values
# may be "yes" or "no".  Default is "yes".

#HourlyGraph yes
#HourlyStats yes

# GraphLegend allows the color-coded legends to be turned on or off
# in the graphs.  The default is for them to be displayed.  This only
# toggles the color-coded legends; the other legends are not changed.
# If you think they are hideous and ugly, say 'no' here :)

#GraphLegend yes

# GraphLines allows you to have index lines drawn behind the graphs.
# I personally am not crazy about them, but a lot of people requested
# them and they weren't a big deal to add.  The number represents the
# number of lines you want displayed.  Default is 2; you can disable
# the lines by using a value of zero ('0').  [max is 20]
# Note: due to rounding errors, some values don't work quite right.
# The lower the better, with 1, 2, 3, 4, 6 and 10 producing nice results.

#GraphLines 2

# The "Top" options below define the number of entries for each table.
# Defaults are Sites=30, URL's=30, Referrers=30, Agents=15 and
# Countries=30.  TopKSites and TopKURLs (by KByte tables) both default
# to 10, as do the top entry/exit tables (TopEntry/TopExit).
# The top search strings and usernames default to 20.  Tables may be
# disabled by using zero (0) for the value.

#TopSites 30
#TopKSites 10
#TopURLs 30
#TopKURLs 10
#TopReferrers 30
#TopAgents 15
#TopCountries 30
#TopEntry 10
#TopExit 10
#TopSearch 20
TopUsers 0

# The All* keywords allow the display of all URL's, Sites, Referrers,
# User Agents, Search Strings and Usernames.  If enabled, a separate
# HTML page will be created, and a link will be added to the bottom
# of the appropriate "Top" table.  There are a couple of conditions
# for this to occur.  First, there must be more items than will fit
# in the "Top" table (otherwise it would just be duplicating what is
# already displayed).  Second, the listing will only show those items
# that are normally visible, which means it will not show any hidden
# items.  Grouped entries will be listed first, followed by individual
# items.  The value for these keywords can be either 'yes' or 'no',
# with the default being 'no'.  Please be aware that these pages can
# be quite large in size, particularly the sites page, and separate
# pages are generated for each month, which can consume quite a lot
# of disk space depending on the traffic to your site.

AllSites yes
AllURLs yes
AllReferrers yes
AllAgents yes
AllSearchStr yes
AllUsers no

# The Webalizer normally strips the string 'index.' off the end of
# URL's in order to consolidate URL totals.  For example, the URL
# /somedir/index.html is turned into /somedir/, which is really the
# same URL.  This option allows you to specify additional strings
# to treat in the same way.  You don't need to specify 'index.' as
# it is always scanned for by The Webalizer; this option is just to
# specify _additional_ strings if needed.  If you don't need any,
# don't specify any, as each string will be scanned for in EVERY
# log record... a bunch of them will degrade performance.  Also,
# the string is scanned for anywhere in the URL, so a string of
# 'home' would turn the URL /somedir/homepages/brad/home.html into
# just /somedir/, which is probably not what was intended.

#IndexAlias home.htm
#IndexAlias homepage.htm

# The Hide*, Group*, Ignore* and Include* keywords allow you to
# change the way Sites, URL's, Referrers, User Agents and Usernames
# are manipulated.  The Ignore* keywords will cause The Webalizer to
# completely ignore records as if they didn't exist (and thus they are
# not counted in the main site totals).  The Hide* keywords will
# prevent things from being displayed in the 'Top' tables, but they
# will still be counted in the main totals.  The Group* keywords allow
# grouping similar objects as if they were one.  Grouped records are
# displayed in the 'Top' tables and can optionally be displayed in
# BOLD and/or shaded.  Groups cannot be hidden, and are not counted in
# the main totals.  The Group* options do not, by default, hide all
# the items that they match.  If you want to hide the records that
# match (so just the grouping record is displayed), follow with an
# identical Hide* keyword with the same value (see the examples
# below).  In addition, Group* keywords may have an optional label
# which will be displayed instead of the keyword's value.  The label
# should be separated from the value by at least one whitespace
# character, such as a space or tab.
#
# The value can have either a leading or trailing '*' wildcard
# character.  If no wildcard is found, a match can occur anywhere
# in the string.  Given a string "www.yourmama.com", the values "your",
# "*mama.com" and "www.your*" will all match.
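# For illustration only: "monitor.example.com" below is a hypothetical
# hostname standing in for your own monitoring workstation.  This sketch
# shows the practical difference between the two keyword families:
# hiding a site only drops it from the 'Top Sites' table while its hits
# still count toward the monthly totals, whereas ignoring it removes
# the records from the totals entirely.
#
# Hide from the 'Top' tables, but keep counting:
#HideSite monitor.example.com
# Drop the records completely (totals will no longer include this host):
#IgnoreSite monitor.example.com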
# This one hides non-referrers ("-" Direct requests)
HideReferrer Direct Request

# Group the images into one listing.
GroupURL *.gif Images
GroupURL *.GIF Images
GroupURL *.jpg Images
GroupURL *.JPG Images
GroupURL *.png Images
GroupURL *.PNG Images
GroupURL *.ra Images
HideURL *.gif
HideURL *.GIF
HideURL *.jpg
HideURL *.JPG
HideURL *.png
HideURL *.PNG
HideURL *.ra

# Grouping and Hiding IIS Worm traffic
# - http://www.realjosh.com/old/webalizer.conf
#GroupURL root.exe IIS Worm
#GroupURL cmd.exe IIS Worm
GroupURL /default.ida* IIS Worm
#GroupURL scripts/*/winnt/* IIS Worm
#HideURL root.exe
#HideURL cmd.exe
#HideURL default.ida
#HideURL scripts/*/winnt/*

# Grouping options
#GroupURL /cgi-bin/* CGI Scripts
GroupSite *.aol.com America Online

# Hiding Local Traffic
HideSite localhost
HideSite 192.168.1.1

GroupReferrer yahoo.com/ Yahoo!
GroupReferrer excite.com/ Excite
GroupReferrer infoseek.com/ InfoSeek
GroupReferrer webcrawler.com/ WebCrawler
GroupReferrer google.com/ Google
GroupReferrer search.msn.com/ MSN Search
GroupReferrer altavista.com/ Altavista.com

# Hiding Referrers from local pages.
GroupReferrer scottkriebel.com/ Local Pages
HideReferrer scottkriebel.com/

#GroupUser root Admin users
#GroupUser admin Admin users
#GroupUser wheel Admin users

# The following is a great way to get an overall total
# for browsers, and not display all the detail records.
# (You should use MangleAgents to refine further...)
# 2.17.04 added GroupAgent list found on http://www.tnl.net/blog/entry/Webalizer.conf_hacking

GroupAgent Check&Get Program: Check&Get (Bookmark Manager)
GroupAgent eXactSite Program: eXactSite (HTML authoring. stupid user!)
GroupAgent FavOrg Program: FavOrg (Bookmark Manager)
GroupAgent Fetch Program: Fetch (Offline browser)
GroupAgent GetRight Program: GetRight (Download Manager)
GroupAgent HTTrack Program: HTTrack (Website Copier)
GroupAgent Lachesis Program: Packet Loss Report (ftp.intel.com)
GroupAgent lachesis Program: Packet Loss Report (ftp.intel.com)
GroupAgent Offline Program: Offline Explorer (Offline Browser)
GroupAgent Powermarks Program: Powermarks (Bookmark Manager)
GroupAgent SuperBot Program: SuperBot (Web Site Copier)
GroupAgent Teleport Program: Teleport Pro (Offline Browser tenmax.com)
GroupAgent WebStripper Program: WebStripper (Offline Browser)
GroupAgent WebZIP Program: WebZIP (Offline Browser)
GroupAgent Alcatel- Device: Alcatel Mobile Phone
GroupAgent AvantGo Device: AvantGo (Offline Browser)
GroupAgent Blazer Device: Blazer (PalmOS browser)
GroupAgent DoCoMo Device: I-mode Compatible Mobile Phone
GroupAgent Elaine Device: Palm browser
GroupAgent Ericsson Device: Ericsson Mobile Phone
GroupAgent MOT- Device: Motorola Mobile Phone
GroupAgent jBrowser Device: WAP Browser jBrowser (built by Jataayu)
GroupAgent Liberate Device: Liberate (Digital TV)
GroupAgent Mitsu Device: Mitsubishi Mobile Phone
GroupAgent Nokia Device: Nokia Mobile Phone
GroupAgent Panasonic Device: Panasonic Mobile Phone
GroupAgent PHILIPS- Device: Philips Mobile Phone
GroupAgent SAGEM- Device: SAGEM Mobile Phone
GroupAgent SAMSUNG- Device: Samsung Mobile Phone
GroupAgent SEC- Device: Samsung Mobile Phone
GroupAgent SHARP- Device: Sharp Mobile Phone
GroupAgent SIE- Device: Siemens Mobile Phone
GroupAgent SonyEricsson Device: Sony/Ericsson Mobile Phone
GroupAgent www.wapsilon.com Device: www.wapsilon.com (WAP browser)
GroupAgent WebGo Device: Offline Browser WebGo (Windows/CE)
GroupAgent WebTV Device: WebTV
GroupAgent AmphetaDesk RSS: AmphetaDesk
GroupAgent Awasu RSS: Awasu
GroupAgent FeedDemon RSS: Feed Demon
GroupAgent Feedreader RSS: FeedReader
GroupAgent FeedOnFeeds RSS: FeedOnFeeds Reader (http://minutillo.com/steve/feedonfeeds/)
GroupAgent FeedValidator RSS: Archive.org Feed Validator
GroupAgent MagpieRSS RSS: MagpieRSS (PHP-based reader)
GroupAgent MyHeadlines RSS: MyHeadlines (http://www.jmagar.com/myh4)
GroupAgent NetNewsWire RSS: NetNewsWire
GroupAgent NewsGator RSS: NewsGator
GroupAgent Newz RSS: Newz Crawler
GroupAgent nntp//rss RSS: nntp//rss (http://www.methodize.org/nntprss/)
GroupAgent Radio* RSS: Radio Userland
GroupAgent Oddbot RSS: OddPost.com
GroupAgent PocketFeed RSS: PocketFeed (Pocket PC RSS reader)
GroupAgent PostNuke RSS: PostNuke CMS
GroupAgent SharpReader RSS: SharpReader
GroupAgent Syndigator RSS: Syndigator
GroupAgent Syndirella RSS: Syndirella
GroupAgent UltraLiberalFeedParser RSS: Ultra Liberal Feed Parser from Mark Pilgrim
GroupAgent Wildgrape RSS: Wildgrape NewsDesk
GroupAgent china SpamBot: china local browse 2.6
GroupAgent cloakBrowser SpamBot: Fantoma
GroupAgent compatible) SpamBot: Pretends to be Mozilla 3.0
GroupAgent Dattatec.com-Sitios-Top SpamBot: Referrer Spam for Dattatec.com
GroupAgent DTS SpamBot: Beijing Express Email Address Extractor
GroupAgent EmailSiphon SpamBot: EmailSiphon
GroupAgent fantomBrowser SpamBot: Fantoma
GroupAgent fantomCrew SpamBot: Fantoma
GroupAgent Franklin SpamBot: Franklin Locator
GroupAgent Finder SpamBot: Mac Finder
GroupAgent iaea.org SpamBot: Atomic Harvester 2000
GroupAgent Industry SpamBot: Industry Program
GroupAgent IUFW SpamBot: IUFW Web
GroupAgent IUPUI SpamBot: IUPUI Research Bot
GroupAgent Lincoln SpamBot: Lincoln State Web Browser
GroupAgent LinkSweeper SpamBot: LinkSweeper
GroupAgent Microcomputers SpamBot: Franklin Locator
GroupAgent Missauga SpamBot: Missauga Locate
GroupAgent Missigua SpamBot: Missauga Locate
GroupAgent NationalDirectory SpamBot: National Directory Email Harvester
GroupAgent Rainbow SpamBot: Under the Rainbow
GroupAgent Shareware SpamBot: Program Shareware
GroupAgent stealthBrowser SpamBot: Fantoma
GroupAgent Sweeper SpamBot: Mail Sweeper
GroupAgent WEP SpamBot: WEP Search
GroupAgent Xenu SpamBot: Xenu
GroupAgent 348NorthNews Spider: 348north.com
GroupAgent almaden.ibm.com/cs/crawler Spider: almaden.ibm.com
GroupAgent antibot Spider: Antidot.net http://www.antidot.net/Welcome/jsp/robots.html
GroupAgent http://Ask.24x.Info/ Spider: MnogoSearch.org
GroupAgent ASPseek Spider: ASPseek.org free search engine software
GroupAgent augurfind Spider: augurnet.ch (Swiss Search Engine)
GroupAgent Baiduspider Spider: Baidu.com
GroupAgent BarraHomeCrawler Spider: Barrahome.org
GroupAgent BBot Spider: http://www.otthon.net/search/
GroupAgent Bilbo Spider: wise-guys.nl
GroupAgent blo.gs Spider: blo.gs
GroupAgent BlogBot Spider: Blogdex.net
GroupAgent Blogosphere Spider: Blogosphere.us
GroupAgent BlogPulse Spider: Blogpulse.com
GroupAgent BlogShares Spider: BlogShares.com
GroupAgent Blogwise.com Spider: Blogwise.com
GroupAgent boitho.com Spider: boitho.com
GroupAgent bookwatch@onfocus.com Spider: OnFocus.com Weblog BookWatch
GroupAgent brainoff.com/geoblog/ Spider: The World as a Blog (brainoff.com/geoblog/)
GroupAgent www.business-socket.com Spider: www.business-socket.com
GroupAgent CJNetworkQuality Spider: CommissionJunction.com
GroupAgent combine Spider: http://www.lub.lu.se/combine/
GroupAgent COMBINE Spider: http://www.lub.lu.se/combine/
GroupAgent CoolBot Spider: www.suchmaschine21.de (German Search Engine)
GroupAgent CoologFeedSpider Spider: CoolLog http://www.webfanatic.lunarpages.com/coolog/
GroupAgent CopyHunter Spider: AWstats referrer log analyzer
GroupAgent daypopbot Spider: DayPop.com
GroupAgent Ecosystem/development Spider: Blogging Ecosystem
GroupAgent EgotoBot Spider: Egoto.com
GroupAgent ETS Spider: Freetranslation.com
GroupAgent exactseek.com Spider: exactseek.com
GroupAgent Exalead Spider: Exalead.com (AOL France)
GroupAgent FAST Spider: All The Web
GroupAgent Fast Spider: All The Web
GroupAgent Feedster Spider: Feedster.com
GroupAgent FlickBot Spider: DivX Networks FlickBot
GroupAgent Gaisbot Spider: GAIS (http://gais.cs.ccu.edu.tw/)
GroupAgent GalaxyBot Spider: Galaxy.com
GroupAgent Genome Spider: Waypath.com
GroupAgent Gigabot Spider: Gigablast.com
GroupAgent Google* Spider: Google.com
GroupAgent gossamer-threads.com Spider: Links SQL
GroupAgent grub-client Spider: Grub.org
GroupAgent htdig Spider: htdig (Open Source Search Engine)
GroupAgent ia_archiver Spider: Archive.org
GroupAgent INGRID/3.0 Spider: ilse.nl (Dutch search engine)
GroupAgent InternetSeer Spider: InternetSeer.com (Web Site Monitoring)
GroupAgent internetseer Spider: InternetSeer.com (Web Site Monitoring)
GroupAgent IXE Spider: ideare.com
GroupAgent janes-blogosphere Spider: BlogMatrix.com
GroupAgent jiffe Spider: jiffe.com
GroupAgent k2spider Spider: Verity Spider
GroupAgent larbin Spider: larbin (http://sourceforge.net/projects/larbin/)
GroupAgent Leknor.com Spider: Leknor.com GZIP Tester
GroupAgent Linkbot Spider: Linkbot link monitoring tool (Watchfire.com)
GroupAgent LinkHype Spider: LinkHype.com
GroupAgent LinksManager.com Spider: LinksManager.com
GroupAgent LinkWalker Spider: seventwentyfour.com
GroupAgent MnogoSearch Spider: MnogoSearch.org
GroupAgent mogimogi Spider: www.goo.ne.jp (Japanese Search Engine)
GroupAgent MSNBOT Spider: MSN.com
GroupAgent msnbot Spider: MSN.com
GroupAgent MyWireServiceBot Spider: MyWireService.com
GroupAgent NaverRobot Spider: Naver.com (Korean Search Engine)
GroupAgent Netcraft Spider: Netcraft Web Survey
GroupAgent NetResearchServer Spider: Look.com
GroupAgent NIF Spider: Newsisfree.com
GroupAgent NG/1.0 Spider: Exalead.com (AOL France)
GroupAgent NITLE Spider: Blogcensus.net
GroupAgent NPBot Spider: NameProtect.com
GroupAgent NRK-bruker Spider: NRK.no
GroupAgent Openbot Spider: OpenFind (http://www.openfind.com.tw/)
GroupAgent Pompos Spider: Dir.com
GroupAgent Popdexter Spider: Popdex.com
GroupAgent psbot Spider: Picsearch.com
GroupAgent QuepasaCreep Spider: Quepasa.com (Spanish site)
GroupAgent Robozilla Spider: Link Checker for Dmoz.org
GroupAgent Scooter Spider: Altavista
GroupAgent searchspider.com Spider: searchspider.com
GroupAgent semanticdiscovery Spider: semanticdiscovery.com
GroupAgent SideWinder Spider: Infoseek
GroupAgent slurp@inktomi.com Spider: Inktomi
GroupAgent spider@spider.ilab.sztaki.hu Spider: http://www.ilab.sztaki.hu/websearch/
GroupAgent Spinne Spider: webauskunft.at
GroupAgent Steeler Spider: Kitsuregawa Laboratory, The University of Tokyo
GroupAgent SurveyBot Spider: whois.sc
GroupAgent Syndic8 Spider: Syndic8
GroupAgent Tagword Spider: Tagword - http://tagword.com/dmoz_survey.php
GroupAgent Teoma Spider: Teoma
GroupAgent Teradex Spider: Teradex.com (directory)
GroupAgent Terrar Spider: Terrar (http://www.terrar.com)
GroupAgent Technoratibot Spider: Technorati
GroupAgent T-H-U-N-D-E-R-S-T-O-N-E Spider: Webinator (http://www.thunderstone.com/texis/site/pages/webinator.html)
GroupAgent timboBot Spider: BreakingBlogs.com
GroupAgent TurnitinBot Spider: Turnitin.com
GroupAgent http://www.tutorgig.com/ Spider: tutorgig.com
GroupAgent Vagabondo Spider: kobala.nl
GroupAgent verzamelgids Spider: verzamelgids.nl
GroupAgent VoilaBot Spider: Voila.com
GroupAgent W3C_Validator Spider: W3C Validator
GroupAgent www.walhello.com Spider: Walhello.com
GroupAgent WebCapture Spider: WebCapture.biz
GroupAgent Webclipping Spider: Webclipping.com
GroupAgent WebFilter Spider: http://www.ils.unc.edu/webfilter/
GroupAgent WebGather Spider: City Polytechnic of Hong Kong
GroupAgent WebRACE Spider: WebRACE (University of Cyprus, Distributed Crawler)
GroupAgent websitealert.net Spider: websitealert.net (Monitoring System)
GroupAgent Zealbot Spider: Looksmart.com
GroupAgent ZyBorg Spider: WiseNut.com
GroupAgent curl Programming: curl library (PHP)
GroupAgent MSFrontPage Programming: Microsoft FrontPage
GroupAgent Indy Programming: Indy (Delphi-based client)
GroupAgent Java Programming: Java-based client
GroupAgent Jakarta Programming: Jakarta (Java)
GroupAgent libwww-perl Programming: LIB-WWW (Perl library)
GroupAgent LWP: Programming: LWP::Simple (Perl library)
GroupAgent OPWV-SDK Programming: OpenWave Mobile Development SDK
GroupAgent PEAR Programming: PEAR Library (PHP)
GroupAgent PHP Programming: PHP-based client
GroupAgent Python-urllib Programming: URLLIB (Python library)
GroupAgent rdflib Programming: rdflib (Python RDF library)
GroupAgent RPT-HTTPClient Programming: RPT-HTTP (Java)
GroupAgent Snoopy Programming: Snoopy (PHP class - http://snoopy.sourceforge.net/)
GroupAgent SOFTWING_TEAR_AGENT Programming: Softwing Tear Agent (Active Server Pages)
GroupAgent Wget Programming: Wget library (http://www.gnu.org/software/wget/wget.html)
GroupAgent WinHttp.WinHttpRequest Programming: WinHttp.WinHttpRequest library (Visual Basic)
GroupAgent Bison Proxy: Proxomitron (Proxomitron.info)
GroupAgent BorderManager Proxy: Novell Border Manager Security Suite
GroupAgent CE-Preload Proxy: Cisco Content Engine
GroupAgent DA Proxy: DA
GroupAgent junkbuster Proxy: junkbuster (junkbusters.com)
GroupAgent AppleWebKit Safari (OSX)
GroupAgent BFS_method BeOS browser
GroupAgent Camino Mozilla-based browser Camino (OSX)
GroupAgent iCab iCab (Mac)
GroupAgent Konqueror Konqueror
GroupAgent Links Links (Text-based browser)
GroupAgent Lynx* Lynx (Text-based browser)
GroupAgent NCBrowser NCBrowser (RISC OS)
GroupAgent Opera Opera
GroupAgent SlimBrowser SlimBrowser (http://www.flashpeak.com/sbrowser/sbrowser.htm)
GroupAgent w3m w3m (Text-based browser - http://w3m.sourceforge.net/)
GroupAgent rv:1.4 Mozilla 1.4
GroupAgent 3.01 Navigator 3.01 (16-bit version)
GroupAgent 4.01 Internet Explorer 4.01
GroupAgent 5.01 Internet Explorer 5.01
GroupAgent 5.0 Internet Explorer 5.0
GroupAgent 5.23 Internet Explorer (Mac)
GroupAgent 5.22 Internet Explorer (Mac)
GroupAgent 5.21 Internet Explorer (Mac)
GroupAgent 5.17 Internet Explorer (Mac)
GroupAgent 5.16 Internet Explorer (Mac)
GroupAgent 5.15 Internet Explorer (Mac)
GroupAgent 5.13 Internet Explorer (Mac)
GroupAgent 5.12 Internet Explorer (Mac)
GroupAgent 5.5 Internet Explorer 5.5 (Windows)
GroupAgent 6.0 Internet Explorer 6.0 (Windows)
GroupAgent Mozilla/3.04Gold Netscape 3.04 Gold
GroupAgent Mozilla/4.04 Netscape 4
GroupAgent Mozilla/4.06 Netscape 4
GroupAgent Mozilla/4.08 Netscape 4
GroupAgent Mozilla/4.5 Netscape 4.5
GroupAgent Mozilla/4.7 Netscape 4.7
GroupAgent Mozilla/4.8 Netscape 4.8
GroupAgent MSIE Internet Explorer
GroupAgent Mozilla Netscape

# HideAllSites allows forcing individual sites to be hidden in the
# report.
# This is particularly useful when used in conjunction
# with the "GroupDomains" feature, but it could be useful in other
# situations as well, such as when you only want to display grouped
# sites (with the GroupSite keywords...).  The value for this
# keyword can be either 'yes' or 'no', with 'no' the default,
# allowing individual sites to be displayed.

#HideAllSites no

# The GroupDomains keyword allows you to group individual hostnames
# into their respective domains.  The value specifies the level of
# grouping to perform, and can be thought of as 'the number of dots'
# that will be displayed.  For example, if a visiting host is named
# cust1.tnt.mia.uu.net, a domain grouping of 1 will result in just
# "uu.net" being displayed, while a 2 will result in "mia.uu.net".
# The default value of zero disables this feature.  Domains will only
# be grouped if they do not match any existing "GroupSite" records,
# which allows overriding this feature with your own if desired.

#GroupDomains 0

# GroupShading allows grouped rows to be shaded in the report.
# Useful if you have lots of groups and individual records that
# intermingle in the report, and you want to differentiate the group
# records a little more.  Value can be 'yes' or 'no', with 'yes'
# being the default.

#GroupShading yes

# GroupHighlight allows the group record to be displayed in BOLD.
# Can be either 'yes' or 'no' with the default 'yes'.

#GroupHighlight yes

# The Ignore* keywords allow you to completely ignore log records based
# on hostname, URL, user agent, referrer or username.  I hesitated in
# adding these, since the Webalizer was designed to generate _accurate_
# statistics about a web server's performance.  By choosing to ignore
# records, the accuracy of the reports becomes skewed, negating why I
# wrote this program in the first place.  However, due to popular
# demand, here they are.  Use them the same as the Hide* keywords,
# where the value can have a leading or trailing wildcard '*'.
# Use at your own risk ;)

#IgnoreSite bad.site.net
#IgnoreURL /test*
#IgnoreReferrer file:/*
#IgnoreAgent RealPlayer
#IgnoreUser root

# The Include* keywords allow you to force the inclusion of log records
# based on hostname, URL, user agent, referrer or username.  They take
# precedence over the Ignore* keywords.  Note: Using Ignore/Include
# combinations to selectively process parts of a web site is _extremely
# inefficient_!!!  Avoid doing so if possible (i.e.: grep the records to
# a separate file if you really want that kind of report).

# Example: Only show stats on Joe User's pages...
#IgnoreURL *
#IncludeURL ~joeuser*

# Or based on an authenticated username
#IgnoreUser *
#IncludeUser someuser

# The MangleAgents keyword allows you to specify how much, if any, The
# Webalizer should mangle user agent names.  This allows several levels
# of detail to be produced when reporting user agent statistics.  There
# are six levels that can be specified, which define different levels
# of detail suppression.  Level 5 shows only the browser name (MSIE or
# Mozilla) and the major version number.  Level 4 adds the minor
# version number (single decimal place).  Level 3 displays the minor
# version to two decimal places.  Level 2 will add any sub-level
# designation (such as Mozilla/3.01Gold or MSIE 3.0b).  Level 1 will
# attempt to also add the system type if it is specified.  The default
# level 0 displays the full user agent field without modification and
# produces the greatest amount of detail.  User agent names that can't
# be mangled will be left unmodified.
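# For illustration only (a rough sketch based on the level descriptions
# above, not verified against actual Webalizer output): a full agent
# string such as "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
# would be reported as something like "MSIE 6" at level 5 and
# "MSIE 6.0" at level 4, while the default level 0 keeps the entire
# string.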
#MangleAgents 0

# The SearchEngine keywords allow specification of search engines and
# their query strings on the URL.  These are used to locate and report
# what search strings are used to find your site.  The first word is
# a substring to match in the referrer field that identifies the search
# engine, and the second is the URL variable used by that search engine
# to define its search terms.
# 2.14.04 added SearchEngine list found on http://www.tnl.net/blog/entry/Webalizer.conf_hacking

SearchEngine 2020search.com Keywords=
SearchEngine 348north.com search=
SearchEngine abcsearch.com terms=
SearchEngine alltheweb.com q=
SearchEngine altavista.com q=
SearchEngine antisearch.net KEYWORDS=
SearchEngine aolsearch query=
SearchEngine ask.com ask=
SearchEngine ask.co.uk ask=
SearchEngine augurnet.ch q=
SearchEngine baidu.com word=
SearchEngine barrahome.org query=
SearchEngine blogdex.net q=
SearchEngine blogdigger.com queryString=
SearchEngine blogosphere.us s=
SearchEngine blogmatrix.com search=
SearchEngine blogwise.com query=
SearchEngine boitho.com query=
SearchEngine buscador.ya.com q=
SearchEngine by.com query=
SearchEngine daypop.com q=
SearchEngine dir.com req=
SearchEngine dmoz.org search=
SearchEngine dogpile.com q=
SearchEngine dpxml qkw=
SearchEngine egoto.com keywords=
SearchEngine elf8888.at query0=
SearchEngine eureka.com q=
SearchEngine excite search=
SearchEngine feedster.com q=
SearchEngine gais.cs.ccu.edu.tw q=
SearchEngine galaxy.com k=
SearchEngine gigablast.com q=
SearchEngine google q=
SearchEngine goo.ne.jp MT=
SearchEngine hotbot.com query=
SearchEngine infoseek.com qt=
SearchEngine ixquick.com query=
SearchEngine kobala.nl qr=
SearchEngine lycos.com query=
SearchEngine look.com q=
SearchEngine looksmart key=
SearchEngine mamma.com query=
SearchEngine metacrawler q=
SearchEngine msn.com q=
SearchEngine msxml qkw=
SearchEngine mysearch.com serachfor=
SearchEngine naver.com query=
SearchEngine netscape.com query=
SearchEngine northernlight.com qr=
SearchEngine ntlworld.com q=
SearchEngine openfind query=
SearchEngine overture.com Keywords=
SearchEngine picsearch.com q=
SearchEngine popdex query=
SearchEngine quepasa.com q=
SearchEngine searchspider.com q=
SearchEngine search.earthlink q=
SearchEngine search.msn.com q=
SearchEngine suchmaschine21.de search=
SearchEngine syndic8 ShowMatch=
SearchEngine technorati query=
SearchEngine teensearch query=
SearchEngine teoma.com q=
SearchEngine teradex.com q=
SearchEngine texis q=
SearchEngine voila kw=
SearchEngine walhello key=
SearchEngine waypath.com key=
SearchEngine webcrawler searchText=
SearchEngine whois.sc q=
SearchEngine wisenut.com q=
SearchEngine yahoo p=

# The Dump* keywords allow the dumping of Sites, URL's, Referrers,
# User Agents, Usernames and Search strings to separate tab-delimited
# text files, suitable for import into most database or spreadsheet
# programs.

# DumpPath specifies the path to dump the files to.  If not specified,
# it will default to the current output directory.  Do not use a
# trailing slash ('/').

#DumpPath /var/lib/httpd/logs

# The DumpHeader keyword specifies if a header record should be
# written to the file.  A header record is the first record of the
# file, and contains the labels for each field written.  Normally,
# files that are intended to be imported into a database system
# will not need a header record, while spreadsheets usually do.
# Value can be either 'yes' or 'no', with 'no' being the default.
#DumpHeader no

# DumpExtension allows you to specify the dump filename extension
# to use.  The default is "tab", but some programs are picky about
# the filenames they use, so you may change it here (for example,
# some people may prefer to use "csv").

#DumpExtension tab

# These control the dumping of each individual table.  The value
# can be either 'yes' or 'no'; the default is 'no'.

#DumpSites no
#DumpURLs no
#DumpReferrers no
#DumpAgents no
#DumpUsers no
#DumpSearchStr no

# End of configuration file... Have a nice day!