I find myself in an interesting dilemma.
I run logwatch on the mail and web servers I manage. It used to give me some
useful info, but these days, it has problems.
It's catching every single 404 from the web server logs. On a site with
~40,000 pages and ~150,000 unique URLs, heavily crawled by robots on a
daily basis, that makes for a pretty large report.
Add to that the fact that it's also reporting every bounced spam, and
apparently all of the NNTP log entries as well, and any useful
information is obliterated in a report that's well over a megabyte of text.
So I tried to follow the instructions and turn off HTTPD reporting.
Apparently, I got the syntax wrong, so now instead of the 1.4 meg report, all
I get is an error message.
Which is really just as useful. So why fix it?
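For the record, here's roughly what I gather the config is supposed to look like. This is a sketch, not something I've gotten working: the override file location and the exact service names (`http`, `innd`) are my assumptions and may differ by distro, so check the service filenames under your logwatch install before trusting it.

```
# /etc/logwatch/conf/logwatch.conf -- local overrides; the shipped
# defaults usually live in /usr/share/logwatch/default.conf/logwatch.conf

# Keep everything enabled by default...
Service = All
# ...then exclude individual services with a leading "-"
# (the quotes matter, or the dash gets misparsed)
Service = "-http"
Service = "-innd"
```

The same exclusions can supposedly be passed on the command line, e.g. `logwatch --service All --service "-http"`, which at least lets you test without editing the config file.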
I know it's theoretically possible to configure and customize logwatch, but
when I tried to find documentation on it, all I found were incredibly
detailed and obtuse instructions for building your own custom log filters
from the ground up. Not helpful for someone who just wants to turn things off.
I'm posting this for your amusement at the dilemma, and in the vague hope that
someone, somewhere, has written an intelligible guide to configuring logwatch
on a very superficial level. Then again, maybe someone knows of a better
tool.