It's easy to /say/ that any modern server should be able to handle a few thousand GET requests.
The reality is that a single URI may invoke dozens of scripts you didn't write, each of which may hit some database just as many times, and you can't change them for business or political reasons, even if you are entirely qualified to fix bugs and do performance tuning.
When an aggressor, or just some well-intentioned runaway script, harps on one of these URIs, your options as an admin are limited.
You can throw more hardware at it, put (and maintain) some caching proxy in front of it, or you can throttle the aggressor. Fail2ban will help you do the latter, and much more. For instance, it becomes realistic to run ssh on its official port (gasp!) if you use fail2ban to cut down on the riff-raff.
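For the throttling case, a fail2ban jail can watch the access log and ban any client that makes an absurd number of requests in a short window. A minimal sketch, assuming an Apache log at /var/log/httpd/access_log; the jail name, thresholds, and paths below are assumptions you'd tune to your own traffic:

```ini
# /etc/fail2ban/jail.local -- hypothetical jail; tune maxretry/findtime
# so that legitimate bursts (crawlers, page assets) stay under the limit.
[http-get-dos]
enabled  = true
port     = http,https
filter   = http-get-dos
logpath  = /var/log/httpd/access_log
maxretry = 300
findtime = 120
bantime  = 600

# /etc/fail2ban/filter.d/http-get-dos.conf
[Definition]
# Match any GET line from a client; the jail's maxretry/findtime turn
# "too many matches" into a ban.
failregex = ^<HOST> -.*"GET
ignoreregex =
```

With maxretry = 300 and findtime = 120, a client gets banned for ten minutes after 300 GETs in two minutes, which would have caught both floods described below while leaving normal visitors alone.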
As fail2ban starts blocking the sources of the floods, look over the list of addresses, and see if you can identify a business partner. If you can get them to fix their script, all the better.
On Mon, Mar 18, 2013 at 3:45 PM, J. Wade Michaelis jwade@userfriendlytech.net wrote:
On Mon, Mar 18, 2013 at 2:58 PM, Mark Hutchings mark.hutchings@gmail.com wrote:
You sure it was just an HTTP attack? Several hundred requests in a few minutes shouldn't really put it on its knees, unless the server is a VPS with low memory/CPU limits, or the server itself is low on resources.
I've gone over my access logs again, and here are the particulars on the two attacks that caused the server to hang:
On March 6th, between 4:29:11 and 4:31:40, there were 1453 requests from a single IP, and all were 'GET' requests for a single page (one that does exist).
On March 14th, between 15:15:19 and 15:16:29, there were 575 requests from the one IP address. These were all different GET requests, nearly all resulting in 404 errors. Some appear to be WordPress URLs. (The website on my server is a Magento commerce site.)
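To confirm the floods really came from single sources, a quick tally of requests per client IP is enough; a minimal sketch, assuming an Apache common/combined-format log at a hypothetical path:

```shell
# Hypothetical log path -- adjust for your distribution.
# Field 1 is the client address in Apache common/combined log format;
# count requests per IP and list the busiest first.
awk '{count[$1]++} END {for (ip in count) print count[ip], ip}' \
    /var/log/httpd/access_log | sort -rn | head
```

Run over just the attack window (e.g. after a `grep` for the timestamp), this makes a 1453-requests-from-one-IP flood stand out immediately.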
Here are some other example requests from the attack:
GET /?_SERVER[DOCUMENT_ROOT]=http://google.com/humans.txt? HTTP/1.1
GET /?npage=1&content_dir=http://google.com/humans.txt%00&cmd=ls HTTP/1.1
GET /A-Blog/navigation/links.php?navigation_start=http://google.com/humans.txt? HTTP/1.1
GET /Administration/Includes/deleteUser.php?path_prefix=http://google.com/humans.txt HTTP/1.1
GET /BetaBlockModules//Module/Module.php?path_prefix=http://google.com/humans.txt HTTP/1.1
GET /admin/header.php?loc=http://google.com/humans.txt HTTP/1.1
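Worth noting: the `=http://google.com/humans.txt` parameter tacked onto every one of those paths is the classic signature of a remote-file-inclusion (RFI) scanner probing known CMS scripts. Requests like these are easy to pull out of a log; a minimal sketch, with the log path again an assumption:

```shell
# Hypothetical log path -- adjust to your layout.
# Flag requests whose query string tries to inject a remote URL (RFI probes).
grep -E '=https?://' /var/log/httpd/access_log
```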
I don't recognize most of these, but the pattern indicates to me that these are most likely 'standard' URLs in various CMSs.
As for the server configuration, it is a dedicated server (only one website) running on VMware ESXi 5.0.
CentOS 6.3
8 virtual CPU cores (2 quad-core CPUs)
4096 MB memory
Other VMs on the same host appeared to be unaffected by the attack.
Thanks, ~ j. jwade@userfriendlytech.net
KCLUG mailing list KCLUG@kclug.org http://kclug.org/mailman/listinfo/kclug