Server load and MySQL

Aaron aaron at aarons.net
Mon Oct 21 23:11:20 CDT 2002


> Having run into problems before that were not entirely dissimilar, I would
> venture a guess that your drives are overloaded.  Based on your listed
> config, you probably have those poor drives mirrored, split into a couple
> of partitions, and those 2 poor spindles have to handle all of the traffic
> for your pagefiles, database files, database page files, operating system,
> and serving your websites to boot.  Databases and operating systems have
> timeout thresholds built into them and will start to get flaky if it takes
> them too long to write something to the disk.  With everything that you
> have going on in that system, I would guess that your system is simply
> trying to push too much bandwidth through that controller and drives.  If
> it were me (and if that controller had the channels), I would put at least
> another 4 drives in the system: a mirrored set for the O/S, and a mirrored
> set for your pagefiles.  Barring that, I would put one more drive in there
> at least for both the Linux paging partition and for the database
> pagefiles (but keeping the datafiles on a redundant drive).

The drives aren't mirrored. Here is the current partition scheme.  I
recently moved the databases and log files to the second drive and that
seemed to help a little.

Current Disk Usage
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3            1011M  332M  628M  35% /
/dev/sda1              30M   16M   13M  55% /boot
/dev/sda8             9.3G  426M  8.4G   5% /home
none                  503M     0  503M   0% /dev/shm
/dev/sda7            1011M   33M  927M   4% /tmp
/dev/sda5             2.0G  1.2G  741M  62% /usr
/dev/sda6             1.4G  104M  1.2G   8% /var
/dev/sdb1             3.8G  424M  3.2G  12% /domlogs
/dev/sdb2             3.8G  1.2G  2.4G  33% /mysqldb
/dev/sdb3             9.0G  1.6G  6.9G  19% /backup
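
Before throwing hardware at it, it's probably worth confirming that the
disks really are the bottleneck.  A quick check, assuming the sysstat
package is installed:

  # per-device utilization and queue stats, updated every 5 seconds
  iostat -x 5

  # 'b' = processes blocked waiting on I/O; a steady nonzero value
  # here while the box feels slow points at the disks
  vmstat 5

If sdb sits pegged while MySQL is busy, the extra-spindles advice applies;
if not, the problem is more likely RAM or the queries.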

> On a sidenote, databases perform at their best when they are on a lot of
> drives.  You can pull data from many drives far faster than you can pull
> it from a few large ones.  Ideally, you would swap out those 18GB drives
> for four new 9GB drives so you could spread the workload over more
> spindles.  Even with 10K RPM 18GB drives going against 7,200 RPM 9GB
> drives, the four 9GB drives will always have more raw throughput.  If you
> want to get hardcore, you could tweak your data block allocation size so
> every read or write was spread across at least 2 drives, but that is
> another story. :)

:)  Sounds right... the problem is there is only room for 2 drives in the
box.  It's a 1U case.
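
You could at least measure what the two drives can actually sustain, to
see how close you're running to their ceiling.  A rough check with hdparm
(device names assumed from the df output above):

  # uncached sequential read speed of each drive
  hdparm -t /dev/sda
  hdparm -t /dev/sdb

Not a database workload, but it gives a baseline for how much raw
bandwidth the pair can push.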

> All of that having been said, if MySQL is anything like Oracle, your
> performance is very closely tied to how efficiently you use your RAM.  I
> would have a pro look at your settings.  I have seen processing times drop
> by 90% with some operations when the memory was properly tweaked for the
> application.  Similarly, if your queries are going against tables instead
> of indexes (or going to unindexed tables) you will see a significant
> dropoff.  If your system is bogged down in processing, it can't send data
> to the drives as fast.  I've seen even better performance gains from
> properly written queries than I have from just about anything else.

This is my thought.  I have weird memory allocation on each MySQL process,
anywhere from 27 to 36MB per process.  I always thought it was odd that
they were all roughly the same size, and I seem to recall seeing that
mentioned somewhere as a potential problem.
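
For what it's worth, identical sizes may not be a leak at all: with
LinuxThreads on a 2.4 kernel, every MySQL thread shows up in ps as its own
process, and each one reports the whole shared address space.  The figure
that actually matters is the global buffers plus the per-connection
buffers times the connection count.  A sketch of the relevant my.cnf
knobs, using the 3.23-era set-variable syntax (the values are placeholders
to tune, not recommendations):

  [mysqld]
  # global: shared MyISAM index cache
  set-variable = key_buffer=64M
  # per connection: allocated per thread as needed
  set-variable = sort_buffer=2M
  set-variable = record_buffer=1M
  # caps on concurrent connections and open table handlers
  set-variable = max_connections=100
  set-variable = table_cache=256

Running EXPLAIN on the slow queries is worth it too; a table scan where an
index was expected will bog things down long before the disks do.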

> So, what if you're stuck with the memory allocations and the code but you
> still need a boost?  Well, not to overstate the obvious, but reading from
> RAM is still far faster than reading from drives, and the more of the
> database that is in RAM, the faster you can get data from it.  RAM is also
> cheaper than downtime in most cases.  In an ideal situation, you could
> throw enough RAM in the box that you wouldn't even need a pagefile.

I've been toying with throwing in another Gig.  Maybe now would be a good
time. :)
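
If the RAM goes in, most of the MyISAM win comes from two places:
key_buffer for the indexes, and the kernel's own page cache for the data
files, which MySQL leans on automatically.  A sketch, assuming the box
ends up around 2GB with the ~1.2GB database shown above:

  [mysqld]
  # big enough for the hot indexes; the data files ride in the
  # Linux page cache with whatever RAM is left over
  set-variable = key_buffer=256M

The number is a guess until measured, but the rule of thumb on a dedicated
box is to leave the key buffer well short of total RAM so the data pages
still have room to cache.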

> 1) Stay away from writeback.

Why?  It seems to me that for database writes, ext3's writeback mode would
increase throughput dramatically.  And as far as data integrity goes, it's
essentially the same as an ext2 filesystem, except the journal commits
every 5 seconds instead of waiting up to 30 for the flush daemon.
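
For reference, the mode is selectable per mount on ext3, and the commit
interval is tunable as well.  A sketch of an /etc/fstab line (the
partition and interval here are just examples):

  # metadata-only journaling, journal commit every 5 seconds
  /dev/sdb2  /mysqldb  ext3  data=writeback,commit=5  1 2

The tradeoff is the one above: after a crash the filesystem structure
stays consistent, but recently written file data can come back stale or
garbage.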
