
RAID 5 poor performance



I made myself a server out of a gutted TiVo box. I used parts I already had at home to keep the overall cost down, and here's what ended up inside. Keep in mind that this is only serving two computers in my home, so extreme performance is not a priority:

850 MHz Celeron on a unique mini motherboard with dual 100 Mbit NICs

256 MB PC100 SDRAM (that's the maximum the board takes)

3x 80 GB hard drives (two IBM and one WD)

40 GB laptop drive (for the system drive; low power, always on)

GeForce2 16 MB PCI video card

Running Windows Server 2003

I crammed it all into the TiVo case and configured the three 80 GB drives as a RAID 5 via dynamic disks in Windows. Since the drives are IDE, I figured I would get something like 50 MB/sec out of them, since individually I was able to pull 40 MB/sec as a baseline test. But after all was said and done, I could only pull 23 MB/sec out of the RAID. Write caching and enhanced performance are enabled on all drives (it's running on a UPS), and the drive temperatures peak at 140°F on the IBMs. How can I bring the performance back up, or diagnose any problems I might be missing?
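For reference, a rough back-of-the-envelope sketch of what three 40 MB/sec drives could do in an ideal RAID 5 (illustrative only; real arrays land well below these ceilings):

```python
# Illustrative ceilings for an N-disk RAID 5, given one drive's sequential speed.
# Real-world results come in well under these numbers once parity CPU cost and
# IDE channel sharing are factored in -- but 23 MB/sec is below even ONE drive.

def raid5_ceilings(per_disk_mb_s, n_disks):
    """Optimistic sequential ceilings: reads can stripe across all members,
    writes effectively lose one member's worth of bandwidth to parity."""
    read_ceiling = per_disk_mb_s * n_disks
    write_ceiling = per_disk_mb_s * (n_disks - 1)
    return read_ceiling, write_ceiling

reads, writes = raid5_ceilings(per_disk_mb_s=40, n_disks=3)
print("ideal read ceiling:  %d MB/s" % reads)    # 120 MB/s
print("ideal write ceiling: %d MB/s" % writes)   # 80 MB/s, before parity CPU cost
```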

[Attachment: photo of the server internals]



What benchmark were you running when you got 23 MB/sec?

DMA enabled for all drives?

If you read or write a big file to the array, what is the CPU usage like? I would think the overhead for a software RAID 5 array would be significant... maybe that's the limiter?
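If you want an OS-level number to compare against Sandra, something like this quick sketch writes and reads one big file while you watch the CPU graph in Task Manager (the path and sizes are placeholders, not anything from this thread):

```python
import os
import time

# Quick-and-dirty sequential throughput test: write one big file, sync it,
# then read it back, while watching CPU usage in Task Manager.
# TEST_FILE is a placeholder -- point it at the RAID volume.
TEST_FILE = r"E:\throughput_test.bin"
CHUNK = 1024 * 1024          # 1 MB per write
TOTAL_MB = 512               # bigger than RAM so the cache can't hide the disks

buf = os.urandom(CHUNK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # force it out of the OS write cache
print("write: %.1f MB/s" % (TOTAL_MB / (time.time() - start)))

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass                 # note: part of the read may still come from cache
print("read:  %.1f MB/s" % (TOTAL_MB / (time.time() - start)))

os.remove(TEST_FILE)
```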


1. Clean up the cable clutter. Buy shorter, flat IDE cables and use ancient origami-based folding techniques to maximize airflow. I would also get a PSU sleeving kit; for this setup it will help with function and not just looks.

2. I can't tell if your box has any vents in the front, but if it doesn't, put some near the bottom to bring the colder air in. Adjusting your CPU fan to blow air across the board and toward the back (out) would work better in this setup, though you will have to make your own brace to keep it vertical.

3. As I understand it, the system drive is a 40 GB laptop HD. My bet is it's 4200 rpm with a lower DMA mode than the rest of the drives. The other drive on the same IDE ribbon will drop itself to the lower setting because of this; add in the fact that you put the system on the slowest drive, with low power always on, and you have something that is just not good.

4. The overhead of 2003, especially if you have not turned all the GUI candy off, is going to make it slow on your system. 2000 would be a faster OS, and since this is for home, there is nothing 2003 has that would justify the performance hit.

5. Double check that all the cables are plugged in extra snug.

6. Set processor scheduling and memory usage to the correct settings for a server (the two registry values involved are sketched below, after this list). Double-check that services you don't need are disabled.

7. You don't have a RAID card and are probably using the built-in Windows feature. This does not necessarily mean better performance.
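For reference on point 6, those two GUI options map to a pair of registry values. Here is a sketch of setting them directly (assuming a standard 2000/2003 install; the System Properties > Advanced dialog flips the same switches, and you should back up the registry first):

```python
import winreg

# Sketch: the registry values behind "processor scheduling" and "memory usage"
# on Windows 2000/2003. Run as an administrator; back up the registry first.

PRIORITY_KEY = r"SYSTEM\CurrentControlSet\Control\PriorityControl"
MEMORY_KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PRIORITY_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 0x18 favors background services -- the right choice for a file server
    winreg.SetValueEx(key, "Win32PrioritySeparation", 0, winreg.REG_DWORD, 0x18)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MEMORY_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 devotes spare RAM to the system file cache ("maximize for file sharing")
    winreg.SetValueEx(key, "LargeSystemCache", 0, winreg.REG_DWORD, 1)
```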

Did you benchmark on this setup or was it done on a different system / mobo? Was this benchmark done over the network?


- The CPU overhead for running the RAID is about 12%; considering that this box's primary function is a data server, it never runs at full capacity anyway.

- The laptop drive is a 7200 rpm Seagate with 8 MB of cache; it is the highest-performing drive in the box.

- All cables are snug.

- UDMA is enabled on all drives.

- All unneeded services are disabled.

- I am using the built-in RAID because I don't have any free PCI slots. I only really expected a slight increase in speed, not a decrease.

- You're right about the benchmarking; I didn't do the baseline on this board, so that might be a factor, but I did use the same utility (SiSoft Sandra, same version on everything).

- I run 2003 because I had a free copy and I'm practicing on it for my MCSE exams. Since it's 98% file server, I'm not worried about that much overhead.

- The picture doesn't show it, but I have a fan in the rear blowing air in across the board and another fan in the lid (not pictured) that sucks the hot air out. The exhaust is directly above the drives, and there are air vents between the drives, so the suction from the exhaust fan pulls cool air through the drives from every side. (I was really proud of that.)

- I'll take your advice on the IDE cables; perhaps some custom ones (a friend has the tools).


I'm pretty sure the OS-based 'RAID' shares each disk between types 0 and 1 (and 5), whereas a dedicated card would/could use each disk exclusively for one type, thus maximizing performance. If it is sharing each type across your disks, that would hinder performance, since it now has to write twice to each disk for one action; plus, if one disk dies, you lose both your active and backup partitions.

Also, the cluster size can play a significant role in performance, depending on your file activity. (On a side note, the reduced cache in the Celeron isn't helping things either. Most RAID 5 systems have a dedicated processor to handle the distributed parity calculations.)

I remember a good guide I read that showed performance by cluster size vs. file size... I'll try to find it and post the link.
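To make concrete what that parity work actually is, here's a toy sketch (purely illustrative; real implementations XOR whole sectors with optimized code):

```python
# Toy illustration of RAID 5's distributed parity: the parity block for each
# stripe is the XOR of its data blocks, and every write means recomputing it.
# In software RAID that XOR runs on the host CPU; a hardware card has its own
# processor for exactly this job.

def parity(blocks):
    """XOR the blocks of one stripe together to produce its parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Two data blocks in one stripe; the parity lives on the third disk.
d1, d2 = b"\x0f" * 4, b"\xf0" * 4
p = parity([d1, d2])

# If the disk holding d1 dies, XORing the survivors rebuilds it.
assert parity([d2, p]) == d1
```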


That was a great guide. I never knew about the speed differences between cluster sizes, so when I was setting it up I just figured "bigger is better." The benchmarks in that guide show that on average the opposite is true, and that clusters of 4-16K were best overall. Since I don't have any data on the drives yet, I am going to format them and configure the RAID with a 16K cluster. I'll report the results.


I changed the cluster size to 16K and ran SiSoft Sandra again. As expected, I took a hit on write speed (14 MB/sec, still way too low), and my read speed increased to 27 MB/sec. Going over the results, I found that Sandra uses a 1024-byte block size for benchmarking, and I don't think I can adjust it. I think my RAID setup is super sensitive to block sizes because my Celeron has so little cache. On another system (an Athlon 64 with 1 MB of cache) the block sizes made almost no difference. I am going to try one of the benchmarks used in the RAID article you referred me to and see if I get different results.


Found the source of the problem. I benchmarked the drives using my QuickTech bootable diagnostic disk, and each drive is only pushing 10 MB/sec. That explains why I am getting 23-27 MB/sec out of the RAID. The diagnostic utility is OS independent, and I tested the drives individually, so RAID overhead and cluster size were not factors. The board runs a 440BX chipset with ATA/33 IDE controllers, which are supposed to support up to 33 MB/sec, and my drives are all capable of higher speeds. My DMA settings are all on. Any other ideas?


Well, since they are all on a UDMA33 controller, that read speed for the RAID array does look about right. Keep in mind that 33 MB/s is only a theoretical maximum, and it's shared by both drives on a channel. You could opt for a PCI UDMA 100/133 card, but I'm not sure there would even be a point, because you are going to be limited by the 100 Mbit network interface when serving files over the network anyway. Over a 100 Mbit connection you are only going to get somewhere between 8 and 12 MB/s of throughput, and your array is already well over that.
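Spelled out (a rough sketch; the 700 MB file is just a hypothetical example):

```python
# Rough arithmetic behind the 100 Mbit bottleneck (illustrative only).
LINK_MBIT = 100
wire_mb_s = LINK_MBIT / 8.0   # 12.5 MB/s theoretical ceiling (8 bits per byte)
real_mb_s = 10.0              # realistic midpoint of the 8-12 MB/s range

file_mb = 700                 # hypothetical example: one CD image
print("theoretical ceiling: %.1f MB/s" % wire_mb_s)
print("copy time at %.0f MB/s: ~%.0f seconds" % (real_mb_s, file_mb / real_mb_s))
```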


Smokee, I looked into what you were saying and it checks out. So even though I am not seeing the performance my drives are capable of, I am very close to the 100 Mbit bottleneck anyway. Since this will only be accessed across the network, my speeds are acceptable. Is that what you're saying?


Since you're not using a true RAID setup, the speed of normal disk operations is probably on par with your RAID.

I would set the disks up normally and enjoy the increased reliability and peace of mind of knowing that if a drive dies, the others are still accessible. Also, recovering an OS that lives on a RAID often proves more time-consuming and complicated.

If you had the urge to upgrade the mobo, or found a way to somehow fit a RAID card in there, it would then be worth the hassle.

I get much higher transfer speeds over my network. Are you sure none of your NICs is setting itself to 10 instead of 100?


The NICs are set to 100Base-TX full duplex. Also, remember the RAID is only for storage; my OS is running on a single drive. If I lose the OS disk, I can reinstall and import the array as a foreign volume, so disaster recovery wouldn't be too bad. I am going to run this as long as it runs; then, if it ever dies, I'll replace the board with a Mini-ITX board with gigabit networking and a PCI slot for a real RAID controller, or hell, it might have onboard RAID. On a side note: if I tell the BIOS not to halt on any errors, can I boot this without a video card? I remote into it anyway. If that's possible, I could put a RAID card in there.


Hmm... you might be able to do that. Some boards won't let you boot without a graphics card, but there are programs that can modify a variety of BIOS settings if that becomes an issue. Linux can run without a graphics card after tweaking the kernel, but I don't think Windows is as flexible; I haven't tried it, though. I gather you do not have an onboard graphics chip? What kind of board do you have?

As to your OS not being on the RAID...

Actually, I would run the OS in RAID 0 (after making a backup so you can trash the OS often and keep it clean) and save work product to regular partitions. Remember, since this isn't true RAID, the info is stored across all disks, and if one disk dies, everything (including your backup) dies with it.

This is why you need a RAID card: so you can keep the type 1 mirror on an entirely different disk set, ensuring a safe backup and negating the speed issues that arise from your current setup, which as I understand it is mimicking RAID 5.

If your RAID were a basic 0 or 1 type exclusively, this would not be an issue. However, doing both as you are now requires twice the work per disk. You might as well just make the whole thing mimic RAID 0.

When you do get a new board, for just file serving, make sure to pick one with simple onboard graphics and, if you have the money, dual gigabit NICs and plenty of FireWire/USB ports. Hopefully you can get a board with a RAID chip other than Intel Matrix RAID, or a board with both a dedicated RAID chip and Intel Matrix RAID.

