IDE RAID Comparison

by Matthew Witheiler on June 18, 2001 4:31 AM EST

About the Test

Our RAID testbed consisted of a Gigabyte GA-7DX motherboard outfitted with 128MB of PC2100 memory and an 850MHz Duron processor. All tests were conducted under Windows 2000, with the operating system running off an older Western Digital 20GB 205AA drive connected to the motherboard's IDE controller.

All RAID tests were performed using two (in the case of RAID 0 and RAID 1) or three (in the case of RAID 5) 30GB IBM 75GXP hard drives connected directly to the RAID controller being tested. In all cases the drives were on different channels and always set to master.

Intel's Iometer benchmark was used in the tests; it requires that the drives in the RAID array carry no partition information, so the arrays were tested as raw physical drives. For testing purposes we used three different Iometer access patterns, each under three different loads. The three access patterns were a file server pattern, a database pattern, and a workstation pattern. Both the file server pattern and the database pattern come predefined by Intel and consist of the following characteristics.

File Server Access Pattern

Size    % Access    % Read    % Random
512B    10%         80%       100%
1KB     5%          80%       100%
2KB     5%          80%       100%
4KB     60%         80%       100%
8KB     2%          80%       100%
16KB    4%          80%       100%
32KB    4%          80%       100%
64KB    10%         80%       100%

Database Access Pattern

Size    % Access    % Read    % Random
8KB     100%        67%       100%

The third access pattern was designed to simulate disk usage in a workstation. To construct an appropriate pattern for this situation, we turned to the knowledgeable folks over at StorageReview.com, who developed and use the following pattern to simulate workstation usage.

Workstation Access Pattern

Size    % Access    % Read    % Random
8KB     100%        80%       80%

More information regarding this access pattern and how it was developed can be found at StorageReview's website.
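To make the three patterns concrete, here is a minimal Python sketch of our own (an illustration, not part of Iometer) that encodes the tables above and draws synthetic requests from them. The sizes, access weights, and read/random fractions come straight from the tables; the `next_request` helper and its names are hypothetical.

```python
import random

KB = 1024

# Each entry: (transfer size in bytes, access weight %, read fraction, random fraction),
# copied from the tables above.
FILE_SERVER = [
    (512,     10, 0.80, 1.00),
    (1 * KB,   5, 0.80, 1.00),
    (2 * KB,   5, 0.80, 1.00),
    (4 * KB,  60, 0.80, 1.00),
    (8 * KB,   2, 0.80, 1.00),
    (16 * KB,  4, 0.80, 1.00),
    (32 * KB,  4, 0.80, 1.00),
    (64 * KB, 10, 0.80, 1.00),
]
DATABASE    = [(8 * KB, 100, 0.67, 1.00)]
WORKSTATION = [(8 * KB, 100, 0.80, 0.80)]

def next_request(pattern):
    """Draw one synthetic I/O request: (size_bytes, is_read, is_random)."""
    size, _, read_frac, rand_frac = random.choices(
        pattern, weights=[entry[1] for entry in pattern]
    )[0]
    return size, random.random() < read_frac, random.random() < rand_frac

if __name__ == "__main__":
    for _ in range(5):
        print(next_request(FILE_SERVER))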

We tested each of the above access patterns under three I/O loads, described as light, medium, and high. The table below describes what each load consisted of.

Loads

Light     16 Outstanding I/Os
Medium    64 Outstanding I/Os
High      128 Outstanding I/Os
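"Outstanding I/Os" is the queue depth Iometer maintains: the number of requests kept in flight at once. The following Python sketch is a rough, hypothetical model of that behavior, simulated with sleeps rather than real disk I/O; `fake_io` and `run_load` are illustrative names, not Iometer APIs.

```python
import asyncio
import random

async def fake_io(size_bytes):
    """Stand-in for one disk request; sleeps instead of touching hardware."""
    await asyncio.sleep(random.uniform(0.001, 0.010))
    return size_bytes

async def run_load(outstanding, total_requests=200):
    """Keep `outstanding` requests in flight at all times (the queue depth)."""
    issued = outstanding
    pending = {asyncio.ensure_future(fake_io(4096)) for _ in range(outstanding)}
    completed = 0
    while completed < total_requests:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED
        )
        completed += len(done)
        # Refill immediately so the number in flight stays at `outstanding`.
        while issued < total_requests and len(pending) < outstanding:
            pending.add(asyncio.ensure_future(fake_io(4096)))
            issued += 1
    for task in pending:  # tidy up any stragglers past the target count
        task.cancel()
    return completed

# The light / medium / high loads from the table above.
for depth in (16, 64, 128):
    asyncio.run(run_load(depth))
```

Higher queue depths reward controllers that can reorder and overlap requests, which is why the three loads separate the cards more than any single-request test would.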

Iometer build 1999.10.20 was used to conduct the tests. A ramp-up time of 30 seconds was set to eliminate any variability that occurs at the start of a test. Each of the nine tests (three loads for each of the three access patterns) was set to run for 10 minutes. Using the command line implementation of Iometer, we constructed a program that ran the tests back to back, so a full automated pass took 1 hour, 34 minutes, and 30 seconds. As the test results will show, this pass was repeated on each card at every available RAID 0 stripe size, in RAID 1 (which has no stripe size), and at every available RAID 5 stripe size, making this review a truly monstrous undertaking considering we ran other tests as well. To put it in perspective, the Iometer tests alone accounted for a total benchmark time of 78 hours and 45 minutes.
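The timing arithmetic is easy to verify; the quick sketch below reproduces the per-pass figure and shows that the quoted total implies the full pass was executed fifty times across the cards, arrays, and stripe sizes.

```python
RAMP_UP_S = 30            # ramp-up before each test
RUN_S     = 10 * 60       # each test runs for 10 minutes
TESTS     = 3 * 3         # three access patterns x three loads

pass_s = TESTS * (RAMP_UP_S + RUN_S)
print(pass_s)                       # 5670 s = 1 h 34 min 30 s per full pass

total_s = 78 * 3600 + 45 * 60       # quoted total: 78 h 45 min
print(total_s / pass_s)             # 50.0 -> the pass was run fifty times
```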

The other benchmarks required an active RAID array with a valid partition table and file system. For these tests we created an NTFS partition on the array using the default cluster size (which happens to be 4KB). Once the array was partitioned and formatted, the only information written to or read from it was what the tests required. For example, Content Creation Winstone 2001 was installed on the RAID array and then pointed at the array for its test runs. This ensured that the RAID array was what was being benchmarked; the IDE drive on the motherboard simply served as a boot drive and handled basic operating system functions.

We also felt it necessary to compare the speed of the various RAID arrays with that of the standard motherboard IDE controller. To perform this test, we hooked one of the IBM 75GXP hard drives up to the secondary IDE channel of the motherboard. Since the Gigabyte GA-7DX uses the VIA 686B south bridge, the motherboard supports the ATA100 specification, so the comparison is apples to apples: most cards in this review are ATA100 cards. The same procedure described above was used when testing the 686B's performance in the various benchmarks.

Comments

  • kburrows - Thursday, December 4, 2003 - link

    Have you run any tests on any onboard RAID solutions for RAID 0 & 1? I would love to see the results posted for the new SATA RAID on the Intel 875 boards.
  • Anonymous User - Sunday, August 17, 2003 - link

    In addressing the performance of a RAID array with different stripe sizes, you miss an important factor, namely the access time of a disk. This wait time has two main causes: first, head positioning, and second, rotational latency (the heads reach the right track, but the position where the read starts has not yet passed under the head). You may have to wait anywhere from zero to, in the worst case, a full revolution.
    Since the disks move independently, you can calculate that the average latency to fetch a small file is minimal when the stripe size is about one full revolution's worth of data from a disk in the array (approx. 250KB today). No other factor I know of (controller overhead, transport, ...) reduces this.
    So I think that today a minimum stripe size of 256KB should be used.
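As a back-of-the-envelope check of the figures in this comment, the small sketch below works through the arithmetic; every drive parameter is an assumption for a circa-2001 7200 rpm drive such as the 75GXP, not a measurement, and the sketch illustrates the commenter's reasoning rather than endorsing the 256KB recommendation.

```python
# Back-of-the-envelope check of the comment above. Every number here is an
# assumption for a circa-2001 7200 rpm drive, not a measured value.
RPM = 7200
SUSTAINED_MB_S = 30.0                    # assumed sustained media transfer rate

rev_time_ms = 60_000 / RPM               # ~8.33 ms per revolution
avg_rot_latency_ms = rev_time_ms / 2     # ~4.17 ms average rotational latency
data_per_rev_kb = SUSTAINED_MB_S * 1000 * (rev_time_ms / 1000)

print(f"{rev_time_ms:.2f} ms/rev, avg rotational latency {avg_rot_latency_ms:.2f} ms")
print(f"~{data_per_rev_kb:.0f} KB passes under the head per revolution")
# A stripe about that large keeps a small file on a single disk, so only one
# spindle pays the rotational latency -- which is the commenter's argument.
```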
