Closing Thoughts

We've run quite a few tests, and encountered some surprising results. What exactly does all this tell us? We've only truly tested four different drives, each representing one of the current interface standards. Ideally, we would have tested nearly identical drives on each interface, but sometimes that's just not possible. Even with differences in basic features, however, we can draw some meaningful conclusions.

SAS

SAS clearly dominates when it comes to pure I/O throughput, with the exception of the 5%-20% random write range. With the introduction of a point-to-point topology, the performance floodgates have opened: read throughput of up to 540MB/sec is now achievable with just 12 drives. Even writes have jumped from 95MB/sec on SCSI to over 330MB/sec with SAS. It's also impressive that with just 12 drives in a RAID 10 you can almost guarantee a sustained minimum of 100MB/sec for both reads and writes in any combination.
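
To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The aggregate figures are the measurements quoted above; the per-drive division is simple arithmetic for illustration, not additional test data.

```python
# Per-drive throughput implied by the 12-drive SAS results above.
# The aggregate numbers are this review's measurements; with a
# point-to-point topology each drive gets a dedicated link, so the
# drives themselves, not a shared bus, set the ceiling.

DRIVES = 12
sas_read_mb_s = 540    # measured aggregate read throughput
sas_write_mb_s = 330   # measured aggregate write throughput

print(f"Read per drive:  {sas_read_mb_s / DRIVES:.1f} MB/sec")   # 45.0
print(f"Write per drive: {sas_write_mb_s / DRIVES:.1f} MB/sec")  # 27.5
```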

SATA I/II

It's hard to find a good place for SATA I and II when it comes to performance. Reads for both can hold their own against SCSI when the access pattern is between 0 and 20% random, but beyond that they fall behind. SATA II performs extremely well with writes from 0 to 100% random, but SATA I drops below SCSI as soon as you reach 5% random. So why would you want SATA I or II? Put simply, the cost per GB of SATA cannot be touched by SAS or SCSI, which makes SATA the perfect choice for high capacity, albeit slower, storage. If you are considering this type of configuration, we strongly recommend buying drives with a large disk cache, as the larger cache appears to improve write performance significantly.
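
To make the cost-per-GB argument concrete, here is a small sketch. The capacities and prices below are hypothetical placeholders, not figures from this review; substitute current street prices before drawing your own conclusions.

```python
# Cost-per-GB comparison with assumed placeholder values -- not
# quotes or measurements from this review.

drives = {
    #                 (capacity_gb, price_usd)  -- assumed values
    "SATA 7200RPM":   (500, 180),
    "SAS 15K RPM":    (147, 600),
    "SCSI U320 10K":  (147, 450),
}

for name, (capacity_gb, price_usd) in drives.items():
    print(f"{name:14}  ${price_usd / capacity_gb:.2f}/GB")
```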

The Raptor line is also unique at present, as it is the only 10K RPM SATA solution, and it comes with a 16MB cache. It is "only" a SATA I interface, but without further testing it's difficult to say for certain how much of the SATA II performance advantage is due to the interface and how much simply comes from newer hard drive models with more cache. We're inclined to think the cache and platter density are playing a significant role, along with NCQ support, as no individual drive should come anywhere near saturating SATA's 150MB/s bandwidth limit per connection.
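
As a quick sanity check of that claim, assume a sustained transfer rate of roughly 70MB/sec for a 10K RPM SATA drive of this generation; that figure is an assumption for illustration, not one of our measurements.

```python
# How much of a SATA I link a single drive can plausibly use.
# drive_sustained_mb_s is an assumed round figure, not measured here.

sata1_link_mb_s = 150      # SATA I bandwidth per connection
drive_sustained_mb_s = 70  # assumed 10K RPM SATA sustained rate

print(f"Link utilization: {drive_sustained_mb_s / sata1_link_mb_s:.0%}")
# ~47% of the link -- the drive, not the interface, is the bottleneck
```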

The simple fact of the matter, however, is that few businesses looking at something like the VTrak J300s (or any other SAS enclosure) will really be interested in the Raptor line. Most will want a high performance, low capacity SAS drive array, perhaps complemented by a few high capacity SATA drives. The Raptor is either not fast enough or not large enough, and the lower price of the Raptors vs. 15K SAS models doesn't normally enter the picture.

SCSI

SCSI still has its place in the server market when you compare it against SATA I/II, but it simply cannot touch SAS. SCSI has been around for a long time now, and SAS is clearly the next logical step. On a drive-by-drive basis we're sure the gap between SAS and SCSI would be very small, but as you add more drives to the array the point-to-point architecture clearly wins. SCSI drives have become faster and faster over the years, but the shared bus architecture is definitely holding them back. This is very apparent when you consider how little SCSI drops from 0% random to 100% random in both reads and writes. Higher RPM drives would drop even less, but they still wouldn't improve on the bandwidth ceiling we encountered. The only way to raise overall bandwidth is to move to multiple SCSI connections, which improves performance but brings increased cost and more cable clutter.
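
The shared-bus ceiling is easy to model. The sketch below assumes a round 60MB/sec streaming rate per drive (an illustrative figure, not one of our measurements) and shows why a single U320 bus flattens out while point-to-point links keep scaling; note how adding a second bus raises the SCSI ceiling, at the cost of another connection and more cabling.

```python
# Minimal model of shared-bus SCSI vs. point-to-point SAS scaling.
# PER_DRIVE_MB_S is an assumed illustrative streaming rate.

U320_BUS_MB_S = 320    # one U320 bus, shared by every drive on it
PER_DRIVE_MB_S = 60    # assumed per-drive streaming rate

def scsi_aggregate(drives: int, buses: int = 1) -> int:
    # Every drive shares the bus ceiling; extra buses raise it.
    return min(drives * PER_DRIVE_MB_S, buses * U320_BUS_MB_S)

def sas_aggregate(drives: int) -> int:
    # Each drive has a dedicated link, so throughput simply adds up.
    return drives * PER_DRIVE_MB_S

for n in (4, 8, 12):
    print(f"{n:2} drives: SCSI {scsi_aggregate(n):3} MB/sec, "
          f"SAS {sas_aggregate(n):3} MB/sec")
```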

Performance Conclusion

Both SAS and SATA I/II have their place in the Direct Attached Storage market. If your application needs high I/O throughput and can reside within low to medium capacities, then SAS is definitely the way to go. If you need high capacity more than high I/O, then SATA I/II is the better choice. Of course, with 300GB SAS drives (soon to be 600GB) you can still reach very high capacities, but at a cost significantly higher than SATA. In the end you have to balance performance against cost and determine which is best for you.

Conclusion

Through our testing, the Promise VTrak J300s has proven itself to be a reliable solution with good performance that is well suited to the SMB world. The ability to use high capacity SATA or high performance SAS in the same chassis gives users a level of flexibility that wasn't an option before. Add to that the performance levels SAS can reach vs. SCSI, and DAS becomes a feasible solution again before moving to a full Fibre Channel/SAN infrastructure. When that time comes, the enclosure can be filled with high capacity SATA disks and remain useful for hosting a Virtual Tape Library, a NAS backend, or long term data storage.

At the time of writing, Shopping.com reported a price of $2,100 to $2,500 USD for the VTrak J300s, which is a decent price for an external SAS/SATA chassis. We had a hard time coming up with things to say about our experience with the Promise VTrak because it simply worked, and in the end, isn't that what we're all looking for?

Comments

  • yyrkoon - Friday, February 2, 2007 - link

    When are you guys going to do some reviews on consumer grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but would like to know if buying a Highpoint 16 port SATA RAID HBA would really be any worse than getting an Areca 8 port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support, or does it? If so, would it really make much of a difference in a SOHO application?

    I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer such as myself to get lost in, and basically I am interested in reliability first, speed second (to a point).

    Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it matters most to your readers (home PC/enthusiast/SOHO crowd).
  • mino - Saturday, February 3, 2007 - link

    I would stay away from Highpoint.
    We have had several issues with a RAID HBA (a new one!) consistently going down AND corrupting the whole RAID5 under some workloads. For the money, one is better off with a QuadFX ASUS board than with Highpoint-like solutions.
    Areca is pretty much on a different level, of course...
  • yyrkoon - Sunday, February 4, 2007 - link

    Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
  • mino - Sunday, February 4, 2007 - link

    Any 3Ware, Areca, LSI, or Adaptec solution should be just fine.

    Most people do not actually need RAID5 for home usage, and it is usually cheaper to go _software_ RAID1 with every drive in the RAID attached to a different controller. In such a scenario even the cheapest or onboard controller offers fault tolerance comparable to high-end RAID5 solutions.

    However, the simplest way to go is really two NAS RAID5 boxes mirroring each other.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote: "Any 3Ware, Areca, LSI, or Adaptec solution should be just fine."

    I would rule out Adaptec and the older LSI chipsets still available (under several brands, like Intel for example). We replaced a bunch of Intel 6 & 8 port controllers with top-of-the-line 8-port Adaptec SATA II controllers.

    The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13MB/s sequential writes with RAID 10 arrays, and tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.

    I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
  • yyrkoon - Sunday, February 4, 2007 - link

    Why would I want to spend $1300+ USD per 5-disk array (minus drives) when I could build my own system much cheaper and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
  • Bob Markinson - Friday, February 2, 2007 - link

    Interesting review!
    I would have liked to see a comparison with latest-gen 15K SCSI drives rather than 10K SCSI drives to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions: one with 128 MB cache and the other with 256 MB cache. Also, there were performance issues with early 7.xx firmware/software revisions on the 6M at high I/O loads - hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?

  • Lifted - Tuesday, February 6, 2007 - link

    Based on the title of the article, Promise VTrak J300S, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs. 15K SAS? It's supposed to tell us what, exactly? You clearly need to look elsewhere for a SAS vs. U320 comparison if that's what you were expecting here. This was more for kicks, I think, and perhaps to make the J300S look better than ____ ??? I don't get it, it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article?

    Odd, quite odd.
  • fjeske - Friday, February 2, 2007 - link

    Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on StorageReview.com. Compare to Seagate's Cheetah 15K.5 and I think you'll see a difference.

    Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually pair up the buses when connecting this many drives.
  • cgaspar - Friday, February 2, 2007 - link

    SAS and SCSI drives have their disk write caches disabled by default, as the drives' caches are not battery backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, those writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
