Firewire and USB
After looking at many options for Firewire and USB testing, we finally determined that an external USB 2.0, Firewire 400, and Firewire 800 hard disk would be a sensible way to look at USB and Firewire throughput. We utilize a RAM disk as our "server", since memory removes almost all overhead from the serving end. We turn off disk caching on the USB and Firewire side by setting up the drives for "quick disconnect" so our results are consistent.
We use 1GB of fast 2-2-2-5 memory, set up as a 450MB RAM disk with the remaining 550MB left as system memory. Our standard file is the SPECviewPerf install file, which measures 432,533,504 bytes (412.4961MB). After copying this file to our RAM disk, we measured the time to write it from the RAM disk to our external USB 2.0, Firewire 400, or Firewire 800 drive using our internal Windows-based timing program. The copy times in seconds were then converted into megabits per second (Mb/s) to provide a convenient means of comparing throughput. Higher rates therefore mean better performance in this particular test.
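For reference, the conversion itself is simple; the short Python sketch below illustrates the idea. It is not our internal Windows-based timing program, and the drive letters and file name are hypothetical placeholders.

import shutil
import time

# Hypothetical paths: R: is the RAM disk holding the test file, F: is the
# external USB 2.0 / Firewire 400 / Firewire 800 drive under test.
SOURCE = r"R:\specviewperf_setup.exe"
TARGET = r"F:\specviewperf_setup.exe"
FILE_BYTES = 432_533_504  # size of our standard SPECviewPerf install file

start = time.perf_counter()
shutil.copyfile(SOURCE, TARGET)  # write from the RAM disk to the external drive
elapsed = time.perf_counter() - start

# Convert the copy time into megabits per second (1 Mb = 1,000,000 bits).
throughput_mbps = (FILE_BYTES * 8) / (elapsed * 1_000_000)
print(f"Copy time: {elapsed:.2f} s, throughput: {throughput_mbps:.1f} Mb/s")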
The MSI board offers the excellent VIA-based IEEE 1394 Firewire option, providing better performance than the TI chipset solutions. USB 2.0 throughput is competitive with the other ULi and NVIDIA based controllers. We did not note any Firewire or USB 2.0 issues with our inventory of peripherals, which included streaming data from our USB 2.0 or Firewire 400/800 external storage drive to a USB 2.0 optical drive for backup purposes. Of course, as good as the VIA solution is, it's still a far cry from the performance of Firewire 800. It's surprising that so few companies opt for the faster IEEE 1394b standard, as the price difference can't be very large.
Ethernet Performance
As with recent motherboard reviews, networking performance is measured using the Windows 2000 Driver Development Kit LAN testing utility ntttcp. We set up one machine as the server: an Intel system with an Intel CSA Gigabit LAN connection. Intel CSA has a reputation for providing fast throughput and is a logical choice for our Gigabit LAN server. On the server side, we used the following Command Line:
ntttcpr -m 4,0,<server IP> -a 4 -l 256000 -n 30000
On the client side (the motherboard under test), we used the following Command Line:
ntttcps -m 4,0,<client IP> -a 4 -l 256000 -n 30000
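For those who prefer to script the client-side run rather than launch it by hand, a minimal Python sketch follows. The client IP is a hypothetical placeholder, and the script simply echoes whatever summary ntttcp prints to the console; it is an illustration, not the procedure we used.

import subprocess

CLIENT_IP = "192.168.1.20"  # hypothetical address of the board under test

# Mirror the client-side command line shown above.
cmd = ["ntttcps", "-m", f"4,0,{CLIENT_IP}", "-a", "4", "-l", "256000", "-n", "30000"]

result = subprocess.run(cmd, capture_output=True, text=True)

# ntttcp prints its throughput and CPU utilization summary to the console;
# echo it so the figures can be read off (or logged) after the run.
print(result.stdout)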
At the conclusion of the test, we captured the throughput and CPU utilization figures from the client screen.
The NVIDIA nForce4 SLI Southbridge natively supports Gigabit Ethernet operation, so the choice of the PCI-E based Marvell 88E8053 10/100/1000Mb/s Ethernet controller as the secondary Gigabit Ethernet controller is an excellent one by MSI.
The performance of the Marvell 88E8053 Gigabit LAN controller is on par with the other boards, except for the throughput issue noticed on the ECS implementation of this controller. The Marvell solution provides better throughput than the NVIDIA PHY, but at the expense of greater CPU utilization. We generally used the Marvell controller in our online gaming and LAN sessions, as it seemed less prone to throughput spikes across our Gigabit network. Although this is a subjective impression, we are investigating a test suite to look further at the performance of the network controllers in real-world applications.
NVIDIA has tweaked its implementation of ActiveArmor in the latest driver releases by reducing the amount of offloading the ActiveArmor engine provides, thereby reducing the amount of hardware-based Gigabit Ethernet acceleration. As a result, CPU utilization is not as low as before, but this was done to avoid the data corruption issues that have dogged NVIDIA since ActiveArmor was introduced. In our testing, the driver changes appear to work: we could not replicate a user scenario that resulted in data corruption with previous releases. However, we are still testing the driver updates and will provide more information in a future article.
All standard Ethernet tests were performed with standard frames and the NVIDIA ActiveArmor suite disabled unless otherwise noted. Gigabit Ethernet also supports jumbo frames, which provide a further reduction in CPU overhead.
42 Comments
OvErHeAtInG - Tuesday, April 18, 2006 - link
Let me be the devil's advocate here. I know AT reviews (and others) usually regard same-colored RAM slots to be "correct" for dual-channel operation, but for me it's always made more sense when it's like this MSI board--one color for each channel. Of course, it wouldn't be confusing if manufacturers just chose one standard color scheme and left it at that.
Gary Key - Thursday, April 20, 2006 - link
We completely agree about having a standard color scheme. The majority of boards have different colors for dual channel operation, so our comments are based on this fact. A previous MSI board we tested followed this pattern, and then they changed it on this board. It is too confusing in my opinion when a single supplier cannot agree on a color scheme between board releases. This is certainly not an MSI-only issue either. :)
Wesleyrpg - Tuesday, April 18, 2006 - link
Hmmm, is this common knowledge? One of my friends suffers from this so badly that we had to set up his cable modem to use the USB port (yuck). Are there any workarounds or beta drivers? I'm sure NVIDIA wouldn't want this leaked if it was true? Why haven't they done anything about it? The current AMD driver 6.70 is about 6 months old now and still corrupts data. Damn them!!!!
Gary Key - Thursday, April 20, 2006 - link
Yes, this is fairly common knowledge and we have actually referred readers to NVIDIA for assistance. I do have a new set of drivers for the Business Platform system and will be testing them next week.
Wesleyrpg - Tuesday, April 18, 2006 - link
hmmm, seems NVIDIA have more than just an issue with their ActiveArmor; from what I have been reading there are also huge problems with IDE/SATA file transfers as well. Wish I'd known about these issues earlier; up until today I'd been recommending the nForce4, but with all these issues it's hard to recommend them at all. Why haven't any articles been written up on the MAJOR issues with the nForce4 chipset?
Per Hansson - Sunday, April 16, 2006 - link
Just replying to your message to me in the other mobo review: "Please email me - I have the photographs. We are doing some revisions on the engine and could not get these in but I did take the photographs for you." I'm not the one that needs the pictures, I can identify the caps anyway... I was just thinking that it would be a nice addition to your articles, in case there were other readers who were interested in this too...
This board looks really nice though, United Chemicon KZG everywhere it seems, except for the Sanyo Polymers at the VRM, and plenty more than what should "really" be necessary for a budget board like this... (okay, there looks to be one or four odd caps in the PCI section, but they often do not see very much ripple current so it should be ok)
tekkstore - Monday, April 17, 2006 - link
http://www.tekkstore.com
AnnonymousCoward - Friday, April 14, 2006 - link
Macs still don't have a right mouse button? When will they put their stubbornness behind them?
I think you guys need to check your facts on the southbridge. According to the HardOCP article, the board has the ULi 1575 southbridge. Are you sure the board has the nForce4 SLI on the southbridge, as that chipset is usually on the northbridge?
Gary Key - Wednesday, April 12, 2006 - link
We will not comment in an open forum about another website's information that might or might not have been posted. However, I can assure you this board utilizes the NVIDIA nForce4 SLI for the Southbridge (MCP) and the C51D for the Northbridge (SPP), as explained on the front page. If you still question our statements, then please visit MSI's website, where the chipset information is available for this board - http://www.msi.com.tw/program/products/mainboard/m... (MSI K8N Diamond). Thanks!