Benchmarking Software RAID 6 on USB Flash Drives

While configuring software RAID 6 in Debian Etch on my Slug, I ran a series of rather basic tests to understand the impact of different chunk sizes.

Sidebar: Linux is rather smart in its handling of RAID. If you take a look at your logs, you will discover that it benchmarks several checksumming algorithms at boot time and picks the most efficient one:

kernel: raid5: measuring checksumming speed
kernel: arm4regs : 172.000 MB/sec
kernel: 8regs : 192.000 MB/sec
kernel: 32regs : 131.200 MB/sec
kernel: raid5: using function: 8regs (192.000 MB/sec)
kernel: raid6: int32x1 33 MB/s
kernel: raid6: int32x2 34 MB/s
kernel: raid6: int32x4 30 MB/s
kernel: raid6: int32x8 24 MB/s
kernel: raid6: using algorithm int32x2 (34 MB/s)
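These lines come straight from the kernel's boot messages. One way to pull the chosen algorithms back out later is to grep for the "using" lines; the sketch below greps a saved copy of the sample lines above so it is self-contained (on a live system you would pipe `dmesg` into `grep` instead):

```shell
# Extract the algorithm-selection lines from a saved boot log.
# On a running box: dmesg | grep 'using'
grep 'using' <<'EOF'
raid5: using function: 8regs (192.000 MB/sec)
raid6: using algorithm int32x2 (34 MB/s)
EOF
```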

Since it may save someone time and effort, I am posting the results of my tests below.

Test configuration:

  • Slug (Linksys NSLU2), de-underclocked.
  • Kensington Dome USB hub.
  • Six basic (and rather old) 64MB USB sticks – remember, we are not testing for maximum speed, just for relative speed as affected by RAID chunk size.

Important Notes:

  • These benchmarks do not show the effect of using different file systems (or of properly tuning them).
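For reference, an array like the ones tested here could be created along the following lines. The device names are assumptions (six USB sticks will enumerate differently on your system), and the sketch only prints the mdadm command instead of executing it, since `mdadm --create` is destructive to the member devices:

```shell
#!/bin/sh
# Hypothetical device names for the six USB sticks; adjust for your system.
DEVICES="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1"
CHUNK=32   # chunk size in KiB -- the variable under test in this post

# Echo rather than execute: creating the array wipes the members.
echo mdadm --create /dev/md0 --level=6 --raid-devices=6 \
     --chunk=$CHUNK $DEVICES
```

Rerunning the tests with a different chunk size just means changing `CHUNK`, stopping the array, and recreating it.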

chunk=32

------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=2k count=16000
16000+0 records in
16000+0 records out
32768000 bytes (33 MB) copied, 39.868 seconds, 822 kB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=4k count=8000
8000+0 records in
8000+0 records out
32768000 bytes (33 MB) copied, 9.8388 seconds, 3.3 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=8k count=4000
4000+0 records in
4000+0 records out
32768000 bytes (33 MB) copied, 10.6711 seconds, 3.1 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=16k count=2000
2000+0 records in
2000+0 records out
32768000 bytes (33 MB) copied, 10.929 seconds, 3.0 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=32k count=1000
1000+0 records in
1000+0 records out
32768000 bytes (33 MB) copied, 10.1632 seconds, 3.2 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=64k count=1000
1000+0 records in
1000+0 records out
65536000 bytes (66 MB) copied, 20.1916 seconds, 3.2 MB/s

chunk=16

------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=4k count=2500
2500+0 records in
2500+0 records out
10240000 bytes (10 MB) copied, 3.83433 seconds, 2.7 MB/s

------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=8k count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 26.0804 seconds, 3.1 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=16k count=10000
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 51.3674 seconds, 3.2 MB/s
------------------------------------------------------------

foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=32k count=2500
2500+0 records in
2500+0 records out
81920000 bytes (82 MB) copied, 25.3954 seconds, 3.2 MB/s
------------------------------------------------------------

chunk=8

foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=4k count=8000
8000+0 records in
8000+0 records out
32768000 bytes (33 MB) copied, 19.0489 seconds, 3.2 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=8k count=4000
4000+0 records in
4000+0 records out
32768000 bytes (33 MB) copied, 10.3234 seconds, 3.2 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=16k count=2000
2000+0 records in
2000+0 records out
32768000 bytes (33 MB) copied, 10.1259 seconds, 3.2 MB/s
------------------------------------------------------------
foobar:/etc/mdadm# dd if=/dev/zero of=/dev/md0 bs=32k count=1000
1000+0 records in
1000+0 records out
32768000 bytes (33 MB) copied, 10.9854 seconds, 3.0 MB/s
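All of the runs above follow one pattern: write roughly the same total amount of data through `dd` at different block sizes and compare the throughput it reports. A small loop automates the sweep. As a safety measure, this sketch targets a scratch file in `/tmp` rather than `/dev/md0`; point `TARGET` at your array device (with care, as it overwrites it) to reproduce the real test:

```shell
#!/bin/sh
# Write ~32 MB at each block size and let dd report the throughput.
# TARGET is a scratch file here; substitute /dev/md0 to test an array.
TARGET=/tmp/raid-bench.img
for BS in 2 4 8 16 32; do
    COUNT=$((32768 / BS))    # keep the total size constant at 32 MiB
    dd if=/dev/zero of=$TARGET bs=${BS}k count=$COUNT 2>&1 | tail -n 1
done
rm -f $TARGET
```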

The Bottom Line

It seems that the chunk size is virtually irrelevant, except for very small records (2k records were processed about twice as fast by an array with 32K chunks).

It is possible that there is a bottleneck elsewhere in the system, rendering these benchmarks completely meaningless.

If you think my methodology sucks and there is a better way – I am all ears!
