Software RAID 6 on Debian Etch Micro-HOWTO

Introduction

RAID6 is very much like RAID5, only you can afford to lose two drives before you start to worry. Overkill? Recall that all your drives probably came from the same batch, and suddenly RAID6 feels barely sufficient.

RAID 6 is as standard as RAID 5 and offers comparable performance (allegedly slightly slower, but how much is your peace of mind worth to you?).

This document does NOT cover setting up bootable RAID 6. I believe that it is possible to set up bootable RAID6 (even with LVM if you want to be really fancy) in Debian Etch, but I have not tried it and, given my needs, am not likely to try it in the future. Modern mdadm is extremely helpful and will try to assist you with configuring bootable RAID during its installation. If you want to embark on that adventure, you may consider reading this document (if you know a better source, please post the link in the comments!).

I chose to stick with non-bootable RAID6: it is perfect for ever-changing data like your /home, /srv, or /var.

Disclaimer

Everything you see here has worked fine for me on my Slug running Debian Etch. (If you are interested in RAID6 on OpenSlug, check out my older HOWTO). There is no reason why it would not work on any other Etch installation. Still, my expertise is limited and your configuration is guaranteed to differ, so think before you do anything.

Step One: Creating Partitions

fdisk -l will show you the drives you got. Figure out which one is which, how many you intend to use, and whether you need to create new partitions. On flash memory the “physical” parameters of the “disk” (heads, sectors, cylinders) do not really mean much. Just make sure that the numbers add up. Otherwise your filesystems will get very confused later.

This is not an fdisk HowTo – read the man page, or this (oversimplified) HOWTO.

You do not really need partitions: you can assemble your RAID array from any kind of block devices. Unpartitioned drives will do just fine. Or, if you feel twisted, you can build an array from a USB flash stick, your old SCSI drive, and a partition on your brand spanking new SATA drive. That will work too. I built my array from six USB sticks. Because I could. If you do not need partitions, skip to the next chapter.

Within the fdisk menu use the p command and you will get something like:

Disk /dev/sda: 65 MB, 65536000 bytes
8 heads, 32 sectors/track, 500 cylinders
Units = cylinders of 256 * 512 = 131072 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         500       63984   83  Linux

In this case the math is simple: 131072 bytes/cylinder * 500 cylinders = 65536000 bytes, exactly the size fdisk reports. If you see any messages about the physical and logical parameters of the partition being different, take your time and sort it out. (I ignored them the first time around, at my own peril.)
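If you do not trust your mental arithmetic, the kernel will happily report the size directly (a quick sanity check, assuming blockdev from util-linux is installed):

# total size in bytes, straight from the kernel; should match fdisk's figure
blockdev --getsize64 /dev/sda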

Create a partition for your RAID (it does not usually make much sense to have multiple partitions from the same drive within the same array).

Use the t command within the fdisk interface to set the partition type to fd (Linux raid autodetect).

Exit fdisk using the w command: that is when all your changes are actually written to the drive.

Do not attempt to label your newly minted partitions: a label is a feature of the filesystem superblock, and there is no filesystem yet.

Your fdisk session will look something like this:

foobar:~# fdisk /dev/sdd

Command (m for help): p

Disk /dev/sdd: 65 MB, 65536000 bytes
3 heads, 42 sectors/track, 1015 cylinders
Units = cylinders of 126 * 512 = 64512 bytes

   Device Boot      Start         End      Blocks   Id  System

Notice that I dive into the expert menu to change the disc configuration – do not do it unless you have a really good reason:

Command (m for help): x

Expert command (m for help): c
Number of cylinders (1-1048576, default 1015): 500

Expert command (m for help): r

Back to the normal menu:

Command (m for help): p

Disk /dev/sdd: 65 MB, 65536000 bytes
3 heads, 42 sectors/track, 500 cylinders
Units = cylinders of 126 * 512 = 64512 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-500, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-500, default 500):
Using default value 500

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

foobar:~#

Repeat for every component of your future array.
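If all your drives are identical, you do not have to repeat the fdisk dance by hand. One possible shortcut (a sketch; sfdisk ships with util-linux, and this only makes sense for identically sized drives):

# dump the partition table of the finished drive and replay it onto the next one
sfdisk -d /dev/sdd | sfdisk /dev/sde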

Now on with the fun.

Step Two: Install RAID support

mdadm is a simple and effective tool that has taken over the niche previously occupied by raidtools.

The installation is as simple as it gets. There are a couple of tricky choices to be made, but mdadm gives you extremely detailed descriptions; I am truly humbled by the thought that has gone into making these decisions user-friendly.

Your installation process will look something like this:

foobar:~# apt-get install mdadm

Reading package lists... Done
Building dependency tree... Done

mdadm can notify you by email when your RAID fails, but it does not include a mailer, hence the recommended MTA below.

Recommended packages:
  mail-transport-agent
The following NEW packages will be installed:
  mdadm
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 230kB of archives.
After unpacking 484kB of additional disk space will be used.
Get:1 http://linux.csua.berkeley.edu stable/main mdadm 2.5.6-9 [230kB]
Fetched 230kB in 1s (178kB/s)
Preconfiguring packages ...
Selecting previously deselected package mdadm.
(Reading database ... 15786 files and directories currently installed.)
Unpacking mdadm (from .../archives/mdadm_2.5.6-9_arm.deb) ...
Setting up mdadm (2.5.6-9) ...
Generating mdadm.conf... done.

The following eight lines are specific to my hardware platform – NSLU2/Slug – and its somewhat byzantine boot process via initramfs.

update-initramfs: Generating /boot/initrd.img-2.6.18-5-ixp4xx
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
W: mdadm: falling back to emergency procedure in initramfs.
update-initramfs: Generating /boot/initrd.img-2.6.18-4-ixp4xx
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
W: mdadm: no arrays defined in configuration file.
W: mdadm: falling back to emergency procedure in initramfs.

Notice that the installer thoughtfully starts the monitoring service for you

Starting MD monitoring service: mdadm --monitor.

There are no RAID arrays yet, so yes, an attempt to assemble one will fail.

Assembling MD arrays...failed (no arrays found in config file or automatically).

foobar:~#

Your mdadm is now installed and running. You can confirm this by issuing the ps aux command: close to the end of its output you will notice /sbin/mdadm --monitor, which means mdadm is already monitoring your RAID arrays!
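If you would rather not scan the whole listing by eye, filter it (the brackets are a small trick that keeps grep from matching its own entry in the process list):

ps aux | grep '[m]dadm'

It would be a shame to let all that effort go to waste, so let's create an array for mdadm to monitor.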

Step Three: Create Your RAID Array

If you haven’t dealt with it before, take advantage of the following excellent sources (I would have never succeeded in this adventure without them):

mdadm: A New Tool For Linux Software RAID Management by Derek Vadala

mdadm(8) page on man-wiki

Devil-Linux Documentation

Since I am really lazy, way too lazy to look things up every time, you get the benefit of my writing down all the steps here 😉

BTW, there is only one step:

foobar:/etc/mdadm# mdadm --create --verbose /dev/md0 --level=raid6 --chunk=16 --raid-devices=6 --spare-devices=0 /dev/sd{a,b,c,d,e,f}

It’s OK to make mistakes at this point: you can always mdadm --stop /dev/md0 and start over.
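A do-over might look like this (a minimal sketch: --zero-superblock wipes the RAID metadata off a member device so that the next --create starts clean; run it for every member, after the array has been stopped):

# stop the array, then scrub the old metadata from each member
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda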

Important consideration: if you want to be able to partition your RAID in the future, you must create it as a “partitionable” RAID. Instead of specifying the device name for the array (I used /dev/md0), tell mdadm to create a partitionable array with whatever the standard name would be: --auto=p (see the sketch after the session output below).

mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid6 devices=6 ctime=Sat Nov  3 13:03:40 2007
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid6 devices=6 ctime=Sat Nov  3 13:03:40 2007
mdadm: size set to 63936K
Continue creating array? y
mdadm: array /dev/md0 started.

You are done! Since you have followed the links above and enthusiastically absorbed every detail ;-) very little comment is needed: clearly I reused a couple of drives, and mdadm gave me a chance to back off before it wiped them.
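If you do want the partitionable variant, the invocation might look like this (a sketch, not what I ran: with mdadm 2.x the conventional name for a partitionable array is /dev/md_d0, and its partitions later show up as /dev/md_d0p1, /dev/md_d0p2, and so on):

mdadm --create --verbose --auto=p /dev/md_d0 --level=raid6 --chunk=16 --raid-devices=6 /dev/sd{a,b,c,d,e,f}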

A word about chunk size: the default value is 64KB. The optimal value really depends on how you use your system. I have done some benchmarking, and it would seem that for USB flash sticks the chunk size is virtually irrelevant. Hard drives are a very different story, though.
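If you want to run your own comparison, even a crude test is revealing (a sketch: /srv/ddtest is a placeholder path on the mounted array, and the trailing sync keeps the page cache from flattering the numbers):

# time 100 MB of sequential writes, including the flush to disk
time sh -c 'dd if=/dev/zero of=/srv/ddtest bs=1M count=100 && sync'
rm /srv/ddtest

Recreate the array with a different --chunk value, repeat, and compare.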

Are we done yet? – Almost.

Let’s confirm that our RAID is alive and kicking:

foobar:/etc/mdadm# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
      255744 blocks level 6, 16k chunk, algorithm 2 [6/6] [UUUUUU]

A new array will take some time to sync; you will see the progress in /proc/mdstat.
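To watch the progress without retyping the command (watch comes with the procps package):

watch -n 5 cat /proc/mdstat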

Step Four: Finishing Up

If you created a partitionable array, use fdisk to “officially” create the partitions.

The device names associated with the RAID partitions on your system will be created automatically, no thinking required :-)

Now format with the filesystem of your choice. For ext2/ext3 there are very important performance considerations covered here.
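For the command below the arithmetic behind the stride option is: stride = chunk size / filesystem block size = 16 KB / 4 KB = 4.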

foobar:/etc/mdadm# mke2fs -j -b 4096 -R stride=4 /dev/md0
mke2fs 1.40-WIP (14-Nov-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
63936 inodes, 63936 blocks
3196 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=67108864
2 block groups
32768 blocks per group, 32768 fragments per group
31968 inodes per group
Superblock backups stored on blocks:
        32768

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

You are DONE! Congratulations! Go ahead, mount your filesystem wherever you want, live long and prosper.

Conclusion: Housekeeping Considerations

  • Many people believe that it is good practice to have your RAID described in the /etc/mdadm.conf file. Some disagree. Your choice.
  • If you used USB devices for your array, /etc/mdadm.conf is your best friend. Unlike IDE, SCSI, and SATA devices, USB drives do not necessarily retain their names. What used to be your /dev/sdc last time might be something very different now. The way around it is to address them by device ID. But who wants to type /dev/disk/by-id/usb-Flash_Drive_UT_USB20_00000000001950 instead of /dev/sdc? This is when you start defining your arrays in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on some systems); see the sketch after this list.
  • You really want to make sure that you are notified when one of your drives croaks. RAID buys you the luxury of not having a heart attack on the spot, but… watch out for those pesky gremlins! You can easily configure notification in mdadm.conf. I have installed ssmtp (it replaces exim4 during installation) and followed the excellent directions in the archlinux wiki. Now I sleep much better.
  • My RAID starts automagically (I did not do anything, I just defined it in mdadm.conf). Since I wanted to keep my /srv on it, all I had to do was add an extra line to my /etc/fstab:
/dev/md0        /srv            ext3    defaults,noatime,nodiratime,commit=60        0       2

(you may want to add data=writeback for extra performance, just make sure you understand the implications)

  • For the owners of the Slug and other devices with low memory: since RAID6 gives you a certain peace of mind, you can save a bit of memory (about half a meg) by not running mdadm --monitor as a daemon; just set it to fire up from cron every couple of hours or so, as in the sketch below (if you want to be notified of a disc failure within 60 seconds, though, the daemon option is the way to go).
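Here is a minimal sketch tying the mdadm.conf points together (the email address and the cron schedule are placeholders; review the generated lines before trusting them):

# append the definitions of the currently running arrays to the config file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# tell the monitor where to send its alerts
echo 'MAILADDR you@example.com' >> /etc/mdadm/mdadm.conf

# crontab entry for the low-memory option: scan once every two hours and
# send mail if anything is degraded, instead of keeping the daemon around
0 */2 * * * /sbin/mdadm --monitor --scan --oneshot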

Your comments, examples, and opinions are welcome. I wrote this in hope of learning something new 😀



Comments

  1. I used your guide in the past, thanks for that.

    Now I’ve built an 8 TB software RAID 6 based on Linux software RAID and 10 disks.

    Currently it is being grown by one disk; however, that is seriously slow, since all disks must read and write simultaneously.

    I don’t know what your performance figures are (and whether linear speeds are of concern to you), but I was very well pleased to see 850 MB/s read speeds and 300 MB/s write speeds.

    1. Thanks for the numbers!

      My own implementation was on a bunch of flash drives, but I bet many people will thank you for the hard drive figures.

      Thank you for sharing!


  2. Thanks heaps for the guide, just setting up RAID-6 as a backup server. Appreciate the performance tips too 🙂



