MFCF.math Raid 0 Setup of /fsys1 Partition


For general information about RAID disks, start with Robyn's document on RAID configuration. This page deals directly with setting up a Raid Level 0 disk on a host running the Solaris 2.6 OS.

Installing Software Tools

The Solaris tools to set up and administer raid devices are contained in the Solstice DiskSuite package. For Solaris 2.6 and 2.7 we use the DiskSuite 4.2 package from the "Easy Access Server 2.0" CD in the Solaris 7 Server media folder. (In Solaris 8, the DiskSuite packages are part of the OS media.) Unfortunately, DiskSuite is packaged to unpack into the /usr/opt tree (as opposed to the /opt tree). Since we don't want two /opt trees, we create a symbolic link so that /usr/opt points to /opt.
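For example (assuming /usr/opt does not already exist on the host):
# ln -s /opt /usr/opt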

Now we can install the software from CD.
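Something like the following, with the CD mounted under /cdrom (the exact package directory path on the CD is an assumption here):
# pkgadd -d /cdrom/cdrom0/DiskSuite_4.2/sparc/Packages SUNWmd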

Setting up raid devices

Whenever you're setting up Raid devices, you need to keep a database recording which disk slices you are using, on which physical disks, and how they are grouped to create the Raid devices. Solaris refers to this as the "metadb" database and requires you to keep a redundant set of metadbs (minimum 3, maximum 50) so that a majority-rules algorithm can be used to determine the correctness of the database. Also note that state database replicas are relatively small (517KB, or 1034 disk blocks, each).

Deciding the location of the metadbs

Create the metadbs on their own small disk slice(s)! Although metadbs can be placed on physical slices that are also part of a raid volume, this is not advised. If a metadb becomes corrupted, the only way to fix it is to delete it and create a new one. Unfortunately, you can only create new metadbs on empty disk slices, so if the metadbs share slices with a raid volume you need to back up the data on the raid volume; delete and recreate the metadb and the raid volume; and finally restore the data to the raid volume. That is a lot of extra work that can be avoided by giving the metadbs their own disk slices.

Details for Creating the Meta Database.

Use "format" utility to partition the disks as you want them.

Make sure the DiskSuite software is on your local path. As of Solaris 8, DiskSuite is part of the OS in /usr/sbin (if the appropriate packages have been installed).
# setenv PATH /opt/SUNWmd/sbin:/usr/sbin:$PATH

Creating the initial four State Database Replicas (one metadb per disk slice)
# metadb -a -f -c 1 c0t1d0s3 c0t1d0s7 c0t0d0s3 c0t0d0s7
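To confirm the replicas were created and are healthy (healthy replicas show only lower-case status flags; see the repair section below):
# metadb -i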

Details for Creating a Striped (Raid 0) Device.

Creating a striped metadevice (raid0) volume. The "1 2" arguments mean one stripe built from two slices, and -i 512k sets the interlace size.
# metainit d0 1 2 c0t1d0s4 c0t0d0s4 -i 512k

Creating the file system
# newfs -m 1 -d 0 -n 1 -i 16384 /dev/md/rdsk/d0
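The new file system can then be mounted like any other device; a sketch, assuming the /fsys1 mount point from this page's title already exists:
# mount /dev/md/dsk/d0 /fsys1
The corresponding /etc/vfstab entry would look like:
/dev/md/dsk/d0   /dev/md/rdsk/d0   /fsys1   ufs   2   yes   -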

Details for Creating a Mirrored (Raid 1) Device.
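
A minimal sketch, reusing the device names from the "cscf.cs" example later on this page: create two single-slice submirrors, make a one-way mirror from the first, then attach the second (metattach starts the initial resync).
# metainit d61 1 1 c1t0d0s6
# metainit d62 1 1 c2t0d0s6
# metainit d60 -m d61
# metattach d60 d62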

Details for Creating a Raid 5 Device.

Raid 5 devices need at least 3 component slices and should follow the 20% rule, i.e. less than 20% of the activity should be writes. This makes them useful for /usr, /opt and /fsys (where applications are installed) but not /u (home directories).
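A minimal sketch, assuming three equal-sized slices (these device names are hypothetical):
# metainit d5 -r c0t1d0s5 c0t2d0s5 c0t3d0s5 -i 32k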

Repairing corrupted "metadb" partitions.

Boot the system into single user mode, making sure the partition containing the "/etc/lvm" directory is mounted read/write.
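From the OpenBoot prom that would look like the following; assuming "/etc/lvm" lives on the root file system, remounting root read/write satisfies the requirement:
ok boot -s
# mount -o remount /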

Check status of the State Database Replicas (metadb devices)
# metadb -i
Any metadb whose status flags include capital letters is corrupted.

Delete any metadbs that are corrupted.
# metadb -d c0t1d0s3 c0t1d0s7

If you haven't already done so, now is the time to replace any failed disks. Remember to partition them appropriately.
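If the replacement disk has the same geometry as a surviving disk, the partition table can simply be copied over; a sketch, assuming c0t0d0 is a good disk and c0t1d0 is the replacement:
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2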

Recreate the metadbs that you just deleted.
# metadb -a c0t1d0s3 c0t1d0s7

Reboot the system to sync the metadbs. (Do we really have to, or is there another way?)
# reboot -- -s
Mount the root file system read/write.
# mount -o remount /

Repairing a Raid 0 (disk stripe) setup.

First check that all the metadbs are correct, as described above.

Clear the old stripe (raid 0) setup.
# metaclear d0

Now go back to the section above on creating a striped metadevice (raid0) volume and proceed from there.

Repairing a Raid 1 (disk mirror) setup.

Check the state of the raid setup.
# metastat
Check the "State:" field of each device to see if it says "Needs maintenance".

An example taken from the mirror setup on "cscf.cs" is:

d60: Mirror
    Submirror 0: d61
      State: Needs maintenance 
    Submirror 1: d62
      State: Okay         
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 61441920 blocks

d61: Submirror of d60
    State: Needs maintenance 
    Invoke: metareplace d60 c1t0d0s6 
    Size: 61441920 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Hot Spare
        c1t0d0s6                   0     No    Maintenance  

d62: Submirror of d60
    State: Okay         
    Size: 61441920 blocks
    Stripe 0:
        Device              Start Block  Dbase State        Hot Spare
        c2t0d0s6                   0     No    Okay         

In this case, disk partition c1t0d0s6 was still OK (this can be checked via the "format" or "metadb -i" commands), so we just had to inform LVM to enable the partition and resync it.
# metareplace -e d60 c1t0d0s6
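The resync then runs in the background; its progress can be watched by re-running metastat on the mirror (the submirror shows "Resyncing" until it completes).
# metastat d60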