For general information about RAID disks, you should probably start with Robyn's document on RAID configuration. These pages deal directly with setting up a RAID level 0 disk on a host running Solaris 2.6.
The set of Solaris tools to set up and administer RAID devices is contained in the Solstice DiskSuite package. For Solaris 2.6 and 2.7 we use the DiskSuite 4.2 package from the "Easy Access Server 2.0" CD in the Solaris 7 Server media folder. (In Solaris 8, the DiskSuite package is part of the OS media.) Unfortunately, DiskSuite is packaged to unpack into the /usr/opt tree (as opposed to the /opt tree). Since we don't want two /opt trees, let's create a symbolic link from /opt to /usr/opt.
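One way to create the link, taking the sentence literally (this assumes /opt does not already exist as a populated directory):
# ln -s /usr/opt /opt
If /opt is already populated, the link can go the other way instead ("ln -s /opt /usr/opt"), which makes /usr/opt point at /opt so DiskSuite's files land under /opt.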
Whenever you're setting up RAID devices, you need to keep a database of which disk slices you are using on which physical disks and how they are grouped to create the RAID devices. Solaris refers to this as the "metadb" database and requires you to implement a redundant set of metadb's (min 3, max 50) so that a majority-rules algorithm can be used to determine the correctness of the database. Also note that state database replicas are relatively small (517 KB, or 1034 disk blocks).
Use "format" utility to partition the disks as you want them.
Make sure the DiskSuite software is on your local path. As of Solaris 8, DiskSuite is part of the OS in /usr/sbin (if the appropriate packages have been installed).
#
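With DiskSuite 4.2 the commands typically end up under /usr/opt/SUNWmd (the exact location is an assumption; adjust to wherever the package installed on your host). For a Bourne or Korn shell:
# PATH=$PATH:/usr/opt/SUNWmd/sbin; export PATH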
Creating the initial four State Database Replicas (one metadb per disk slice)
#
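For example, to put one replica on each of four small slices (the slice names below are placeholders; substitute the slices you set aside with "format"). The -f flag is needed because no state database exists yet:
# metadb -a -f c0t1d0s7 c0t2d0s7 c0t3d0s7 c0t4d0s7
# metadb -i
The second command just lists the replicas to verify they were created.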
Creating a striped metadevice (RAID 0) volume.
#
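For example, to build a four-way stripe named d10 out of four data slices (the device names and the 32k interlace are illustrative):
# metainit d10 1 4 c0t1d0s0 c0t2d0s0 c0t3d0s0 c0t4d0s0 -i 32k
The "1 4" means one stripe made up of four slices.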
Creating the file system
#
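For example, assuming the stripe created above is d10 and will be mounted on /fsys (both names are placeholders):
# newfs /dev/md/rdsk/d10
# mkdir -p /fsys
# mount /dev/md/dsk/d10 /fsys
Add a corresponding /dev/md/dsk/d10 entry to /etc/vfstab if the file system should come back at boot.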
RAID 5 devices need at least 3 component devices and should follow the 20% rule, i.e. less than 20% of the activity should be writes. This makes them useful for /usr, /opt and /fsys (where applications are installed) but not /u (home directories).
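A minimal sketch of creating a RAID 5 metadevice from three slices (device names are placeholders):
# metainit d20 -r c1t1d0s0 c1t2d0s0 c1t3d0s0
# newfs /dev/md/rdsk/d20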
Boot the system into single-user mode, making sure the partition containing the "/etc/lvm" directory is read/writable.
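For example, from the OpenBoot prompt:
ok boot -s
or, from a running system:
# reboot -- -s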
Check the status of the State Database Replicas (metadb devices)
#
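For example:
# metadb -i
The -i flag also prints a legend explaining what each status flag letter means.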
Any metadb whose status flags are shown in capital letters is corrupted.
Delete any metadb's that are corrupted.
#
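For example, if the corrupted replica lives on c0t2d0s7 (a placeholder name):
# metadb -d c0t2d0s7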
If you haven't already done so, now is the time to replace any failed disks. Remember to partition them appropriately.
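One common way to copy the partition table from a surviving disk of the same geometry onto the replacement (device names are placeholders):
# prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2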
Recreate the metadb's that you just deleted.
#
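For example (placeholder slice name again); -f is not needed this time because other replicas still exist:
# metadb -a c0t2d0s7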
Reboot the system to sync the metadb's.
(Do we really have to, or is there another way?)
#
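For example:
# reboot
(or "init 6" for a shutdown that runs the rc scripts).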
Mount the root file system read/write.
#
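On Solaris this is typically a remount of the UFS root, assuming / came up read-only in single-user mode:
# mount -o remount,rw /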
After checking that all the metadb's are correct, clear the old stripe (RAID 0) setup.
#
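For example, assuming the old stripe is the d10 metadevice created earlier and its file system is mounted on /fsys (both placeholders):
# umount /fsys
# metaclear d10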
Now go back up to the part where we're about to create a striped metadevice (RAID 0) volume and proceed from there.
Check the state of the RAID setup.
#
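For example:
# metastat
or "metastat d60" to look at a single metadevice.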
Check the "State:" field of each device to see if it "Needs maintenance".
An example taken from the mirror setup on "cscf.cs" is:

d60: Mirror
    Submirror 0: d61
      State: Needs maintenance
    Submirror 1: d62
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 61441920 blocks

d61: Submirror of d60
    State: Needs maintenance
    Invoke: metareplace d60 c1t0d0s6
    Size: 61441920 blocks
    Stripe 0:
        Device      Start Block  Dbase  State        Hot Spare
        c1t0d0s6         0       No     Maintenance

d62: Submirror of d60
    State: Okay
    Size: 61441920 blocks
    Stripe 0:
        Device      Start Block  Dbase  State        Hot Spare
        c2t0d0s6         0       No     Okay
In this case, disk partition c1t0d0s6 was still OK (this can be checked via the "format" or "metadb -i" commands), so we just had to inform LVM to enable the partition and resync it.
#
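Following the "Invoke:" hint shown in the metastat output above, the command in this case was along the lines of:
# metareplace -e d60 c1t0d0s6
The -e flag re-enables the existing component and starts a resync rather than substituting a new device.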