Saturday, July 18, 2009

Ubuntu: Mount a RAID array, show in Gnome Places as a single disk


DISCLAIMER: I accept no responsibility for any damage you may do by following the instructions below - follow them at your own risk!! I advise you to read all of the instructions before you start.


After the pathetic lack of responses on Linux forums to this question here and here, I decided to spend a large amount of my time getting this to work so some of you don't have to. In all honesty I've not posted this back on those forums because I want the website traffic for my two days of sweat and blood spent finding a clean solution to clear deficiencies in the Gnome desktop.

Background for Ubuntu RAID (Important read! :) )

First of all, although your mainboard may have been sold to you on the premise that it has a hardware RAID controller built in, many in fact have what is known as a soft RAID (or "fakeRAID") controller: basically a chip that can link two IDE (disk) channels together, while the actual RAID work is done by software in the OS, i.e. a driver in Windows.

This causes a problem in Ubuntu: without that software part the array shows up as two individual disks, and if you mount either one separately you will break your RAID setup, as one disk will end up out of sync - so don't do it!

Gnome "Places" on the menu adds these drives, unmounted, to the "Places" menu on the "taskbar" (the top panel) with no way of altering them in the UI (sure you can change bookmarks, but drives don't show as bookmarks).

The solution is simple, but I've documented it in depth here for those unfamiliar with the command line (xterm). Sadly you need to use an xterm to set the drives up initially, but don't worry, you won't need to touch it again after that.

The Solution (The RAID half)

First of all you need to determine which soft RAID controller you have - use the dmraid tool for this. If Ubuntu doesn't already have it, type the following on the command line (in an xterm):

sudo apt-get install dmraid
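
If you're not sure whether it's already installed, a quick way to check (this just looks the program up in your PATH) is:

which dmraid

If that prints a path such as /sbin/dmraid, you already have it and can skip the install.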

Now you need to run it to find your array (this only lists it, it doesn't add it):

sudo dmraid -r

You will be shown output like the following:

ERROR: sil: only 3/4 metadata areas found on /dev/sdb, electing...
/dev/sdb: "via" and "sil" formats discovered (using sil)!
ERROR: sil: only 3/4 metadata areas found on /dev/sda, electing...
/dev/sda: "via" and "sil" formats discovered (using sil)!
/dev/sdb: sil, "sil_afajdgcdejbj", mirror, ok, 234439600 sectors, data@ 0
/dev/sda: sil, "sil_afajdgcdejbj", mirror, ok, 234439600 sectors, data@ 0

In my case the device was incorrectly recognised as a "sil" (Silicon Image) RAID device - the output told me it had guessed the format as "sil" instead of "via" (which is what it actually is) - so I needed to type the following to add it and recognise it correctly:

sudo dmraid -ay -fvia

which tells it to recognise the RAID device as a "via" AND adds it as a new device.

You may just need to do (no need to specify the format):

sudo dmraid -ay

You will be shown output like the above; make a note of the weird multi-character name...
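
If you want to double-check that the array really was activated before mounting it, listing the device-mapper nodes should show an entry with that name (the exact name will differ on your system):

ls /dev/mapper/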

Now, to make it available to the system (dmraid has just added the RAID array as a new device, but it's not mounted yet), type:

sudo mkdir /media/raid_disk
sudo mount /dev/mapper/via_yournamehere /media/raid_disk/

where via_yournamehere is the name of the sil, via or other RAID device - the weird name I told you to make a note of. You can change "raid_disk" to whatever you want to call the RAID array, e.g. "WindowsXP" or "Data" (this does not change any RAID data or names, just where you see it in Ubuntu).
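
To confirm the mount actually worked, you can ask the system what is mounted where - both of these are standard commands, nothing RAID-specific:

df -h /media/raid_disk
mount | grep raid_disk

If you see your array's size and the /dev/mapper device listed, the mount succeeded.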

Your RAID drive is now viewable in Gnome by browsing to /media/raid_disk - but you'll need to make this permanent so it mounts when the machine starts.

Type:

sudo gedit /etc/rc.local &

This opens a text editor. The file may be empty, but either way add the two lines we've just tested at the bottom (if your rc.local ends with an "exit 0" line, add them above it) so it looks like the following (the example is from my own rc.local - remember I needed the "-fvia" bit, you may not). Note you don't need the sudo command any more:

/sbin/dmraid -ay -fvia
mount /dev/mapper/via_dgcgebhjdg1 /media/WindowsXP/
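
A small caveat: those lines assume the mount point still exists at boot time. If you want to be defensive about it, a slightly longer sketch of the same idea (using my device name and mount point - substitute your own) would be:

# create the mount point if it has gone missing, then activate and mount the array
[ -d /media/WindowsXP ] || mkdir -p /media/WindowsXP
/sbin/dmraid -ay -fvia
mount /dev/mapper/via_dgcgebhjdg1 /media/WindowsXP/

The plain two lines are fine for most setups; this is just belt-and-braces.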


Now your RAID device will be loaded and mounted whenever you boot Ubuntu. Sadly it will now show up as three drives in Gnome "Places". If you've chosen to give the RAID array the same name as the individual disks, the only one actually mounted will be the RAID device, so if you click an entry in "Places" and are prompted with a security warning asking to mount a disk, DON'T DO IT - CLICK CANCEL! The one that opens a new file explorer window without any mounting prompt is the RAID array.

We can see how to fix "Places" in the next part of the solution...

The Solution (The Gnome Places half)

Due to a stupid deficiency in Gnome you can't easily edit the drives shown in "Places" on the menu bar panel. Since Gnome finds and adds the drives via the part of the Linux system that looks after devices (HAL, which gvfs talks to, if I remember correctly), you need to tell that layer to ignore the extra disks (i.e. the RAID array's individual disks found on startup).

Type the following:

sudo mkdir /usr/share/hal/fdi/preprobe/95userpolicy

This creates a folder for a new device probe policy - basically a set of rules about how certain devices are handled.
Now create a new rule set in this folder to ignore some disks:

sudo gedit /usr/share/hal/fdi/preprobe/95userpolicy/10ignore-disks.fdi &

If you already know the device names of the drives you want to remove from "Places" you can skip this next step; otherwise, type on the command line/xterm (not in the editor!):

sudo fdisk -l

to list the drives on your system. You should be able to identify them from the size information; note that you need the full partition name, not just the drive, i.e. /dev/sda1 NOT /dev/sda.
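
If the sizes alone don't make it obvious, another way to look at the same information (assuming blkid is installed, which it should be on a standard Ubuntu system) is:

sudo blkid

which lists each partition along with its filesystem type, and can help confirm which device names you are dealing with.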

Then, in the text editor, add the following for each drive you don't want to show in "Places" (where "/dev/sda1" is the device name of the drive showing in Places - in my case I needed to remove the two RAID disks that were identified individually, /dev/sda1 and /dev/sdb1; this will not stop them being used as a RAID array by dmraid). For example, my 10ignore-disks.fdi file has two entries:

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="block.device" string="/dev/sda1">
      <merge key="info.ignore" type="bool">true</merge>
    </match>
  </device>
  <device>
    <match key="block.device" string="/dev/sdb1">
      <merge key="info.ignore" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>

For every extra disk you want to hide, just add a new <device> element inside the <deviceinfo> element, changing the bit that says string="/dev....." - or, if you are reading this and just want to know how to hide a single device, you only need one of the <device> elements above, not two.

Once you save this file and reboot, Places will no longer show these drives. If you get it wrong (and hide the wrong disks), just redo this last step.
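
Two optional extras here. If you want to check the XML is well-formed before relying on it (this assumes you have xmllint available - on Ubuntu it comes in the libxml2-utils package):

xmllint --noout /usr/share/hal/fdi/preprobe/95userpolicy/10ignore-disks.fdi

No output means the file parses cleanly. Also, as Mark-Willem points out in the comments below, a full reboot may not be necessary - restarting HAL should be enough for the new rule to be picked up:

sudo /etc/init.d/hal restart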

Drop me a post if you find this useful :)

7 comments:

Anonymous said...

Nice article. I have a megaraid IDE card which presents my RAID 5 (500GB) and RAID 0 (160GB) volumes as /dev/sdb and /dev/sdc. Linux does not know these are RAID devices as that is handled completely by hardware. The thing is, when I put these in fstab, I get an error saying /dev/sdb is invalid somehow (sorry, I don't have the exact message). I suspect it has something to do with the fact that somehow I managed to create the filesystem on /dev/sdb, not /dev/sdb1. I'm not sure how it even let me do that. I'm afraid to try to put a partition table on it in case I lose my data. I don't mind having Gnome put both drives under the Places menu, but it's annoying me because it seems to violate the "everything-is-in-fstab" rule that Linux has had.

Piyoosh said...

Hi Benjamin!

Thanks for posting this solution. However, in my case, it seems to take care only of the "Places" in the KDE file manager, Dolphin. For GNOME, the problem with Nautilus still stays the same. In fact, for some applications, when using "File->Open" too, the "places" still have these volumes visible. Any ideas on what other solutions could solve the problem?

Benjamin said...

Hi Piyoosh,

These instructions were for 9.04 if I remember correctly. I have the same problem on 9.10 - three drives show up in Gnome, as the fix for Gnome no longer works. I've not been able to get around this problem; however, I make sure that my RAID disk starts with the machine (rc.local as per the instructions), so I know that if I click the wrong disk (a disk in the RAID array) it will prompt me to mount it, which I then just ignore.

If you find a fix please tell me!

Benjamin

Piyoosh said...

Sure, I'll post an article on my blog as soon as I resolve it and also drop a link here at the same time. I guess I'll have to look into the details of the other ways to safely hide the devices from various applications. This is going to be quite a detailed study, so I'll probably be checking this out in my vacation.

Anonymous said...

When I use the "sudo mount /dev/mapper/via_yournamehere /media/raid_disk/" command it asks me to provide a filesystem type with "-t <filesystem>" after sudo mount. I don't know what to do at this point.

Benjamin said...

Anonymous:

In my experience, if it prompts for the filesystem then it cannot automatically determine what filesystem is being used. Usually, if this is a Windows-formatted array, you can try: -t ntfs
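
i.e. something along the lines of (using the same example device name as in the article):

sudo mount -t ntfs /dev/mapper/via_yournamehere /media/raid_disk/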

Mark-Willem said...

This tutorial helped me find the solution to the problem that I had with the e17 "places" module. This module was also showing all the separate disks. I do have some pointers that may help Piyoosh.

The first pointer is to put the fdi file not in the /usr/share/hal/fdi/preprobe/95userpolicy directory but in /etc/hal/fdi/preprobe. The /usr/share directory is normally used by packages to store their files; the /etc directory is normally for configuration files like these.

The second pointer is to use the following lines in your fdi file for every block device.
<device>
  <match key="block.device" string="/dev/sda1">
    <merge key="volume.fstype" type="string">nvidia_raid_member</merge>
    <merge key="volume.fsusage" type="string">raid</merge>
  </match>
</device>
This way you tell hal that the volume is part of a RAID volume. Of course, you could change "nvidia_raid_member" to something more suitable for your system. This can maybe help Piyoosh. You could also try to use "volume.ignore" instead of "info.ignore". In a standard lshal output info.ignore is not present, but volume.ignore is. I have not tested this.

Last pointer: you do not need to reboot the system, you can just restart hal with:
sudo /etc/init.d/hal restart