SysAdmin to SysAdmin: Using RAID with PVFS under ROCKS

    Date: 30 Dec 2004
    Posted by: Brittany Day
    I administer a newly deployed ROCKS compute cluster, and I use the Parallel Virtual Filesystem (PVFS), which comes with the ROCKS Linux distribution, to provide a parallel I/O system. For those who are not familiar, check out my earlier ROCKS article, as well as my earlier article about PVFS. My cluster is slightly older hardware -- dual PIIIs, and each PC has two hard drives. Initially, I thought having two drives was great news, because I could add all of the capacity of the second drive, along with the unused capacity of the first drive, to grant large amounts of scratch space to the cluster users, some of whom would be more than happy to have it.

    However, what I didn't realize was that PVFS can use only a single mount point for its data storage needs. I couldn't tell PVFS to use both /dev/hda3 and /dev/hdb1. Then someone on the ROCKS list suggested I consider using RAID, though he hadn't tried it himself. I was game, and it works wonderfully. So here's how I did it.
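The idea is to combine the two partitions into one software RAID device, which then provides the single mount point PVFS wants. As a rough sketch (the partition names match the layout described above, but the md device, filesystem, and mount point are assumptions for illustration, and these commands destroy any data on the partitions):

```shell
# Build a striped (RAID-0) md device from the spare partition on the
# first drive plus the whole second drive.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda3 /dev/hdb1

# Put a filesystem on the new device and mount it where the PVFS I/O
# daemon will keep its data (the path is an assumption).
mkfs -t ext3 /dev/md0
mkdir -p /pvfs-data
mount /dev/md0 /pvfs-data

# Make the mount survive reboots.
echo '/dev/md0  /pvfs-data  ext3  defaults  0 0' >> /etc/fstab
```

RAID-0 stripes across both drives for extra throughput; `--level=linear` would simply concatenate them if the partitions are very different sizes.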

    Moving the metadata server off the head node

    On a ROCKS cluster, the head node is called frontend-0-0. In testing, this is where my PVFS metadata server lived. However, the frontend also serves home directories to the rest of the cluster and handles communication between the scheduler and the workload management daemons across the cluster. It also gathers statistics on the cluster, pushes out administrative changes to the nodes, and runs a web server. That's more than enough load without also coordinating all of the PVFS clients and matching their requests with the 16 PVFS I/O nodes.
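In PVFS terms, relocating the metadata server means running the metadata daemon (mgr) on a node other than the frontend and pointing every client's /etc/pvfstab at the new host. A hedged sketch, assuming the metadata moves to a compute node named compute-0-0 with its metadata directory at /pvfs-meta (both names are assumptions; adjust for your cluster):

```shell
# On the new metadata node (compute-0-0): start the PVFS metadata
# daemon against the prepared metadata directory.
/usr/sbin/mgr

# Each client's /etc/pvfstab tells the PVFS client code where the
# metadata server lives, in an fstab-like one-line format:
#   compute-0-0:/pvfs-meta  /mnt/pvfs  pvfs  port=3000  0  0
# ROCKS can push this change to every node with cluster-fork:
cluster-fork 'echo "compute-0-0:/pvfs-meta /mnt/pvfs pvfs port=3000 0 0" > /etc/pvfstab'
```

With that in place, the frontend goes back to serving home directories and running the cluster, while a compute node handles the PVFS metadata traffic.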