Installing R, Rmpi & Bioconductor on Beowulf Cluster Running ClusterVision OS

We recently purchased a new computer cluster here and I was tasked with installing R & Rmpi on the thing. Not an especially easy task, so here’s how you do it.

First we need to install R on the slave image. The slave image is basically a directory tree stored on the master node that the slave nodes download and use as their filesystem when they boot up.

So to install files in the slave image, first log in as root on the master node. If your system is set up with a separate login node, exit from it by typing "exit" to get back to the master node.

On our system the slave image lives at /cm/images/default-image/, which differs from the location given in our manual; apparently it was moved and the manual was never updated, so I had to email support to find this out. The following command tells the yum package manager to install R into the slave image's directory tree.

yum --installroot=/cm/images/default-image/ install R.x86_64
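It's worth emphasising that the slave image really is just an ordinary directory tree; listing it shows the usual top-level directories of a Linux root filesystem:

[root@cluster]# ls /cm/images/default-image/
bin  boot  dev  etc  home  lib  lib64  opt  root  sbin  tmp  usr  var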

Now, to install the Bioconductor packages in the copy of R we have just created, we need to change the root directory to the slave image using "chroot", start R, and then download and install the packages.

[root@cluster]# chroot /cm/images/default-image/
[root@cluster]# R
> source("http://bioconductor.org/biocLite.R")
> biocLite()

On our cluster, some packages required by certain Bioconductor libraries were not installed, so we needed to install them as follows before running biocLite() as above. You may have to do something similar when installing on the master node (see below).

yum --installroot=/cm/images/default-image/ install gcc-gfortran.x86_64 libxml2-devel.x86_64 curl-devel.x86_64

The slave nodes now need to be restarted so that they download and boot from the updated slave image. This took about 5 minutes on our system. Enter cmsh and use the following command to restart the slaves.

[root@cluster]# cmsh
% device power reset -c slave
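Once the nodes have come back up you can check that they are all in the UP state. On our (Bright-derived) version of cmsh the following does this, though the exact command may differ on your version:

[root@cluster]# cmsh
% device status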

Next we must install R and Rmpi on the master node. First type "exit" to leave the chroot environment we entered above; we now have a normal shell again. Then install R and Rmpi:

[root@cluster]# yum install R.x86_64
[root@cluster]# R
> source("http://bioconductor.org/biocLite.R")
> biocLite("Rmpi")

On our system, which uses the Sun GridEngine queuing system, we use the following shell script to submit a script called "task_pull.R". This shell script may be quite different depending on your system's setup, or if you are using PBS, but the ClusterVision user's guide is a good place to start. I've included comments in the script below.

#!/bin/sh
#
# Your job name
#$ -N My_Job
#
# Use current working directory
#$ -cwd
#
# Merge the standard output and standard error into one file
#$ -j y
#
# pe (Parallel environment) request. Set your number of processors here.
#$ -pe mpich 15
#
# Run job through bash shell
#$ -S /bin/bash

# If modules are needed, source the modules environment:
. /etc/profile.d/modules.sh

# Add any modules you might require:
module add shared mvapich2 openmpi

# The following output will show in the output file. Used for debugging.
echo "Got $NSLOTS processors."
echo "Machines:"
cat $TMPDIR/machines

# Start a single R process; Rmpi spawns the slave processes from within task_pull.R
mpirun -np 1 R CMD BATCH task_pull.R

Now, to submit the job to the cluster, type the following line at the command prompt, where "all.q" is the name of the queue you wish to submit to and "test_Rmpi.sh" is the name of the shell script above.

qsub -q all.q test_Rmpi.sh
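You can then keep an eye on the job using the standard GridEngine tools, for example:

qstat -u "*"
qstat -j <job_id>

The first shows all jobs currently in the queues; the second gives detailed information on a single job.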

The sample code that I'm running ("task_pull.R") can be found here: http://math.acadiau.ca/ACMMaC/Rmpi/examples.html. There are some useful examples on this site and it is a good guide to MPI in R, although it doesn't provide information on setting up the system.
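If you just want to verify that Rmpi itself is working before running the full example, a minimal script along the following lines (my own sketch, not the ACMMaC code; the filename is arbitrary) can be substituted for task_pull.R in the submit script. It spawns the slaves and has each one report where it is running:

# minimal_rmpi_test.R - a quick sketch to verify the Rmpi installation
library(Rmpi)

# Spawn one R slave per slot granted by the queuing system
mpi.spawn.Rslaves()

# Have each slave report its rank and the host it is running on
mpi.remote.exec(paste("I am slave", mpi.comm.rank(), "of",
                      mpi.comm.size() - 1, "running on",
                      mpi.get.processor.name()))

# Shut the slaves down cleanly and exit
mpi.close.Rslaves()
mpi.quit()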

Keep it real.
