OmicsPipe on the Amazon Cloud (AWS EC2) Tutorial¶
OmicsPipe on AWS uses a custom StarCluster image, created with docker.io, that installs docker.io, environment-modules, and EasyBuild on an AWS EC2 cluster. All you have to do is get the Docker image, upload your data, launch the Amazon cluster, and run a single command to analyze all of your data according to published, best-practice methods.
Step 1: Create an AWS Account¶
- Create an AWS account by following the instructions at Amazon-AWS
- Note your AWS ACCESS KEY ID, AWS SECRET ACCESS KEY and AWS USER ID
Step 2 (Mac or Linux): Install StarCluster and download config/plugins¶
- Install StarCluster on your machine following the StarCluster instructions
- Download the template Omics Pipe StarCluster configuration file (config) and three plugin files (sge.py, sgeconfig.py, omicspipe_config_prebuilt.py) from Omics Pipe Bitbucket
- Move downloaded config file to ~/.starcluster/config
- Move downloaded plugin files to the ~/.starcluster/plugins/ folder.
- Go on to configure StarCluster by following directions below in Step 3.
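As a minimal sketch of the steps above (assuming StarCluster is installed with pip and the downloaded files sit in your current directory; adjust the paths to wherever your browser saved them):
# Sketch only: install StarCluster and put the Omics Pipe config and plugins in place
pip install StarCluster
mkdir -p ~/.starcluster/plugins
mv config ~/.starcluster/config
mv sge.py sgeconfig.py omicspipe_config_prebuilt.py ~/.starcluster/plugins/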
Step 2 (Windows): Load the OmicsPipe on AWS docker image on your machine¶
- Download docker.io following the instructions for your operating system at Get-Docker
From inside the Docker environment, run the command:
docker run -i -t omicspipe/aws_readymade /bin/bash
Note
If you want to share a file from your local computer with the Docker container, follow the instructions for Docker Folder Sharing, put the desired file within the shared folder, and run the command below (this is recommended for saving your ~/.starcluster/config file from the next step):
docker run -it --volumes-from NameofSharedDataFolder omicspipe/aws_readymade /bin/bash
- If you are on a local Ubuntu installation, skip this step and install the StarCluster client directly.
- If you are using Windows, it might be necessary to update your BIOS to enable virtualization before installing Docker
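If you would rather fetch the image before launching it, pulling it explicitly should also work (the image name is the same one used in the run command above):
docker pull omicspipe/aws_readymade
docker run -i -t omicspipe/aws_readymade /bin/bash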
Step 3: Configure StarCluster¶
After running the omicspipe/aws_readymade Docker container, run the command below to edit the StarCluster configuration file:
nano ~/.starcluster/config
Or, if you prefer vim:
vim ~/.starcluster/config
Enter your “AWS ACCESS KEY ID”, “AWS SECRET ACCESS KEY”, and “AWS USER ID”
If you are not in the AWS us-west region, change the AWS REGION NAME and AWS REGION HOST variables to the appropriate region (see AWS Regions).
Select your desired pre-configured cluster by editing the “DEFAULT_TEMPLATE” variable or creating a custom cluster. The default is a test cluster with 5 c3.large nodes.
Create your starcluster SSH key by running the command:
starcluster createkey omicspipe -o ~/.ssh/omicspipe.rsa
To remove a key from the AWS registry, run the command:
starcluster removekey omicspipe
For more information on editing the StarCluster configuration file, see the StarCluster website.
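For reference, the entries you edit in ~/.starcluster/config look roughly like the following (standard StarCluster field names; all values shown are placeholders, and the region entries assume us-west-1):
[aws info]
AWS_ACCESS_KEY_ID = <your access key id>
AWS_SECRET_ACCESS_KEY = <your secret access key>
AWS_USER_ID = <your user id>
AWS_REGION_NAME = us-west-1
AWS_REGION_HOST = ec2.us-west-1.amazonaws.com

[key omicspipe]
KEY_LOCATION = ~/.ssh/omicspipe.rsa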
Step 4: Create AWS Volumes¶
Create AWS volumes to store the raw data and results of your analyses. From within the Docker environment, run:
starcluster createvolume --name=data -i ami-52112317 -d -s <volume size in GB> us-west-1a
starcluster createvolume --name=results -i ami-52112317 -d -s <volume size in GB> us-west-1a
- Specify the <volume size in GB> as a number large enough to accommodate all of your raw data, and ~4x that size for your results volume
- Change us-west-1a to the availability zone for your region as described in AWS Regions.
- Make a volume from the provided snapshot of reference databases (currently only supports H. sapiens)
- Go to the AWS-Console
- Click on the EC2 option
- Click on Volumes
- Click on “Create Volume”
- Set availability zone
- In Snapshot ID search for “omicspipe_db” and click on the resulting Snapshot ID
- Click Create
- From the Volumes tab, note the “VOLUME_ID” of the database snapshot
Edit your StarCluster configuration file to add your volume IDs. Run the command below and edit the VOLUME_ID variables for data, results, and database:
nano ~/.starcluster/config
Edit the fields below:
[volume results]
VOLUME_ID =
MOUNT_PATH = /data/results

[volume data]
VOLUME_ID =
MOUNT_PATH = /data/data

[volume database]
VOLUME_ID =
MOUNT_PATH = /data/database
Save your StarCluster configuration file to ~/.starcluster/config
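The volumes also need to be listed in the cluster template you launch. In a standard StarCluster configuration this is a VOLUMES entry in the [cluster ...] section; the provided Omics Pipe template may already contain one, in which case just confirm the names match (the cluster section name below is a placeholder):
[cluster omicspipe]
VOLUMES = data, results, database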
Step 5: Launch the Cluster¶
From the Docker container, run the command below to start a new cluster with the name “mypipe”:
starcluster start mypipe
SSH into the cluster by running the command below:
starcluster sshmaster mypipe
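A few other standard StarCluster commands are useful for managing the cluster once it is running, for example:
starcluster listclusters     # list running clusters and their nodes
starcluster stop mypipe      # stop the cluster (EBS-backed clusters can be started again later)
starcluster terminate mypipe # terminate the cluster and its instances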
Step 6: Upload data to the cluster¶
Now that you are in your cluster, you can use it like any other cluster. Before running Omics Pipe on your own data, you will want to upload your data unless it is already present in your attached data volume (you can skip this step if you are running the test data). There are several options for uploading your data:
Upload data from your local machine or cluster using StarCluster put:
starcluster put mypipe <myfile> /data/raw
Retrieve a file from a remote server with scp:
scp username@hostname:<remotefile> <localfile>
Get a file from an S3 bucket with S3cmd:
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
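If your raw data is stored as many objects under a bucket prefix, a recursive copy along these lines should also work (bucket name and prefix are placeholders):
s3cmd get --recursive s3://my-raw-data-bucket/fastq/ /data/raw/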
Use Webmin to transfer files from your local system to the cluster (recommended for small files only, like parameter files).
- In the AWS Management Console go to “Security Groups”
- Select the “StarCluster-0_95_5” group associated with your cluster’s name
- On the Inbound tab click on “Edit”
- Click on “Add Rule” and a new “Custom TCP Rule” will appear. On “Port Range” enter “10000” and on “Source” select “My IP”
- Hit “Save”
- Select Instances in the AWS management console and note the “Public IP” of your instance
- In a web browser, go to https://the_public_ip:10000 and enter the login information when prompted: user: root, password: sulab
- This will take a few seconds to load, and once it does, you can navigate your cluster’s file structure with the tabs on the left
- To upload a file from your local file system, click “upload” and specify the directory /data/data to upload your data.
Step 7: Run the test pipelines¶
Once you have successfully started the cluster, you may run Omics Pipe with the following commands for the different pipelines. Note: Small .fastq files are provided on the instance for the tests below to demonstrate the functionality of the pipelines, but they may not generate meaningful results. Larger test files can be uploaded to the cluster by following the instructions in the documentation above.
RNA-seq Count Based Pipeline
omics_pipe RNAseq_count_based /root/src/omics-pipe/tests/test_params_RNAseq_counts_AWS.yaml
RNA-seq Tuxedo Pipeline
omics_pipe RNAseq_Tuxedo /root/src/omics-pipe/tests/test_params_RNAseq_Tuxedo_AWS.yaml
Whole Exome Sequencing:
omics_pipe WES_GATK /root/src/omics-pipe/tests/test_params_WES_GATK_AWS.yaml
ChIP-seq Homer
omics_pipe ChIPseq_HOMER /root/src/omics-pipe/tests/test_params_ChIPseq_HOMER_AWS.yaml
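To run a pipeline on your own data, a reasonable approach is to copy one of the test parameter files, edit the sample names and data paths for your experiment, and pass the edited file to omics_pipe; the file name and location below are placeholders:
cp /root/src/omics-pipe/tests/test_params_RNAseq_counts_AWS.yaml /data/data/my_params.yaml
nano /data/data/my_params.yaml   # edit sample names and data/results paths for your experiment
omics_pipe RNAseq_count_based /data/data/my_params.yaml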
Installing extra software¶
Both the GATK and MuTect software are used by OmicsPipe, but they require licenses from The Broad Institute and cannot be distributed with the OmicsPipe software. GATK and MuTect are free to download after accepting the license agreement.
To install GATK:
Upload the GenomeAnalysisTK.jar file to /root/.local/easybuild/software/gatk/3.2-2 using either Webmin or StarCluster put (see the example below these steps)
Make the jar file executable by running the command:
chmod +x /root/.local/easybuild/software/gatk/3.2-2/GenomeAnalysisTK.jar
To install MuTect:
Upload the muTect-1.1.4.jar file to /root/.local/easybuild/software/mutect/1.1.4 using either Webmin or StarCluster put
Make the jar file executable by running the command:
chmod +x /root/.local/easybuild/software/mutect/1.1.4/muTect-1.1.4.jar
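As an illustration of the upload steps above, both jar files can be copied from your local machine with StarCluster put (the cluster name mypipe and the local file locations are placeholders):
starcluster put mypipe GenomeAnalysisTK.jar /root/.local/easybuild/software/gatk/3.2-2/
starcluster put mypipe muTect-1.1.4.jar /root/.local/easybuild/software/mutect/1.1.4/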
Adding software that OmicsPipe was not built with might require a little more configuration, but OmicsPipe is designed as a foundation to which new software can be added. New software can be added in any manner the user prefers, but to follow the structure used to build OmicsPipe, please refer to the “custombuild” scripts.
Important
- If you configure software that you think extends the functionality of OmicsPipe, please create a pull request on our Bitbucket page.
To build your own docker image¶
Download docker.io following the instructions at Get-Docker
Run the command:
docker build -t <Repository Name> https://bitbucket.org/sulab/omics_pipe/downloads/Dockerfile_AWS_prebuiltAMI_public
This will store the Docker image under the repository name of your choice.
There is also an AWS_custombuild Dockerfile, which can be used to build an Amazon Machine Image from scratch.
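Once the build finishes, the custom image can be launched the same way as the ready-made one (substitute the repository name you chose above):
docker run -i -t <Repository Name> /bin/bash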
Add storage > 1TB to the cluster using LVM (for advanced users)¶
Within StarCluster create x new volumes by running:
nvolumes=2   # number of volumes
vsize=1000   # in GB
instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
akey=<AWS KEY>
skey=<AWS SECRET KEY>
region=us-west-1
zone=us-west-1a
for x in $(seq 1 $nvolumes)
do
  ec2-create-volume \
    --aws-access-key $akey \
    --aws-secret-key $skey \
    --size $vsize \
    --region $region \
    --availability-zone $zone
done > /tmp/vols.txt
Attach the volumes to the head node:
i=0
for vol in $(awk '{print $2}' /tmp/vols.txt)
do
  i=$(( i + 1 ))
  ec2-attach-volume $vol \
    -O $akey \
    -W $skey \
    -i $instance \
    --region $region \
    -d /dev/sdh${i}
done > /tmp/attach.txt
Mark the EBS volumes as physical volumes:
for i in $(find /dev/xvdh*)
do
  pvcreate $i
done
Create a volume group:
vgcreate vg /dev/xvdh*
Create a logical volume:
lvcreate -l100%VG -n lv vg
Create the file system:
mkfs -t xfs /dev/vg/lv
Create the mount point and mount the file system:
mkdir /data/data_large
mount /dev/vg/lv /data/data_large
Add new mountpoint to /etc/exports:
for x in $(qconf -sh | tail -n +2)
do
  echo '/data/data_large' ${x}'(async,no_root_squash,no_subtree_check,rw)' >> /etc/exports
done
Reload /etc/exports:
exportfs -a
Mount the new folder on all nodes:
for x in $(qconf -sh | tail -n +2)
do
  ssh $x 'mkdir /data/data_large'
  ssh $x 'mount -t nfs master:/data/data_large /data/data_large'
done
How to increase the volume size later?
Create and attach additional EBS volumes as described in the first two steps above, then create the additional physical volumes:
for i in $(cat /tmp/attach.txt | cut -f 4 | sed 's/[^0-9]*//g')
do
  pvcreate /dev/xvdh${i}
done
Add new volumes to the volume group:
for i in $(cat /tmp/attach.txt | cut -f 4 | sed 's/[^0-9]*//g')
do
  vgextend vg /dev/xvdh${i}
done
lvextend -l100%VG /dev/mapper/vg-lv
Grow the file system to the new size:
xfs_growfs /data/data_large
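To confirm the extra capacity is visible, a quick check of the mounted file system is enough, for example:
df -h /data/data_large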
Add storage > 1TB to the cluster using RAID 0 (for advanced users)¶
Within StarCluster create x new volumes by running:
nvolumes=2   # number of volumes
vsize=1000   # in GB
instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
akey=<AWS KEY>
skey=<AWS SECRET KEY>
region=us-west-1
zone=us-west-1a
for x in $(seq 1 $nvolumes)
do
  ec2-create-volume \
    --aws-access-key $akey \
    --aws-secret-key $skey \
    --size $vsize \
    --region $region \
    --availability-zone $zone
done > /tmp/vols.txt
Attach the volumes to the head node:
i=0
for vol in $(awk '{print $2}' /tmp/vols.txt)
do
  i=$(( i + 1 ))
  ec2-attach-volume $vol \
    -O $akey \
    -W $skey \
    -i $instance \
    --region $region \
    -d /dev/sdh${i}
done
Create a raid 0 volume:
mdadm --create -l 0 -n $nvolumes /dev/md0 /dev/xvdh*
Create a file system:
mkfs -t ext4 /dev/md0
Create mount point and mount the device:
mkdir /data/data_large
mount /dev/md0 /data/data_large
Add new mountpoint to /etc/exports:
for x in $(qconf -sh | tail -n +2)
do
  echo '/data/data_large' ${x}'(async,no_root_squash,no_subtree_check,rw)' >> /etc/exports
done
Reload /etc/exports:
exportfs -a
Mount the new folder on all nodes:
for x in $(qconf -sh | tail -n +2)
do
  ssh $x 'mkdir /data/data_large'
  ssh $x 'mount -t nfs master:/data/data_large /data/data_large'
done
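To verify the array and the shared mount, standard checks such as the following can be used:
mdadm --detail /dev/md0   # show the RAID 0 array status and member devices
df -h /data/data_large    # confirm the mounted capacity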
Backing up your data to S3¶
Configure s3cmd by running the command below and following the prompts:
s3cmd --configure
Create an S3 bucket:
s3cmd mb s3://backup
Copy data to the bucket:
s3cmd put -r /data/results s3://backup
More info on s3cmd here: https://github.com/s3tools/s3cmd
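To restore the backed-up results later, the same tool can copy the data back out of the bucket (exact source and destination paths may need adjusting to your layout):
s3cmd get --recursive s3://backup/results/ /data/results/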