This is the second part of a series of posts on the RAMMCAP suite of bioinformatics tools.
Last time, we left off with a freshly compiled version of RAMMCAP ready for testing. Like last time, we’ll start with the README in the current directory, which is the
rammcap directory inside the main directory (named
RAMMCAP-20091106 in my case) from the RAMMCAP download.
The new README has its own test script, so I’m going to follow this one. The first thing to run is the CD-HIT-EST program.
The CD-HIT-EST program is the clustering program. It takes in a FASTA-formatted file and clusters the data according to a greedy algorithm that uses simple word counting and indexing to speed things up considerably. Basically, it:
- Sorts the sequences from longest to shortest.
- Begins clustering:
  - Compare the current sequence to the list of known clusters.
  - If it matches an existing cluster, it is added to that cluster and made the “representative” sequence for that cluster if it is longer than the existing “representative” sequence.
  - If it doesn’t match an existing cluster, a new cluster is made with this sequence as the representative sequence.
- Repeat until all the sequences are clustered.
(None of which I knew at the time I ran this for the first time, but it’s information that makes sense here.)
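The greedy procedure above can be sketched in a few lines of Python. This is just my own illustration (`identity` and `greedy_cluster` are hypothetical helpers, not CD-HIT's API), and it brute-forces every comparison where the real tool uses word counting and indexing to skip most of them:

```python
# Minimal sketch of CD-HIT-style greedy incremental clustering.
# Assumes a crude identity measure; CD-HIT uses short-word filters
# to avoid doing a full comparison against every cluster.

def identity(a, b):
    """Fraction of matching positions over the shorter sequence (naive)."""
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / n if n else 0.0

def greedy_cluster(seqs, threshold=0.95):
    clusters = []  # each cluster is a list; clusters[i][0] is the representative
    # Because we sort longest-to-shortest, the first member of a cluster
    # is always its longest (representative) sequence.
    for seq in sorted(seqs, key=len, reverse=True):
        for cluster in clusters:
            if identity(seq, cluster[0]) >= threshold:
                cluster.append(seq)  # joins the first sufficiently similar cluster
                break
        else:
            clusters.append([seq])  # no match: start a new cluster, seq is the rep
    return clusters

seqs = ["ACGTACGTACGTACGTACGT",  # 20 bases
        "ACGTACGTACGTACGTACGA",  # differs in 1 of 20 positions (95% identity)
        "TTTTTTTTTTTTTTTTTTTT"]
print(len(greedy_cluster(seqs)))  # 2: the first two group together
```

Note this sketch joins the *first* cluster above the threshold; the `-g 1` option discussed below makes CD-HIT pick the *most similar* one instead.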
I pull out the really long test command (written below), prepare myself, and hit Enter. It takes a long time to complete.
../cd-hit/cdhit-est -i testdb -o testdb_95 -M 50 -B 1 -d 0 -n 10 \
    -l 11 -r 1 -p 1 -g 1 -G 0 -c 0.95 -aS 0.8 > testdb_95.log
While that’s going, here’s what that command means:
../cd-hit/cdhit-est– use the CD-HIT-EST program, which is the CD-HIT variant for DNA/RNA comparison tasks. The original CD-HIT was written for protein comparison.
-i testdb– use the testdb file as the input file. This file is a FASTA file with 1,000,000 sequences at most 361 bases long pulled from various metagenomic samples by the author.
-o testdb_95– write the output to a file called testdb_95
-M 50– limit the program to 50 MB of memory. The README describes this as the amount of RAM I have free (which didn’t really make sense to me), but the rest of the documentation clarifies that it’s actually the maximum memory the program is allowed to use.
-B 1– sequences are stored on the hard drive (1) instead of in RAM (0)
-d 0– the length of the sequence description written to the .clstr file. Since it’s 0, the FASTA sequence description is taken up to the first space.
-n 10– the word length
-l 11– length of throw_away_sequences; sequences at or below this length are discarded before clustering.
-r 1– compare both strands of DNA
-p 1– print the alignment overlap in the .clstr file (if 0, it’s not printed)
-g 1– cluster each sequence into the most similar cluster, not the first one CD-HIT finds that meets the threshold, which is the default (0).
-G 0– don’t use global sequence identity, i.e., treat each input sequence individually when calculating identity.
-c 0.95– cluster at 95% sequence identity
-aS 0.8– alignment coverage for the shorter sequence. In this case, it means that the alignment between the longer and shorter sequence must cover at least 80% of the shorter sequence.
> testdb_95.log– write the standard output to the testdb_95.log file
- There are other options available, which I’m not going to go into right now.
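To make the -c and -aS cutoffs concrete, here's a toy check of whether a given alignment would pass both. This is my own illustration (`passes_thresholds` is a hypothetical helper, not part of CD-HIT), assuming we already know the alignment's length and number of identical positions:

```python
# Toy check of the -c (identity) and -aS (short-sequence coverage) thresholds.

def passes_thresholds(matches, aln_len, short_len, c=0.95, aS=0.8):
    """True if the alignment meets both the identity and coverage cutoffs."""
    identity = matches / aln_len    # fraction of identical positions in the alignment
    coverage = aln_len / short_len  # fraction of the shorter sequence covered
    return identity >= c and coverage >= aS

# 290 identical positions over a 300-base alignment of a 350-base sequence:
print(passes_thresholds(290, 300, 350))  # identity ~0.967, coverage ~0.857 -> True
```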
Long story short, it takes around 200 minutes to process the test data. That’s roughly 3 hours and 20 minutes, which seems pretty long for a test, though I did limit the memory to 50 MB because that’s what the README told me to do. Raising that limit should speed things up substantially, and in retrospect I think raising it is just fine, especially with the information I’m reading in the other files. (UPDATE: Raising the memory limit to 400 MB, the default, drops the execution time to about 67 minutes. An eight-fold memory increase for a three-fold time decrease–not great, but not too shabby.)
The next program to run is the ORF_FINDER. As the name suggests, it scans the sequences for ORFs (open reading frames), which are what most people mean when they talk about their “genes” and their DNA (kind of).
../orf_finder/orf_finder -l 30 -L 30 -t 11 -i testdb -o testorf
This command takes far fewer options.
../orf_finder/orf_finder– runs the orf_finder command.
-l 30– the minimal length of the ORF.
-L 30– the minimal length of the ORF between stop codons. I’m not sure exactly how this differs from the -l option. Maybe you need to move at least 30 bases past the end of the previous stop codon before starting to look for another stop codon?
-t 11– the translation table. Presumably this is NCBI genetic code table 11, the bacterial and archaeal code, which would make sense for metagenomic data.
-i testdb– Use the testdb file as input.
-o testorf– Write the output to the testorf file.
- Again, there are other options I’m not going to talk about much.
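Naive ORF scanning can be sketched like this. It's a hand-rolled illustration (`find_orfs` is my own hypothetical function, not orf_finder's API) that only checks the first reading frame of the forward strand, whereas the real tool scans all six frames and honors the -t translation table:

```python
# Naive ORF scan: one strand, one reading frame, standard start/stop codons
# (which also hold for NCBI table 11).

STARTS = {"ATG"}
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len=30):
    """Return (start, end) pairs of ORFs at least min_len bases long, stop included."""
    orfs = []
    start = None
    for i in range(0, len(seq) - 2, 3):  # walk the sequence codon by codon
        codon = seq[i:i + 3]
        if start is None and codon in STARTS:
            start = i                      # open a candidate ORF at the start codon
        elif start is not None and codon in STOPS:
            if i + 3 - start >= min_len:   # measure start codon through stop codon
                orfs.append((start, i + 3))
            start = None                   # reset and keep scanning
    return orfs

# ATG + nine AAA codons + TAA = 33 bases, long enough to report:
print(find_orfs("ATG" + "AAA" * 9 + "TAA"))  # [(0, 33)]
```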
In addition to having far fewer options, ORF_FINDER takes far less time to execute: roughly a minute and a half on the same dataset, which is a huge speedup. ORF finding is a much simpler task than sequence clustering, obviously. Now for the fun part–clustering the ORFs with the real CD-HIT program.
Now, CD-HIT works basically the same way that CD-HIT-EST does, except it scans for amino acids instead of nucleotides. The full commands I’m running are:
../cd-hit/cdhit -i testorf -o testorf_95 -M 400 -d 0 -n 5 -p 1 \
    -g 1 -G 0 -c 0.95 -aS 0.8 > testorf_95.log
../cd-hit/cdhit -i testorf_95 -o testorf_60 -M 400 -d 0 -n 4 -p 1 \
    -g 1 -G 0 -c 0.60 -aS 0.8 > testorf_60.log
../cd-hit/clstr_rev.pl testorf_95.clstr testorf_60.clstr \
    > testorf_60full.clstr
It has much the same options as CD-HIT-EST, so I’m not going to go into a huge amount of detail on those. Instead, notice how I’m going to:
- Cluster the ORFs at 95% identity.
- Re-cluster the ORFs with less stringent criteria (60% identity and a shorter word length) to help cluster the non-redundant sequences.
- Combine the two clustering runs into a single cluster file.
Running things this way (according to the documentation) can help generate a sort of hierarchical structure of clusters. This makes sense: the sequences clustered at 95% identity are at the very least closely related and may actually be redundant, while those at 60% are more distant relatives and may be homologs from divergent species or something similar.
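A rough sketch of what combining the two runs accomplishes, assuming each clustering result is a plain dict of representative → members (`merge_hierarchy` is my own hypothetical helper; the real clstr_rev.pl parses the .clstr text format):

```python
# Expand each coarse (60%) cluster by replacing its members, which are
# representatives from the fine (95%) run, with their full fine clusters.

def merge_hierarchy(fine, coarse):
    """Map each coarse cluster back onto all the original sequences."""
    merged = {}
    for rep, members in coarse.items():
        expanded = []
        for m in members:
            # Each member of a coarse cluster was a fine-cluster representative;
            # pull in everything that clustered under it at 95%.
            expanded.extend(fine.get(m, [m]))
        merged[rep] = expanded
    return merged

fine = {"orf1": ["orf1", "orf2"], "orf3": ["orf3"]}      # 95% run
coarse = {"orf1": ["orf1", "orf3"]}                      # 60% run on the reps
print(merge_hierarchy(fine, coarse))  # {'orf1': ['orf1', 'orf2', 'orf3']}
```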
The first run of CD-HIT took around 98 minutes, roughly half an hour longer than clustering the sequences as a whole. I’m guessing the ORFs are more similar to one another than the whole sequences are, so they take longer to cluster.
The second run of CD-HIT took around 235 minutes, probably because the less stringent criteria took longer to process. Combining the two cluster files only took around 15 seconds, so at least that’s an easy task.
That takes care of the basic tools included with RAMMCAP. I’ll explore some of the graphical tools in a later post.