Next Generation Sequencing (NGS)/De novo assembly

De novo assembly

The generation of short reads by next generation sequencers has led to an increased need to assemble the vast amounts of short reads that are generated. This is no trivial problem, as the sheer number of reads makes it nearly impossible to use, for example, the overlap layout consensus (OLC) approach that had been used with longer reads. Therefore, most of the available assemblers that can cope with typical data generated by Illumina use a k-mer based de Bruijn graph approach.
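
To make the k-mer approach concrete, here is a minimal Python sketch (an illustration only; real assemblers use far more compact data structures and add error correction) that builds a de Bruijn graph in which every (k-1)-mer is a node with edges to the (k-1)-mers that follow it in some read. With the toy reads below, the repeat tangles the graph at a small k but not at a larger one:

  from collections import defaultdict

  def de_bruijn_graph(reads, k):
      """Map each (k-1)-mer to the set of (k-1)-mers following it in a read."""
      graph = defaultdict(set)
      for read in reads:
          for i in range(len(read) - k + 1):
              kmer = read[i:i + k]
              graph[kmer[:-1]].add(kmer[1:])
      return graph

  reads = ["ACGTACGTGACCT", "GTACGTGACCTTA"]  # toy reads containing a repeat
  for k in (4, 8):
      graph = de_bruijn_graph(reads, k)
      branches = sum(1 for succs in graph.values() if len(succs) > 1)
      print(f"k={k}: {len(graph)} nodes, {branches} branching node(s)")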

A clear distinction has to be made based on the size of the genome to be assembled:

  • small (e.g. bacterial genomes: few Megabases)
  • medium (e.g. lower plant genomes: several hundred Megabases)
  • large (e.g. mammalian and plant genomes: Gigabases)

All de novo assemblers will be able to cope with small genomes and, given decent sequencing libraries, will produce relatively good results. Even for medium-sized genomes, most of the de novo assemblers mentioned here (and many others) will likely fare well and produce a decent assembly. That said, OLC-based assemblers might take weeks to assemble such a genome. Large genomes are still difficult to assemble when having only short reads (such as those provided by Illumina). Assembling such a genome with Illumina reads will probably require a machine with about 256 GB, potentially even 512 GB, of RAM, unless one is willing to use a small cluster (ABySS, Ray, Contrail) or invest in commercial software (CLC bio Genomics Workbench).

Typical workflow

[Figure: overview of the de novo assembly process for WGS]

A genome assembly project, whatever its size, can generally be divided into stages:

  1. Experiment design
  2. Sample collection
  3. Sample preparation
  4. Sequencing
  5. Pre-processing
  6. Assembly
  7. Post-assembly analysis

Experiment design

Like any project, a good de novo assembly starts with proper experimental design. Biological, experimental, technical and computational issues have to be considered:

  • Biological issues: What is known about the genome?
    • How big is it? Bigger genomes will require more sequencing data and more computational resources.
    • How frequent, how long and how conserved are repeat copies? More repetitive genomes will possibly require longer reads or long distance mate-pairs to resolve structure.
    • How AT rich/poor is it? Genomes which have a strong AT/GC imbalance (either way) are said to have low information content. In other words, spurious sequence similarities will be more frequent.
    • Is it haploid, diploid, or polyploid? Currently genome assemblers deal best with haploid samples, and some provide a haploid assembly with annotated heterozygous sites. Polyploid genomes (e.g. plants) are still largely problematic.
  • Experimental issues: What sample material is available?
    • Is it possible to extract a lot of DNA? If you have only a small amount of material, you might have to amplify the sample (e.g. using multiple displacement amplification, MDA), thus introducing biases.
    • Does that DNA come from a single cell, a clonal population, or a heterogeneous collection of cells? Diversity in the sample can create more or less noise, which different assemblers handle differently.
  • Technical issues: What sequencing technologies to use?
    • How much does each cost?
    • What is the sequence quality? The greater the noise, the more coverage depth you will need to correct for errors (see the back-of-envelope coverage sketch after this list).
    • How long are the reads? The longer the reads, the more useful they will be to disambiguate repetitive sequence.
    • Can paired reads be produced cost-effectively and reliably? If so, what is the fragment length? As with long reads, reliable long-distance read pairs can help disambiguate repeats and scaffold the assembly.
    • Can you use a hybrid approach? E.g. short and cheap reads mixed with long expensive ones.
  • Computational issues: What software to run?
    • How much memory do they require? This criterion can be decisive, because if a computer does not have enough memory, it will either crash or slow down tremendously as it swaps data on and off the hard drive.
    • How fast are they? This criterion is generally less stringent, since assembly time is usually a minor part of a complete genome assembly and annotation project. However, some assemblers scale better than others.
    • Do they require specific hardware? (e.g. large memory machine, or cluster of machines)
    • How robust are they? Are they prone to crash? Are they well supported?
    • How easy are they to install and run?
    • Do they require a special protocol? Can they handle the chosen sequencing technology?
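
Several of the biological and technical questions above feed into a simple back-of-envelope coverage calculation: the expected average coverage is C = N·L/G for N reads of length L on a genome of size G. A minimal sketch, with illustrative numbers:

  # Back-of-envelope coverage arithmetic: C = N * L / G.
  def required_reads(genome_size, read_length, target_coverage):
      """How many reads are needed to reach the target average coverage?"""
      return target_coverage * genome_size / read_length

  # Example: a 3 Gb mammalian genome, 100 bp reads, 50x target coverage.
  n = required_reads(genome_size=3e9, read_length=100, target_coverage=50)
  print(f"{n / 1e9:.1f} billion reads")  # -> 1.5 billion reads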

Some steps which are likely common to most assemblies:

  1. If it is within reason and would not tamper with the biology: Try to get DNA from haploid or at least mostly homozygous individuals.
  2. Make sure that all libraries are of acceptable quality and that there are no major concerns (e.g. use FastQC).
  3. For paired-end data you might also want to estimate the insert size based on draft assemblies or assemblies you have already made (see the sketch after this list).
  4. Before submitting data to a de novo assembler it is often a good idea to clean the data, e.g. to trim away low-quality bases towards the read ends and/or to drop poor reads altogether. As low-quality bases are more likely to contain errors, they can complicate the assembly process and lead to higher memory consumption (more is not always better). That said, several general-purpose short-read assemblers such as SOAPdenovo and ALLPATHS-LG can perform read correction prior to assembly.
  5. Before running any large assembly, double and triple check the parameters you feed the assembler.
  6. Post-assembly, it is often advisable to check how well your read data really agree with the assembly and whether there are any problematic regions.
  7. If you run de Bruijn graph based assemblers you will want to try different k-mer sizes. Whilst there is no universal rule of thumb, the trade-off is as follows: given error-free reads, smaller k-mers lead to a more tangled graph, whereas larger k-mers yield a less tangled one. However, a smaller k-mer size is more robust against sequencing errors, while a k that is too large might not leave enough overlapping k-mers (edges) in the graph and would therefore result in short contigs (the de Bruijn sketch in the introduction illustrates this trade-off).
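
For step 3 of this list, insert sizes can be estimated by mapping the pairs back to a draft assembly and inspecting the template lengths. A minimal sketch, assuming the pysam library is installed and that mapped_to_draft.bam (an illustrative file name) holds the mapped pairs:

  import statistics
  import pysam

  sizes = []
  with pysam.AlignmentFile("mapped_to_draft.bam", "rb") as bam:
      for read in bam:
          # template_length is signed; take one read of each proper pair
          if read.is_proper_pair and read.template_length > 0:
              sizes.append(read.template_length)
          if len(sizes) >= 100_000:  # a sample suffices for an estimate
              break

  print("median insert size:", statistics.median(sizes))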

Data pre-processing

For a more detailed discussion, see the chapter dedicated to pre-processing.

Data pre-processing consists in filtering the data to remove errors, thus facilitating the work of the assembler. Although most assemblers have integrated error correction routines, filtering the reads will generally greatly reduce the time and memory overhead required for assembly, and probably improve results too.
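
As a toy illustration of such filtering (a real project would rather use a dedicated trimming tool), the sketch below cuts a read at the first base whose Phred score drops below a threshold and discards reads that become too short:

  def trim_3prime(seq, quals, threshold=20, min_len=30):
      """Cut at the first low-quality base; drop reads that get too short."""
      for i, q in enumerate(quals):
          if q < threshold:
              seq, quals = seq[:i], quals[:i]
              break
      return (seq, quals) if len(seq) >= min_len else None

  # Phred+33 encoded quality string, as found in Illumina FASTQ files
  quals = [ord(c) - 33 for c in "IIIIIIIIIIIIIIIIIIIIIIIIIIIIII#####"]
  print(trim_3prime("ACGTACGTACGTACGTACGTACGTACGTACGTAAA", quals))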

Genome assembly

Genome assembly consists in taking a collection of sequencing reads, which are much shorter than the actual genome, and creating a genome sequence which is a likely source of all these fragments. What defines a likely genome generally depends on heuristics and the data available. Firstly, by parsimony, the genome must be as short as possible: one could simply concatenate all the reads, but this would not be parsimonious. Secondly, the genome must include as much of the input data as possible. Finally, the genome must satisfy as many of the experimental constraints as possible. Typically, paired-end reads are expected to map onto the genome with a given relative orientation and a given distance from each other.

The output of an assembler is generally decomposed into contigs, or contiguous regions of the genome which are nearly completely resolved, and scaffolds, or sets of contigs which are approximately placed and oriented with respect to each other.
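
Continuing the de_bruijn_graph sketch from the introduction, contigs can be caricatured as maximal unambiguous paths through the graph: walks that are extended as long as each node has exactly one way in and one way out. Real assemblers additionally use coverage, error correction and read-pair information; this shows only the core idea:

  from collections import defaultdict  # de_bruijn_graph as defined above

  def simple_contigs(graph):
      """Spell out the maximal non-branching paths of a de Bruijn graph."""
      indegree = defaultdict(int)
      for succs in graph.values():
          for s in succs:
              indegree[s] += 1
      contigs = []
      for node in list(graph):
          if indegree[node] == 1 and len(graph[node]) == 1:
              continue  # interior of an unambiguous path; skip
          for nxt in graph[node]:
              contig = node + nxt[-1]
              while indegree[nxt] == 1 and len(graph[nxt]) == 1:
                  (nxt,) = graph[nxt]  # follow the single outgoing edge
                  contig += nxt[-1]
              contigs.append(contig)
      return contigs

  toy = de_bruijn_graph(["ACGTACGTGACCT", "GTACGTGACCTTA"], k=5)
  print(simple_contigs(toy))  # two contigs (order may vary): the repeat splits the genome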

There are many assemblers available (See the Wikipedia page on sequence assembly for more details). Tutorials on how to use some of them are below.

Techniques for comparing assemblies

Once several genome assemblies are generated, they need to be evaluated.[1][2][3] Current methods include:

  • N50 (length of contigs or scaffolds; see the sketch after this list)[4]
  • mapping of reads that were used to produce the assembly[5][6][7][8][9][10]
  • identification and counting of highly conserved genes expected to be present based on evolution[11]
  • mapping of transcripts to genome assemblies[12]
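
The first of these metrics is easy to compute: the N50 is the length L such that contigs of length L or longer together cover at least half of the total assembly span. A minimal sketch:

  def n50(lengths):
      """Smallest length among the longest contigs covering half the total."""
      total = sum(lengths)
      running = 0
      for length in sorted(lengths, reverse=True):
          running += length
          if running * 2 >= total:
              return length

  print(n50([100, 80, 70, 50, 40, 30]))  # -> 70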

Post-assembly analysis

Once a genome has been obtained, a number of analyses are possible, if not necessary:

  • Quality control
  • Comparison to other assemblies
  • Variant detection
  • Annotation

Creating a dataset

Free Software

ABySS

ABySS is a de novo assembler which can run on multiple nodes, using the Message Passing Interface (MPI) for communication. As ABySS distributes tasks across nodes, the amount of RAM needed per machine is smaller and ABySS is thus able to cope with large genomes. See here for a tutorial.
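
A typical invocation uses the abyss-pe driver with its documented k, name, np and in parameters; a minimal sketch, wrapped in Python for consistency with the other examples here (file names are illustrative):

  # Launch a paired-end ABySS assembly via its make-based abyss-pe driver.
  # np=8 requests 8 MPI processes if ABySS was built with MPI support.
  import subprocess

  subprocess.run(
      ["abyss-pe", "np=8", "k=64", "name=mygenome",
       "in=reads_1.fastq reads_2.fastq"],
      check=True,
  )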

  • Pros
    • distributed via MPI, so a cluster can be used
    • a large genome can be assembled with relatively little RAM per compute node; a human genome was assembled on 21 nodes with 16 GB RAM each
  • Cons
    • relatively slow
Allpaths-LG

Allpaths-LG is a novel assembler requiring specially prepared libraries. The authors of the software benchmarked ALLPATHS-LG against SOAPdenovo and reported superior performance. However, it must be noted that they might not have used the SOAPdenovo gap-filling module for one of the data sets due to time constraints, which would probably have improved the SOAP assembly's contiguous sequence length. In our own hands (usadellab) we have seen similarly good N50 results, and good N50 values have also been reported for ALLPATHS-LG Arabidopsis assemblies.[13] ALLPATHS-LG was likewise named as performing well in the Assemblathon.

  • Pros
    • relatively fast runtime (slower than SOAP)
    • good scaffold length (likely better than SOAP)
    • can use long reads (e.g. PacBio), but only for small genomes
  • Cons
    • specially tailored libraries are necessary
    • large genomes (mammalian size) need a lot of RAM; the publication estimates that about 512 GB would be sufficient, though
    • slower than SOAP
Euler SR USR

EULER is an assembler that includes an error correction module.

  • Pros
    • Has an error correction module
  • Cons
MIRA

MIRA is a general purpose assembler that can integrate various platform data and perform true hybrid assemblies.

  • Pros
    • very well documented, with many options
    • can combine different sequencing technologies
    • likely produces relatively good quality assemblies
  • Cons
    • only partly multithreaded and, owing to its underlying technology, slow
    • probably not recommended for assembling larger genomes
Ray

Ray is a distributed scalable assembler tailored for bacterial genomes, metagenomes and virus genomes.

Tutorial available here

  • Pros
    • scalability (uses MPI)
    • correctness
    • usability
    • well documented
    • responsive mailing list
    • can combine different sequencing technologies
    • de Bruijn-based
  • Cons
SOAP de novo

SOAPdenovo is an all purpose genome assembler. It was used to assemble the giant panda genome. See here for a tutorial.

  • Pros
    • SOAP de novo uses a medium amount of RAM
    • SOAP de novo is relatively fast (probably the fastest free assembler)
    • SOAP de novo contains a scaffolder and a read-corrector
    • SOAP de novo is relatively modular (read-corrector, assembly, scaffold, gap-filler)
    • SOAP de novo works well with very short reads[14]
  • Cons
    • potentially somewhat confusing way in which contigs are built.
    • relatively large amount of RAM needed; BGI states ca. 150 GB (less than ALLPATHS though)
SPAdes

SPAdes is a single-cell genome assembler.

  • Pros
    • SPAdes works well with highly non-uniform coverage (e.g. after using Multiple Displacement Amplification)
    • SPAdes uses a medium amount of RAM
    • SPAdes is relatively fast
    • SPAdes includes the error correction software BayesHammer
    • SPAdes has a scaffolder (version 2.3+)
  • Cons
    • SPAdes is well tested only on bacterial genomes
    • SPAdes works with Illumina reads only
Velvet

See here for a tutorial on creating an assembly with Velvet.

  • Pros
    • Easy to install, stable
    • Easy to run
    • Fast (multithreading)
    • Can take in long and short reads, works with SOLiD colorspace reads
    • Can use a reference genome to anchor reads which normally map to repetitive regions (Columbus module)
  • Cons
    • Velvet might need large amounts of RAM for large genomes, potentially > 512 GB for a human genome, if such an assembly is feasible at all. This is based on an approximation formula derived by Simon Gladman[15] from smaller genomes: RAM (KB) = -109635 + 18977*ReadSize + 86326*GenomeSize + 233353*NumReads - 51092*Kmersize, with the read size in bases, the genome size in megabases and the number of reads in millions (see the sketch below).
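
As a sketch, that formula is easy to apply (units as in the mailing-list post: result in kilobytes, read size in bases, genome size in megabases, number of reads in millions):

  # Simon Gladman's empirical velvetg RAM estimate, converted to GB.
  def velvet_ram_gb(read_size, genome_mb, reads_millions, k):
      kb = (-109635 + 18977 * read_size + 86326 * genome_mb
            + 233353 * reads_millions - 51092 * k)
      return kb / 1024 ** 2  # KB -> GB

  # Example: human-sized genome, 100 bp reads, 1500 million reads, k = 31.
  print(f"{velvet_ram_gb(100, 3000, 1500, 31):.0f} GB")  # roughly 580 GB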
Minia

Minia is a de Bruijn graph assembler optimized for very low memory usage.

  • Pros
    • Assembles very large genomes quickly on modest resources
    • Easy to install, run
  • Cons
    • Illumina data only
    • Does not perform any scaffolding
    • Some steps are I/O-intensive, so a local hard disk should be used rather than a network drive

Commercial

CLC cell

The CLC assembly cell is a commercial assembler released by CLC. It is based on a de Bruijn graph approach.

  • Pros
    • CLC uses very little RAM
    • CLC is very fast
    • CLC contains a scaffolder (version 4.0+)
    • CLC can assemble data from most common sequencing platforms.
    • Works on Linux, Mac and Windows.
  • Cons
    • CLC is not free
    • CLC might be a bit more liberal in collapsing repeats, based on our own plant data.
Newbler

Newbler is an assembler released by the Roche company.

  • Pros
    • Newbler has been used in many assembly projects
    • Newbler seems to be able to produce good N50 values
    • Newbler is often relatively precise
    • Newbler can usually be obtained free of charge
  • Cons
    • Newbler is tailored to (mostly) 454 data. Since Ion Torrent PGM data has a similar error profile (predominance of miscalled homopolymer repeats), it may be a good choice there as well. Whilst Newbler can accommodate a limited amount of Illumina data, as described by bioinformatician Lex Nederbragt[16], this is not possible for larger data sets. The fire ant genome[17] added ~40x Illumina data to ~15x 454 coverage in the form of "fake" 454 reads: the Illumina data were first assembled with SOAPdenovo, the resulting contigs were chopped into overlapping 300 bp reads, and these fake 454 reads were finally fed to Newbler alongside the real 454 data (see the sketch after this list).
    • As Newbler at least partly uses the OLC approach, large assemblies can take time.
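
A minimal sketch of the contig-chopping step of that workaround; the 150 bp step (i.e. 50% overlap) is an assumption, as the paper only specifies overlapping 300 bp reads:

  def chop_contig(contig, piece=300, step=150):
      """Cut a contig into overlapping fixed-size pseudo-reads."""
      if len(contig) <= piece:
          return [contig]
      starts = list(range(0, len(contig) - piece + 1, step))
      if starts[-1] + piece < len(contig):
          starts.append(len(contig) - piece)  # cover the contig tail
      return [contig[s:s + piece] for s in starts]

  fake_reads = chop_contig("ACGT" * 200)  # an 800 bp toy contig
  print(len(fake_reads), "pseudo-reads of", len(fake_reads[0]), "bp")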

Decision Helper

This is based both on personal experience and on published studies. Please note, however, that genomes differ and software packages are constantly evolving.

An Assemblathon challenge based on a synthetic diploid genome was reported on by Nature, which named SOAPdenovo, ABySS and ALLPATHS-LG the winners.[18]

However, a talk on the Assemblathon website names SOAPdenovo, sanger-sga and ALLPATHS-LG as consistently amongst the best performers for this synthetic genome.[19]

I want to assemble:

  • Mostly 454 or Ion Torrent data
    • small genome => MIRA, Newbler
    • all others => Newbler
  • Mixed data (454 and Illumina)
    • small genome => MIRA, but try other ones as well
    • medium genome => no clear recommendation
    • large genome, assemble Illumina data with ALLPATHS-LG and SOAP, add in other reads or use them for scaffolding
  • Mostly Illumina (or Colorspace)
    • small genome => MIRA, Velvet
    • medium genome => no clear recommendation
    • large genome, assemble Illumina data with ALLPATHS-LG and SOAP, add in other reads or use them for scaffolding

(For large genomes this is based on the fact that few assemblers can deal with them at all, and on the Assemblathon outcome. For 454 data it is based on Newbler's good general performance, on MIRA's different outputs and versatility, and on the theoretical consideration that de Bruijn based approaches might fare worse with the indel-prone longer reads.)

Post-assembly, you might want to try the SEQuel software to improve the assembly quality.

I want to start a large genome project for the least cost

  • Use Illumina reads prepared to the ALLPATHS-LG specification (i.e. overlapping read pairs; see the sketch below); the reads will work in e.g. SOAPdenovo as well

(This recommendation is based on the Assemblathon outcome, the original ALLPATHS publication[20] as well as a publication that used ALLPATHS for the assembly of Arabidopsis genomes.[13])
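
For reference, the ALLPATHS-LG fragment library is designed so that the two reads of each pair overlap: e.g. with the recommended ~180 bp fragments and 2 x 100 bp reads, pairs overlap by about 20 bp. A one-line sanity check (numbers from the ALLPATHS-LG recommendations; verify against the current manual):

  def pair_overlap(fragment_len, read_len):
      """By how many bases do the two reads of a pair overlap?"""
      return 2 * read_len - fragment_len

  print(pair_overlap(fragment_len=180, read_len=100))  # -> 20 bp overlap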

Each software package has its particular strengths; if you have specific requirements, the results from the Assemblathon will guide you. Another comparison effort, GAGE, has also released its results.[2] In addition, the QUAST tool can be used to assess genome assembly quality.

Case study

Further Reading Material

  • Comparisons
    • Ye et al., 2011 Comparison of Sanger/PCAP; 454/Roche and Illumina/SOAP assemblies. Illumina/SOAP had lower substitution, deletion and insertion rates but lower contig and scaffold N50 sizes than 454/Newbler.
    • Paszkiewicz et al., 2010 General review about short read assemblers
    • Zhang et al., 2011 In-depth comparison of different genome assemblers on simulated Illumina read data. Unfortunately only genomes up to medium size were tested. For eukaryotic genomes, SOAPdenovo is suggested for short reads and ALLPATHS-LG for longer reads.
    • Chapman JA et al. 2011 introduce the new assembler Meraculous and gather literature data on the assembly of E. coli K12 MG1655 for Allpaths 2, SOAPdenovo, Velvet, Euler-SR, Euler, Edena, ABySS and SSAKE. Allpaths 2 had by far the largest contig and scaffold N50 and was, apart from Meraculous, the only assembly free of misassemblies; Meraculous was even shown to contain no errors at all.
    • Liu et al., 2011 benchmark their new assembler PASHA against SOAPdenovo (v 1.04), Velvet (1.0.17) and ABySS (1.2.1) using three bacterial data sets. Whilst PASHA usually had the largest NG50 and NG80 (N50 and N80 calculated with the true genome size), SOAPdenovo produced the highest number of contigs and sometimes worse NG50 and NG80 values. However, for one dataset SOAPdenovo showed the best genome coverage.
    • The Assemblathon compares de novo genome assemblies from many different teams based on a synthetic genome. The Assemblathon 1 competition is now published in Genome Research by Earl et al.[1]

Reference datasets

ENA

See here for more information.

The European Nucleotide Archive (ENA) has a three-tiered data architecture. It consolidates information from:

  • EMBL-Bank.
  • the European Trace Archive: containing raw data from electrophoresis-based sequencing machines.
  • the Sequence Read Archive: containing raw data from next-generation sequencing platforms.

SRA

See SRA for more information.

The Sequence Read Archive (SRA) is:

  • the Primary archival repository for next generation sequencing reads and alignments (BAM)
  • Expanding to manage other high-throughput data including sequence variations (VCF)
  • Will shortly also accept capillary sequencing reads
  • Globally comprehensive through INSDC data exchange with NCBI and DDBJ
  • Part of European Nucleotide Archive (ENA)
  • Data owned by submitter and complement to publication
  • Data expected to be made public and freely available; no access/use restrictions permitted
  • Pre-publication confidentiality supported
  • Controlled access data submitted to EGA
  • Active in the development of sequence data storage and compression algorithms/technologies

SRA Metadata Model

  • Study: sequencing study description
  • Sample: sequenced sample description
  • Experiment/Run: primary read and alignment data
  • Analysis: secondary alignment and variation data
  • Project: groups studies together
  • EGA DAC: Data Access Committee
  • EGA Policy: Data Access Policy
  • EGA Dataset: Dataset controlled by Policy and DAC

NCBI

Viewing datasets

ENSEMBL

UCSC

Tablet

IGV

IGV is the Integrative Genomics Viewer developed by the Broad Institute. IGV allows easy navigation of large-scale genomic datasets and supports the integration of genomic data types such as aligned sequence reads, mutations, copy number, RNA interference screens, gene expression, methylation and genomic annotations. Users can zoom into specific regions down to individual base pairs, or scroll through an entire genome. It can be used to visualize and share whole/reference genomes, alignments, variants and regions of interest, as well as to filter, sort and group genomic data.

Comparing datasets

Whole genome alignments

References

  1. Earl, D.; Bradnam, K.; St. John, J.; et al. (2011). "Assemblathon 1: A competitive assessment of de novo short read assembly methods". Genome Research. 21 (12): 2224–41. doi:10.1101/gr.126599.111. PMC 3227110. PMID 21926179.
  2. Salzberg, S.L.; Phillippy, A.M.; Zimin, A.; et al. (2012). "GAGE: A critical evaluation of genome assemblies and assembly algorithms". Genome Research. 22 (3): 557–67. doi:10.1101/gr.131383.111. PMC 3290791. PMID 22147368.
  3. Bradnam, K.R.; Fass, J.N.; Alexandrov, A.; et al. (2013). "Assemblathon 2: Evaluating de novo methods of genome assembly in three vertebrate species". GigaScience. 2 (1): 10. doi:10.1186/2047-217X-2-10. PMC 3844414. PMID 23870653.
  4. Mäkinen, V.; Salmela, L.; Ylinen, J. (2012). "Normalized N50 assembly metric using gap-restricted co-linear chaining". BMC Bioinformatics. 13: 255. doi:10.1186/1471-2105-13-255. PMC 3556137. PMID 23031320.
  5. Ghodsi, M.; Hill, C.M.; Astrovskaya, I.; et al. (2013). "De novo likelihood-based measures for comparing genome assemblies". BMC Research Notes. 6: 334. doi:10.1186/1756-0500-6-334. PMC 3765854. PMID 23965294.
  6. Hunt, M.; Kikuchi, T.; Sanders, M.; et al. (2013). "REAPR: A universal tool for genome assembly evaluation". Genome Biology. 14 (5): R47. doi:10.1186/gb-2013-14-5-r47. PMC 3798757. PMID 23710727.
  7. Phillippy, A.M.; Schatz, M.C.; Pop, M. (2008). "Genome assembly forensics: Finding the elusive mis-assembly". Genome Biology. 9 (3): R55. doi:10.1186/gb-2008-9-3-r55. PMC 2397507. PMID 18341692.
  8. Rahman, A.; Pachter, L. (2013). "CGAL: Computing genome assembly likelihoods". Genome Biology. 14 (1): R8. doi:10.1186/gb-2013-14-1-r8. PMC 3663106. PMID 23360652.
  9. Vezzi, F.; Narzisi, G.; Mishra, B. (2012). "Reevaluating assembly evaluations with feature response curves: GAGE and Assemblathons". PLoS One. 7 (12): e52210. doi:10.1371/journal.pone.0052210. PMC 3532452. PMID 23284938.
  10. Howison, M.; Zapata, F.; Dunn, C.W. (2013). "Toward a statistically explicit understanding of de novo sequence assembly". Bioinformatics. 29 (23): 2959–63. doi:10.1093/bioinformatics/btt525. PMID 24021385.
  11. Parra, G.; Bradnam, K.; Korf, I. (2007). "CEGMA: A pipeline to accurately annotate core genes in eukaryotic genomes". Bioinformatics. 23 (9): 1061–7. doi:10.1093/bioinformatics/btm071. PMID 17332020.
  12. Ryan, J.F. (7 February 2014). "Baa.pl: A tool to evaluate de novo genome assemblies with RNA transcripts". Cornell University Library. Retrieved 4 May 2016.
  13. Schneeberger, K.; Ossowski, S.; Ott, F.; et al. (2011). "Reference-guided assembly of four diverse Arabidopsis thaliana genomes". PNAS. 108 (25): 10249–54. doi:10.1073/pnas.1107739108. PMC 3121819. PMID 21646520.
  14. Zhang, W.; Chen, J.; Yang, Y.; et al. (2011). "A practical comparison of de novo genome assembly software tools for next-generation sequencing technologies". PLoS One. 6 (3): e17915. doi:10.1371/journal.pone.0017915. PMC 3056720. PMID 21423806.
  15. Gladman, S. (23 July 2009). "(Velvet-users) Velvetg running time". Velvet-users mailing list. The European Bioinformatics Institute. Retrieved 4 May 2016.
  16. Nederbragt, L. (21 January 2011). "Newbler input II: Sequencing reads from other platforms". An assembly of reads, contigs and scaffolds. Retrieved 4 May 2016.
  17. Wurm, Y.; Wang, J.; Riba-Grognuz, O.; et al. (2010). "The genome of the fire ant Solenopsis invicta". PNAS. 108 (14): 5679–84. doi:10.1073/pnas.1009690108. PMC 3078418. PMID 21282665.
  18. Hayden, E.C. (2011). "Genome builders face the competition". Nature. 471 (7339): 425. doi:10.1038/471425a. PMID 21430748.
  19. "Assemblathon 1 results". assemblathon.org. University of California, Davis. 1 June 2011. Retrieved 4 May 2016.
  20. Gnerre, S.; Maccallum, I.; Przybylski, D.; et al. (2011). "High-quality draft assemblies of mammalian genomes from massively parallel sequence data". PNAS. 108 (4): 1513–8. doi:10.1073/pnas.1017351108. PMC 3029755. PMID 21187386.