
Data analysis

What are the best practices for data analysis?

Description

When carrying out your analysis, keep in mind that it has to be reproducible. This complements your research data management approach: not only your data, but also your tools and analysis environments, will be FAIR compliant. In other words, you should be able to tell exactly what data and what code or tools were used to generate your results.

This will not only help to tackle reproducibility problems, but will also improve the impact of your research through collaborations with scientists who reproduce your in silico experiments.

Considerations

There are many ways to bring reproducibility to your data analysis. You can act at several levels:

  • By providing your code.
  • By providing your execution environment.
  • By providing your workflows.
  • By providing your data analysis execution.

Solutions

  • Make your code available. If you have to develop software for your data analysis, it is always a good idea to publish your code. The Git version control system offers both a way to release your code and a full history of its changes, and Git hosting services also let you interact with your software users. Be sure to specify a license for your code (see the licensing section).
  • Use package and environment management systems. By using package and environment management systems like Conda and its bioinformatics-specialized channel Bioconda, researchers who have access to your code will be able to easily install specific versions of tools, even older ones, in an isolated environment. They will be able to compile/run your code in an equivalent computational environment, including any dependencies such as the correct version of R or the particular libraries and command-line tools your code uses. You can also share and preserve your setup by specifying in an environment file which tools you installed.
  • Use container environments. As an alternative to package management systems, you can consider container environments like Docker or Singularity.
  • Use workflow management systems. Scientific workflow management systems help you organize and automate how computational tools are executed. Compared to composing tools in a standalone script, workflow systems also help document the different computational analyses applied to your data, and can help with scalability, for example through cloud execution. Workflows also enhance reproducibility, as they typically have bindings for specifying software packages or containers for the tools they use, allowing others to re-run your workflow without needing to pre-install every piece of software it needs (see the minimal sketch after this list). It is a flourishing field and many workflow management systems are available, some of which are general-purpose (e.g. any command line tool), while others are domain-specific and have tighter tool integration. Among the many workflow management systems available, one can mention:
    • Workflow platforms that manage your data and provide an interface (web, GUI, APIs) to run complex pipelines and review their results. For instance: Galaxy and Arvados (CWL-based, open source).
    • Workflow runners that take a workflow written in a proprietary or standardized format (such as the CWL standard) and execute it locally or on a remote compute infrastructure. For instance, toil-cwl-runner, the reference CWL runner (cwltool), Nextflow, Snakemake, Cromwell.
  • Use notebooks. Using notebooks, you will be able to create reproducible documents mixing text and code, which can help explain your analysis choices, but can also serve as an exploratory method to examine data in detail. Notebooks can be used in conjunction with the other solutions mentioned above, as a notebook can typically be converted to a script. Some of the most well-known notebook systems are Jupyter, with built-in support for code in Python, R and Julia, and many other kernels, and RStudio, based on R. See the table below for additional tools.
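
As an illustration of the workflow approach mentioned above, here is a minimal sketch of a Snakemake rule; the file names and the line-counting step are hypothetical stand-ins for a real analysis, and the referenced environment file is assumed to exist:

```
# Snakefile: a minimal, hypothetical Snakemake rule.
rule count_lines:
    input:
        "data/sample.txt"           # hypothetical input file
    output:
        "results/sample.count"
    conda:
        "environment.yml"           # per-rule software environment
    shell:
        "wc -l {input} > {output}"
```

Running snakemake --cores 1 --use-conda would then (re)create the declared software environment and build results/sample.count only when its inputs change, which is exactly the documentation and reproducibility benefit described above.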

How can you use package and environment management systems?

Description

By using package and environment management systems like Conda and its bioinformatics-specialized channel Bioconda, you will be able to easily install specific versions of tools, even older ones, in an isolated environment. You can also share and preserve your setup by specifying in an environment file which tools you installed.
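
For instance, a minimal environment file might look like the following sketch; the environment name and pinned versions are illustrative assumptions, not recommendations:

```
# environment.yml: illustrative Conda environment specification
name: my-analysis            # hypothetical environment name
channels:
  - conda-forge
  - bioconda                 # bioinformatics-specialized channel
dependencies:
  - python=3.10              # pin the interpreter version
  - samtools=1.15            # example Bioconda tool, pinned
  - r-base=4.2               # example: a specific R version
```

Anyone with this file can re-create the same isolated environment with conda env create -f environment.yml and enter it with conda activate my-analysis.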

Considerations

Conda works by making a nested folder containing the traditional UNIX directory structure (bin/, lib/), but installed from Conda's repositories instead of from a Linux distribution.

  • As such, Conda enables consistent installation of computational tools independent of your distribution or operating system version. Conda is available for Linux, macOS and Windows, giving a consistent experience across operating systems (although not all software is available for all OSes).
  • Package management systems work particularly well for installing free and Open Source software, but can also be useful for creating an isolated environment in which to install commercial software packages; for instance if one requires an older Python version than you have pre-installed.
  • Conda is one example of a generic package management system, but individual programming languages typically have their own environment management systems and package repositories.
  • You may want to consider submitting a release of your own code, or at least the general bits of it, to the package repositories for your programming language.

Solutions

  • macOS-specific package management systems: Homebrew, Macports.
  • Windows-specific package management systems: Chocolatey and Windows Package Manager winget.
  • Linux distributions also have their own package management systems (rpm/yum/dnf, deb/apt) with a wide variety of tools available, but at the cost of less flexibility in tool versions, since all packages must be able to coexist in a single installation.
  • Language-specific virtual environments and repositories: rvm and RubyGems for Ruby, pip and venv for Python, npm for NodeJS/Javascript, renv and CRAN for R, Apache Maven or Gradle for Java etc.
  • Tips and tricks to navigate the landscape of software package management solutions:
    • If you need multiple tools/programming languages, but your machines have different OS types or versions, list packages in a Conda environment.yml.
    • If you need conflicting versions of some tools/libraries for different operations, make separate Conda environments.
    • If you need a few open source libraries for your Python script, none of which require compilation, make a requirements.txt referencing pip packages (see the sketch after this list).
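
As a sketch of the last tip, a requirements.txt simply pins pip packages; the package names and versions here are hypothetical choices:

```
# requirements.txt: illustrative pinned pip dependencies
requests==2.31.0
pandas==2.0.3
```

A typical use would be python3 -m venv .venv, then source .venv/bin/activate, then pip install -r requirements.txt, giving an isolated per-project environment.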

How can you use container environments?

Description

Container environments like Docker or Singularity allow you to easily install specific versions of tools, even older ones, in an isolated environment.

Considerations

In short, containers work almost like virtual machines (VMs), in that they re-create a whole Linux distribution with separation of processes, files and network.

  • Containers are more lightweight than VMs since they don’t virtualize hardware. This allows a container to run with a fixed version of the distribution independent of the host, and have just the right, minimal dependencies installed.
  • Containers also add a level of isolation which, although not as secure as VMs, can reduce attack vectors. For instance, if a database container was compromised by unwelcome visitors, they would not have access to modify the web server configuration, and the container would not be able to expose additional services to the Internet.
  • A big advantage of containers is that there are large registries of community-provided container images.
  • Note that modifying things inside a container is harder than in a usual machine, as changes from the image are lost when a container is recreated.
  • Typically containers run just one tool or application; for service deployment this is useful, for instance, to run a MySQL database in a separate container from a NodeJS application.
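
To make this concrete, a single command starts a tool in an isolated, pinned distribution; ubuntu:20.04 is just an example image:

```
# Start an interactive shell in an isolated Ubuntu 20.04 container.
# --rm discards the container (and any changes made inside it) on exit.
docker run --rm -it ubuntu:20.04 bash
```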

Solutions

  • Docker is the most well-known container runtime, followed by Singularity. Both require system administrator privileges to set up (and could be used to gain such privileges).
  • uDocker and Podman are user-space alternatives with compatible command-line usage.
  • Large registries of community-provided container images are Docker Hub and Red Hat Quay.io. These images are often ready to go, requiring no additional configuration or installation, allowing your application to quickly have access to open source server solutions.
  • BioContainers has a large selection of bioinformatics tools.
  • To customize a Docker image, you can use techniques such as volumes to store data, and a Dockerfile to build your own image. The latter is useful for installing your own application inside a new container image, based on a suitable base image in which you do your apt install and software setup in a reproducible fashion, and for sharing your own application as an image on Docker Hub (see the Dockerfile sketch after this list).
  • Container linkage can be done by container composition using tools like Docker Compose.
  • More advanced container deployment solutions like Kubernetes and Computational Workflow Management systems can also manage cloud instances and handle analytical usage.
  • Tips and tricks to navigate the landscape of container solutions:
    • If you just need to run a database server, run it as a Docker/Singularity container.
    • If you need several servers running, connected together, set up the containers with Docker Compose.
    • If you need to install many things, some of which are not available as packages, make a new Dockerfile recipe to build a container image.
    • If you need to use multiple tools in a pipeline, find Conda or container images, compose them in a Computational Workflow.
    • If you need to run tools in a cloud instance, but it has nothing preinstalled, use Conda or containers to ensure the installation on the cloud VM matches your local machine.
    • If you just need a particular open source tool installed, e.g. ImageMagick, check its documentation for how to install it: for Ubuntu 20.04, try apt install imagemagick.
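
As a minimal sketch of the Dockerfile approach mentioned above, reusing the ImageMagick example; the base image, package and analysis script are illustrative assumptions:

```
# Dockerfile: build a reproducible image around a hypothetical analysis script.
FROM ubuntu:20.04

# Install the example tool non-interactively; clean apt caches to keep the image small.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends imagemagick && \
    rm -rf /var/lib/apt/lists/*

# Add the (hypothetical) analysis script and make it the default command.
COPY analysis.sh /usr/local/bin/analysis.sh
ENTRYPOINT ["/usr/local/bin/analysis.sh"]
```

Building with docker build -t my-analysis . and running with docker run --rm my-analysis gives a reproducible setup that can also be shared as an image on Docker Hub.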

More information

Related RDMkit pages in "Tool assembly"

Relevant tools and resources

Tool or resource Description Related pages Registry
Ada Ada is a performant and highly configurable system for secured integration, visualization, and collaborative analysis of heterogeneous data sets, primarily targeting clinical and experimental sources. TransMed Assembly
Amazon Web Services Amazon Web Services Data storage Data transfer TeSS
Arvados With Arvados, bioinformaticians run and scale compute-intensive workflows, developers create biomedical applications, and IT administrators manage large compute and storage resources. Data steward infrastructure Data steward policy Researcher
BIAFLOWS BIAFLOWS is an open-source web framework to reproducibly deploy and benchmark bioimage analysis workflows bio.tools
BIII The BioImage Informatics Index is a registry of software tools, image databases for benchmarking, and training materials for bioimage analysis Data steward infrastructure bio.tools
Bioconda Bioconda is a bioinformatics channel for the Conda package manager Data steward infrastructure bio.tools TeSS
BoostDM BoostDM is a method to score all possible point mutations (single base substitutions) in cancer genes for their potential to be involved in tumorigenesis. Human data bio.tools
CalibraCurve A highly useful and flexible tool for calibration of targeted MS-based measurements. CalibraCurve enables an automated batch-mode determination of dynamic linear ranges and quantification limits for both targeted proteomics and similar assays. The software uses a variety of measures to assess the accuracy of the calibration and provides intuitive visualizations. bio.tools
Cancer Genome Interpreter Cancer Genome Interpreter (CGI) is designed to support the identification of tumor alterations that drive the disease and detect those that may be therapeutically actionable. Human data bio.tools
Chipster Chipster is a user-friendly analysis software for high-throughput data such as RNA-seq and single cell RNA-seq. It contains analysis tools and a large reference genome collection. CSC - Finland Researcher Data steward infrastructure bio.tools TeSS
Common Workflow Language (CWL) An open standard for describing workflows that are built from command line tools Data steward infrastructure Data steward policy Researcher FAIRsharing TeSS
Conda Open source package management system Data steward infrastructure TeSS
DisGeNET A discovery platform containing collections of genes and variants associated to human diseases. Human data Researcher bio.tools
Docker Docker is a software for the execution of applications in virtualized environments called containers. It is linked to DockerHub, a library for sharing container images Data steward infrastructure FAIRsharing TeSS
Galaxy Open, web-based platform for data intensive biomedical research. Whether on the free public server or your own instance, you can perform, reproduce, and share complete analyses. NeLS assembly Marine Metagenomics - Norway Researcher Data steward infrastructure IFB - France bio.tools TeSS
GENEID Geneid is an ab initio gene finding program used to predict genes along DNA sequences in a large set of organisms. Researcher bio.tools
GRAPE 2.0 The GRAPE pipeline provides an extensive pipeline for RNA-Seq analyses. It allows the creation of an automated and integrated workflow to manage, analyse and visualize RNA-Seq data. bio.tools
HumanMine HumanMine integrates many types of human data and provides a powerful query engine, export for results, analysis for lists of data and FAIR access via web services. Data organisation Data steward research Researcher Human data bio.tools TeSS
iCloud Data sharing Data storage Data transfer
IntoGen IntoGen collects and analyses somatic mutations in thousands of tumor genomes to identify cancer driver genes. Human data bio.tools
Jupyter Jupyter notebooks allow sharing code and documentation Data steward infrastructure TeSS
LUMI EuroHPC world-class supercomputer Researcher Data steward infrastructure CSC - Finland bio.tools
Meta-pipe META-pipe is a pipeline for annotation and analysis of marine metagenomics samples, which provides insight into phylogenetic diversity, metabolic and functional potential of environmental communities. Marine Metagenomics - Norway bio.tools TeSS
Nextflow Nextflow is a framework for data analysis workflow execution Data steward infrastructure bio.tools TeSS
OHDSI Multi-stakeholder, interdisciplinary collaborative to bring out the value of health data through large-scale analytics. All our solutions are open-source. Researcher Data steward research Data storage TransMed Assembly bio.tools
OpenEBench ELIXIR benchmarking platform to support community-led scientific benchmarking efforts and the technical monitoring of bioinformatics resources Data steward research Data steward infrastructure bio.tools
OpenStack OpenStack is an open source cloud computing infrastructure software project and is one of the three most active open source projects in the world Data storage TransMed Assembly IFB - France TeSS
OTP One Touch Pipeline (OTP) is a data management platform for running bioinformatics pipelines in a high-throughput setting, and for organising the resulting data and metadata. Human data Documentation and metadata Data management plan bio.tools
OwnCloud Cloud storage and file sharing service Data storage Data steward infrastructure Data transfer
PAA PAA is an R/Bioconductor tool for protein microarray data analysis aimed at biomarker discovery. Researcher Human data bio.tools
PIA - Protein Inference Algorithms PIA is a toolbox for mass spectrometry-based protein inference and identification analysis. Researcher bio.tools
PMut Platform for the study of the impact of pathological mutations in protein structures. Human data bio.tools
R Markdown R Markdown documents are fully reproducible. Use a productive notebook interface to weave together narrative text and code to produce elegantly formatted output. Use multiple languages including R, Python, and SQL. Researcher TeSS
Reva Reva connects cloud storages and application providers Data transfer bio.tools
Rstudio RStudio notebooks allow sharing code and documentation Data steward infrastructure Researcher bio.tools TeSS
Rucio Rucio - Scientific Data Management Data storage Data transfer
ScienceMesh ScienceMesh - frictionless scientific collaboration and access to research services Data storage Data transfer
semares All-in-one platform for life science data management, semantic data integration, data analysis and visualization Researcher Data steward research Documentation and metadata Data steward infrastructure Data storage
Singularity Singularity is a container platform. Data steward infrastructure TSD for sensitive data - Norway TeSS
Snakemake Snakemake is a framework for data analysis workflow execution Data steward infrastructure bio.tools TeSS
tranSMART Knowledge management and high-content analysis platform enabling analysis of integrated data for the purposes of hypothesis generation, hypothesis validation, and cohort discovery in translational research. Researcher Data steward research Data storage TransMed Assembly bio.tools
XNAT Open source imaging informatics platform. It facilitates common management, productivity, and quality assurance tasks for imaging and associated data. Researcher TransMed Assembly XNAT-PIC
XNAT-PIC Pipelines Analysis of single or multiple subjects within the same project in XNAT Researcher Data steward research XNAT-PIC