Note: These tutorials are incomplete. More complete versions are being made available for our members.


These tutorials present over half a century of developments in the computing world to biologists and bioinformaticians. As biology rapidly turns into a computationally intensive field of research, biologists are becoming interested in learning about various computational approaches to process their data efficiently. However, the computing world itself is so vast and rapidly advancing that even practitioners cannot keep abreast of the newest changes. Biologists unfamiliar with the core concepts of computing get lost among a range of possibilities, such as the Burrows-Wheeler transform, hybrid computing, hashing, Hadoop, multi-core processors, Bloom filters, cloud computing, 512 GB RAM, shared memory, clusters, GPUs, etc.
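To give a concrete flavor of one item on this list, here is a minimal Bloom filter sketch (not from the tutorials themselves; the class name, sizes, and use of SHA-256 as a stand-in hash family are illustrative assumptions). A Bloom filter answers "have I seen this item?" using a small bit array, at the cost of occasional false positives but never false negatives:

```python
import hashlib

class BloomFilter:
    """Illustrative Bloom filter: a bit array plus k hash functions.
    May report false positives, but never false negatives."""

    def __init__(self, size=1024, k=3):
        self.size = size
        self.k = k
        self.bits = [False] * size  # all bits start cleared

    def _positions(self, item):
        # Derive k bit positions by salting a single hash function;
        # real implementations use faster non-cryptographic hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # Item is "possibly present" only if all k bits are set.
        return all(self.bits[p] for p in self._positions(item))
```

In sequence analysis, such a structure lets a program test whether a k-mer has been seen before without storing every k-mer explicitly, which is why Bloom filters appear in memory-frugal assemblers and read-filtering tools.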

In these tutorials, we explain those key concepts to help biologists and bioinformaticians navigate the field. The discussion starts with a presentation of the commonly used von Neumann computing architecture and the evolution of programming languages. In Section 2, we cover various algorithmic constructs, such as hashing, the Burrows-Wheeler transform, Bloom filters, etc. Section 3 discusses larger software packages, such as SQL databases and Hadoop, which work at the interface of software and hardware. Section 4 explains how hardware upgrades (SSDs, cache memory, multi-core processors, GPUs, FPGAs, etc.) fit into the big picture.
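As a taste of the algorithmic constructs covered in Section 2, here is a minimal sketch of the Burrows-Wheeler transform (the naive sorted-rotations formulation; production tools such as read aligners compute it via suffix arrays instead). The `$` end-of-string marker is assumed not to occur in the input:

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations.
    Appends '$' as a unique end-of-string sentinel, sorts all
    rotations of the string, and returns the last column."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

# The transform groups identical characters together, e.g.:
# bwt("banana") -> "annb$aa"
```

This clustering of like characters is what makes the transformed string highly compressible and, combined with auxiliary index structures, enables the fast substring search used by popular short-read aligners.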

Instead of covering every hardware- and software-related concept, these tutorials focus on providing introductory descriptions of a few broad categories. We expect that the general understanding developed from our presentation will help readers figure out what to look for in their in-depth exploration of related topics. As another feature, our tutorials attempt to provide hardware-based perspectives even when software algorithms are discussed. We believe a proper understanding of computing architecture will help bioinformaticians design their programs better.
