# Proteomics/Print version

Proteomics

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Proteomics

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

# Introduction to Proteomics


## What is proteomics?

Information transfer in the central dogma of biology

The focus of proteomics is the proteome: the set of proteins expressed in a specific cell under a particular set of conditions. The proteome is dynamic, changing as those conditions change. Within a given human proteome, the number of proteins can be as large as 2 million. [1]

Proteins themselves are macromolecules: long chains of amino acids. This amino acid chain is constructed when the cellular machinery of the ribosome translates RNA transcripts from DNA in the cell's nucleus. [2] The transfer of information within cells commonly follows this path, from DNA to RNA to protein.

Proteins can be organized in four structural levels:

• Primary (1°): The amino acid sequence, containing members of a (usually) twenty-unit alphabet
• Secondary (2°): Local folding of the amino acid sequence into α helices and β sheets
• Tertiary (3°): 3D conformation of the entire amino acid sequence
• Quaternary (4°): Interaction between multiple small peptides or protein subunits to create a large unit

Each level of protein structure is essential to the finished molecule's function. The primary sequence of the amino acid chain determines where secondary structures will form, as well as the overall shape of the final 3D conformation. The 3D conformation of each small peptide or subunit determines the final structure and function of a protein conglomerate. [3]

There are many different subdivisions of proteomics, including expression proteomics, structural proteomics, and functional proteomics. [4]

Proteomics has both a physical laboratory component and a computational component. These two parts are often linked together; at times data derived from laboratory work can be fed directly into sequence and structure prediction algorithms. Mass spectrometry of multiple types is used most frequently for this purpose. [5]

## The importance of proteomics

Proteomics is a relatively recent field; the term was coined in 1994, and the science itself had its origins in the electrophoretic separation techniques of the 1970s and 1980s. [6] The study of proteins, however, has been a scientific focus for much longer. Studying proteins generates insight into how proteins affect cell processes; conversely, this study also investigates how proteins themselves are affected by cell processes or the external environment.

Proteins provide intricate control of cellular machinery, and are in many cases components of that same machinery. [7] They serve a variety of functions within the cell, and there are thousands of distinct proteins and peptides in almost every organism. This great variety comes from a phenomenon known as alternative splicing, in which a particular gene in a cell's DNA can create multiple protein types, based on the demands of the cell at a given time.

The goal of proteomics is to analyze the varying proteomes of an organism at different times, in order to highlight differences between them. Put more simply, proteomics analyzes the structure and function of biological systems. [8] For example, the protein content of a cancerous cell is often different from that of a healthy cell. Certain proteins in the cancerous cell may not be present in the healthy cell, making these unique proteins good targets for anti-cancer drugs. The realization of this goal is difficult; both purification and identification of proteins in any organism can be hindered by a multitude of biological and environmental factors. [9]

## Proteomics Workflows

The first step of proteomics is sample preparation, in which protein is extracted from cells. In the second step, methods such as 2D electrophoresis are used to separate the different proteins. The proteins are then cleaved into peptides, since peptides are easier to detect. In the fourth step, mass spectrometry is used to detect the peptides and peptide fragments. Finally, the sequence of the protein can be determined by interpreting all of the data obtained.
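The digestion step in this workflow can be sketched in silico. Below is a minimal Python sketch of a tryptic digest (trypsin cleaves after lysine, K, or arginine, R, except when the next residue is proline); the example sequence is hypothetical and chosen only for illustration.

```python
import re

def tryptic_digest(sequence, min_length=1):
    """Split a protein sequence into tryptic peptides.

    Trypsin cleaves C-terminal to lysine (K) or arginine (R),
    except when the next residue is proline (P).
    """
    # Zero-width split: after K or R (lookbehind), not before P (lookahead)
    peptides = re.split(r"(?<=[KR])(?!P)", sequence)
    return [p for p in peptides if len(p) >= min_length]

# Hypothetical example sequence, for illustration only
protein = "MKWVTFISLLLLFSSAYSRGVFRR"
print(tryptic_digest(protein))
```

Real digestion also produces missed cleavages and semi-tryptic peptides, so search engines typically generate these variants as well.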

Because proteomics is growing at a very rapid pace, the field is shifting away from specialized, narrowly focused studies and toward a more global perspective. Broad-based proteomics takes on this general perspective by setting out to understand the proteome as a whole. A critical aspect of this strategy is planning ahead, so that the most appropriate methods and technologies can be implemented in the most efficient manner. By developing a strategy tailored to understanding a particular proteome, problems and setbacks can be avoided during the study.

The first step when using broad-based proteomics is to develop a hypothesis specific to the proteome being studied. It is best to choose organisms for which a great deal of genomic information is already available, since the genome is always a useful supplement to proteomic information. Once a hypothesis and organism are established, the proper technologies should be chosen; these technologies should be compatible with whatever biological factors are present (i.e., sample type). Some important and relevant proteomic methods include HPLC, mass spectrometry, SDS-PAGE, two-dimensional gel electrophoresis, and perhaps in silico protein modeling.

Since there are multitudes of sample type, sample preparation, and analytical technology combinations possible, it is obvious why careful planning from a broad-based proteomic perspective is critical. By planning upfront, an efficient proteomic study can be conducted. And when the efforts of many broad-based proteomic studies are taken together, understanding the proteome in its entirety becomes a realistic possibility.

## References

1. ^ American Medical Association. "Proteomics." http://www.ama-assn.org/ama/pub/category/3668.html
2. ^ Hartl, Daniel L., Jones, Elizabeth W. "Genetics: Analysis of Genes and Genomes". Jones and Bartlett Publishers: Boston, 2005.
3. ^ Weaver, Robert F. "Molecular Biology, 2nd Edition". McGraw Hill: Boston, 2002.
4. ^ Twyman, Richard. "Proteomics." http://genome.wellcome.ac.uk/doc_wtd020767.html
5. ^ Colinge, Jacques and Keiryn L. Bennett. "Introduction to Computational Proteomics". PLoS Comput Biol. 2007 July; 3(7): e114.
6. ^ "History of Proteomics." Australian Proteome Analysis Facility. http://www.proteome.org.au/History-of-Proteomics/default.aspx
7. ^ Graves, P. R., T. A. J. Haystead. "Molecular Biologist's Guide to Proteomics". Microbiology and Molecular Biology Reviews: Vol.66 No.1, 2002.
8. ^ "Proteomics Overview." http://www.proteomicworld.org/
9. ^ van Wijk, K. J. "Challenges and Prospects of Plant Proteomics". Plant Physiol. 2001 June; 126(2): 501-508.

Chapter Written by J. Reuter (Zel2008) and S. Lafergola (DieselSandwich)

## Articles Summarized

### Advances in Proteomic Workflows for Systems Biology

Main Focus

The article summarizes recent improvements, as well as some principal limitations, of shotgun tandem mass spectrometry-based proteomics. It also briefly introduces the steps of targeted quantitative proteomics.

Summary

In recent years, great improvements have been made in all parts of non-targeted mass spectrometry-based proteomics, including sample preparation, data acquisition, and data processing and analysis. In sample preparation, the introduction of the IEF separation method has greatly improved the resolution obtained from classical two-dimensional chromatographic peptide separation. Data quality has been increased by the development of highly reproducible capillary chromatography methods and by quantitative analysis using stable isotope labeling. In data acquisition, high mass resolution and accuracy can now be achieved by several types of mass spectrometer, such as TOF-TOF and Q-TOF. Furthermore, different types of mass analyzers and ion sources have been combined to increase proteome coverage. With the development of database search tools, the quality of proteomics data can be more accurately assessed and estimated during data processing and analysis.

Despite these improvements, limitations remain in shotgun approaches. For example, shotgun MS datasets are extremely redundant, which greatly complicates the identification of the peptides present in proteomic samples. The existence of semi-tryptic or non-tryptic peptides makes samples more complex. A saturation effect greatly reduces the discovery rate of new proteins. Finally, many peptides detected by mass spectrometry cannot be identified, making it difficult to compare sample to sample.

The limitations of shotgun approaches have made the development of targeted quantitative proteomics necessary. Its first step is protein and peptide selection, which can be accomplished both experimentally and computationally. In the subsequent data acquisition and analysis step, multiple reaction monitoring (MRM) is applied to the proteomics data.

New Terms

Electrospray ionization
A technique used in mass spectrometry to produce ions. It is especially useful for producing ions from macromolecules because it overcomes the propensity of these molecules to fragment when ionized. (http://en.wikipedia.org/wiki/Electrospray_ionization)
Matrix-assisted laser desorption/ionization (MALDI)
A soft ionization technique used in mass spectrometry, allowing the analysis of biomolecules (biopolymers such as proteins, peptides and sugars) and large organic molecules (such as polymers, dendrimers and other macromolecules), which tend to be fragile and fragment when ionized by more conventional ionization methods. (http://en.wikipedia.org/wiki/MALDI-TOF)
PeptideAtlas
A multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. (http://www.peptideatlas.org/)
Multiple reaction monitoring
MRM experiments, using a triple quadrupole instrument, are designed to obtain maximum sensitivity for the detection of target compounds. This type of mass spectrometric experiment is widely used for detecting and quantifying drugs and drug metabolites in the pharmaceutical industry. (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2291721)
FT-ICR mass spectrometry
Fourier transform ion cyclotron resonance mass spectrometry, also known as Fourier transform mass spectrometry, is a type of mass analyzer (or mass spectrometer) that determines the mass-to-charge ratio (m/z) of ions based on their cyclotron frequency in a fixed magnetic field. (http://en.wikipedia.org/wiki/Fourier_transform_ion_cyclotron_resonance)

Course Relevance

This source covers non-targeted mass spectrometry and targeted approaches, which are important methods in the identification of proteins (an important step in proteomics).

### Broad-Based Proteomic Strategies: A Practical Guide to Proteomics and Functional Screening

Main Focus

This article summarizes what broad-based proteomics is and how one can design a study using this global-view strategy. It first briefly looks at the current technology in proteomics and then discusses how these technologies can be incorporated into a study.

Summary

Proteomics as a field is becoming a very daunting one to enter because many studies are getting lost in the complicated focused details. To help assist with this challenge, a researcher can employ broad-based proteomics. Broad-based proteomics is a strategy where careful planning is employed upfront to answer a question about a proteome (for instance, comparisons between a tissue in a diseased state and a normal state) using the most appropriate and applicable technologies available. By developing a strategy at the beginning of a proteomics study, possible setbacks during the study are avoided.

The first step is to develop a general hypothesis that is specific to the problem or issue being studied. Since proteomics mirrors genomics, a proteomic study is increasingly difficult when the genome of the model organism isn't known; for this reason, organisms where the majority of the genome is known (80% or greater) should be chosen. Once a proper organism has been chosen for study, the next factors to consider are the type of data that will be generated and the sample source. Some proteomic methods yield qualitative data, while others yield quantitative data, so the type of data needed should be determined before a method is chosen. At the same time, the source of the sample is important in determining the extraction and purification methods. Typical sample types include urine, blood (plasma/serum) and mucosal secretions. Protein concentration within the sample is important, and one should expect reasonable extraction if the protein can be visualized on a Coomassie blue-stained gel (> 300 ng). The separation technique chosen should reflect the characteristics of the protein(s) of interest (hydrophobic vs. hydrophilic, molecular mass, etc.).

Another major factor in the planning process is estimating the difficulty in the preparation of the fractioned sample for mass spectrometry identification. Each mass spectrometry technique requires varying degrees of preparation, and some are much more complicated than others (2DE with MS/MS analysis requires greater preparation than HPLC with MS, for instance). Since mass spectrometry is often the step where a lot of proteomic studies encounter difficulty (both in preparation and in interpretation of the results), it is very important to choose a method that is appropriate for the protein sample.

With the advent of proteomic databases in recent years, bioinformatics has had an increasing presence in proteomic studies. For this reason, almost all proteomic studies should incorporate bioinformatics; and consequently it's important for the research team to have some bioinformatics knowledge. And depending on how much data will be received at the end of the study (depending on the analysis methods chosen), the research team can determine how much bioinformatic analysis should be needed.

A final factor to consider is whether to bring in outside assistance or to attempt the study in a more self-contained way. Keeping it self-contained allows for the research team to keep its data integrated and also keeps miscommunication to a minimum. Bringing in outside help, on the other hand, could allow a researcher to tackle problems that would be large and normally not solvable with a smaller team. While bringing in outside assistance seems promising, it's important to not lose control over the data and to make sure that the team is not spread out trying to accomplish more than it can handle.

Since there are many ways to study a cell's proteome, careful planning should be implemented at all stages of a proteomics study. Through broad-based proteomics, a researcher can define a test plan before any actual study is performed. And when used appropriately, this strategy can lead to productive and efficient projects that will bring science one step closer to understanding the proteome as a whole.

New Terms

Isoforms
A set of different forms of the same protein, arising for example from alternative splicing or from single nucleotide polymorphisms in the genomic sequence. ( http://en.wikipedia.org/wiki/Isoform )
Single-nucleotide polymorphism (SNP)
a DNA sequence variation occurring when a single nucleotide in the genome differs between members of a species. ( http://en.wikipedia.org/wiki/Single_nucleotide_polymorphism )
Post-translational modification (PTM)
the chemical modification of a protein after it has been translated. It is usually one of the last steps in protein biosynthesis for most proteins. ( http://en.wikipedia.org/wiki/Posttranslational_modification )
Subproteome
a subfractionated subset of the proteome. Often these are defined by an area of the cell (an organelle, for instance) or by chemical properties.
Peptide mass fingerprinting (PMF)
an analytical technique for protein identification. The unknown protein of interest is first cleaved into smaller peptides; the masses of these peptides are then determined by mass spectrometry and compared to a database of known protein sequences or to a genome. ( http://en.wikipedia.org/wiki/Peptide_mass_fingerprinting )
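The matching step of peptide mass fingerprinting can be sketched in a few lines: compute each candidate peptide's monoisotopic mass and keep candidates within a mass tolerance of each observed value. The residue-mass table below covers only a subset of amino acids, and the "database" and observed masses are hypothetical, chosen only for illustration.

```python
# Monoisotopic residue masses (Da) for a subset of amino acids;
# a full implementation would cover all twenty.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
    "L": 113.08406, "K": 128.09496, "E": 129.04259, "R": 156.10111,
}
WATER = 18.01056  # mass of one H2O, added to the residue sum

def peptide_mass(sequence):
    """Monoisotopic mass of a peptide: sum of residues plus one water."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

def match_peptides(observed_masses, database, tolerance=0.01):
    """Match observed peptide masses against candidate sequences."""
    hits = {}
    for mass in observed_masses:
        hits[mass] = [seq for seq in database
                      if abs(peptide_mass(seq) - mass) <= tolerance]
    return hits

# Hypothetical database and observed masses, for illustration only
database = ["GAS", "VLK", "EKR"]
print(match_peptides([233.10, 358.26], database))
```

Real PMF tools score the whole set of matched masses per protein rather than matching peptides one at a time.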

Course Relevance

## Websites Summarized

### The Association of Biomolecular Resource Facilities: Proteomics Research Group (PRG)

Website committee: Pamela Scott Adams, Michelle Detwiler, David Mohr James Ee, Dr. Xiaolog Yang, Dr. Len Packman, Dr. Anthony Yeung, http://www.abrf.org/index.cfm/group.show/Proteomics.34.htm#R_3 (3/25/09)

Main Focus

This web page is about how the Association of Biomolecular Resource Facilities relates to proteomics. Of particular importance is the Proteomics Research Group within the ABRF.

Summary

The Association of Biomolecular Resource Facilities (ABRF) is an international association of research facilities and laboratories that is focused on core research in biotechnology. The association encourages the sharing of information through conferences, a quarterly journal, and group studies. The ABRF has a heavy influence on the field of proteomics, and there are five main research groups (RG) that deal with proteomics in some way: Protein Expression (PERG), Protein Sequencing (PSRG), Protein Informatics (iPRG), Proteomics (PRG), and Proteomics Standards (sPRG).

Of particular importance, the Proteomics Research Group allows for researchers throughout the world in the field of proteomics to share their protein analysis information freely. Obviously, since understanding the proteome is about bringing together information on many different proteins (which is information that requires a great amount of effort/time/money to achieve), the sharing of protein/subproteomic information is imperative to beginning to understand a proteome in its entirety. This website has numerous links to studies performed by research groups throughout the world.

New Terms

De Novo Peptide Sequencing
Peptide sequencing that is performed without any prior knowledge of the amino acid sequence. (http://www.ionsource.com/tutorial/DeNovo/DeNovoTOC.htm)
Quantitative Proteomics
Has the goal of obtaining quantitative information about all the proteins in a particular sample. This is useful because it allows for one to see the differences in protein samples. (http://en.wikipedia.org/wiki/Quantitative_proteomics)

Course Relevance

This is an overview of the Association of Biomolecular Resource Facilities (ABRF) and how it relates to proteomics. There is a great deal of relevant information on this website that those in proteomics will find useful.

### Introduction to Proteomics

Writer/Producer: Rick Groleau, Subject Matter Expert: Hanno Steen, PhD, Designer: Peggy Recinos, Developer: Jeffrey Testa, http://www.childrenshospital.org/cfapps/research/data_admin/Site602/mainpageS602P0.html (28 March 2009)

Main Focus

This web page is about the importance and challenges in proteomics. It also introduces major steps of proteomics briefly.

Summary

Proteomics is important for understanding biological processes, since almost all functions in the cell are carried out by proteins. But because the number of proteins is so large and amino acids (the units of proteins) are so small, the study is quite challenging. There are five steps in analyzing protein sequences: sample preparation, separation, ionization, mass spectrometry, and informatics. First, cells are obtained and proteins are extracted from them. Then, methods such as 2D electrophoresis are used to separate the proteins. Next, a protease is used to cut the proteins into peptides. Mass spectrometry allows individual peptides as well as peptide fragments to be identified. Finally, by interpreting the data, the sequence of the proteins can be determined.

New Terms

Biopsy
A biopsy is a medical test involving the removal of cells or tissues for examination. It is the removal of tissue from a living subject to determine the presence or extent of a disease. (http://en.wikipedia.org/wiki/Biopsy)
TOF
Time of flight (TOF) describes the method used to measure the time it takes for a particle, object or stream to reach a detector while traveling over a known distance. (http://en.wikipedia.org/wiki/Time-of-flight)
Quadrupole mass analyzer
The quadrupole mass analyzer is one type of mass analyzer used in mass spectrometry. It consists of 4 circular rods, set perfectly parallel to each other. In a quadrupole mass spectrometer the quadrupole mass analyzer is the component of the instrument responsible for filtering sample ions based on their mass-to-charge ratio (m/z). Ions are separated in a quadrupole based on the stability of their trajectories in the oscillating electric fields applied to the rods. (http://en.wikipedia.org/wiki/Quadrupole_mass_analyzer)
Electrospray ionization
Electrospray ionization (ESI) is a technique used in mass spectrometry to produce ions. It is especially useful for producing ions from macromolecules because it overcomes the propensity of these molecules to fragment when ionized. (http://en.wikipedia.org/wiki/Electrospray_ionization)
Dalton
The Dalton is the unit of measurement for atomic mass. One Dalton is equal to 1/12th the mass of one atom of carbon-12. (http://www.childrenshospital.org/cfapps/research/data_admin/Site602/mainpageS602P1.html)

Course Relevance

This is an overview of proteomics. It summarizes the procedures and importance of proteomics very briefly.

### Introduction to Proteomics

Institute of Biology and Medical Genetics of the First Faculty of Medicine of Charles University and the General Teaching Hospital, http://biol.lf1.cuni.cz/ucebnice/en/proteomics.htm (6 April 2009)

Main Focus

This website discusses the aims and definitions of proteomics. It also introduces two important methods in proteomics studies - 2D protein electrophoresis and mass spectrometry - as well as proteomics in medicine.

Summary

Proteomics is a broad field, encompassing expression proteomics, protein distribution in subcellular compartments and organelles, post-translational modifications of proteins, structural and functional proteomics, clinical proteomics, and so on. Even though analysis of expression at the transcript level is possible with the introduction of RNA/cDNA microarrays, proteomics remains important, since not all mRNA is translated and processes such as RNA splicing and post-translational protein modification exist.

Two-dimensional (2D) protein electrophoresis is commonly used to separate proteins based on their pI and mass. Mass spectrometry is an important method in proteomics, since it can be used not only for protein identification but also for the analysis of post-translational modifications.
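The first dimension of a 2D gel separates by pI, which can be estimated from a sequence by finding the pH at which the modeled net charge crosses zero. The sketch below uses textbook approximate pKa values and the Henderson-Hasselbalch equation; the peptide is hypothetical, and real pI predictors use environment-corrected pKa sets.

```python
# Approximate pKa values for ionizable groups (textbook values;
# real predictors use refined, environment-corrected sets).
POSITIVE_PKA = {"Nterm": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}
NEGATIVE_PKA = {"Cterm": 2.0, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}

def net_charge(sequence, ph):
    """Net charge of a peptide at a given pH (Henderson-Hasselbalch)."""
    groups = ["Nterm", "Cterm"] + list(sequence)
    charge = 0.0
    for g in groups:
        if g in POSITIVE_PKA:
            charge += 1.0 / (1.0 + 10 ** (ph - POSITIVE_PKA[g]))
        elif g in NEGATIVE_PKA:
            charge -= 1.0 / (1.0 + 10 ** (NEGATIVE_PKA[g] - ph))
    return charge

def isoelectric_point(sequence, lo=0.0, hi=14.0, steps=50):
    """Bisect for the pH at which the net charge crosses zero (the pI)."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if net_charge(sequence, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(isoelectric_point("ACDKR"), 2))  # hypothetical peptide
```

Bisection works here because the modeled net charge decreases monotonically with pH.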

One of the major applications of proteomics in medicine is the identification of markers at every step of treating disease. Other applications include drug discovery and pharmacoproteomics.

New Terms

Human Proteome Organisation (HUPO)
The Human Proteome Organisation (HUPO) is an international scientific organization representing and promoting proteomics through international cooperation and collaboration, by fostering the development of new technologies, techniques and training. (http://www.hupo.org/)
Structural proteomics
Structural proteomics is an international collaborative effort to solve 3D protein structures at a proteome scale. (http://en.wikipedia.org/wiki/Structural_proteomics)
Swedish Human Protein Atlas
The Swedish Human Protein Atlas program (HPA), funded by the (non-profit) Knut and Alice Wallenberg Foundation, invites submission of antibodies from both academic and commercial sources to be included in the human protein atlas. (http://www.proteinatlas.org)
Posttranslational modification (PTM)
Posttranslational modification (PTM) is the chemical modification of a protein after its translation. It is one of the later steps in protein biosynthesis for many proteins. (http://en.wikipedia.org/wiki/Posttranslational_modification)
Isoelectric point
The isoelectric point is the pH value at which the overall charge of a protein equals zero. (http://biol.lf1.cuni.cz/ucebnice/en/proteomics.htm)

Course Relevance

This website gives a brief definition and the aims of proteomics. It also introduces the principles of 2D electrophoresis and mass spectrometry, which are important methods in proteomics.

Contact: jxr0084@rit.edu, sfl9376@rit.edu

# Protein Sample Preparation


## Introduction

As technological advances are made in the field of proteomics, it has become clear that advances are also necessary in the preparation of protein samples prior to any particular procedure. A number of issues arise in this respect, including sample clean-up, fractionation, enrichment, and sample-condition optimization. Considerations of this nature can be crucial to obtaining relevant results from an experiment - so much so that some experts feel the field of proteomics is currently limited by the lack of significant advancement in sample preparation techniques.

This facet of proteomics is becoming particularly critical in high-throughput protocols, where the necessary conditions of a sample in one stage may directly conflict with the efficacy of a later stage. For example, during the initial step in 2D electrophoresis, isoelectric focusing, each protein in a sample migrates to the pH at which its net charge is zero; the second step, gel electrophoresis, requires a negative charge on all products in the sample in order to induce movement through the gel matrix.

Many companies offer pre-packaged kits for preparing samples for many different techniques, along with protein samples and other protein technologies. Many of these companies are also at the forefront of protein analysis technology; examples include Bio-Rad, Sigma-Aldrich, and GE Healthcare.

This section is part of an ongoing project at the Rochester Institute of Technology, involving the Bioinformatics department. Currently the project is being worked on by a Proteomics Class taught by Dr. Paul Craig.

# Plant Proteomics about Two Dimensional Gel Electrophoresis


This page contains protocols that are frequently used in proteomics. You are welcome to add protocols to this chapter.

1. Plant Proteomics about Two Dimensional Gel Electrophoresis

# Protein Separations - Chromatography


Chapter written by: Laura Grell and Alexander Butarbutar
Contact llg3875@rit.edu or nbb3924@rit.edu for contributions
Chapter modified by Kai Burnett and Dalia Ghoneim
Contact kab9783@rit.edu or dxg6098@rit.edu

## Introduction

HPLC

(Res1) To obtain a pure protein sample, a protein must be isolated from all other proteins and cellular components. This can prove to be a difficult task, as a single protein often makes up only 1% of the total protein content of a cell; the other 99% must be removed before the sample can be classified as pure. An equally challenging task is keeping the protein in its active form. When we purify proteins we remove them from their natural environments, so it is necessary to simulate the pH, salt concentration and reducing conditions in which they are normally found. In the process of obtaining an active and pure sample, we want to minimize the number of steps taken in order to maximize the yield at the end of the separation. Finally, since many proteins naturally function for only a short period of time, it is also critical to obtain the sample as quickly as possible. All of these goals can be successfully achieved by a group of separation methods collectively known as chromatography.

There are several properties of proteins that can be taken advantage of to separate proteins. Different types of chromatography take advantage of different properties. Proteins can be separated by:

• size
• shape
• hydrophobicity
• affinity to molecules
• charge

A Typical Column

In this chapter several different chromatographic methods will be introduced and described. While the methods outlined below all use different characteristics of proteins to separate them from one another, they all utilize an insoluble stationary phase and a mobile phase that passes over it. The mobile phase is commonly a liquid solution and contains the protein we want to isolate. The stationary phase, on the other hand, is made up of a grouping of beads, usually based on a carbohydrate or acrylamide derivative, that are bound to ionically charged species, hydrophobic groups, or affinity ligands. Much of the success of chromatography depends on the selection of an appropriate stationary phase.

In column chromatography, when a protein sample is applied to the column, it equilibrates between the stationary phase and the mobile phase. Depending on the type of chromatography, proteins with certain characteristics will bind to the stationary phase while those lacking the sought characteristics will remain in the mobile phase and pass through the column. For example, in ion exchange chromatography, a positively charged protein will bind to a negatively charged stationary phase, while negatively charged proteins will be eluted from the column with the mobile phase. The final step involves displacing the protein from the stationary phase, also known as elution, by introducing a species that competes with the protein for the binding site on the stationary phase. Today various commercial columns are readily available; Bio-Rad, Sigma-Aldrich, GE Healthcare and Omnifit all offer a wide variety of chromatography columns.
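The ion-exchange selection rule described above can be sketched as a toy model: a protein binds when its net charge is opposite to that of the stationary phase. The protein names and charges below are hypothetical, chosen only for illustration.

```python
def binds_to_column(protein_charge, stationary_phase_charge):
    """In ion-exchange chromatography, a protein binds when its net
    charge is opposite in sign to that of the stationary phase."""
    return protein_charge * stationary_phase_charge < 0

# Hypothetical sample applied to a negatively charged (cation-exchange) resin
sample = {"protein A": +3, "protein B": -2, "protein C": +1}
bound = [name for name, q in sample.items() if binds_to_column(q, -1)]
flow_through = [name for name, q in sample.items() if not binds_to_column(q, -1)]
print("bound:", bound)                # the positively charged proteins
print("flow-through:", flow_through)  # eluted with the mobile phase
```

In practice binding strength varies continuously with charge density, which is why bound proteins can be eluted selectively with a salt or pH gradient.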

The image above is a chromatogram that shows the results of a separation based on signals interpreted by a detector.

tm - the time required for the mobile phase to travel the entire length of the column

tr - the time required for a specific protein to elute from the column
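From these two times the retention (capacity) factor k = (tr - tm) / tm can be computed, along with the selectivity between two analytes. A short sketch with hypothetical times:

```python
def retention_factor(t_r, t_m):
    """Retention factor k = (tr - tm) / tm: how long an analyte is
    retained by the stationary phase relative to an unretained solute."""
    return (t_r - t_m) / t_m

def selectivity(k1, k2):
    """Selectivity factor alpha = k2 / k1 for two analytes (k2 >= k1)."""
    return k2 / k1

# Hypothetical times (minutes) read off a chromatogram
t_m = 1.5                  # unretained mobile-phase marker
t_r_a, t_r_b = 4.5, 7.5    # two retained proteins

k_a = retention_factor(t_r_a, t_m)       # 2.0
k_b = retention_factor(t_r_b, t_m)       # 4.0
print(k_a, k_b, selectivity(k_a, k_b))   # 2.0 4.0 2.0
```

A selectivity factor above 1 indicates that the two proteins can in principle be resolved on this column.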

## Resources

1. Craig, P. Designing a Separation
2. Florida State University, "Chromatography" Michael Blaber's Biochemistry Lab
3. GE Healthcare - [10] - [11]
4. BioPharm International Guide Basics of Chromatography (2003).
5. Bio-Rad Chromatography Protein Purification || Green Fluorescent Protein Chromatography Kit
6. BioForum Topics In Chromatography
7. Journal of Chromatographic Science
8. M.Isabel Pedraza Mayer Chromatography Database

## References

1. Harris, D.C. "Quantitative Chemical Analysis; 6th Edition", W.H. Freeman and Company: New York.
2. Patrick McKay An Introduction to Chromatography Senior Research Associate, Department of Recovery Sciences, Genentech, Inc *.

* Denotes free article

# Protein Separations- Electrophoresis/Introduction to Electrophoresis


## Definitions

e•lec•tro•pho•re•sis (ĭ-lĕk'trō-fə-rē'sĭs) n. [12]
1) The migration of charged colloidal particles or molecules through a solution under the influence of an applied electric field usually provided by immersed electrodes. Also called cataphoresis.
2) A method of separating substances, especially proteins, and analyzing molecular structure based on the rate of movement of each component in a colloidal suspension while under the influence of an electric field.

an•a•lyte (a-nə-līt) n. [13]
A chemical substance that is the subject of chemical analysis.

## Electrophoresis Theory

Separation by electrophoresis depends on differences in the migration velocity of ions or solutes through a given medium in an electric field. The electrophoretic migration velocity of an analyte is:

${\displaystyle v_{p}=\mu _{p}E}$

where E is the electric field strength and ${\displaystyle \mu _{p}}$ is the electrophoretic mobility.

The electrophoretic mobility is inversely proportional to frictional forces in the buffer, and directly proportional to the ionic charge of the analyte. The forces of friction against an analyte are dependent on the analyte's size and the viscosity (η) of the medium. Analytes with different frictional forces or different charges will separate from one another when they move through a buffer. At a given pH, the electrophoretic mobility of an analyte is:

${\displaystyle \mu _{p}={\frac {z}{6\pi \eta r}}}$

where r is the radius of the analyte and z is the net charge of the analyte.

Differences in the charge-to-size ratio of analytes cause differences in electrophoretic mobility. Small, highly charged analytes have greater mobility, whereas large, less charged analytes have lower mobility. Electrophoretic mobility is an indication of an analyte's migration velocity in a given medium. The net force acting on an analyte is the balance of two forces: the electrical force driving motion and the frictional force opposing it. These two forces balance during electrophoresis, so the analyte moves at a steady velocity; electrophoretic mobility is therefore a constant for a given analyte under a given set of conditions.[14]
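The two formulas above can be evaluated directly. The sketch below uses illustrative (hypothetical) values for charge, viscosity, and radius; it is only meant to show how mobility scales with charge and size:

```python
import math

def electrophoretic_mobility(z, eta, r):
    """mu_p = z / (6 * pi * eta * r) for a spherical analyte of net
    charge z (C) in a medium of viscosity eta (Pa*s), radius r (m)."""
    return z / (6 * math.pi * eta * r)

def migration_velocity(mu_p, E):
    """v_p = mu_p * E, with electric field strength E (V/m)."""
    return mu_p * E

# Hypothetical example: a small protein carrying 5 elementary charges,
# radius ~3 nm, in a water-like buffer (eta ~ 1 mPa*s).
e = 1.602e-19                       # elementary charge, C
mu = electrophoretic_mobility(5 * e, 1.0e-3, 3.0e-9)
v = migration_velocity(mu, 2.0e4)   # field of 20 kV/m
```

Doubling the charge doubles the mobility; doubling the radius halves it, matching the equation's proportionalities.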

## Applications of Electrophoresis

Electrophoresis has a wide variety of applications in proteomics, forensics, molecular biology, genetics, biochemistry, and microbiology.

One of the most common uses of electrophoresis is to analyze differential expression of genes. Healthy and diseased cells can be identified by differences in the electrophoretic patterns of their proteins. Proteins themselves can also be characterized in this way, and some sense of their structure can be derived from the masses of fragments inside the gel. [15]

There are many different types of electrophoresis, each suited to a different purpose. Two-dimensional (2-D) electrophoresis, for example, can resolve many more proteins than most other methods. Many of these methods will be discussed in detail throughout this chapter.

## References

1. ^ The American Heritage Dictionary of the English Language, Fourth Edition. http://www.bartleby.com/61/
2. ^ The Merriam-Webster Online Dictionary. http://www.m-w.com
3. ^ Mans, Andreas et al. Bioanalytical Chemistry. Imperial College Press, 2004.
4. ^ Twyman, Richard. "Two-dimensional polyacrylamide gel electrophoresis." http://genome.wellcome.ac.uk/doc_wtd021045.html

## Microfluidic Electrophoresis

### What Will Proteomics Gain from Microfluidics?

Proteomics contributes significantly to the discovery of proteins and the functions by which they influence the behavior of an organism. Recent studies have focused on exploring proteins at the single-cell and tissue level, since these represent a fingerprint for each individual, especially in terms of how a disease manifests.[1] Because only a limited amount of sample is available from a single cell or tissue, studying the proteome at these levels is a major challenge for a regular benchtop instrument. To facilitate this level of study, microfluidic technology has been introduced and developed into an essential tool for proteomics.

In addition to its capability to handle small samples, microfluidic technology plays an important role in miniaturizing the entire system. As a result, better performance can be achieved in terms of lower material consumption, faster processing time, greater automation, and lower cost. Another advantage at the microscale is that mixing between the sample and reagents becomes more effective.[2] A chemical process that usually takes hours can be completed in minutes. This benefit allows microfluidic-based immunoassays to be used to monitor the progress of disease.[1] And with the reduction in cost, microfluidic devices become excellent candidates for many point-of-care applications.[3]

One of the most intriguing features of microfluidic technology is its capacity for integration with other systems, especially mass spectrometry.[4] Multiplexing of various assays is another example. This multiplexing potential makes microfluidic-based devices a high-throughput solution in (bio)chemistry and biomedicine.[5]

### Electrophoresis in Microfluidic Microsystems

Incorporating biotechnology with microfluidics makes the manipulation of very small volumes of biological fluid not only feasible but also effective, especially by means of electrophoresis. Recent research and development efforts have focused on inventing an electrophoretic microsystem that is fully automated, easy to customize for a specific need, and provides results consistent with the gold standard. Such a microfluidic microsystem is usually referred to as a lab-on-a-chip. Among the microfluidic microsystems used in analytical (bio)chemistry, the most widely used methods to control the transport of biomolecules or analytes are gel and capillary electrophoresis.

Although both techniques exploit the fact that biomolecules such as proteins, peptides, and DNA become charged in a buffer, microfluidic gel electrophoresis differs from its capillary counterpart in its fluid dynamics. In microfluidic gel electrophoresis, the porous gel medium prevents bulk flow in the microfluidic channel.[3] In microfluidic capillary electrophoresis, on the other hand, bulk flow becomes an engineering factor that influences the electrokinetics of the analytes, since electroosmotic flow must be considered as well.[6] [7] From an analytical viewpoint, their separation methods also differ: gel-based electrophoretic separation of biomolecules is based on differences in size or molecular weight, while capillary-based separation takes advantage of differences in charge-to-mass ratio among biomolecules.

The analytical results obtained from microfluidic gel and capillary electrophoresis are consistent with those of the gold-standard methods, traditional slab gel [8] and capillary electrophoresis [5], respectively. From the design and application viewpoints, however, each method has advantages and disadvantages that must be considered. In terms of engineering design, a microfluidic gel electrophoretic system is much easier to customize for specific applications and multiplexing, since no influence from bulk flow needs to be considered. This makes it more straightforward to integrate extra features, such as sample preprocessing, into microfluidic gel electrophoresis.[3] Nevertheless, since no gel-sieving medium is required in the capillary-based microsystem, the reusability of a microfluidic capillary electrophoretic device is much higher than that of its gel-based counterpart. In terms of (bio)analytical applications, microfluidic capillary electrophoresis has the drawback that it performs poorly when analyzing charged particles of similar charge-to-mass ratios.[3] It is worth noting that some microfluidic capillary electrophoretic systems are capable of interfacing directly with mass spectrometry.[5]

In this chapter, the device fabrication process, basic principle of operation, and some clinical applications of both microfluidic gel and capillary electrophoresis are described in detail. Although the chapter is dedicated mainly to microfluidic electrophoresis, the integration of additional features such as sample preprocessing, detection, and quantification is also included. This integration is made possible by microfluidic technology.

### Microfluidic Gel Electrophoresis

Applying microfluidic technology to gel electrophoresis benefits the study of the proteome in many ways that conventional methods cannot. Faster processing time, more sensitive detection, more automated operation, and a highly integrated system are the major benefits of microfluidics. In addition, microfluidic technology allows a gel electrophoretic system to be easily customized for a specific application. For example, a microfluidic gel electrophoretic system can be designed so that off-chip processing is eliminated and integrated instead into the microfluidic-based system. The integration of sample preparation is one practical example that not only simplifies the experimental protocol but also improves detection sensitivity.[1]

This section is dedicated to a customized microfluidic gel electrophoresis for immunoassay applications.[3] The important aspects such as the fabrication process, principle of operation, and clinical applications will be discussed in detail.

#### I. Fabrication Process

Gel electrophoresis can be customized for a specific analytical study, such as an immunoassay, by using microfluidic technology. The fabrication process of the microfluidic gel electrophoretic immunoassay described below is based on the device used by Herr et al.[1] [3] In their study, an antibody was used as a reporter to detect the presence of a particular protein (antigen), potentially a disease biomarker. The step-by-step fabrication process is as follows.

Step 1: Fabrication of size-exclusion membrane [3]


Materials:

1. Degassed 22% (15.7:1) acrylamide/bis-acrylamide (6% bis-acrylamide cross-linker)
2. 0.2% (w/v) VA-086 [9] (photoinitiator)
3. 1X Tris/glycine buffer

The first component of the microfluidic gel electrophoretic immunoassay to be fabricated is the size-exclusion membrane. This membrane is used to enhance analyte concentration, thus improving detection sensitivity. A polyacrylamide gel is used to fabricate the membrane; its pore size is small enough to selectively allow only molecules smaller than 10 kDa to pass through. The fabrication process begins with patterning a glass substrate to create microfluidic channels and chambers, using standard photolithography and wet etching. Holes are drilled in the top glass cover, and the glass substrate and cover are then bonded together by anodic bonding. After the microfluidic structure is created, a solution of degassed 22% (15.7:1) acrylamide/bis-acrylamide (6% bis-acrylamide cross-linker) and 0.2% (w/v) VA-086 is introduced into the channel, as shown in the diagram; a syringe can be used to load the solution. The gel precursor solution is left to equilibrate for approximately 30 minutes.

The size-exclusion membrane is fabricated using laser photo-polymerization. A 355-nm UV laser sheet is used to pattern the polyacrylamide gel at the specified location, shown in the diagram, to create the membrane profile. The polyacrylamide gel membrane is exposed to the laser until polymerized, which takes approximately 15 seconds. The remaining gel solution is vacuumed out, and the channels are then cleaned by rinsing with buffer.

Step 2: Fabrication of separation channel [3]


Materials:

1. Degassed 8% (37.5:1) acrylamide/bis-acrylamide (2.6% bis-acrylamide cross-linker)
2. 0.2% (w/v) VA-086 [9] (photoinitiator)
3. 1X Tris/glycine buffer

After the size-exclusion membrane is fabricated, the next step is to construct the separation channel, where protein separation takes place. It contains a medium-porosity polyacrylamide gel. Like the membrane, the separation gel is fabricated by photo-polymerization. To fabricate the separation channel, the separation gel precursor solution is carefully loaded into the microfluidic channel with a syringe. The gel solution is composed of degassed 8% (37.5:1) acrylamide/bis-acrylamide (2.6% bis-acrylamide cross-linker), 0.2% (w/v) VA-086 photoinitiator, and 1X Tris/glycine buffer. The gel loading direction is indicated by the arrow in the diagram. The gel is loaded up to the specified location to define the separation channel.

Gel uniformity is a very important factor for repeatability of the analysis. To guarantee uniformity, all microfluidic channels on the separation side of the membrane must be filled with gel before polymerization. A gel plug is therefore created to prevent gel leakage during subsequent gel loading. As shown in the diagram, the gel plug is fabricated by photo-polymerization, with the area not to be polymerized protected by a dark-field mask. The photo-polymerization process typically takes about 10 minutes using a 100-watt UV source.

Step 3: Fabrication of Loading Channels

Materials:

1. Degassed 3.5% (37.5:1) acrylamide/bis-acrylamide (2.6% bis-acrylamide cross-linker)
2. 0.2% (w/v) VA-086 [9] (photoinitiator)
3. 1X Tris/glycine buffer

In this step, the remaining microfluidic channels (without gel) on the separation side of the membrane are carefully filled with gel precursor solution as indicated by the arrow shown in the diagram. The solution contains the degassed 3.5% (37.5:1) acrylamide/bis-acrylamide (2.6% bis-acrylamide cross-linker), 0.2% (w/v) VA-086 photoinitiator, and 1X Tris/glycine buffer. This gel solution will generate a polyacrylamide gel with large pore size and define the loading channels.

After both separation and loading channels are filled with gel, the whole microfluidic device is exposed to UV for 15 minutes. As a result, the separation and loading gels are polymerized and define separation and loading channels, respectively. This microfluidic device is now ready to use.

Complete microfluidic gel electrophoretic device [3]

Complete Microfluidic Device

The diagram shows the complete microfluidic gel electrophoretic device after fabrication. The device contains polyacrylamide gels with three different pore sizes. The gel with the largest pore size is used in the loading channels to facilitate electrophoresis of the sample and reagents by preventing bulk flow. The gel with intermediate pore size is used in the separation channel for protein separation. Finally, the gel with the smallest pore size serves as the size-exclusion membrane for the enrichment process. Detailed descriptions of their functions are given in the section entitled Principle of Operation.

In the device design, all the loading channels are connected to the loading areas, which are the holes drilled at the beginning (before anodic bonding). The sample and reagents are loaded there. In addition, some holes are used as reservoirs for waste collection; these are the places to which the electric current flows. It is worth noting that, in addition to serving as loading areas, the holes are also used as insertion points for electrodes, which are connected to a programmable voltage supply.

When not in use, the microfluidic gel electrophoresis device must be kept submerged in buffer and stored at 4 °C.

#### II. Principle of Operation

Based on the same principle of electrokinetic transport of charged molecules, microfluidic gel electrophoresis works similarly to regular slab gel electrophoresis, but with much faster processing, more sensitive detection, and a highly automated, integrated system. In this section, the fundamental operation of microfluidic gel electrophoresis is discussed using the microfluidic structure described in the previous section.

The microfluidic gel electrophoretic system to be discussed consists of a structure containing microfluidic channels and reservoirs, a power supply, and a fluorescence detection system. The channels and reservoirs used to transport the analyte are designed to be filled with large-pore-size polyacrylamide gel, which facilitates electrokinetic transport of the analyte by preventing bulk flow.[3] The power supply is programmable: a pair of electrodes and their polarities can be assigned instantaneously, providing much better control over the electrokinetic process, attainable only in a microfluidic environment. Also integrated into the system is fluorescence detection capability, used to detect the analyte of interest and quantify its concentration.

In this section, a step-by-step operating procedure, including the principle behind each step, is discussed, assuming that the target analytes are negatively charged. The discussion focuses mainly on the immunoassay application, generalized from the study by Herr et al.[3] With minor modification, the system can be used in other applications.

As a tool for immunoassay study, a reporter specific to a particular protein of interest is used. Acting like a receptor with high affinity for a target protein, an antibody can be used as the reporter for the detection of protein or antigen being investigated. Without loss of generality, the term reporter will be used throughout this section.

To begin using the device, the microfluidic channels are filled with buffer solution. Then the reporter solution of known concentration, together with the sample solution, is loaded into the designated reservoir. Typically, the volume required per analysis with the microfluidic device is on the order of a few tens of microliters. For detection and quantification, the reporter is usually labeled with a fluorescent tag. The detection and quantification methods are discussed in detail later.

After the fluorescently labeled reporter is loaded, a pair of electrodes connected to the reporter and sample waste reservoirs is activated as shown in the diagram, and an electric potential is applied across them. As the reporter molecules become charged in the buffer, they move electrokinetically toward the sample waste reservoir, as indicated by the red arrows. The reporter molecules, however, are blocked by the size-exclusion membrane, which allows only species smaller than 10 kDa to pass through. In this loading step, only the ionic buffer can penetrate the membrane. At the end of this first electrophoresis, fluorescently labeled reporter molecules accumulate at the gel side of the membrane, as shown in the inset.

The next process is to load the sample electrokinetically so that it combines with the reporter already gathered at the membrane. Like the reporter, the analyte molecules become charged in the buffer. To begin the electrophoresis, the electrode in contact with the reporter reservoir is deactivated and the electrode in contact with the sample reservoir is switched on instead. The potential across these two electrodes causes the charged protein molecules in the sample to move electrokinetically toward the sample waste reservoir, as indicated by the red arrows.

Once the charged analyte molecules arrive at the membrane, those smaller than 10 kDa pass through and are collected in the sample waste reservoir. Only the larger molecules, including the protein of interest, stay on the gel side of the membrane, as shown in the inset. This process increases the concentration of the analyte, improving the probability of binding between the reporter and the target protein. However, the non-target proteins become more concentrated as well and may reduce the signal-to-noise ratio. Using a reporter of high binding specificity can alleviate this problem. It is worth noting that the sensitivity and dynamic range of the microfluidic gel electrophoretic immunoassay depend on this enrichment process.
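The size cutoff acts as a simple filter rule: species below 10 kDa pass to the waste reservoir, larger ones are retained and enriched at the membrane. A minimal sketch, with hypothetical species names and molecular weights:

```python
# Illustrative partition of a sample across the 10 kDa size-exclusion
# membrane. Species names and MWs are hypothetical, not from the study.
CUTOFF_KDA = 10.0

def partition(species):
    """species: dict of name -> molecular weight in kDa.
    Returns (retained at membrane, passed to waste)."""
    retained = {n: mw for n, mw in species.items() if mw >= CUTOFF_KDA}
    passed = {n: mw for n, mw in species.items() if mw < CUTOFF_KDA}
    return retained, passed

sample = {"target protein": 52.0, "reporter antibody": 150.0, "buffer ion": 0.1}
retained, passed = partition(sample)
```

Here both the target protein and the reporter stay on the gel side, while buffer ions pass through, which is exactly the enrichment behavior described above.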

The process must be designed carefully so that electrophoresis runs long enough for most reporters and target proteins to bind. Normally, the concentration of the reporter used is much higher than that of the target protein, so that all target protein molecules are likely to be captured by reporter molecules. At the end of this second electrophoresis, the reporter and its complex, along with the remaining non-target proteins, stay on the gel side of the membrane.

Switching electrodes to avoid running electric current through the membrane during electrophoretic separation [3]

Avoid Running Electric Current Through Membrane During Separation

Gel electrophoretic separation has been reported to give irreproducible results when electric current runs through the membrane.[3] It is therefore necessary to avoid applying an electric potential across the membrane, especially during the separation process. One possible solution is to program the power supply so that the electrodes are switched before the target analytes enter the separation gel. The diagram shows a switching process that satisfies this requirement.

According to the diagram, the electric potential is initially applied across the membrane only to transport the analytes away from it. Before the analytes enter the separation gel, the electrode at the upper-left buffer reservoir is deactivated, and at the same time a new electrode with the same polarity at the upper-right buffer reservoir is turned on. The flow of negatively charged analytes thus remains on the same course toward the buffer waste reservoir.

It is worth noting that there are other combinations of electrodes that can be used to perform this step as well. The concept is simply not to allow electrophoresis to occur across the membrane during the separation step. At the end of the third electrophoresis, all charged molecules are ready for the gel electrophoretic separation.

Electrophoretic separation of unbound reporter and its complex detectable by laser-induced fluorescence [3]

Gel Separation

In this step, the reporter molecules and their complexes are separated by gel electrophoresis. The process continues by electrokinetically carrying the charged molecules into the separation gel. Since the reporter molecules and their complexes have similar charge-to-mass ratios, the separation between the two species is based only on the difference in their molecular weights (MWs). Note that the non-target proteins are not investigated in this immunoassay; only the reporter molecules and their complexes are under examination.

Since the reporter molecules are labeled with a fluorescent dye, a single-point laser can be used to induce fluorescence. This laser-induced fluorescence (LIF) is monitored by a detector, which detects the presence of the unbound reporter and its complex and relates the intensity of the detected fluorescence to the concentration of both species. The fluorescence detection system is placed across the separation channel near the buffer waste reservoir. The concepts of detection and quantification are described next.

Fluorescent detection, electropherogram, and gel-like plots [3]

Fluorescence Detection and Results

Single-point laser-induced fluorescence (LIF) is the means used to detect and quantify the reporter molecules and their complexes. When the molecules cross the laser beam, the attached dyes emit fluorescence whose intensity is measured by the detector. The detector then generates an electropherogram corresponding to the measured intensity. The area under an electropherogram peak can be related to the concentration of the target protein.

The unbound reporter can be distinguished from the complex because the unbound reporter molecule has a lower molecular weight (MW) than its complex. The unbound reporter molecules therefore travel electrokinetically faster in the separation gel and are detected first; the corresponding intensity gives the first peak in the electropherogram.

In the case of a single ligand-receptor pair, there will normally be two peaks in the electropherogram. Usually the first peak belongs to the receptor (reporter) of smaller MW, followed by a second peak corresponding to the ligand-receptor complex of larger MW. This information can also be represented by computer-generated gel-like plots. In a gel-like plot, the topmost band represents the unbound reporter molecules that arrive first. The brightness of a band conveys the degree of fluorescence intensity detected, and the width of a band corresponds to the width of the peak, which conveys the migration time of the analyte.
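The detection order follows directly from molecular weight: smaller species migrate faster through the gel and reach the detector first. A minimal sketch with hypothetical species names and MWs:

```python
# Detection order in gel separation: ascending molecular weight.
# The species names and MW values below are illustrative only.
def detection_order(species_mw):
    """species_mw: dict of name -> MW (kDa).
    Returns names in the order they reach the detector."""
    return sorted(species_mw, key=species_mw.get)

mw = {"reporter-antigen complex": 202.0, "unbound reporter": 150.0}
print(detection_order(mw))  # ['unbound reporter', 'reporter-antigen complex']
```

The first name in the result corresponds to the first electropherogram peak (and the topmost band in a gel-like plot).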

It is worth mentioning that the probability of binding between receptor and ligand is a significant factor in the sensitivity and accuracy of the immunoassay. The enrichment process therefore plays an important role in raising this probability. Either preprocessing the sample (off-chip) or increasing the period of electrophoretic sample loading has proven useful for enhancing this enrichment.

#### III. Clinical Applications

One reported clinical application was the use of this microfluidic gel electrophoretic device to assist oral diagnosis.[1] [3] It was used for early diagnosis of periodontal disease and to monitor the disease state and its development from human saliva. Periodontal disease, or periodontitis, is an oral disease that destroys collagen and causes major tissue damage, loss of connective tissue attachment, and bone loss. Matrix metalloproteinase-8 (MMP-8), interleukin-1 beta (IL-1B), and C-telopeptide pyridinoline cross-links (ICTP) were found in the saliva of periodontitis patients. These proteins were identified as disease biomarkers and became the target proteins for detecting and monitoring this oral disease.[1] Herr et al. demonstrated the detection and quantification of MMP-8 using this microfluidic gel electrophoretic device.

In their study, the monoclonal antibody for MMP-8 was used as the reporter, having specificity for binding only to MMP-8 in saliva. The use of this monoclonal antibody (${\displaystyle a}$MMP-8) had the advantage of eliminating the need for a matched antigen-antibody pair. Bovine serum albumin (BSA) protein standard was also added to the reporter mixture as a reference. Both ${\displaystyle a}$MMP-8* and BSA* were fluorescently labeled for detection; the asterisk denotes a fluorescently labeled analyte. In their study, 1 nM of ${\displaystyle a}$MMP-8* and 1 nM of BSA* were used in the mixture.

In the microfluidic device, ${\displaystyle a}$MMP-8* bound only to the MMP-8 biomarker, forming a complex of similar charge-to-mass ratio. The unbound antibody and its complex were separated from each other in the separation gel based on the difference in their sizes and detected individually, as described in the Principle of Operation for microfluidic gel electrophoresis. In this immunoassay, BSA*, with the highest mobility, was detected first, followed by the remaining ${\displaystyle a}$MMP-8* and the MMP-8 complex, respectively. The corresponding peaks were shown in the electropherogram and gel-like plots. The peak area, or the width of the band in the gel-like plot, was used to calculate the concentration of MMP-8.

To quantify the concentration of endogenous MMP-8 in saliva samples collected from patients, a calibration curve first had to be generated. The calibration curve was obtained from analyses of known MMP-8 concentrations: a series of experiments was performed by adding known concentrations of recombinant MMP-8 to diluted saliva from healthy patients and then running the microfluidic gel electrophoresis. From the electropherogram, the peak areas of the MMP-8 complex were normalized to the peak area of BSA* and plotted against the corresponding concentrations. A nonlinear least-squares fit using a four-parameter logistic model [3] was used to obtain the calibration curve. Based on this calibration curve, the concentration of endogenous MMP-8 complex was predicted from its normalized peak area. The average concentrations of MMP-8 in healthy and diseased subjects were reported as 64.6 +/- 16.4 ng/mL and 623.8 +/- 204.0 ng/mL, respectively.[3] It is worth noting that the MMP-8 concentration in patients classified as periodontally diseased reflected the dynamic activity of the disease.
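The calibration step can be sketched as a nonlinear least-squares fit of a four-parameter logistic (4PL) model, followed by inverting the fitted curve to read off an unknown concentration. All concentrations, responses, and parameter values below are synthetic and illustrative, not the data of the study:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose,
    d = response at infinite dose, c = inflection concentration,
    b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration data: normalized MMP-8 peak areas at known
# spiked concentrations (ng/mL); values are illustrative only.
conc = np.array([10.0, 50.0, 100.0, 300.0, 600.0, 1000.0])
area = four_pl(conc, 0.02, 1.5, 250.0, 1.2)  # synthetic "measurements"

# Nonlinear least-squares fit of the 4PL model to the calibration data.
params, _ = curve_fit(four_pl, conc, area, p0=[0.0, 1.0, 200.0, 1.0], maxfev=10000)

# Invert the fitted curve: predict an unknown concentration from its
# normalized peak area (simulated here at 400 ng/mL).
unknown_area = float(four_pl(400.0, 0.02, 1.5, 250.0, 1.2))
pred = brentq(lambda x: float(four_pl(x, *params)) - unknown_area, 1.0, 2000.0)
```

In practice the measured areas are noisy, so the fitted parameters and the predicted concentration carry uncertainty; here the data are noiseless, so the inversion recovers the true value almost exactly.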

The validity of the results obtained from the microfluidic gel electrophoresis was verified by comparison with those obtained from a conventional enzyme-linked immunosorbent assay (ELISA). As reported by Herr et al., the results from the microfluidic device were highly correlated with those from ELISA, with r² = 0.979, where r is the Pearson product-moment correlation coefficient. The microfluidic-based immunoassay, however, had several advantages over its conventional counterpart: it bypassed the time-consuming reaction and washing steps required by ELISA, was more automated, required only a single antibody (making it applicable to a wider range of applications), needed much less saliva per analysis, and did not require surface immobilization of the antibody.
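The reported correlation can be reproduced in form by computing the Pearson coefficient over paired measurements; the paired readings below are hypothetical, since the study's raw data are not given here:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of paired data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired MMP-8 readings (ng/mL): microfluidic chip vs. ELISA.
chip = [60.0, 120.0, 310.0, 590.0, 640.0]
elisa = [55.0, 130.0, 300.0, 610.0, 630.0]
r_squared = pearson_r(chip, elisa) ** 2
```

An r² close to 1 indicates that the two assays rank and scale the samples almost identically, which is the sense in which the chip results agreed with ELISA.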

In addition to measuring the concentration of MMP-8 in periodontitis patients, a clinical examination was performed to correlate the analytical data with physiological symptoms. MMP-8 was found to be highly correlated with bone and tissue loss, but not with bleeding upon probing. With highly sensitive detection of the MMP-8 biomarker, this microfluidic-based immunoassay could provide early diagnosis that would improve clinical treatment of the disease. Furthermore, with the use of MMP-8 inhibitors to reduce collagen degradation, this microfluidic gel electrophoretic device could promisingly be used to monitor and track the progress of MMP-8 inhibitor therapy as well.

### Microfluidic Capillary Electrophoresis

In this section, another type of microfluidic electrophoresis is described. Unlike microfluidic gel electrophoresis, microfluidic capillary electrophoresis operates on the principle of electrokinetics with bulk flow. The difference in charge-to-mass ratio of analytes is the basis of electrophoretic separation in a capillary microfluidic channel. Because no sieving medium is needed, microfluidic capillary electrophoresis is more favorable than its gel counterpart in terms of ease of fabrication and reusability. The detailed fabrication process and principle of operation, including clinical applications of this device, are described below. Note that the fabrication process and principle of operation explained here are based on the published work of Backofen et al. (2002).[10]

#### I. Fabrication Process

Fabricating a microfluidic structure in PDMS is easier and much less expensive than using glass. The process begins with creating a master for molding the PDMS, which is obtained by patterning a photoresist. After developing, a mixture of PDMS and cross-linker is poured over the patterned photoresist and cured. The PDMS is then peeled off and cut to create access to the reservoirs. In the final step, the patterned PDMS is attached permanently to a glass base and is ready to use. The detailed step-by-step procedure is described below.

Diagram describing a fabrication process to create a master for PDMS molding using negative photoresist and standard photolithography [10]

Creating a Master for PDMS Molding

Materials:[10]

1. 4-inch silicon wafer
2. SU-8 50 negative photoresist
3. Propylene glycol methyl ether acetate developer

Beginning with the preparation process, a photomask is created first. The photomask used is usually a contact photomask that can be made of glass, or simply a transparency with the designed microfluidic layout printed on it. The photomask is used to transfer this microfluidic pattern onto a photoresist, which is spin-coated onto a silicon wafer. It is worth noting that the choice of photoresist affects the design of the photomask. For example, with a negative photoresist, the microfluidic patterns must be drawn as clear fields so that UV light passes through and polymerizes the exposed areas of the resist; the unexposed areas are washed away in the developer. For a positive photoresist, the patterns are instead drawn as the dark fields of the photomask.

Before coating the photoresist, the silicon wafer needs to be cleaned; a standard RCA cleaning process is applicable. The negative photoresist is then spin-coated onto the silicon wafer and pre-baked. Under UV exposure, the coated negative photoresist is patterned with the designed microfluidic layout from the photomask. The exposed photoresist is then post-baked and developed. As a result, only the exposed areas of the negative photoresist remain, as shown in the diagram.

Diagram showing the PDMS molding process using the developed, patterned photoresist [10]

Molding PDMS to Fabricate Microfluidic Structure

Materials:[10]

1. PDMS oligomer

After the master for PDMS molding is ready, a mixture of PDMS and cross-linker is prepared; a 10:1 ratio of polymer to cross-linker is applicable.[10] The degassed mixture is poured onto the wafer to cover the entire photoresist master. Finally, the molded PDMS is cured in an oven at 70 °C for 1 hour.[10] This process step is summarized in the diagram, along with a cross-sectional view of the PDMS molded on the wafer.

After curing, the PDMS is peeled off the wafer. The PDMS bearing the transferred microfluidic structure is further processed by creating an opening at each circular reservoir region. These openings are used to load sample or reagent and serve as places to apply vacuum and pressure, regulated by a syringe.

Diagram showing the complete microfluidic capillary electrophoretic device [10]

Finalizing the Microfluidic Capillary Electrophoretic Device

Materials:

1. Glass plate
2. Needle probe

A glass plate with patterned electrodes is used to enclose the microfluidic channels and reservoirs and, at the same time, to provide electrical connections to the fluid in the reservoirs. The electrodes can be fabricated on the glass substrate by evaporation of chrome and platinum; chrome serves as an adhesion layer between the glass substrate and the platinum electrodes. The metals can be patterned by regular photolithography, which requires another photomask (a transparency), to form the electrodes on the glass. It is worth noting that a combination of patterned electrodes and a needle probe is used in the final microfluidic capillary electrophoretic system, as shown in the diagram. The platinum needle probe supplies the high electric potential for electrophoresis and simultaneously serves as an electrochemical sensor.

The PDMS is permanently attached to the glass plate by plasma oxidation. Covalent bonding between oxygen and silicon atoms (O-Si-O) provides a permanent seal between the two materials. The complete microfluidic capillary electrophoretic device is shown in the diagram.

#### II. Principle of Operation

In general, the operation of a microfluidic capillary electrophoretic device can be described by three fundamental steps: loading sample and reagents, forming a sample plug, and electrophoretic separation. The loading process described here involves manual loading using syringes to regulate the flow. After all samples and reagents are loaded, a sample plug is formed. This step needs to be designed carefully, since the injected analyte concentration depends on it. In the last step, the analytes are separated electrophoretically. Unlike the size-based separation of (microfluidic) gel electrophoresis, separation by (microfluidic) capillary electrophoresis is based on differences in the charge-to-mass ratios of the analytes.

It is worth noting that the microfluidic capillary electrophoretic system discussed here is used as an analytical platform in which all sample preprocessing steps are performed off-chip. The following description focuses mainly on the analysis of negatively charged analytes in the sample. Note that the principle of operation described below is based in part on the published work by Backofen et al. 2002.[10]

All microfluidic channels filled manually and equilibrated by electrophoresis [10]

The system setup begins with loading the microfluidic reservoirs with buffer and preprocessed sample solution such that all reservoirs except the sample reservoir contain buffer. To fill the microfluidic channels with buffer, syringes are connected to the top openings of the reservoirs and used to regulate the flow; vacuum and pressure are applied until the channels are filled.

The next step is to equilibrate the sample and buffer by electrophoresis. To do so, high voltage sources are connected to the system as shown in the diagram. One possible configuration uses multiple supply units, such as two units of U1 = 0.5 kV and a single unit of U2 = 1.2 kV. This process localizes the buffer and sample in the channels as shown in the diagram.

Sample solution flowing through the small connecting channel and entering the channels leading to the buffer and buffer waste reservoirs [10]

Forming Sample Plug

For the sample to be analyzed, a predefined, controllable portion of it must be injected into the separation channel; this portion is referred to as the sample plug. In the microfluidic structure discussed here, the sample plug is formed by hydrodynamic flow of the sample solution driven by the difference in solution levels in the reservoirs. This hydrodynamic flow occurs only when all the high voltage sources are switched off.

As depicted in the diagram, a portion of the sample solution flows through the small connecting channel into the channels leading to the buffer and buffer waste reservoirs. This sample volume can be divided into two portions, indicated by the blue and yellow dashed outlines in the inset. The blue outline is the sample plug, whose volume can be specified by design. The yellow dashed outline indicates the portion of the sample that flows back toward the sample waste reservoir when the high voltage sources are switched back on.

Sample plug electrokinetically moving along the separation channel toward the buffer waste reservoir [10]

Electrophoretic Separation

When all the high voltage sources are switched on, the negatively charged analyte molecules and buffer ions start to flow again by electrophoresis. At the small connecting channel, the sample solution splits into two parts that move away in opposite directions. As depicted in the inset, the part of the sample solution indicated by the blue dashed outline, the sample plug, moves electrokinetically along the separation channel toward the buffer waste reservoir. The part outlined by the yellow dashed line moves electrokinetically toward the sample waste reservoir. The small connecting channel is then refilled with buffer, as in the original configuration.

In the separation channel, the negatively charged analytes in the sample plug are separated by the differences in their charge-to-mass ratios. Each analyte is detected by the electrochemical sensor placed at the buffer waste end of the channel. For the electrochemical sensor used in this system, detection of each analyte is based on a reduction in the voltage drop across the needle probe and reference electrode. These voltage drops can be plotted in an electropherogram and used to study analyte components such as proteins or peptides in the sample solution.
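The timing of each analyte's arrival at the detector follows from simple electrokinetics: the field strength is the applied voltage over the channel length, each analyte's net velocity is the sum of its electrophoretic and electroosmotic mobilities times the field, and the migration time is the detector distance divided by that velocity. The sketch below illustrates this relationship; the mobility and voltage values in the test are illustrative assumptions, not parameters from the Backofen et al. device.

```python
def migration_time_s(detector_cm, voltage_v, total_cm, mu_ep, mu_eo):
    """Migration time (s) of an analyte in capillary electrophoresis.

    E = V / L_total (V/cm); v = (mu_ep + mu_eo) * E (cm/s); t = L_detector / v.
    Mobilities are in cm^2/(V*s); electrophoretic mobility mu_ep is negative
    for anions, while electroosmotic flow mu_eo usually carries all species
    toward the detector.
    """
    field = voltage_v / total_cm
    velocity = (mu_ep + mu_eo) * field
    return detector_cm / velocity
```

Analytes with a larger net mobility arrive earlier, which is why each species produces a distinct peak in the electropherogram.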

#### III. Clinical Applications

The detection and quantification of biomarkers in patients with skin lesions, reported by Guzman et al. 2008,[5] is one clinical application of microfluidic capillary electrophoresis. Based on the same fundamental concept described previously, a more sophisticated design called immunoaffinity capillary electrophoresis (IACE) was used in their study. IACE is a lab-on-a-chip that uses microfluidic technology to incorporate affinity-based purification, enrichment, and electrophoretic separation in a single microchip.

In the clinical study of biomarkers for this inflammatory disease, IACE was used to analyze micro-dissected samples from patients with different stages of skin damage. Twelve different antibodies were used to capture twelve corresponding target proteins/peptides considered as biomarkers. This affinity binding also assisted the isolation of target analytes from non-target analytes in the microfluidic environment, improving the subsequent enrichment process and reducing background noise during analysis. As reported, the integrated enrichment feature increased the sensitivity and lowered the detection limit of IACE; concentrations as low as a few nanograms per milliliter could be detected and quantified. With this highly sensitive capability, IACE could also be used to monitor the progressive patterns of the disease. The results obtained by IACE were consistent with gold-standard methods such as traditional histopathology.

### Notes

1. Herr AE, Hatch AV, Giannobile WV, Throckmorton DJ, Tran HM, Brennan JS, Singh AK. "Integrated microfluidic platform for oral diagnostics" Ann NY Acad Sci. Author manuscript available in PMC (2008).
2. Xu X, Li L, Weber SG. "Electrochemical and optical detectors for capillary and chip separations" Trends Analyt Chem. Author manuscript available in PMC (2008).
3. Herr AE, Hatch AV, Throckmorton DJ, Tran HM, Brennan JS, Giannobile WV, Singh AK. "Microfluidic immunoassays as rapid saliva-based clinical diagnostics" Proc of Nat Acad Sci. 104(13):5268-5273 (2007).
4. Barry R, Ivanov D. "Microfluidics in biotechnology" J Nanobiotechnology 2:2 (2004).
5. Guzman NA, Blanc T, Phillips TM. "Immunoaffinity capillary electrophoresis as a powerful strategy for the quantification of low-abundance biomarkers, drugs, and metabolites in biological matrices" Electrophoresis. Author manuscript available in PMC (2009).
6. Gong M, Wehmeyer KR, Limbach PA, Arias F, Heineman WR. "On-line sample preconcentration using field-amplified stacking injection in microchip capillary electrophoresis" Anal Chem. Author manuscript available in PMC (2008).
7. Gong M, Wehmeyer KR, Stalcup AM, Limbach PA, Heineman WR. "Study of injection bias in a simple hydrodynamic injection in microchip capillary electrophoresis" Electrophoresis. Author manuscript available in PMC (2008).
8. Hsieh J-F, Chen S-T. "Comparative studies on the analysis of glycoproteins and lipopolysaccharides by the gel-based microchip and SDS-PAGE" Biomicrofluidics 1 (2007).
9. Wako Chemicals ( http://www.wakousa.com/specialty/specialty_focus.html )
10. Backofen U, Matysik F-M, Lunte CE. "A chip-based electrophoresis system with electrochemical detection and hydrodynamic injection" Anal Chem. Author manuscript available in PMC (2008).

# Protein Separations - Centrifugation


## Introduction to Centrifugation

Tabletop centrifuge

Centrifugation is one of the most important and widely applied research techniques in biochemistry, cellular and molecular biology, and medicine. In the field of proteomics it plays a vital role in the fundamental process of isolating proteins. This process begins with intact cells or tissues. Before the proteins can be obtained, the cells must be broken open by processes such as snap freezing, sonication, homogenization under high pressure, or grinding with liquid nitrogen. Once the cells have been opened, all of their contents, including cell membranes, RNA, DNA, and organelles, are mixed in the solvent with the proteins. Centrifugation is probably the most commonly used method for separating out this non-protein material. Within the centrifuge, samples are spun at high speeds, and the resulting force causes particles to separate based on their density.
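The force a sample experiences is usually reported as relative centrifugal force (RCF), in multiples of gravity, which depends on the rotor radius and the square of the rotational speed via the standard relation RCF = 1.118 × 10⁻⁵ × r(cm) × rpm². A minimal sketch of this conversion (the radius and speed values in the test are illustrative, not tied to any particular rotor):

```python
def rcf(radius_cm: float, rpm: float) -> float:
    """Relative centrifugal force, in multiples of g.

    RCF = 1.118e-5 * r * rpm^2, with r the rotor radius in centimetres.
    """
    return 1.118e-5 * radius_cm * rpm ** 2


def rpm_for_rcf(radius_cm: float, target_rcf: float) -> float:
    """Rotor speed (rpm) needed to reach a target RCF at radius r,
    obtained by inverting the formula above."""
    return (target_rcf / (1.118e-5 * radius_cm)) ** 0.5
```

Because RCF grows with the square of the speed, doubling the rpm quadruples the force on the sample, which is why protocols specify g-forces rather than rpm whenever rotors of different radii might be used.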

## Uses of Centrifugation

Centrifugation is capable of:

• Removing cells or other suspended particles from their surrounding milieu on either a batch or a continuous-flow basis
• Separating one cell type from another
• Isolating viruses and macromolecules, including DNA, RNA, proteins, and lipids or establishing physical parameters of these particles from their observed behavior during centrifugation
• Separating from dispersed tissue the various subcellular organelles, including nuclei, mitochondria, chloroplasts, Golgi bodies, lysosomes, peroxisomes, glyoxysomes, plasma membranes, endoplasmic reticulum, polysomes, and ribosomal subunits.

Once the mixture of proteins has been isolated using centrifugation, the scientist can use one of several methods to separate out individual proteins for further study. For more information on protein purification/separation, see Protein Separations – Chromatography and Protein Separations – Electrophoresis.

Next section: History of the Centrifuge

## References

### Subscription Based References

1. Sheeler, P. "Centrifugation in Biology and Medical Science." Dept of Biology, California State University, Northridge, California

### Open Access References

1. Claude, A. & Potter, J. S. "Isolation of Chromatin Threads From The Resting Nucleus of Leukemic Cells" The Journal of Experimental Medicine.

# Emerging and Miscellaneous Proteomics Technologies


# Emerging and Miscellaneous Technologies in Proteomics

This part of the book will be an area where proteomics techniques that have been newly developed will be discussed. Techniques that do not currently fit into any other part of the book can also be added to this page. As chapters or sections are added elsewhere that discuss these techniques in the context of a greater proteomics problem, the information on this page will be moved to those pages.

## X-Ray Tomography

Description and Discussion of X-ray Tomography.

A new branch of X-ray microscopy, called X-ray tomography, is being used in proteomics analysis. This method uses projected images to calculate and reconstruct a 3D object. In proteomics it is used to determine the location of labeled proteins or large complexes within a cell. The technique can also be used in conjunction with images of cells from light-based microscopes to help identify where a protein is located and how this location factors into its function and identification.

## Introduction to Proteoinformatics

Proteoinformatics is the application of bioinformatics and computational biology techniques within the realm of protein identification and proteomics. The field is currently in its infancy, and the largest body of work concerns standardizing databases and data submission. Other proteoinformatic work focuses on the image analysis of 2D gels and other proteomics images used to help identify and annotate proteins in the proteome.

### Protein Identification Database

#### What are Protein Identification Databases?

Protein identification databases such as ProFound at Rockefeller University and Protein Prospector at the UCSF Mass Spectrometry Facility are used to help identify proteins found with proteomics techniques such as mass spectrometry. Digestion breaks each protein into peptide fragments in a characteristic way, producing a unique peptide fingerprint that can be used to identify the protein. The masses of these fragments, along with molecular weights and isoelectric points, are what these databases store. These data can be used to perform high-throughput protein identification.
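The core of a fingerprint search is simple: rank candidate proteins by how many observed peptide masses fall within a tolerance of the masses predicted for that protein. The sketch below illustrates the idea with a tiny hypothetical database; the protein names, peptide masses, and tolerance are invented for illustration and do not reflect how ProFound or Protein Prospector are implemented internally.

```python
# Hypothetical mini-database: protein name -> predicted tryptic peptide masses (Da)
DATABASE = {
    "protein_A": [501.3, 942.5, 1296.7, 2211.1],
    "protein_B": [501.3, 877.0, 1533.9],
}

def match_fingerprint(observed, database, tol=0.5):
    """Rank database proteins by how many observed masses they explain.

    An observed mass counts as explained if it lies within `tol` daltons
    of any predicted peptide mass for that protein.
    """
    scores = {}
    for name, peptides in database.items():
        scores[name] = sum(
            any(abs(m - p) <= tol for p in peptides) for m in observed
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Real search engines refine this count with probabilistic scoring, since common peptide masses and random matches must be discounted, but the tolerance-window matching shown here is the underlying operation.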

#### The Future of Protein Identification Databases

In order to continue advancing the cause of mapping the human proteome, international databases need to be established which integrate both transcriptome and proteome data. The Human Proteome Organization is currently working on establishing a defined standard for data submission and annotation for the many different proteomics techniques currently used to identify and annotate proteins.

### New Techniques in Image Analysis

According to the Image Analysis Wikipedia page, "Image analysis is the extraction of meaningful information from images." In proteomics, image analysis can be used to compare images generated by proteomics techniques, such as 2D-PAGE gel images. New programs are being developed to optimize and automate the process of locating a protein spot between two gel images in order to identify differences between 2D-PAGE gels. Other programs can help clean up and remove variability between these images.
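At its simplest, matching a spot between two gel images means pairing each spot centroid in one image with the nearest unclaimed centroid in the other, within some distance cutoff. The greedy nearest-neighbour sketch below is an illustrative toy, not how any particular gel-analysis package works; real software must also correct for warping and intensity variation between gels before matching.

```python
def match_spots(spots_a, spots_b, max_dist=5.0):
    """Greedily pair (x, y) spot centroids from two gel images.

    Each spot in spots_a is matched to the nearest unused spot in spots_b
    that lies within max_dist; unmatched spots are candidate differences
    between the two gels.
    """
    pairs, used = [], set()
    for ax, ay in spots_a:
        best, best_d = None, max_dist
        for j, (bx, by) in enumerate(spots_b):
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if j not in used and d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append(((ax, ay), spots_b[best]))
            used.add(best)
    return pairs
```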

## Laser Capture Microdissection

Diagram of Laser Capture Microdissection

Laser capture microdissection (LCM) is a process that isolates and removes distinct cell populations from a tissue, facilitating the comparison of diseased tissue with normal tissue from an organism.

In LCM, an infrared laser beam melts a thermosensitive polymer film that traps a specific group of cells. This polymer film is then extracted and moved to a test tube, where an extraction buffer is used to release the cells for more advanced proteomics analysis such as 2D-PAGE or ion chromatography. This technology will become more useful as systems with higher sensitivity for analyzing smaller amounts of tissue are developed.

## Proteomic Complex Detection using Sedimentation

Approaches such as TAP tagging, which require the addition of fusion proteins, can interfere with protein interactions that would normally occur. Because expressing these tagged proteins often takes a great deal of work, it is useful to have early evidence that a stable protein complex exists before such laborious approaches are used to isolate and identify the complexes of interest. Issues also arise in MS and 2D gel workflows, where one cannot be sure that a gel spot contains only the desired protein, because multiple proteins could travel together in that spot as a complex.

This is where proteomic complex detection using sedimentation (ProCoDeS) is applicable. ProCoDeS is a technique for the high-throughput identification of both soluble and membrane proteins that are found in stable complexes. Relative sizes of protein complexes are estimated from their sedimentation in a gradient; a rate zonal gradient (RZG) is used to better estimate relative complex size. The distribution of a protein of interest across the gradient can be detected using classic techniques such as Western blotting or newer techniques such as ICAT, and this can be done for a large number of proteins at once. Thus, ProCoDeS can be used to identify stable protein complexes. It is especially well suited for screening unrefined cellular material to find proteins that would otherwise go undetected because they exist in complexes, such as membrane proteins.

Next Chapter: Protein Identification - Mass Spectrometry

## References

1. Hartman, N. T., et al. "Proteomic Complex Detection Using Sedimentation" Anal. Chem., 79, 5, 2078 - 2083, 2007.
2. NCT Proteomics Group "Emerging Technologies" National Institutes of Environmental Health Sciences
3. NCT Proteomics Group "ProteoInformatics" National Institutes of Environmental Health Sciences
4. "Wikipedia: Image Analysis
5. "Wikipedia: Proteomics
6. "Wikipedia: X-ray Tomography

# Protein Identification - Mass Spectrometry


MALDI TOF MS

## Introduction

• Mass Spectrometry Overview

Mass spectrometry is a technique in which gas-phase molecules are ionized and their mass-to-charge ratio is measured by observing differences in ion acceleration when an electric field is applied; lighter ions accelerate faster and are detected first. If the mass is measured with sufficient precision, the composition of the molecule can be identified, and in the case of proteins the sequence can be determined. Most samples submitted to mass spectrometry are mixtures of compounds, so a spectrum is acquired to give the mass-to-charge ratios of all compounds in the sample. Mass spectrometry is also known as 'mass spec' or MS for short. It sheds light on molecular mechanisms within cellular systems: it is used to identify proteins and functional interactions, it allows determination of subunits, and it can also characterize other molecules in cells, such as lipid components.

A mass spectrometer is composed of several parts: a source that ionizes the sample, an analyzer that separates the ions based on mass-to-charge ratio, a detector that "sees" the ions, and a data system to process and analyze the results. The relative abundance of an ion can also be measured, but because different compounds ionize with different efficiencies, ion intensity is not directly proportional to concentration.

Mass spectrometry can be a high-throughput analytical method, because a mass spectrum can be measured rapidly and with minimal sample handling compared to gel methods.

It is an analytical method with a variety of uses outside of proteomics, such as isotopic dating, trace gas analysis, atomic location mapping, pollutant detection, and space exploration.

• History of Mass Spectrometry

The history of this technique finds its roots in the first studies of gas excitation in a charged environment, more than 100 years ago. This pioneering work led to the identification of two isotopes of neon (neon-20 and neon-22) via mass-to-charge discrimination by J. J. Thomson in 1913; the precision of mass spectrometry thus enabled the discovery of isotopes. Over the next fifty years the fundamental basis of the technique was further developed. After the coupling of gas chromatography to mass spectrometry in 1959 by researchers at Dow Chemical, the full potential of the technique as a highly accurate, quantitative method for exploring compounds was realized, spurring a wave of developments that continue to the present day.

• Implications of Mass Spectrometry for Proteomics Applications

The technique of mass spectrometry is a valuable tool in the field of proteomics, where variations of the technique are used to identify proteins. The most common first approach is the bottom-up approach, in which the protein is digested by a protease such as trypsin and the resulting peptides are analyzed by peptide mass fingerprinting, collision-induced dissociation, tandem MS, or electron capture dissociation. Once the peptide masses have been determined, the mass list can be submitted to a database search engine, such as MASCOT, where it is compared to the masses of all known peptides. If enough peptides match those of a known protein, the protein is identified. If the peptide masses do not match a known protein, the peptide can be sequenced de novo using MS/MS methods: the peptide is isolated and fragmented along the peptide bond backbone, forming y and b ions from which the sequence can be determined. The advantage of the bottom-up approach is that small tryptic peptide ions are easier to handle biochemically than entire protein ions, and their relatively small masses are easier to determine. The alternative is the top-down approach, in which complete proteins are analyzed directly in the mass spectrometer without in-solution digestion. Its advantage is that it can sometimes provide complete coverage of the protein, but because whole proteins are harder to handle biochemically than small peptides, top-down analysis is more difficult.
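The first half of the bottom-up workflow, in silico, amounts to two small computations: cutting the sequence at tryptic cleavage sites (after K or R, except when the next residue is P) and summing residue masses plus one water per peptide. The sketch below uses standard monoisotopic residue masses; the short test sequences are invented examples, not real proteins.

```python
# Monoisotopic amino acid residue masses (Da); a peptide's neutral mass
# is the sum of its residue masses plus one water.
RESIDUE = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def tryptic_digest(seq):
    """Cleave after K or R, except when the following residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P"):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])  # C-terminal peptide with no K/R end
    return peptides

def peptide_mass(pep):
    """Neutral monoisotopic mass of a peptide (Da)."""
    return sum(RESIDUE[aa] for aa in pep) + WATER
```

The list of masses produced this way is exactly what gets compared against a database such as MASCOT in peptide mass fingerprinting.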

Another use of mass spectrometry in proteomics is protein quantification. By labeling proteins with stable heavier isotopes you can in turn determine the relative abundance of proteins. Companies now produce kits, such as iTRAQ (Applied Biosystems), in order to do this at a high-throughput level.

One of the most powerful ways to identify a biological molecule is to determine its molecular mass together with the masses of its component building blocks after fragmentation. There are two dominant methods for doing this. The first is electrospray ionization (ESI), in which the ions of interest are formed from solution by applying a high electric field to the tip of a capillary through which the solution passes. The sample is sprayed into the electric field along with a flow of nitrogen to promote desolvation. Droplets form and evaporate in an evacuated region, which concentrates the charge on the droplets; the resulting ions are said to be multiply charged, and they then enter the analyzer. ESI is a method of choice for the following reasons: (1) the "softness" of the phase conversion allows very fragile molecules to be ionized intact and even allows some non-covalent interactions to be preserved for MS analysis; (2) fractions eluting from liquid chromatography can be sprayed directly into the mass spectrometer, allowing further analysis of mixtures; (3) the production of multiply charged ions allows the measurement of high-mass biopolymers, since multiple charges reduce a molecule's mass-to-charge ratio compared with a singly charged molecule, and also improve fragmentation, which in turn allows better determination of structure. The second method is matrix-assisted laser desorption/ionization (MALDI), in which the molecular ions of interest are formed by pulses of laser light impacting the sample embedded within an excess of matrix molecules. This enables the determination of masses of large biomolecules and synthetic polymers greater than 200,000 daltons without degradation of the molecule of interest.
The advantages of MALDI are its robustness, high speed, and relative immunity to contaminants and biochemical buffers.
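The multiply charged ions produced by ESI can be turned back into a neutral mass with a little algebra: two adjacent peaks in the spectrum correspond to charge states z and z+1, so their m/z values form two equations in two unknowns (M and z). A minimal sketch of this charge-state deconvolution; the 16,951 Da test mass is an illustrative value of a myoglobin-sized protein, not a measured result.

```python
PROTON = 1.00728  # mass of a proton, Da

def deconvolve(peak_high, peak_low):
    """Neutral mass from two adjacent ESI charge-state peaks.

    peak_high and peak_low are the m/z of neighbouring peaks carrying
    charges z and z+1.  From (M + z*p)/z = peak_high and
    (M + (z+1)*p)/(z+1) = peak_low it follows that
    z = (peak_low - p) / (peak_high - peak_low), then M = z*(peak_high - p).
    """
    z = round((peak_low - PROTON) / (peak_high - peak_low))
    return z, z * (peak_high - PROTON)
```

Deconvolution software applies this pairwise calculation across a whole envelope of charge states and averages the results, which is what lets an instrument with a limited m/z range weigh proteins of tens of kilodaltons.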

A type of mass analyzer often used with MALDI is the time-of-flight (TOF) analyzer. It enables fast and accurate molar mass determination, along with sequencing of repeated units and recognition of polymer additives and impurities. The technique is based on an ultraviolet-absorbing matrix: the matrix and polymer are mixed together, with excess matrix and a solvent to prevent aggregation of the polymer. This mixture is placed on the tip of a probe and the solvent is removed under vacuum, leaving polymer molecules co-crystallized homogeneously within the matrix. A pulsed laser beam, set to an appropriate frequency and energy, is fired at the matrix, which becomes partially vaporized; the homogeneously dispersed polymer is carried into the vapor phase and becomes charged. Multiple laser shots are accumulated to obtain a good signal-to-noise ratio, which improves the peak shapes and the accuracy of the determined molar masses. Finally, in the TOF analyzer, the ions from a sample are imparted identical translational kinetic energies by the electrical potential difference. They then travel the same distance down an evacuated, field-free tube; the lightest ions arrive first at the detector, which produces a signal for each ion. The cumulative data from multiple laser shots yield a TOF mass spectrum, which records the detector signal as a function of time, from which the mass of each ion can be calculated.
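The physics of the flight tube reduces to one equation: the accelerating potential gives every ion kinetic energy zeV, so ½mv² = zeV and the flight time over a tube of length L is t = L·√(m/(2zeV)). A sketch of this relation (the 20 kV potential and 1 m tube in the test are illustrative assumptions, not the specifications of any particular instrument):

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053907e-27        # unified atomic mass unit, kg

def flight_time_us(mass_da, charge, voltage_v, tube_length_m):
    """Flight time (microseconds) of an ion in a linear TOF analyzer.

    KE = z*e*V = (1/2) m v^2  =>  v = sqrt(2 z e V / m),  t = L / v.
    """
    m_kg = mass_da * AMU
    v = math.sqrt(2 * charge * E_CHARGE * voltage_v / m_kg)
    return tube_length_m / v * 1e6
```

Because t grows with the square root of m/z, lighter ions reach the detector first, exactly as described above, and the recorded arrival times can be converted back to masses by inverting the same formula.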

In addition to these ionization techniques, highly powerful mass analyzers have been developed. These analyzers measure the mass-to-charge ratio of intact ionized biomolecules, as well as their fragmentation spectra, with high accuracy and speed. The measurement of fragmentation spectra is called tandem MS or MS/MS. In conjunction with single-stage MS (of intact precursor ions), tandem MS can be used to help elucidate a protein, since the problem reduces to assembling the puzzle pieces of the fragmented protein.

## References

1. American Society for Mass Spectrometry - What is MS?, http://www.asms.org/whatisms/p4.html
2. Mass Spectrometry in the Postgenomic Era
3. Annual Review of Biochemistry Vol. 80: 239-246 (July 2011) DOI: 10.1146/annurev-biochem-110810-095744
4. University of Illinois at Urbana-Champaign School of Chemical Sciences http://scs.illinois.edu/massSpec/ion/esi.php
5. University of Southern Mississippi School of Polymers and High Performance Materials http://www.psrc.usm.edu/mauritz/maldi.html

# Protein Primary Structure


# Post-translational Modification


## Introduction

I. Definition

   A. Spontaneous or enzymatic alteration to one or more of a protein's amino acids
   B. Most often manifests as an addition or deletion to a side chain
   C. Can occur at any point during or following full translation of a protein
   D. Often drastically affects the overall structure and function of the protein and its associated complexes
   E. Highly conserved among all living organisms


II. Types of modifications

   A. Acetylation
   B. Amidation/Deamidation
   C. Glycosylation
   D. Oxidation
      i. S-Glutathionylation
      ii. S-Nitrosylation
   E. Phosphorylation
      i. Histidine
      ii. Serine
      iii. Threonine
      iv. Tyrosine
   F. Proteolysis
   G. Ubiquitinylation/SUMOylation
   H. Others


III. Manipulating in-vivo modifications

   A. Modifications can be prevented or induced in organism-, tissue-, and cell-based model systems
   B. May allow for the detection of target proteins or dissection of related processes and pathways
   C. Exogenous introduction of stimuli
      i. endocrine/paracrine signals (i.e., hormones)
      ii. environmental (temperature, UV, heavy metals, peroxide, etc.)
      iii. antigens (viruses, bacteria, allergens, lysates, etc.)
      iv. chemical/medicinal activators and inhibitors
   D. Genetic approaches
      i. deletion, mutation, or reorganization of genetic elements (enhancers, promoters, genes, etc.)
      ii. gene inactivation or silencing by nucleic acid hybridization
      iii. transgenics (transformations and transfections)
   E. General characteristics of these approaches
      i. qualitative but often hard to quantify
      ii. often exploratory in nature (observe and report)
      iii. often very large scale in terms of available data


IV. In vitro reconstitution strategies
    A. Often provide a more quantitative, in-depth analysis of a particular post-translational modification
    B. The protein in question is added to a reaction with the appropriate reagents and/or enzymes
    C. Reactions can be followed in real time more readily


V. Methods of detection
    A. The most commonly used detection method for known modifications is immuno-/Western blotting
    B. Unexpected or novel modifications can be detected with a variety of analytical techniques, most notably mass spectrometry
    C. The best specific method of detection depends on many factors, including the stability, frequency, and scale of the modification(s)
    D. In practice, ease and cost dictate which methods are used first, while more exhaustive, cumbersome, or expensive methods follow as needed


# Protein - Protein Interactions


Chapter edited and updated by: Poulami Barman and Sarah Allen

Contact pxb2979@rit.edu, sea3016@rit.edu

## Introduction

Protein interaction is crucial for every organism. Most proteins function through interaction with other molecules, and often with other proteins. Enzymes interact with their substrates, inhibitors interact with enzymes, transport proteins interact with structural proteins, hormones interact with receptors – and that’s just a few of the interactions that happen in a cell. Some proteins are composed of more than one polypeptide chain, and the interactions between the different peptides are necessary for the whole protein to function. Since they are so essential, protein-protein interactions are an important topic for scientists to understand.

There are many important characteristics of a protein-protein interaction. Obviously, it is important to know which proteins are interacting. In many experiments and computational studies, the focus is on interactions between two different proteins; however, one protein can also interact with other copies of itself (oligomerization), or three or more different proteins can interact together. The stoichiometry of the interaction is also important: how many copies of each participating protein are present in the complex. Some protein interactions are stronger than others because the partners bind together more tightly; the strength of binding is known as affinity. Proteins will only bind each other spontaneously if it is energetically favorable, so energy changes during binding are another important aspect of protein interactions. Many of the computational tools that predict interactions are based on the energy of the interactions.
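Affinity is usually reported as a dissociation constant, Kd, which relates directly to the free energy of binding through the standard relation ΔG° = RT·ln(Kd), and to receptor occupancy through f = [L]/(Kd + [L]). A minimal sketch of these relationships; the Kd and concentration values below are invented for illustration.

```python
# Sketch relating binding affinity (Kd) to free energy and fraction bound,
# using the standard relations dG = RT*ln(Kd) and f = [L]/(Kd + [L]).
# The example Kd and ligand concentration are made up for illustration.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K (25 degrees C)

def binding_free_energy(kd_molar):
    """Standard free energy of binding (kcal/mol); more negative = tighter."""
    return R * T * math.log(kd_molar)

def fraction_bound(ligand_molar, kd_molar):
    """Fraction of receptor occupied at a given free-ligand concentration."""
    return ligand_molar / (kd_molar + ligand_molar)

# A 1 nM interaction is much tighter (more negative dG) than a 1 uM one:
print(binding_free_energy(1e-9))   # about -12.3 kcal/mol
print(binding_free_energy(1e-6))   # about -8.2 kcal/mol
print(fraction_bound(1e-6, 1e-9))  # about 0.999: nearly saturated
```

This is why a thousandfold difference in Kd corresponds to only a few kcal/mol of binding energy: the relationship is logarithmic.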

In recent years there has been a strong focus on predicting protein interactions computationally. Predicting interactions can help scientists infer cellular pathways, potential drugs and antibiotics, and protein functions. However, deciphering those interactions has been an ongoing challenge. Proteins are large molecules, and binding between them often involves many atoms and a variety of interaction types, including hydrogen bonds, hydrophobic interactions, salt bridges, and more. Proteins are also dynamic, with many of their bonds able to stretch and rotate, leading to different conformations. Predicting protein-protein interactions therefore requires a good knowledge of the chemistry and physics involved.

This chapter discusses the characteristics of protein-protein interactions, how they are determined experimentally, and how they are predicted computationally. It also contains a list of databases where you can explore known and predicted protein interactions. The links above will lead you to the various sections.

# Protein Chips


## Introduction

A DNA microarray as seen through a microscope. Protein chips look identical, except that each spot corresponds to one of the organism's thousands of proteins instead of one of its genes. The intensity of each spot indicates the amount of protein present.

Protein chips, also referred to as protein arrays or protein microarrays, are modeled after DNA microarrays. The success of DNA microarrays in large-scale genomic experiments inspired researchers to develop similar technology to enable large-scale, high-throughput proteomic experiments. Protein chips enable researchers to quickly and easily survey the entire proteome of a cell within an organism. They also allow researchers to automate and parallelize protein experiments.

Protein chips were first developed in 2000 by researchers at Harvard University.[1] Today, many companies manufacture protein chips using a variety of techniques, including spotting and gel methods. The types of protein chips available include "lab on a chip", antibody arrays, and antigen arrays, as well as a wide range of chips containing "alternative capture agents" such as proteins, substrates, and nucleic acids.

Analysis of protein chips comes with many challenges, including the wide dynamic range of protein concentrations, the sheer number of proteins in a cell's proteome, and the need to understand the probe used for each protein. Analysis steps include reading the protein levels off the chip, then using computer software to process the massive amounts of data collected.
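The reading-and-processing step described above can be sketched in a few lines: subtract each spot's local background, log-transform, and center so that chips can be compared. This is a minimal illustration only; the spot intensities, protein names, and normalization choices are all invented.

```python
# Minimal sketch of protein chip analysis: read spot intensities, subtract
# local background, log2-transform, and median-center so different chips
# become comparable. All values and names here are invented.
import math

spots = {  # protein -> (foreground intensity, local background)
    "protein_A": (5200.0, 200.0),
    "protein_B": (950.0, 150.0),
    "protein_C": (30000.0, 250.0),
}

def normalized_levels(spots):
    """Background-subtract, log2-transform, then center on the median."""
    logs = {p: math.log2(max(fg - bg, 1.0)) for p, (fg, bg) in spots.items()}
    median = sorted(logs.values())[len(logs) // 2]
    return {p: v - median for p, v in logs.items()}

for protein, level in normalized_levels(spots).items():
    print(protein, round(level, 2))
```

Real analysis pipelines add replicate handling, quality flags, and cross-chip normalization, but the core transformation is of this shape.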

Applications of protein chip experiments include identifying biomarkers for diseases, investigating protein-protein interactions, and testing for the presence of antibodies in a sample. Protein chips have applications in cancer research, medical diagnostics, homeland security and proteomics.

This chapter will demonstrate why protein chips are changing the face of proteomics, and why they will have an even larger impact in the future.

## History

### Nucleic Acid Microarrays

The use of microarrays for gene expression profiling was first published in 1995.[2] This technology allowed scientists to analyze thousands of mRNAs in a single experiment to determine whether expression differs in disease states. Unfortunately, mRNA levels within a cell are often poorly correlated with actual protein abundance.[3] This can be due to many factors, including differing degradation rates of mRNA versus proteins, and post-transcriptional controls and modifications. Measuring the amount of protein directly would bypass any mRNA inconsistencies and give a true level of gene function; however, traditional protein characterization methods were slow and cumbersome. These combined factors were the impetus behind the creation of protein chips.

### Deficiency of Traditional Protein Characterization Methods

A liquid chromatography / mass spectrometry (LC/MS) instrument. This technique is low throughput compared to protein chips because protein chips can test for thousands of proteins on a single chip in a single experiment.

Before the advent of protein chips, protein measurement and characterization was done using two different methods: 2D gel electrophoresis coupled with mass spectrometry, and liquid chromatography. These methods can separate and visualize a large number of proteins per experiment; however, they are time-consuming compared to protein chips. The process is very low-throughput because of the lack of automation, and reproducibility suffers because of the large amount of sample handling. A better, more standardized, higher-throughput method was needed for protein measurement and characterization.

### Protein Chip Precursors to Modern Day

The equipment and reagents used in an Enzyme-linked Immunosorbent Assay (ELISA), a precursor of protein chips.

Immunoassays, available since the 1980s and a precursor to protein chips, exploit the interactions between antibodies and antigens to detect their concentrations in biological samples. They are, however, tedious and expensive to create. In response, researchers at Harvard University combined the technologies of immunoassays and DNA microarrays to develop the protein chip.[4] In their landmark paper, published in 2000, "Printing Proteins as Microarrays for High-Throughput Function Determination," Gavin MacBeath and Stuart Schreiber described how to create protein chips and demonstrated three types of applications that would benefit from this new technology. The strengths of their approach were the use of readily available materials (i.e. glass slides, polyacrylamide gel), the relative ease of implementation (robotic microarray printers), and compatibility with standard instrumentation.

Within the past five years, many companies, including Biacore, Invitrogen, and Sigma-Aldrich, have begun production of industrial-level protein array systems that can be used for drug discovery and basic biological research. Since the protein chip's inception in 2000, commercial entities have made protein chip research a streamlined and standardized process on the same level as DNA microarrays.

Academic research plays a huge role in the development and improvement of these technologies. The collaboration of academic research with systems such as the Affymetrix GeneChip and the Human Genome Initiative has allowed for friendly competition, resulting in the advancement of these technologies. Further development brings better understanding and encourages even more research in these fields.

Affymetrix is a company that has been manufacturing microarrays, branded GeneChip, since 1992. It has 13 locations across the world, with major offices in the US (California), the UK, Japan, and China.[5]


## References

1. MacBeath G, Schreiber S. (2000). Printing Proteins as Microarrays for High-Throughput Function Determination. Science. Sep 08; 289 (5485): 1760-1764.
2. Schena M, Shalon D, Davis RW, Brown PO. (1995). Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. Oct 20; 270 (5235): 467-70.
3. Gygi SP, Rochon Y, Franza BR, Aebersold R. (1999). Correlation between protein and mRNA abundance in yeast. Mol Cell Biol. 19: 1720-1730.
4. MacBeath G, Schreiber S. (2000). Printing Proteins as Microarrays for High-Throughput Function Determination. Science. Sep 08; 289 (5485): 1760-1764.
5. "Affymetrix." Wikipedia, The Free Encyclopedia. 5 Feb 2007, 03:19 UTC. Wikimedia Foundation, Inc. Apr 2008 <http://en.wikipedia.org/wiki/Affymetrix>

Chapter written by: Jonathan Keeling and Eric Foster
Contact jwk3970@rit.edu, edf3480@rit.edu, ttl5439@rit.edu

# Proteomics and Drug Discovery


Chapter written by: Piotr Kowalski and Patrick Kenney
Contact: pxk9006@rit.edu, pok7810@rit.edu


## Introduction to the Drug Discovery Process

The process of drug discovery in the modern scientific context is quite complex, integrating many disciplines, including structural biology, metabolomics, proteomics, and computer science, to name a few. The process is generally tedious and expensive, given the sheer number of possible drug-target interactions in vivo and the necessity of passing rigorous pharmacokinetic studies and toxicology assays before a compound is even considered for clinical trials (Burbaum). Though a more detailed explanation is offered further into this text, several key components of the drug discovery process include target selection, lead identification, and preclinical and clinical candidate selection. The schematic on the right outlines the steps involved in the drug discovery process.

The recent boom of the proteomics field, the analysis of the ever-dynamic organismal proteome, has changed the very nature of how drug discovery is undertaken. The power of proteomics to identify proteins involved in disease pathogenesis and to reconstruct physiological pathways facilitates the discovery of novel drug targets, their mechanistic modes of action, and their biological toxicology (Page).

The challenge in drug discovery is to find the exact causes of an underlying disease and a way to negate them or return them to normal levels. A mechanistic understanding of the nature of the disease in question is essential if we are to elucidate any target-specific remedy for it. While the causes of many documented clinical problems vary greatly in nature and origin, in some cases the cause is found at the protein level, involving protein function, protein regulation, or protein-protein interactions. One example of such a disorder is alkaptonuria, characterized by a defect in the gene coding for the enzyme homogentisic acid oxidase, which blocks the metabolism of homogentisic acid to maleylacetoacetic acid within the phenylalanine degradation pathway (Brooker). While the underlying cause of this inborn disease is a single-gene defect, the clinical manifestations, which include excretion of black urine, result from the build-up of homogentisic acid caused by the defective enzyme.

Recent advances in applied genomics have helped the target identification process by allowing high-throughput screening of expressed genes. However, studies have shown a poor correlation between transcript regulation and actual protein quantities, because genome analysis does not account for post-translational processes such as protein modification and protein degradation. The methods employed in drug discovery have therefore started to shift from genomics to proteomics (Burbaum). Analysis of the dynamic organismal proteome, as opposed to the static genome, promises a much more accurate approach to identifying not only applicable biomarkers that will aid in diagnosis, but also effective remedies for diseases of varying origins.

The field of proteomics faces some daunting challenges in comparison to genomics, for several reasons. First, protein science lacks an analogue of the polymerase chain reaction (PCR), which can generate many copies of a single native molecule (nucleic acids, in the case of PCR). Several recent approaches have been applied in an effort to ameliorate this quandary. Chemical synthesis methods exist but are limited by yield, particularly for lengthy peptides. In-vivo expression methods exist as well; however, this approach cannot be applied to producing proteins that alter normal cellular function. Cell-free ribosomal synthesis kits can also be employed for accurate and rapid protein synthesis, though the intrinsic presence of ribosome-inactivating enzymes contributes to the instability of these systems (Madin). Second, in contrast to DNA, protein levels vary significantly depending on cell type and environment. Third, protein abundance is not directly correlated with protein activity; activity is often determined by post-translational modifications such as phosphorylation, and it is activity, not abundance, that is of interest in drug discovery. Finally, proteins form many interactions with other proteins or small molecules, and elucidating these interactions would greatly speed up the drug discovery process. One way this is currently being done is through ligand-bound X-ray crystallographic studies.

The ideal proteomics technique for drug discovery would have the following features: it should be able to separate membrane proteins and detect low-abundance proteins, two abilities required of, but not yet fully realized by, current separation and analytical techniques. It should identify protein activity independently of protein abundance, and it should reveal protein-protein and protein-small-molecule interactions. The method should also be easy to implement, automatable, and capable of high-throughput operation. Proteomics researchers are addressing these issues, and new methods are being developed (Burbaum).

Virtual drug libraries are being developed in both the public and private sectors. These databases contain potential drug compounds; the compounds may or may not exist outside of a computer database, and new compounds developed through various methods of synthesis are continually added. Methods of modifying existing database entries to create new isomers and derivatives are also used, to cover a wider range of potential drug compounds. Docking and scoring are carried out using known and hypothetical drug targets on a protein, coupled with the databases of virtual chemical compounds. In docking, various computational methods are used to position a chemical properly within a protein binding site; genetic algorithms and Monte Carlo methods are two popular approaches for evolving an optimum binding position. This process screens for chemicals that are potential drugs, initially termed hits. After docking, scoring is carried out using mathematical models that estimate the binding strength and energy state of the drug-protein complex. Hits with high-ranking scores are subsequently subjected to in-vivo tests; hits that score positively in both areas become known as leads (Bleicher).
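The Monte Carlo idea mentioned above can be sketched with a toy example. Real docking optimizes full 3-D poses against physics-based scoring functions; the sketch below only illustrates the Metropolis accept/reject logic on a made-up one-dimensional "binding energy" landscape, with all parameter values invented.

```python
# Toy Metropolis Monte Carlo search of the kind used in docking, applied to
# an invented one-dimensional "binding energy" landscape. Real docking scores
# full 3-D poses; this only illustrates the accept/reject logic.
import math
import random

def energy(x):
    """Hypothetical score: a rough landscape with its global minimum near x = 2.2."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)

def monte_carlo(steps=20000, temperature=0.5, seed=0):
    rng = random.Random(seed)
    x, e = 0.0, energy(0.0)
    best_x, best_e = x, e
    for _ in range(steps):
        trial = x + rng.uniform(-0.2, 0.2)   # small random "pose" move
        e_trial = energy(trial)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if e_trial < e or rng.random() < math.exp((e - e_trial) / temperature):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

best_x, best_e = monte_carlo()
print(round(best_x, 2), round(best_e, 3))
```

Accepting occasional uphill moves is what lets the search escape shallow local minima; the temperature parameter controls how often that happens.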

Evaluation of docked and scored complexes is then made, selecting an arbitrary number of top hits for further manual screening. The first two steps are done entirely in silico; however, the best complexes must then be examined using visualization software, often in three-dimensional setups. This allows scientists to ensure that the determined docking orientation looks acceptable, and that the scoring is reasonable based on known interaction energies such as hydrogen bonds and ionic interactions.

The compounds that make it through docking, scoring, and evaluation become drug leads, which are then passed on to scientists in a wet lab for drug testing, to ensure that only compounds with effects relatively unique to the target system and safe to the rest of the organism are considered. By this point, however, the drug company has already saved much time and money by having computers, rather than human scientists, perform the chemical screening.

### References:

Open Access Articles

Burbaum, Jonathan, et al. High Resolution Functional Proteomics by Active Site Peptide Profiling. PNAS. 2005 Mar; 102: 4996-5001.

Madin, Kairat, et al. A highly efficient and robust cell-free protein synthesis system prepared from wheat embryos: Plants apparently contain a suicide system directed at ribosomes. Applied Biological Sciences. 2000 Jan;97(2), 559-564.

Subscription Required Articles

Bleicher, H. Konrad, et al. Hit and Lead Generation: Beyond High-Throughput Screening. Nature Reviews, Drug Discovery. 2003 May:(2), 369-378.

Brooker, J. Robert. Genetics: Analysis and Principles. Benjamin/Cummings Publishers: Menlo Park, CA, 1999. 1st ed., pp. 316-317.

Page, J. Martin, et al. Proteomics as a major new technology for the drug discovery process. Drug Discovery Today. 1999 Feb:(4), 2, 55-62.


# Biomarkers


### Mass spectrometry based targeted protein quantification: methods and applications

[Figure: cross section of a triple quadrupole (triple Q) mass spectrometer]

### Main Focus

The main focus of the paper was a review of the methods and applications of mass spectrometry-based protein quantification, especially for proteins present at concentrations below the µg/ml level, in an attempt to universalize a procedure.

### New terms

MALDI TOF/TOF
matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometer.
Selected reaction monitoring (SRM)
method in which a specific product ion from a specific parent ion is detected. All other ions with masses outside a predetermined range are filtered away, leaving only ions with masses in the range of interest. (source http://en.wikipedia.org/wiki/Mass_chromatogram#Selected_reaction_monitoring_.28SRM.29)
Triple quadrupole (triple Q)
type of MS that contains a linear series of three quadrupoles. The first and third sets act as mass filters, and the second is a collision cell. This type of MS can "filter" an ion of a known mass. (source http://en.wikipedia.org/wiki/Quadrupole_mass_analyzer)
Hydrazide
class of organic compounds sharing a common functional group characterized by an N-N covalent bond, with one of the constituents being an acyl group. (source http://en.wikipedia.org/wiki/Hydrazide)
Biomarker
biochemical feature that can be used to measure the progress of a disease or the effects of treatment. (source http://www.medterms.com/script/main/art.asp?articlekey=6685)
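The SRM concept defined above amounts to a two-stage mass filter: a signal is kept only if both its precursor ("parent") and product m/z match a monitored transition within tolerance. A minimal sketch; every m/z value, intensity, and the tolerance below are invented for illustration.

```python
# Sketch of selected reaction monitoring (SRM) as a filter: keep only signals
# whose precursor and product m/z both match a monitored transition within a
# tolerance. All m/z values and intensities below are invented.

TOLERANCE = 0.5  # m/z matching window, an assumed instrument setting

monitored = [(547.3, 834.4), (622.8, 912.5)]  # (precursor, product) pairs

signals = [  # (precursor m/z, product m/z, intensity)
    (547.2, 834.6, 15000.0),   # matches the first transition
    (547.2, 500.1, 9000.0),    # right precursor, wrong product: filtered out
    (700.0, 834.4, 4000.0),    # wrong precursor: filtered out
    (622.9, 912.3, 22000.0),   # matches the second transition
]

def srm_filter(signals, monitored, tol=TOLERANCE):
    """Keep signals matching a monitored precursor->product transition."""
    return [
        s for s in signals
        if any(abs(s[0] - p) <= tol and abs(s[1] - q) <= tol
               for p, q in monitored)
    ]

for precursor, product, intensity in srm_filter(signals, monitored):
    print(precursor, product, intensity)
```

Requiring both masses to match is what gives SRM its selectivity in complex samples such as plasma.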

For this summary, we will focus on protein biomarkers. Some diseases with protein biomarkers that show promise as a screening tool are breast cancer, Alzheimer's, leukemia, ALS, and Parkinson's [1]. A series of six steps must be accomplished to successfully validate a biomarker or set of biomarkers: discovery, qualification, verification, assay optimization, validation, and commercialization [2]. Once a biomarker is found and accepted, it can potentially be used to predict and prevent the disease it is associated with. The summary below focuses on methods for quantifying proteins in the search for and identification of protein biomarkers. With a universal way to quantify proteins, one could search for all biomarkers in a single screening rather than multiple screenings, once conclusive biomarkers are identified.

### Summary

With recent breakthroughs in technology, it is conceivable that a "universal" approach, one with minimal restrictions for quantitatively assaying a wide number of proteins in search of potential biomarkers, is possible. Once a few potential biomarkers are discovered, further research can confirm or refute their use in clinical applications. Another goal is to easily combine multiple detections in a single measurement. Measurements are made by spiking in synthetic, stable-isotope-labeled peptides; each labeled peptide mimics its endogenous counterpart, allowing high selectivity.

Mass spectrometry (MS) provides us with a powerful tool to compare two different protein samples. It can be used for comparing the proteome of a diseased sample against a normal sample at a global scale. This is applied to a wide array of human diseases, with the hope that it will lead to identification of biomarkers or even pathogenesis of a disease. Traditionally, ELISA (enzyme-linked immunosorbent assay) has been the major method for the quantification of proteins with good sensitivity. Even today, it is the “gold standard” for targeted protein quantification. The major drawback with ELISA is the lack of availability of antibodies with high specificity.

First attempts to determine the amount of specific proteins were made approximately 20 years ago using stable isotope dilution methods and MS, starting with fast atom bombardment MS and deuterium-labeled synthetic peptides. Advances in MS instrumentation have increased our ability to detect candidate proteins in complicated biological samples with high sensitivity. To quantify the results, introduction of a stable isotope (containing 13C or 15N, for example) into selected amino acids of a reference peptide provides a peptide with the same physicochemical properties that can nonetheless be readily distinguished by MS from the corresponding peptide in the target tissue or fluid. Studies have shown that full-length, stable-isotope-labeled proteins can be used to quantify biomarkers in urine and water samples with nanomolar and picomolar sensitivity, respectively.
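The arithmetic behind stable isotope dilution is simple: spike in a known amount of the heavy-labeled reference peptide, then read the endogenous ("light") amount off the light/heavy intensity ratio. A minimal sketch; the intensities and spike amount below are invented.

```python
# Sketch of stable isotope dilution quantification: a known amount of a
# heavy-labeled reference peptide is spiked into the sample, and the
# endogenous ("light") amount follows from the measured light/heavy
# intensity ratio. The example values are invented.

def endogenous_amount(light_intensity, heavy_intensity, spiked_fmol):
    """Amount of endogenous peptide, assuming the light and heavy forms
    ionize identically (they are chemically equivalent)."""
    return spiked_fmol * light_intensity / heavy_intensity

# Example: 100 fmol of heavy peptide spiked in; light peak is 0.42x the heavy.
light, heavy = 8.4e5, 2.0e6
print(endogenous_amount(light, heavy, 100.0))  # 42.0 fmol endogenous
```

Because both forms experience the same losses during sample handling and ionization, the ratio is robust even when absolute signal varies from run to run.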

A variety of MS platforms are used for quantitative proteomics, including the triple quadrupole (triple Q), the matrix-assisted laser desorption/ionization time-of-flight tandem mass spectrometer (MALDI TOF/TOF), electrospray ionization (ESI) based on QTOF MS, and ion trap instruments using selected ion monitoring (SIM) mode. The most popular of these platforms is the triple Q; demonstrations have shown that it can multiplex and simultaneously target more than 50 peptides for quantification in plasma in a single measurement. For targeted quantitative analysis, coupling liquid chromatography with MALDI greatly enhances the performance of MALDI-based MS. Advantages of this combination include the ability to perform the techniques in parallel, a highly selective, data-driven procedure, and partial preservation of the sample for repeat analysis. The technique is also notable for its potential high throughput and excellent resolution.

One of the most important steps in quantification is sample preparation, which greatly influences sensitivity. A common step is depletion of highly abundant proteins, which enhances the analytical dynamic range and the detection of low-concentration proteins. One technique, strong cation exchange chromatography (SCX), has been shown to enable detection of peptides at the high pg/ml level, a 100-fold improvement over direct plasma analysis.

Post-translational modification (PTM) is an important process to understand, since it is often involved in tumor progression, but PTMs can be difficult to mimic due to the complexity and structure of the sugar chains involved (as in glycosylation). One experiment extracted N-linked glycopeptides and de-glycosylated them, converting Asn to Asp and producing a mass difference. This difference was used to make a synthetic peptide replicating the de-glycosylated form of the N-linked glycopeptide.

One of the main applications of MS-based quantification is clinical: identifying biomarkers associated with diseases. For example, 177 candidate proteins associated with stroke and cardiovascular disease in plasma have been proposed. Biomarkers affiliated with stroke include S-100b, B-type neurotrophic growth factor, von Willebrand factor, and monocyte chemotactic protein-1 [3]. Other biomarkers have been proposed for rheumatoid arthritis and breast cancer, among other diseases.

One of the main goals of quantifying proteins and peptides is personalized medicine. As technology advances, we will be able to create techniques that easily combine multiple detections in a single measurement. Biomarkers for several diseases can also be multiplexed in a single assay, potentially allowing diagnosis of multiple diseases at once. Ideally, a single-step sample preparation strategy is key, allowing high throughput and possibly an automated process, reducing the amount of human interaction and the chance of human error.

### Relevance to Proteomics Course

The ability to quantify proteins using mass spectrometry is a great tool for comparing a large number of proteins between a control sample and a test sample in search of biomarkers. When a noticeable difference is detected, further studies can be performed on those proteins. Major breakthroughs in MS technology give us the capability to take a near-universal approach to quantifying a wide spectrum of proteins with few restrictions, and to make more detections per measurement. In the future, these approaches may open the way to personalized medicine, letting us screen individuals by detecting multiple biomarkers for one or several diseases.

### References

[1] Pharmaceutical Outsourcing Decisions. SPG Media Limited. (http://www.pharmaceuticaloutsourcing.com/articles/pod003_014_power3.htm)

[2] Rifai, Nader, Gillette, Michael A., and Carr, Steven A. "Protein biomarker discovery and validation: the long and uncertain path to clinical utility". Nature Biotechnology 24, 971 - 983 (2006) (http://www.nature.com/nbt/journal/v24/n8/abs/nbt1235.html)

[3] Reynolds, Mark A., et al. "Early Biomarkers of Stroke". Clinical Chemistry 49 (2003): 1733-1739. Print. (http://www.clinchem.org/cgi/content/abstract/49/10/1733)

# Experimental Protocols


This page contains protocols that are frequently used in proteomics. You are welcome to add protocols to this chapter.

1. Plant Proteomics: Two-Dimensional Gel Electrophoresis

# Contributors list

• Alexander Butarbutar - nbb3924@rit.edu
• Andres Javier Gonzalez - ajg3600@rit.edu
• Anthony Esposito - age5719@rit.edu
• Aubrey Bailey - aubreybailey@gmail.com
• Jared Carter - Adarza BioSystems, Inc
• John Boutell - JDBoutell@gmail.com
• John Brothers II - jfb4497@rit.edu / jb2@bu.edu
• Laura Grell - llg3875@rit.edu
• Leighton Ing - leighton.ing@gmail.com
• Lukas Habegger - lh9357@rit.edu
• Melissa Wilbert - mlw3559@rit.edu
• Mitul Patel (Bioinformatician) - mitul428@gmail.com
• Patrick Kenney - poksch@rit.edu
• Piotr A. Kowalski - pxk9006@rit.edu
• Tom Maxon - TomMaxon@gmail.com
• Vishal Thovarai - vvt1936@rit.edu
