Principal Component Analysis (PCA) is a powerful technique with wide applicability in data science, bioinformatics, and beyond. It was originally developed to analyse large volumes of data in order to tease out the differences and relationships between the entities being analysed. It extracts the fundamental structure of the data without the need to build any model to represent it. This 'summary' of the data is arrived at through a process of reduction that transforms the large number of variables into a smaller number of uncorrelated variables (the 'principal components'), which remain straightforward to interpret in terms of the original data [@PCAtools] [@BligheK].
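The core idea can be demonstrated in a few lines of base R (a toy example using `prcomp`, not PCAtools itself): correlated input variables are rotated into components that are mutually uncorrelated, with the first component capturing the shared variation.

```r
# Toy illustration with base R's prcomp: two correlated variables plus
# one independent variable are rotated into uncorrelated components.
set.seed(1)
x <- rnorm(100)
dat <- data.frame(
  a = x,
  b = x + rnorm(100, sd = 0.3),  # strongly correlated with 'a'
  c = rnorm(100))                # independent noise
pc <- prcomp(dat, scale. = TRUE)

# the component scores are uncorrelated: off-diagonal entries are ~0
round(cor(pc$x), 4)

# PC1 captures most of the shared variation between 'a' and 'b'
summary(pc)$importance['Proportion of Variance', ]
```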
PCAtools provides functions for data exploration via PCA and allows the user to generate publication-ready figures. PCA is performed via BiocSingular [@Lun] - users can also identify the optimal number of principal components via different metrics, such as the elbow method and Horn's parallel analysis [@Horn] [@Buja], which has relevance for data reduction in single-cell RNA-seq (scRNA-seq) and high-dimensional mass cytometry data.
```r
if (!requireNamespace('BiocManager', quietly = TRUE))
  install.packages('BiocManager')

BiocManager::install('PCAtools')
```
Note: to install the development version directly from GitHub:
```r
if (!requireNamespace('devtools', quietly = TRUE))
  install.packages('devtools')

devtools::install_github('kevinblighe/PCAtools')
```
```r
library(PCAtools)
```
For this vignette, we will load breast cancer gene expression data with recurrence-free survival (RFS) from Gene Expression Profiling in Breast Cancer: Understanding the Molecular Basis of Histologic Grade To Improve Prognosis.
First, let's read in and prepare the data:
```r
library(Biobase)
library(GEOquery)

# load series and platform data from GEO
gset <- getGEO('GSE2990', GSEMatrix = TRUE, getGPL = FALSE)
mat <- exprs(gset[[1]])

# remove Affymetrix control probes
mat <- mat[-grep('^AFFX', rownames(mat)),]

# extract information of interest from the phenotype data (pdata)
idx <- which(colnames(pData(gset[[1]])) %in%
  c('relation', 'age:ch1', 'distant rfs:ch1', 'er:ch1',
    'ggi:ch1', 'grade:ch1', 'size:ch1', 'time rfs:ch1'))
metadata <- data.frame(pData(gset[[1]])[,idx],
  row.names = rownames(pData(gset[[1]])))

# tidy column names
colnames(metadata) <- c('Study', 'Age', 'Distant.RFS', 'ER',
  'GGI', 'Grade', 'Size', 'Time.RFS')

# prepare certain phenotypes of interest
metadata$Study <- gsub('Reanalyzed by: ', '', as.character(metadata$Study))
metadata$Age <- as.numeric(gsub('^KJ', NA, as.character(metadata$Age)))
metadata$Distant.RFS <- factor(metadata$Distant.RFS, levels = c(0,1))
metadata$ER <- factor(gsub('\\?', NA, as.character(metadata$ER)),
  levels = c(0,1))
metadata$ER <- factor(ifelse(metadata$ER == 1, 'ER+', 'ER-'),
  levels = c('ER-', 'ER+'))
metadata$GGI <- as.numeric(as.character(metadata$GGI))
metadata$Grade <- factor(gsub('\\?', NA, as.character(metadata$Grade)),
  levels = c(1,2,3))
metadata$Grade <- gsub(1, 'Grade 1', gsub(2, 'Grade 2',
  gsub(3, 'Grade 3', metadata$Grade)))
metadata$Grade <- factor(metadata$Grade,
  levels = c('Grade 1', 'Grade 2', 'Grade 3'))
metadata$Size <- as.numeric(as.character(metadata$Size))
metadata$Time.RFS <- as.numeric(gsub('^KJX|^KJ', NA, metadata$Time.RFS))

# remove samples from the pdata that have any NA value
discard <- apply(metadata, 1, function(x) any(is.na(x)))
metadata <- metadata[!discard,]

# filter the expression data to match the samples in our pdata
mat <- mat[,which(colnames(mat) %in% rownames(metadata))]

# check that sample names match exactly between pdata and expression data
all(colnames(mat) == rownames(metadata))
```
Conduct principal component analysis (PCA):
```r
p <- pca(mat, metadata = metadata, removeVar = 0.1)
```
```r
screeplot(p, axisLabSize = 18, titleLabSize = 22)
```
Different interpretations of the biplot exist. In the OMICs era, for most general users, a biplot is a simple representation of samples in a 2-dimensional space, usually focusing on just the first two PCs:
```r
biplot(p)
```
However, the original definition of a biplot by Gabriel KR [@Gabriel] is a plot that displays both variables and observations (samples) in the same space. The variables are indicated by arrows drawn from the origin, which indicate their 'weight' in different directions. We touch on this later via the plotloadings function.
```r
biplot(p, showLoadings = TRUE, lab = NULL)
```
One of the probes pointing downward is 205225_at, which targets the ESR1 gene. This is already a useful validation, as the oestrogen receptor, which is in part encoded by ESR1, is strongly represented by PC2 (y-axis), with negative-to-positive receptor status going from top-to-bottom.
More on this later in this vignette.
```r
pairsplot(p)
```
If the biplot was previously generated with showLoadings = TRUE, check how this loadings plot corresponds to the biplot loadings - they should match up for the top hits.
```r
plotloadings(p, labSize = 3)
```
```r
eigencorplot(p,
  metavars = c('Study','Age','Distant.RFS','ER',
    'GGI','Grade','Size','Time.RFS'))
```
The rotated data that represent the observations (samples) are stored in p$rotated, while the variable loadings are stored in p$loadings:
```r
p$rotated[1:5,1:5]
p$loadings[1:5,1:5]
```
All functions in PCAtools are highly configurable and should cover virtually all basic and advanced user requirements. The following sections take a look at some of these advanced features, and form a somewhat practical example of how one can use PCAtools to make a clinical interpretation of data.
First, let's sort out the gene annotation by mapping the probe IDs to gene symbols. The array used for this study was the Affymetrix U133a, so let's use the hgu133a.db Bioconductor package:
```r
suppressMessages(require(hgu133a.db))

newnames <- mapIds(hgu133a.db,
  keys = rownames(p$loadings),
  column = 'SYMBOL',
  keytype = 'PROBEID')

# tidy up for NULL mappings and duplicated gene symbols
newnames <- ifelse(is.na(newnames) | duplicated(newnames),
  names(newnames), newnames)

rownames(p$loadings) <- newnames
```
A scree plot on its own just shows the cumulative proportion of explained variation, but how can we determine the optimum number of PCs to retain?
PCAtools provides two metrics for this purpose: Horn's parallel analysis and the elbow method.
Let's perform Horn's parallel analysis first:
```r
horn <- parallelPCA(mat)
horn$n
```
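The intuition behind Horn's method can be sketched in base R (an illustrative toy, not PCAtools' BiocSingular-based implementation): retain components whose variance exceeds what is observed when the columns of the data are independently permuted, i.e. when any correlation structure is destroyed.

```r
# Sketch of Horn's parallel analysis on toy data: compare real
# eigenvalues against the 95th percentile of eigenvalues from
# column-permuted (structure-free) versions of the same data.
set.seed(42)
obs <- matrix(rnorm(200 * 10), ncol = 10)
obs[, 1:3] <- obs[, 1:3] + rnorm(200)  # inject shared signal into 3 columns

real.eig <- prcomp(obs, scale. = TRUE)$sdev^2

perm.eig <- replicate(50, {
  rand <- apply(obs, 2, sample)        # permute each column independently
  prcomp(rand, scale. = TRUE)$sdev^2
})
thresh <- apply(perm.eig, 1, quantile, probs = 0.95)

sum(real.eig > thresh)  # number of PCs to retain
```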
Now the elbow method:
```r
elbow <- findElbowPoint(p$variance)
elbow
```
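A common elbow heuristic (a hedged sketch - not necessarily findElbowPoint's exact algorithm) is to pick the point on the variance curve that lies furthest from the straight line joining its first and last points:

```r
# Elbow detection sketch: maximise perpendicular distance from the
# line connecting the first and last points of the scree curve.
variance <- c(40, 20, 10, 5, 4, 3.5, 3, 2.8, 2.6, 2.5)  # toy % variance
n <- length(variance)
x <- seq_len(n)

# perpendicular distance of each point from the line (x1,y1)-(xn,yn)
dy <- variance[n] - variance[1]
dx <- x[n] - x[1]
d <- abs(dy * x - dx * variance + dx * variance[1] - dy * x[1]) /
  sqrt(dx^2 + dy^2)

which.max(d)  # index of the elbow
```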
In most cases, the identified values will disagree. This is because finding the correct number of PCs is a difficult task and is akin to finding the 'correct' number of clusters in a dataset - there is no correct answer.
Taking these values, we can produce a new scree plot and mark these:
```r
library(ggplot2)

screeplot(p,
  components = getComponents(p, 1:20),
  vline = c(horn$n, elbow)) +
  geom_label(aes(x = horn$n + 1, y = 50,
    label = 'Horn\'s', vjust = -1, size = 8)) +
  geom_label(aes(x = elbow + 1, y = 50,
    label = 'Elbow method', vjust = -1, size = 8))
```
If all else fails, one can simply take the number of PCs that contributes to a pre-selected total of explained variation, e.g., in this case, 27 PCs account for >80% explained variation.
```r
which(cumsum(p$variance) > 80)[1]
```
The bi-plot comparing PC1 versus PC2 is the most characteristic plot of PCA. However, PCA is much more than the bi-plot, and much more than PC1 and PC2. That said, PC1 and PC2, by the very nature of PCA, are indeed usually the most important parts of a PCA analysis.
In a bi-plot, we can shade the points by different groups and add many more features.
```r
biplot(p,
  lab = paste0(p$metadata$Age, ' years'),
  colby = 'ER',
  hline = 0, vline = 0,
  legendPosition = 'right')
```
The encircle functionality literally draws a polygon around each group specified by colby. It says nothing about any statistic pertaining to each group.
```r
biplot(p,
  colby = 'ER',
  colkey = c('ER+' = 'forestgreen', 'ER-' = 'purple'),
  colLegendTitle = 'ER-\nstatus',
  # encircle config
  encircle = TRUE,
  encircleFill = TRUE,
  hline = 0, vline = c(-25, 0, 25),
  legendPosition = 'top', legendLabSize = 16, legendIconSize = 8.0)

biplot(p,
  colby = 'ER',
  colkey = c('ER+' = 'forestgreen', 'ER-' = 'purple'),
  colLegendTitle = 'ER-\nstatus',
  # encircle config
  encircle = TRUE,
  encircleFill = FALSE,
  encircleAlpha = 1, encircleLineSize = 5,
  hline = 0, vline = c(-25, 0, 25),
  legendPosition = 'top', legendLabSize = 16, legendIconSize = 8.0)
```
Stat ellipses are also drawn around each group but have a greater statistical meaning and can be used, for example, as a strict determination of outlier samples. Here, we draw ellipses around each group at the 95% confidence level:
```r
biplot(p,
  colby = 'ER',
  colkey = c('ER+' = 'forestgreen', 'ER-' = 'purple'),
  # ellipse config
  ellipse = TRUE,
  ellipseConf = 0.95,
  ellipseFill = TRUE,
  ellipseAlpha = 1/4,
  ellipseLineSize = 1.0,
  xlim = c(-125, 125), ylim = c(-50, 80),
  hline = 0, vline = c(-25, 0, 25),
  legendPosition = 'top', legendLabSize = 16, legendIconSize = 8.0)

biplot(p,
  colby = 'ER',
  colkey = c('ER+' = 'forestgreen', 'ER-' = 'purple'),
  # ellipse config
  ellipse = TRUE,
  ellipseConf = 0.95,
  ellipseFill = TRUE,
  ellipseAlpha = 1/4,
  ellipseLineSize = 0,
  ellipseFillKey = c('ER+' = 'yellow', 'ER-' = 'pink'),
  xlim = c(-125, 125), ylim = c(-50, 80),
  hline = 0, vline = c(-25, 0, 25),
  legendPosition = 'top', legendLabSize = 16, legendIconSize = 8.0)
```
```r
biplot(p,
  colby = 'ER',
  colkey = c('ER+' = 'forestgreen', 'ER-' = 'purple'),
  hline = c(-25, 0, 25), vline = c(-25, 0, 25),
  legendPosition = 'top', legendLabSize = 13, legendIconSize = 8.0,
  shape = 'Grade',
  shapekey = c('Grade 1' = 15, 'Grade 2' = 17, 'Grade 3' = 8),
  drawConnectors = FALSE,
  title = 'PCA bi-plot',
  subtitle = 'PC1 versus PC2',
  caption = '27 PCs ≈ 80%')
```
```r
biplot(p,
  lab = NULL,
  colby = 'ER', colkey = c('ER+' = 'royalblue', 'ER-' = 'red3'),
  hline = c(-25, 0, 25), vline = c(-25, 0, 25),
  vlineType = c('dotdash', 'solid', 'dashed'),
  gridlines.major = FALSE, gridlines.minor = FALSE,
  pointSize = 5,
  legendPosition = 'left', legendLabSize = 14, legendIconSize = 8.0,
  shape = 'Grade',
  shapekey = c('Grade 1' = 15, 'Grade 2' = 17, 'Grade 3' = 8),
  drawConnectors = FALSE,
  title = 'PCA bi-plot',
  subtitle = 'PC1 versus PC2',
  caption = '27 PCs ≈ 80%')
```
Let's plot the same as above but with loadings:
```r
biplot(p,
  # loadings parameters
  showLoadings = TRUE,
  lengthLoadingsArrowsFactor = 1.5,
  sizeLoadingsNames = 4,
  colLoadingsNames = 'red4',
  # other parameters
  lab = NULL,
  colby = 'ER', colkey = c('ER+' = 'royalblue', 'ER-' = 'red3'),
  hline = 0, vline = c(-25, 0, 25),
  vlineType = c('dotdash', 'solid', 'dashed'),
  gridlines.major = FALSE, gridlines.minor = FALSE,
  pointSize = 5,
  legendPosition = 'left', legendLabSize = 14, legendIconSize = 8.0,
  shape = 'Grade',
  shapekey = c('Grade 1' = 15, 'Grade 2' = 17, 'Grade 3' = 8),
  drawConnectors = FALSE,
  title = 'PCA bi-plot',
  subtitle = 'PC1 versus PC2',
  caption = '27 PCs ≈ 80%')
```
There are two ways to colour by a continuous variable. In the first way, we simply 'add on' a continuous colour scale via scale_colour_gradient:
```r
# add ESR1 gene expression to the metadata
p$metadata$ESR1 <- mat['205225_at',]

biplot(p,
  x = 'PC2', y = 'PC3',
  lab = NULL,
  colby = 'ESR1',
  shape = 'ER',
  hline = 0, vline = 0,
  legendPosition = 'right') +
  scale_colour_gradient(low = 'gold', high = 'red2')
```
We can also simply let the internal ggplot2 engine pick the colour scheme - here, we also plot PC10 versus PC50:
```r
biplot(p,
  x = 'PC10', y = 'PC50',
  lab = NULL,
  colby = 'Age',
  hline = 0, vline = 0,
  hlineWidth = 1.0, vlineWidth = 1.0,
  gridlines.major = FALSE, gridlines.minor = TRUE,
  pointSize = 5,
  legendPosition = 'left', legendLabSize = 16, legendIconSize = 8.0,
  shape = 'Grade',
  shapekey = c('Grade 1' = 15, 'Grade 2' = 17, 'Grade 3' = 8),
  drawConnectors = FALSE,
  title = 'PCA bi-plot',
  subtitle = 'PC10 versus PC50',
  caption = '27 PCs ≈ 80%')
```
The pairs plot is an unfortunately underused part of PCA; however, for those who love exploring data and squeezing every last ounce of information out of it, a pairs plot provides a relatively quick way to identify useful leads for downstream analyses.
As the number of pairwise plots increases, however, space becomes limited. We can shut off titles and axis labeling to save space. Reducing point size and colouring by a variable of interest can additionally help us to rapidly skim over the data.
```r
pairsplot(p,
  components = getComponents(p, c(1:10)),
  triangle = TRUE, trianglelabSize = 12,
  hline = 0, vline = 0,
  pointSize = 0.4,
  gridlines.major = FALSE, gridlines.minor = FALSE,
  colby = 'Grade',
  title = 'Pairs plot',
  plotaxes = FALSE,
  margingaps = unit(c(-0.01, -0.01, -0.01, -0.01), 'cm'))
```
We can arrange these in a way that makes better use of the screen space by setting triangle = FALSE. In this case, we can further control the layout with the ncol and nrow parameters, although the function will automatically determine these based on your input data.
```r
pairsplot(p,
  components = getComponents(p, c(4, 33, 11, 1)),
  triangle = FALSE,
  hline = 0, vline = 0,
  pointSize = 0.8,
  gridlines.major = FALSE, gridlines.minor = FALSE,
  colby = 'ER',
  title = 'Pairs plot', titleLabSize = 22,
  axisLabSize = 14,
  plotaxes = TRUE,
  margingaps = unit(c(0.1, 0.1, 0.1, 0.1), 'cm'))
```
If, on the bi-plot or pairs plot, we encounter evidence that one or more PCs segregate a factor of interest, we can explore further the genes that are driving the differences along those PCs.
For each PC of interest, 'plotloadings' determines the variables falling within the top/bottom 5% of the loadings range, and then creates a final consensus list of these. These variables are then plotted.
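The selection logic just described can be sketched on a toy loadings matrix in base R (a hedged approximation of the idea, not plotloadings' internals; the variable names here are invented for illustration):

```r
# For each PC, keep variables within the top/bottom 5% of that PC's
# loadings range, then take the union ('consensus') across PCs.
set.seed(7)
loadings <- matrix(rnorm(50 * 2), ncol = 2,
  dimnames = list(paste0('var', 1:50), c('PC1', 'PC2')))

rangeRetain <- 0.05  # mirrors the plotloadings parameter name
hits <- lapply(colnames(loadings), function(pc) {
  l <- loadings[, pc]
  offset <- (max(l) - min(l)) * rangeRetain
  names(l)[l >= max(l) - offset | l <= min(l) + offset]
})

unique(unlist(hits))  # consensus list of top variables
```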
The loadings plot, like all others, is highly configurable. To modify the cut-off for inclusion / exclusion of variables, we use rangeRetain, where 0.01 equates to the top/bottom 1% of the loadings range per PC.
```r
plotloadings(p,
  rangeRetain = 0.01,
  labSize = 4.0,
  title = 'Loadings plot',
  subtitle = 'PC1, PC2, PC3, PC4, PC5',
  caption = 'Top 1% variables',
  shape = 24,
  col = c('limegreen', 'black', 'red3'),
  drawConnectors = TRUE)
```
At least one interesting finding is 205225_at / ESR1, which is by far the gene most responsible for variation along PC2. The previous bi-plots showed that this PC also segregated ER+ from ER- patients. The same result is verified in the biplots with loadings that we generated earlier, and the other hits could be explored in the same way.
With the loadings plot, in addition, we can instead plot absolute values and modify the point sizes to be proportional to the loadings. We can also switch off the line connectors and plot the loadings for any PCs:
```r
plotloadings(p,
  components = getComponents(p, c(4, 33, 11, 1)),
  rangeRetain = 0.1,
  labSize = 4.0,
  absolute = FALSE,
  title = 'Loadings plot',
  subtitle = 'Misc PCs',
  caption = 'Top 10% variables',
  shape = 23, shapeSizeRange = c(1, 16),
  col = c('white', 'pink'),
  drawConnectors = FALSE)
```
Further exploration of the PCs can come through correlations with clinical data. This is also a mostly untapped resource in the era of 'big data' and can help to guide an analysis down a particular path.
We may wish, for example, to correlate all PCs that account for 80% variation in our dataset and then explore further the PCs that have statistically significant correlations.
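The principle behind such PC-metadata correlations can be sketched in base R on assumed toy data (invented for illustration; eigencorplot handles this, and factor encoding, for real data):

```r
# Correlate each PC's scores against a numeric clinical variable.
set.seed(3)
scores <- matrix(rnorm(60 * 3), ncol = 3,
  dimnames = list(NULL, paste0('PC', 1:3)))
age <- scores[, 'PC2'] * 2 + rnorm(60)  # make PC2 track 'age'

res <- apply(scores, 2, function(pc) {
  ct <- cor.test(pc, age, method = 'pearson')
  c(r = unname(ct$estimate), p = ct$p.value)
})

round(res, 3)  # correlation and p-value per PC; PC2 stands out
```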
'eigencorplot' is built upon another function by the PCAtools developers, namely CorLevelPlot. Further examples can be found there.
```r
eigencorplot(p,
  components = getComponents(p, 1:27),
  metavars = c('Study','Age','Distant.RFS','ER',
    'GGI','Grade','Size','Time.RFS'),
  col = c('darkblue', 'blue2', 'black', 'red2', 'darkred'),
  cexCorval = 0.7,
  colCorval = 'white',
  fontCorval = 2,
  posLab = 'bottomleft',
  rotLabX = 45,
  posColKey = 'top',
  cexLabColKey = 1.5,
  scale = TRUE,
  main = 'PC1-27 clinical correlations',
  colFrame = 'white',
  plotRsquared = FALSE)
```
We can also supply different cut-offs for statistical significance, apply p-value adjustment, plot R-squared values, and specify correlation method:
```r
eigencorplot(p,
  components = getComponents(p, 1:horn$n),
  metavars = c('Study','Age','Distant.RFS','ER','GGI',
    'Grade','Size','Time.RFS'),
  col = c('white', 'cornsilk1', 'gold', 'forestgreen', 'darkgreen'),
  cexCorval = 1.2,
  fontCorval = 2,
  posLab = 'all',
  rotLabX = 45,
  scale = TRUE,
  main = bquote(Principal ~ component ~ Pearson ~ r^2 ~ clinical ~ correlates),
  plotRsquared = TRUE,
  corFUN = 'pearson',
  corUSE = 'pairwise.complete.obs',
  corMultipleTestCorrection = 'BH',
  signifSymbols = c('****', '***', '**', '*', ''),
  signifCutpoints = c(0, 0.0001, 0.001, 0.01, 0.05, 1))
```
Clearly, PC2 is coming across as the most interesting PC in this experiment, with highly statistically significant correlation (p<0.0001) to ER status, tumour grade, and GGI (Genomic Grade Index), an indicator of response. It comes as no surprise that the gene driving most variation along PC2 is ESR1, identified from our loadings plot.
This information is, of course, not new, but shows how PCA is much more than just a bi-plot used to identify outliers!
```r
pscree <- screeplot(p, components = getComponents(p, 1:30),
  hline = 80, vline = 27, axisLabSize = 14, titleLabSize = 20,
  returnPlot = FALSE) +
  geom_label(aes(20, 80, label = '80% explained variation',
    vjust = -1, size = 8))

ppairs <- pairsplot(p, components = getComponents(p, c(1:3)),
  triangle = TRUE, trianglelabSize = 12,
  hline = 0, vline = 0,
  pointSize = 0.8,
  gridlines.major = FALSE, gridlines.minor = FALSE,
  colby = 'Grade',
  title = '', plotaxes = FALSE,
  margingaps = unit(c(0.01, 0.01, 0.01, 0.01), 'cm'),
  returnPlot = FALSE)

pbiplot <- biplot(p,
  # loadings parameters
  showLoadings = TRUE,
  lengthLoadingsArrowsFactor = 1.5,
  sizeLoadingsNames = 4,
  colLoadingsNames = 'red4',
  # other parameters
  lab = NULL,
  colby = 'ER', colkey = c('ER+' = 'royalblue', 'ER-' = 'red3'),
  hline = 0, vline = c(-25, 0, 25),
  vlineType = c('dotdash', 'solid', 'dashed'),
  gridlines.major = FALSE, gridlines.minor = FALSE,
  pointSize = 5,
  legendPosition = 'none', legendLabSize = 16, legendIconSize = 8.0,
  shape = 'Grade',
  shapekey = c('Grade 1' = 15, 'Grade 2' = 17, 'Grade 3' = 8),
  drawConnectors = FALSE,
  title = 'PCA bi-plot',
  subtitle = 'PC1 versus PC2',
  caption = '27 PCs ≈ 80%',
  returnPlot = FALSE)

ploadings <- plotloadings(p, rangeRetain = 0.01, labSize = 4,
  title = 'Loadings plot', axisLabSize = 12,
  subtitle = 'PC1, PC2, PC3, PC4, PC5',
  caption = 'Top 1% variables',
  shape = 24, shapeSizeRange = c(4, 8),
  col = c('limegreen', 'black', 'red3'),
  legendPosition = 'none',
  drawConnectors = FALSE,
  returnPlot = FALSE)

peigencor <- eigencorplot(p,
  components = getComponents(p, 1:10),
  metavars = c('Study','Age','Distant.RFS','ER',
    'GGI','Grade','Size','Time.RFS'),
  cexCorval = 1.0,
  fontCorval = 2,
  posLab = 'all',
  rotLabX = 45,
  scale = TRUE,
  main = 'PC clinical correlates',
  cexMain = 1.5,
  plotRsquared = FALSE,
  corFUN = 'pearson',
  corUSE = 'pairwise.complete.obs',
  signifSymbols = c('****', '***', '**', '*', ''),
  signifCutpoints = c(0, 0.0001, 0.001, 0.01, 0.05, 1),
  returnPlot = FALSE)

library(cowplot)
library(ggplotify)

top_row <- plot_grid(pscree, ppairs, pbiplot,
  ncol = 3,
  labels = c('A', 'B  Pairs plot', 'C'),
  label_fontfamily = 'serif',
  label_fontface = 'bold',
  label_size = 22,
  align = 'h',
  rel_widths = c(1.10, 0.80, 1.10))

bottom_row <- plot_grid(ploadings,
  as.grob(peigencor),
  ncol = 2,
  labels = c('D', 'E'),
  label_fontfamily = 'serif',
  label_fontface = 'bold',
  label_size = 22,
  align = 'h',
  rel_widths = c(0.8, 1.2))

plot_grid(top_row, bottom_row, ncol = 1,
  rel_heights = c(1.1, 0.9))
```
It is possible to use the variable loadings as part of a matrix calculation to 'predict' principal component eigenvectors in new data. This is elaborated in a posting by Pandula Priyadarshana: How to use Principal Component Analysis (PCA) to make Predictions.
The pca class, which is created by PCAtools, is not configured to work with stats::predict; however, the trusty prcomp class is. We can manually create a prcomp object and then use that in model prediction, as in the following code chunk:
```r
p <- pca(mat, metadata = metadata, removeVar = 0.1)

p.prcomp <- list(sdev = p$sdev,
  rotation = data.matrix(p$loadings),
  x = data.matrix(p$rotated),
  center = TRUE, scale = TRUE)
class(p.prcomp) <- 'prcomp'

# for this simple example, just use a chunk of
# the original data for the prediction
newdata <- t(mat[,seq(1,20)])
predict(p.prcomp, newdata = newdata)[,1:5]
```
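The underlying matrix algebra can be verified with base R alone (a small toy sketch on random data): projecting centred and scaled new data onto the loadings reproduces what predict() computes for a prcomp object.

```r
# Verify the 'predict' matrix calculation on toy data: new scores are
# scale(newdata, center, scale) %*% rotation.
set.seed(9)
train <- matrix(rnorm(30 * 5), ncol = 5)
pc <- prcomp(train, center = TRUE, scale. = TRUE)

new <- matrix(rnorm(4 * 5), ncol = 5)
manual <- scale(new, center = pc$center, scale = pc$scale) %*% pc$rotation

all.equal(manual, predict(pc, newdata = new))  # TRUE
```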
The development of PCAtools has benefited from contributions and suggestions from:
```r
sessionInfo()
```
- [@PCAtools] Blighe K, Lun A. PCAtools: Everything Principal Components Analysis. R/Bioconductor package.
- [@BligheK] Blighe K. https://github.com/kevinblighe
- [@Horn] Horn JL (1965). A rationale and test for the number of factors in factor analysis. Psychometrika 30(2), 179-185.
- [@Buja] Buja A, Eyuboglu N (1992). Remarks on parallel analysis. Multivariate Behavioral Research 27(4), 509-540.
- [@Lun] Lun A. BiocSingular: Singular Value Decomposition for Bioconductor Packages. R/Bioconductor package.
- [@Gabriel] Gabriel KR (1971). The biplot graphic display of matrices with application to principal component analysis. Biometrika 58(3), 453-467.