View source: R/Normalization.R
ct.normalizeFQ (R Documentation)

Description
This function applies quantile normalization to subsets of samples defined by a provided factor, correcting for library size. It does this by converting raw count values to log2 counts per million, optionally adjusting further in the usual way by dividing these values by user-specified library size factors; the resulting matrix is then split into groups according to the provided factor, each group is quantile normalized, and the groups are median scaled to one another before conversion back into raw counts. This method is best used in comparisons for long timecourse screens, where groupwise differences in growth rate cause uneven intrinsic dilation of the construct distributions.
Note that this normalization strategy is not appropriate for experiments where significant distortion of the libraries is expected as a consequence of the screening strategy (e.g., strong selection screens).
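To make the sequence of steps concrete, below is a minimal sketch of the factored quantile normalization idea in R. It is an illustration under assumptions (a plain count matrix, limma::normalizeQuantiles() for the within-group step, and a hypothetical helper name factoredQuantileSketch()), not the actual ct.normalizeFQ() implementation, which may differ in its CPM offsets, scaling, and rounding.

## Sketch only: not the ct.normalizeFQ() source
library(limma)

factoredQuantileSketch <- function(counts, sets, lib.size = colSums(counts)) {
  sets <- as.factor(sets)

  # Step 1: convert raw counts to log2 counts per million, voom-style,
  # using the supplied (or column-sum) library sizes
  log2cpm <- log2(t(t(counts + 0.5) / (lib.size + 1)) * 1e6)

  # Step 2: quantile normalize the samples within each group separately
  for (g in levels(sets)) {
    idx <- which(sets == g)
    log2cpm[, idx] <- limma::normalizeQuantiles(log2cpm[, idx, drop = FALSE])
  }

  # Step 3: median scale the groups to one another
  grand.med <- median(log2cpm)
  for (g in levels(sets)) {
    idx <- which(sets == g)
    log2cpm[, idx] <- log2cpm[, idx] - median(log2cpm[, idx]) + grand.med
  }

  # Step 4: convert back to (approximate) raw counts on the original scale
  round(t(t(2^log2cpm / 1e6) * (lib.size + 1)) - 0.5)
}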
Usage

ct.normalizeFQ(eset, sets, lib.size = NULL)

Arguments
eset
An ExpressionSet object containing, at minimum, the count data to be normalized, accessible with exprs().

sets
A character or factor object delineating which samples should be grouped together during the normalization step. Must be the same length as the number of columns in the provided eset, and cannot contain NA or NULL values.

lib.size
An optional vector of voom-appropriate library size adjustment factors, usually calculated with calcNormFactors() and rescaled to reflect the total library sizes. If NULL, library sizes are estimated from the column sums of the count matrix.
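As an illustration of what a voom-appropriate lib.size vector might look like (an assumption here, not a requirement stated in this help page), effective library sizes can be derived from edgeR's TMM normalization factors; the plain column sums used in the Examples below also work.

# Illustration only: effective library sizes from edgeR TMM factors
library(edgeR)
library(Biobase)
data('es')
counts <- exprs(es)
eff.lib.size <- colSums(counts) * calcNormFactors(counts)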
Value

A renormalized ExpressionSet object of the same type as the provided object.
Author(s)

Russell Bainer
Examples

data('es')

# Build the sample key and library sizes for visualization
library(Biobase)
sk <- relevel(as.factor(pData(es)$TREATMENT_NAME), 'ControlReference')
names(sk) <- row.names(pData(es))
ls <- colSums(exprs(es))

es.norm <- ct.normalizeFQ(es, sets = gsub('(Death|Control)', '', pData(es)$TREATMENT_NAME), lib.size = ls)

ct.gRNARankByReplicate(es, sampleKey = sk, lib.size = ls)
ct.gRNARankByReplicate(es.norm, sampleKey = sk, lib.size = ls)
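As a quick, informal check (not part of the package documentation), per-sample medians of the log-scale counts can be compared before and after normalization; samples within the same 'sets' group should look more homogeneous afterwards.

# Compare per-sample medians of log2(counts + 1) before and after normalization
apply(log2(exprs(es) + 1), 2, median)
apply(log2(exprs(es.norm) + 1), 2, median)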