Description
Classify observations using a Zero-inflated Poisson model.
Arguments

x
    An n-by-p training data matrix: n observations and p features. Used to train the classifier.

y
    A numeric vector of class labels of length n: 1, 2, ..., K if there are K classes. Each element of y corresponds to a row of x; i.e., these are the class labels for the observations in x.

xte
    An m-by-p data matrix: m test observations and p features. The classifier fit on the training data set x will be tested on this data set. If NULL, testing is performed on the training set.

rho
    Tuning parameter controlling the amount of soft thresholding performed, i.e., the level of sparsity (the number of nonzero features in the classifier). rho=0 means that no soft thresholding is performed and all features are used in the classifier; larger values of rho mean that fewer features are used.

beta
    A smoothing term. A Gamma(beta, beta) prior is used to fit the Zero-inflated Poisson model. The recommendation is to leave it at 1, the default value.

rhos
    A vector of tuning parameters that control the amount of soft thresholding performed. If rhos is provided, one model is fit for each element of rhos, and one set of predicted class labels is output for each element of rhos.

prob0
    The probability that a read count is 0; in the Examples below this is estimated with estimatep.

type
    How the observations should be normalized within the Zero-inflated Poisson model, i.e., how the size factors should be estimated. Options are "quantile", "deseq" (more robust), or "mle" (less robust). In greater detail: "quantile" is the quantile normalization approach of Bullard et al. (2010, BMC Bioinformatics); "deseq" is the median of the ratios of an observation's counts to a pseudoreference obtained by taking the geometric mean, described in Anders and Huber (2010, Genome Biology) and implemented in the Bioconductor package DESeq; and "mle" is the sum of counts for each sample, which is the maximum likelihood estimate under a simple Zero-inflated Poisson model.

prior
    A vector of length equal to the number of classes, giving the prior probability of each class. If NULL, uniform priors are used (i.e., each class is equally likely).

transform
    Should the data matrices x and xte first be power-transformed so that the data more closely fit the Zero-inflated Poisson model? TRUE or FALSE. The power transformation is especially useful if the data are overdispersed relative to the Zero-inflated Poisson model.

alpha
    If transform=TRUE, this determines the power to which the data matrices x and xte are transformed. If alpha=NULL, the transformation that makes the Zero-inflated Poisson model best fit the data matrix x is computed. (Note that alpha is computed based on x, not xte.) Alternatively, a value of alpha, 0 < alpha <= 1, can be entered by the user.
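The three size-factor estimators selected by type can be sketched in base R. This is an illustrative sketch of the normalization ideas only, not the package's internal code: the toy matrix x is made up, and "quantile" is sketched here as Bullard et al.'s upper-quartile variant.

```r
# Toy count matrix: 3 observations (rows) by 4 features (columns).
x <- matrix(c(10, 20, 30, 40,
              20, 40, 60, 80,
               5, 10, 15, 20), nrow = 3, byrow = TRUE)

# "mle": the size factor is simply the total count of each observation.
sf_mle <- rowSums(x)

# "deseq": median ratio of each observation's counts to a pseudoreference
# formed by the per-feature geometric mean (Anders and Huber 2010).
pseudo <- exp(colMeans(log(x)))
sf_deseq <- apply(x, 1, function(row) median(row / pseudo))

# "quantile": an upper-quartile summary of each observation's counts
# (in the spirit of Bullard et al. 2010).
sf_quantile <- unname(apply(x, 1, quantile, probs = 0.75))
```

On this toy matrix the rows are exact multiples of one another, so the three estimators agree up to a constant scaling (e.g. sf_deseq is 1, 2, 0.5).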
Value

A list of output:

ytehat
    The predicted class labels for each of the test observations (rows of xte).
discriminant
    An m-by-K matrix, where K is the number of classes. The (i,k) element is large if the ith test observation belongs to class k.
ds
    A K-by-p matrix indicating the extent to which each feature is under- or over-expressed in each class. The (k,j) element is greater than 1 if feature j is over-expressed in class k, and less than 1 if feature j is under-expressed in class k. When rho is large, many elements of this matrix are shrunken toward 1 (no over- or under-expression).
alpha
    The power transformation used (if transform=TRUE).
Examples

library(SummarizedExperiment)
# Simulate counts: 40 samples, 500 features, 4 classes, dropout rate 0.4
dat <- newCountDataSet(n=40, p=500, K=4, param=10, sdsignal=0.1, drate=0.4)
x <- t(assay(dat$sim_train_data))
y <- as.numeric(colnames(dat$sim_train_data))
xte <- t(assay(dat$sim_test_data))
# Estimate the probability of a zero count for the training and test sets
prob <- estimatep(x=x, y=y, xte=x, beta=1, type="mle", prior=NULL)
prob0 <- estimatep(x=x, y=y, xte=xte, beta=1, type="mle", prior=NULL)
# Choose the tuning parameter rho by cross-validation, then fit and predict
cv.out <- ZIPDA.cv(x=x, y=y, prob0=t(prob))
out <- ZIPLDA(x=x, y=y, xte=xte, rho=cv.out$bestrho, prob0=t(prob0))