```r
knitr::opts_chunk$set(echo = TRUE, results = "markup", message = FALSE, warning = FALSE)
```
We read in input.scone.csv, the file modified (and renamed) by the get.marker.names() function. The k-nearest neighbor generation is performed by our function Fnn(), which wraps the Fast Nearest Neighbors (FNN) R package. It takes as input the "input markers" to be used, the concatenated data generated previously, and the desired k. As a default, we advise setting k to the total number of cells in the dataset divided by 100, a value optimized on existing mass cytometry datasets. The output of this function is a matrix giving, for each cell, the identity of its k nearest neighbors, expressed as row numbers in the input dataset.
```r
library(Sconify)

# Markers from the user-generated excel file
marker.file <- system.file("extdata", "markers.csv", package = "Sconify")
markers <- ParseMarkers(marker.file)

# How to convert your excel sheet into vectors of static and functional markers
markers

# Get the particular markers to be used as knn and knn statistics input
input.markers <- markers[[1]]
funct.markers <- markers[[2]]

# Selection of the k. See the "Finding Ideal K" vignette
k <- 30

# The built-in scone functions
wand.nn <- Fnn(cell.df = wand.combined, input.markers = input.markers, k = k)

# Cell identity is in rows, k-nearest neighbors are in columns.
# The list of 2 includes the cell identity of each nn
# and the euclidean distance between it and the cell of interest.

# Indices
str(wand.nn[[1]])
wand.nn[[1]][1:20, 1:10]

# Distance
str(wand.nn[[2]])
wand.nn[[2]][1:20, 1:10]
```
This function iterates through each KNN and performs a series of calculations. The first is a fold change value for each marker per KNN, where the user chooses whether this is based on medians or means. The second is a statistical test, where the user chooses between a t test and a Mann-Whitney U test. I prefer the latter because it makes no assumptions about the shape of the distributions. Of note, the p values are adjusted for false discovery rate, and are therefore called q values in the output of this function. The user also supplies a threshold parameter (default 0.05): fold change values are reported only if the corresponding statistical test returns a q value below that threshold. Finally, if the "multiple.donor.compare" option is set to TRUE, the function performs a t test based on the mean per-marker values of each donor, allowing the user to make comparisons across replicates or multiple donors if that is relevant to the user's biological questions. This function returns a matrix of cells by computed values (fold change and statistical test results, labeled marker.change and marker.qvalue, respectively). This matrix is an intermediate: it gets concatenated with the original input matrix in the post-processing step. We show the code and the output below. See the post-processing vignette, where we show how this gets combined with the input data, and additional analysis is performed.
```r
wand.scone <- SconeValues(nn.matrix = wand.nn,
                          cell.data = wand.combined,
                          scone.markers = funct.markers,
                          unstim = "basal")
wand.scone
```
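To make the per-KNN calculations concrete, here is a hedged sketch, in base R on toy data, of the kind of statistics described above (this is not the package's internal code; all variable names are illustrative):

```r
# Toy marker values for one neighborhood: stimulated vs. basal cells
set.seed(1)
stim  <- rnorm(50, mean = 1.5)
basal <- rnorm(50, mean = 1.0)

# Fold change based on medians (means could be used instead)
fold.change <- median(stim) - median(basal)

# Mann-Whitney U test (called the Wilcoxon rank-sum test in R)
p.value <- wilcox.test(stim, basal)$p.value

# Adjusting p values for false discovery rate yields q values
# (shown here on a single value; in practice this is done across all tests)
q.value <- p.adjust(p.value, method = "BH")

# Report the fold change only if the q value is below the threshold
threshold <- 0.05
result <- if (q.value < threshold) fold.change else NA
```

Run across every neighborhood and marker, this yields the marker.change and marker.qvalue columns of the output matrix.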
If one wants to export KNN data to perform statistics not available in this package, I provide a function that produces a list with one element per cell identity in the original input data matrix, each element being a matrix of all that cell's k-nearest neighbors by features.
I also provide a function to find the KNN density estimation independently of the rest of the "scone.values" analysis, to save time if density is all the user wants. With this density estimation, one can perform interesting analyses, ranging from understanding phenotypic density changes along a developmental progression (see the post-processing vignette for an example) to trying out density-based binning methods (e.g. X-shift). Of note, this density is specifically one divided by the average distance to the k-nearest neighbors. This measure is related to the Shannon entropy estimate of that point on the manifold (https://hal.archives-ouvertes.fr/hal-01068081/document).
I use this metric to avoid the unusual properties of the volume of a sphere as the number of dimensions increases (https://en.wikipedia.org/wiki/Volume_of_an_n-ball). That said, one can modify this vector into such a density estimation (example: http://www.cs.haifa.ac.il/~rita/ml_course/lectures_old/KNN.pdf) by treating the distance to the k-nearest neighbors as the radius of an n-dimensional sphere and incorporating said volume accordingly.
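As a hedged sketch of that modification (not a package function; the names and the per-cell normalization are assumptions for illustration), one can convert the mean k-NN distance into the classic volume-based k-NN density estimate:

```r
# Volume-based k-NN density: treat the mean distance to the k nearest
# neighbors as the radius r of a d-dimensional ball, and divide k by its
# volume. The usual estimator is k / (N * volume); the constant N (total
# number of cells) is omitted here, as it cancels in relative comparisons.
knn.density.volume <- function(mean.knn.dist, k, d) {
    # Volume of a d-ball of radius r: pi^(d/2) / gamma(d/2 + 1) * r^d
    vol <- pi^(d / 2) / gamma(d / 2 + 1) * mean.knn.dist^d
    k / vol
}

# Example: 30 neighbors at mean distance 2 in a 10-marker space
knn.density.volume(mean.knn.dist = 2, k = 30, d = 10)
```

Note how strongly the estimate depends on d in high dimensions, which is exactly why the simpler inverse-mean-distance measure is used by default.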
An individual with basic programming skills can iterate through these elements to perform the statistics of one's choosing. Examples include per-KNN regression and classification, or feature imputation. The additional functionality is shown below, with the example knn.list object in the package containing the first ten instances:
```r
# Constructs KNN list, computes KNN density estimation
wand.knn.list <- MakeKnnList(cell.data = wand.combined, nn.matrix = wand.nn)
wand.knn.list[[8]]

# Finds the KNN density estimation for each cell, ordered by column, in the
# original data matrix
wand.knn.density <- GetKnnDe(nn.matrix = wand.nn)
str(wand.knn.density)
```
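As one example of iterating through such a list, here is a hedged sketch of per-KNN regression on toy data (the marker names and the list-of-data-frames structure are assumptions for illustration, not the package's exact output format):

```r
# Toy KNN list: one data frame of 30 neighbors per cell, two markers each
set.seed(2)
toy.knn.list <- lapply(1:10, function(i) {
    data.frame(CD34 = rnorm(30), pSTAT5 = rnorm(30))
})

# For each cell's neighborhood, regress one functional marker on another
# and keep the fitted slope, yielding one local coefficient per cell
slopes <- vapply(toy.knn.list, function(nn) {
    coef(lm(pSTAT5 ~ CD34, data = nn))[["CD34"]]
}, numeric(1))

str(slopes)
```

The same pattern (lapply/vapply over the KNN list) works for classification or feature imputation within each neighborhood.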