Description Usage Arguments Value Examples
Wrapper function for generating CD (critical difference) plots for each classifier on each of the seven evaluation measures. Code for the CD plots is adapted from the now-archived scmamp R package.
getCDPlots(evalMeasuresDF, emNames = "All", compareBest = F, use_abbr = T)
evalMeasuresDF: A dataframe with the following columns: Model, RepNum, Pass_FScore, Pass_Recall, Pass_Precision, Fail_FScore, Fail_Recall, Fail_Precision, and Accuracy. Each row corresponds to the results of a particular model on a particular round of cross-validation. See the sketch after these arguments for an illustrative construction.

emNames: A list of names of the evaluation measures to visualize. Accepts the following: Pass_FScore, Pass_Recall, Pass_Precision, Fail_FScore, Fail_Recall, Fail_Precision, and Accuracy. Default is "All".

compareBest: Boolean. If TRUE, compare the best-performing models from each of the metric sets; otherwise, compare the models within each metric set. Requires at least two metric sets. Default: FALSE.

use_abbr: Boolean. If TRUE, use abbreviations for model names in the CD plot (e.g. DecisionTree = DT). Default: TRUE.
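A minimal sketch of how an input dataframe with the expected columns might be assembled. Only the column names and the DecisionTree model name come from this documentation; the other model name and all values are hypothetical.

# Hypothetical input: two models evaluated over three rounds of cross-validation.
# Only the column names are prescribed by getCDPlots(); the values are made up.
test_evalMeasures <- data.frame(
  Model          = rep(c("DecisionTree", "RandomForest"), each = 3),
  RepNum         = rep(1:3, times = 2),
  Pass_FScore    = runif(6, 0.7, 0.9),
  Pass_Recall    = runif(6, 0.7, 0.9),
  Pass_Precision = runif(6, 0.7, 0.9),
  Fail_FScore    = runif(6, 0.5, 0.8),
  Fail_Recall    = runif(6, 0.5, 0.8),
  Fail_Precision = runif(6, 0.5, 0.8),
  Accuracy       = runif(6, 0.7, 0.9)
)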
A named list with the following structure: metric_type$plots$eval_measures and metric_type$rankmatrix$eval_measures, where metric_type is one of the three metric sets (M4, M7, or M11) and eval_measures corresponds to the evaluation measures selected via emNames; plots holds the CD plots and rankmatrix holds the corresponding rank matrices.
# Create a list of CD plots for the selected evaluation measures
getCDPlots(evalMeasuresDF = test_evalMeasures, emNames = c("Pass_FScore", "Fail_FScore"))
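The returned list can then be indexed by metric set, component, and evaluation measure. A hedged sketch follows, assuming the list is keyed as described under Value and that a metric set named M7 is present in the results:

cd_results <- getCDPlots(evalMeasuresDF = test_evalMeasures,
                         emNames = c("Pass_FScore", "Fail_FScore"))
cd_results$M7$plots$Pass_FScore       # CD plot for Pass_FScore within the M7 metric set
cd_results$M7$rankmatrix$Pass_FScore  # corresponding rank matrix

# Compare the best-performing model from each metric set
# (requires at least two metric sets in the input)
getCDPlots(evalMeasuresDF = test_evalMeasures, compareBest = T)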