DCLDE Abstract

CV4E Model results - lessons learned and advanced techniques

Everyone knows that CNNs/deep learning are trendy and can be extremely accurate, but they offer benefits beyond just raw power.

Classify wigners (Wigner-Ville transform images) of beaked whale (BW) clicks using neural networks

Advanced bits
* Adding an ICI measure - had to scale it by 10x for the model to recognize it
* Selection net - allowing the model to not predict
* GradCAM / saliency maps to try to see what the model is learning (a minimal sketch follows this list). These show potential issues for BB - they seem to suggest the model is only learning that BB clicks are low frequency, and that's it. Highlighting areas for potential concern.
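The GradCAM point can be illustrated with standard PyTorch hooks. This is a minimal sketch only, assuming a ResNet-style backbone and made-up variable names; it is not the project's actual code.

```python
# Minimal Grad-CAM sketch (assumed model and layer names, not the project code).
# Pulls a class-activation heatmap from the last conv block of a ResNet-style
# classifier to see which parts of a wigner image drive a prediction.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")   # stand-in for the trained click classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# last conv block of the ResNet backbone
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(img, class_idx):
    """img: (1, 3, H, W) tensor; returns an (H, W) heatmap scaled to [0, 1]."""
    logits = model(img)
    model.zero_grad()
    logits[0, class_idx].backward()
    # weight each feature map by its average gradient, then ReLU the weighted sum
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[-2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# usage: heatmap = grad_cam(wigner_batch[:1], class_idx=predicted_class)
```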

Lessons learned
* Different filters on different datasets caused problems - models are very sensitive to these kinds of structural differences.
* Using the right metrics is important - class-balanced accuracy vs. overall accuracy (see the sketch after this list).
* Really looking at your results - which things are right, which things are wrong - is really important. This caught the filter difference problem for me. It also highlights a potential issue of Zc (the 70% majority class) appearing to be associated with noise.
* Better data is better. Low-SNR calls were found to have higher error rates; really low-SNR calls can be safely removed.
* Accuracy % scores may not tell the whole picture - adding ICIx10 makes the predictive confidence higher, which means less manual review.
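As a hedged illustration of the metrics point (toy numbers, not the project's results): with a heavily imbalanced dataset, overall accuracy can look strong while a rare class is mostly misclassified, which class-balanced accuracy (macro-averaged recall) exposes.

```python
# Toy example: overall accuracy vs. class-balanced accuracy on imbalanced labels.
# Class 0 plays the role of a ~70% majority class (like Zc); 1 and 2 are rarer.
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix

y_true = np.array([0] * 70 + [1] * 20 + [2] * 10)
y_pred = np.array([0] * 70 +              # majority class all correct
                  [0] * 15 + [1] * 5 +    # rare class 1 mostly wrong
                  [2] * 5 + [0] * 5)      # rare class 2 half wrong

print("overall accuracy:", accuracy_score(y_true, y_pred))            # 0.80, looks fine
print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))  # ~0.58, not fine
print(confusion_matrix(y_true, y_pred))   # shows exactly which classes are wrong
```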

Organization
0. Goal of pres - quickly present results, then more detail on topics not normally covered: lessons learned for future projects, and advanced things possible because of the way CNNs work.
1. Intro to beakers, wigners for classification.
2. Present model and results. ResNet transfer learning from ImageNet, incorporated ICI by attaching it to the features. Detection- and event-level results.
4. Results are strong - how did we get here? Lessons learned along the way that led to strong results.
6. Lesson 1 - models cheat. The filter example highlights this.
7. Lesson 1b - proper data splits are important. Would not have caught this if everything had been blended together randomly; your data split should reflect your intended use case as a real test.
8. Saliency maps next - DL models can be a black box, but there are tools to help with that. GradCAM/saliency maps highlight potential future problems. They might show that BB is being classified as much by a lack of higher-frequency information, so potentially any low-frequency noise could get called BB.
9. SelNet - the control that frameworks like PyTorch give over the entire modeling process lets us do potentially cool things. A problem with wigners is that they can get noisy; manual analysts aren't using every wigner image, but the model includes them all, and we want to avoid some of that. Implemented a version of the model in the SelNet paper: basically, we simultaneously train a selection model that takes the features and just decides whether or not it wants to try to make a decision at all. Shockingly, it worked (show grid of low-selection pictures). A rough sketch of the model, including this selection head, follows this outline.
10. IF TIME: talk about a future SelNet possibility with the matched template Anne project.
11. Acknowledge Jay, Jennifer, and Emily for hard work labeling. Caltech, CV4E, and Resnick for funding, where I learned all this.
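For items 2 and 9, a minimal sketch of the overall model shape: an ImageNet-pretrained ResNet backbone whose pooled features have the (scaled) ICI value concatenated on, plus a SelNet-style selection head that can decline to predict. Layer sizes, the 10x scaling constant, and all names here are illustrative assumptions, not the actual project architecture.

```python
# Sketch only: ResNet transfer learning + ICI attached to features + selection head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ClickNet(nn.Module):
    def __init__(self, n_classes, ici_scale=10.0):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")    # ImageNet transfer learning
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop final fc
        self.ici_scale = ici_scale                       # small ICI values need rescaling
        feat_dim = backbone.fc.in_features + 1           # 512 conv features + 1 ICI value
        self.classifier = nn.Linear(feat_dim, n_classes)
        # selection head: probability that the model should attempt a prediction at all
        self.selector = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                      nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, img, ici):
        feats = self.features(img).flatten(1)                 # (B, 512)
        feats = torch.cat([feats, self.ici_scale * ici], 1)   # attach scaled ICI to features
        return self.classifier(feats), self.selector(feats)

# usage sketch: logits, keep_prob = model(wigner_imgs, ici_values)
# detections with a low keep_prob are set aside rather than forced into a class
```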

TITLE: Shannon says it reminded her of "to infinity and beyond." Fossa Lightyear.


