I'm reaching out to the ML community to see if any of you have experience implementing, or some exposure to, uncertainty quantification methods in Deep Learning (CNNs specifically). I'm working towards implementing Variational Bayesian Inference for a Medical Imaging task, and I'm interested in knowing at what point Variational Methods become too coarse an approximation and MC-Dropout becomes the better method to implement.
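For concreteness, here's roughly what I mean by the Variational approach: a mean-field Gaussian layer trained with the reparameterization trick plus a KL penalty, in the Bayes-by-backprop style. This is just a sketch under my own assumptions; the class name, prior, and initial values are placeholders I picked, not taken from any of the papers below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(w) = N(mu, sigma^2)."""
    def __init__(self, in_features, out_features, prior_std=1.0):
        super().__init__()
        # variational parameters: mean and (pre-softplus) scale for weights and bias
        self.weight_mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias_mu = nn.Parameter(torch.zeros(out_features))
        self.bias_rho = nn.Parameter(torch.full((out_features,), -5.0))
        self.prior_std = prior_std  # zero-mean Gaussian prior, placeholder choice

    def forward(self, x):
        # sigma = softplus(rho) keeps the posterior std positive
        w_sigma = F.softplus(self.weight_rho)
        b_sigma = F.softplus(self.bias_rho)
        # reparameterization trick: w = mu + sigma * eps, eps ~ N(0, 1)
        w = self.weight_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.bias_mu + b_sigma * torch.randn_like(b_sigma)
        # store the KL term so the training loop can add it to the loss
        self.kl = self._kl(self.weight_mu, w_sigma) + self._kl(self.bias_mu, b_sigma)
        return F.linear(x, w, b)

    def _kl(self, mu, sigma):
        # closed-form KL(q || p) between diagonal Gaussians, p = N(0, prior_std^2)
        p_var = self.prior_std ** 2
        return 0.5 * ((sigma ** 2 + mu ** 2) / p_var - 1.0
                      - 2.0 * torch.log(sigma / self.prior_std)).sum()

# Usage sketch: the per-batch loss is roughly NLL + layer.kl / num_minibatches,
# and at test time you average predictions over several sampled weight draws.
```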
If I understand the theory correctly, with enough Monte Carlo samples the predictive posterior of the deep network can converge to the true posterior under MC-Dropout, thus providing exact uncertainty metrics. On the other hand, Variational Methods will always be an approximation, but they produce a result more efficiently than MC-Dropout, which will probably be more suitable for my situation.
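To make the MC-Dropout side concrete, this is how I understand the procedure: keep dropout active at test time and aggregate several stochastic forward passes. A minimal PyTorch sketch under my own assumptions; the tiny CNN, dropout rate, sample count, and entropy-based uncertainty metric are placeholder choices, not anything prescribed by the papers below.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(p_drop), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time (no BatchNorm here)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # predictive mean over the T stochastic passes
    # predictive entropy as a simple per-example uncertainty metric
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = SmallCNN()
x = torch.randn(4, 1, 64, 64)  # dummy batch of single-channel images
mean, entropy = mc_dropout_predict(model, x, n_samples=25)
print(mean.shape, entropy.shape)
```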
If anyone has any thoughts, I would love to hear them!
Great references for this:
Really anything from Felix Laumann: https://medium.com/@laumannfelix
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning: https://arxiv.org/abs/1506.02142 (Gal is the man!)
A Comprehensive Guide to Bayesian Convolutional Neural Network with Variational Inference: https://arxiv.org/abs/1901.02731
Bayesian Convolutional Neural Networks with Variational Inference: https://arxiv.org/abs/1806.05978v5