- We are excited that our work on inherently interpretable #deeplearning models for #AI for #medicine has been published in @plos.org #digitalhealth! Want to know how to combine the power of deep learning with accessible interpretation? This is for you! ⬇️ journals.plos.org/digitalhealt...
- BagNets are modified ResNets with local receptive fields and explicit class evidence maps at the end (a minimal sketch of such a head is below the thread). We discussed their use for interpretable #medical #AI here: proceedings.mlr.press/v227/donteu2...
- The explicit class evidence maps make it possible to penalize activations, e.g. via a sparsity penalty (see the loss sketch below the thread). This is extremely effective for diseases like #diabetic #retinopathy where lesions are small in initial disease stages: extracted high-evidence regions almost always contain lesions.
- This not only works much better than established post-hoc techniques, but also serves well as a support tool for clinicians: decisions for difficult cases improve, and all decisions become about 20% faster!
- Thus this work shows how #inherently #interpretable models in #AI for #medicine can improve clinical decision making in #ophthalmology! Thanks to everyone involved, especially Kerol Djoumessi (hertie.ai/data-science...) and @lisakoch.bsky.social!
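A minimal sketch (PyTorch) of the evidence-map head described above: it assumes a BagNet-style backbone that already produces patch-wise features with small receptive fields, and `BagNetHead` and all other names are illustrative, not the paper's actual code.

```python
import torch
import torch.nn as nn

class BagNetHead(nn.Module):
    """Illustrative head: turns local features into an explicit class evidence map."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # A 1x1 convolution maps each local feature vector (small receptive field)
        # to per-class evidence at that spatial position.
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor):
        # features: (B, C, H, W) patch-wise features from a BagNet-style backbone
        evidence_map = self.classifier(features)   # (B, num_classes, H, W)
        # The image-level logits are just the spatial average of the local evidence,
        # so every prediction can be traced back to the regions that contributed to it.
        logits = evidence_map.mean(dim=(2, 3))     # (B, num_classes)
        return logits, evidence_map
```

Because the evidence map is an explicit intermediate output rather than a post-hoc attribution, it can be inspected or regularized directly.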
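And a minimal sketch of the sparsity penalty mentioned in the thread, assuming the hypothetical `BagNetHead` above; `lambda_sparse` is an illustrative hyperparameter, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def sparse_evidence_loss(logits: torch.Tensor,
                         evidence_map: torch.Tensor,
                         targets: torch.Tensor,
                         lambda_sparse: float = 1e-4) -> torch.Tensor:
    # Standard classification loss on the pooled, image-level logits.
    ce = F.cross_entropy(logits, targets)
    # An L1-style penalty on the local evidence pushes most activations to zero,
    # so the few high-evidence regions that remain tend to coincide with lesions.
    sparsity = evidence_map.abs().mean()
    return ce + lambda_sparse * sparsity
```

Usage would look like `logits, evidence_map = head(backbone(x))` followed by `loss = sparse_evidence_loss(logits, evidence_map, y)`.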