
Making Fair ML Software using Trustworthy Explanation

Conference contribution
Posted on 2020-07-06, 01:19, authored by Joymallya Chakraborty, Kewen Peng, Tim Menzies
Machine learning software is being used in many applications (finance, hiring, admissions, criminal justice) that have huge social impact. But sometimes the behavior of this software is biased, showing discrimination based on sensitive attributes such as sex and race. Prior work concentrated on finding and mitigating bias in ML models. A recent trend is to use instance-based, model-agnostic explanation methods such as LIME to find bias in model predictions. Our work concentrates on finding the shortcomings of current bias measures and explanation methods. We show how our proposed method, based on K nearest neighbors, can overcome those shortcomings and find the underlying bias of black-box models. Our results are more trustworthy and helpful for practitioners. Finally, we describe our future framework combining explanation and planning to build fair software.
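The abstract does not spell out the algorithm, so the following is only a minimal sketch of one way a K-nearest-neighbor, model-agnostic bias check could work: for each test instance, compare the black-box model's prediction against the labels of its nearest neighbors drawn from the opposite sensitive-attribute group. All names (`knn_bias_estimate`, `sensitive_col`, `k`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a KNN-based, model-agnostic bias check.
# Assumes numeric feature matrices and binary labels; not the paper's method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_bias_estimate(model, X_train, y_train, X_test, sensitive_col, k=5):
    """Flag test instances whose black-box prediction disagrees with the
    majority label of their k nearest neighbors from the *other*
    sensitive-attribute group (a flip-test style comparison)."""
    flags = []
    for x in X_test:
        # Restrict candidate neighbors to rows with the opposite sensitive value.
        mask = X_train[:, sensitive_col] != x[sensitive_col]
        nn = NearestNeighbors(n_neighbors=k).fit(X_train[mask])
        _, idx = nn.kneighbors(x.reshape(1, -1))
        neighbour_labels = y_train[mask][idx[0]]

        pred = model.predict(x.reshape(1, -1))[0]
        # Disagreement with similar individuals from the other group is
        # recorded as a sign of potential bias for this instance.
        flags.append(int(pred != np.round(neighbour_labels.mean())))
    return np.array(flags)
```

In this sketch, the fraction of flagged instances could serve as a rough, instance-level bias score for the black-box model; the paper itself should be consulted for the actual measure.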
