    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/134199


    Title: 以降偏差達成公平性機器學習之探究-以求職應徵錄取為例
    Bias Mitigation for Machine Learning Fairness - Job Recruiting Selection as an Example
    Authors: 周敬軒 (Chou, Ching-Hsuan)
    Contributors: 胡毓忠 (Hu, Yuh-Jong); 周敬軒 (Chou, Ching-Hsuan)
    Keywords: 機器學習 (Machine Learning)
    降偏差 (Bias mitigation)
    機器中立性 (Machine fairness)
    婚姻歧視 (Marital discrimination)
    Date: 2021
    Issue Date: 2021-03-02 14:56:20 (UTC+8)
    Abstract: In the past, we intuitively assumed that machine learning must be fair and neutral because it rests on mathematical computation and statistics. That is not the case: machine learning models learn from training data, so they inevitably also pick up human discrimination and prejudice. Some bias is necessary in machine learning; a model trained on a completely unbiased dataset has learned nothing, and its classification results have no reference value. But when the bias stems from sensitive or protected attributes, it causes unfairness and can be illegal.
    This thesis takes job recruiting as its case study and examines how pre-processing algorithms can reduce discrimination and prejudice in machine learning, using the Scikit-learn and IBM AIF360 libraries to build a standardized bias-mitigation machine learning pipeline. Experiments verify that applying a pre-processing algorithm to reduce marital-status bias in the dataset makes the model fairer: the classification results for the married and unmarried groups become more consistent, and the classifier's overall accuracy and classification quality improve.
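    A minimal sketch of such a pipeline is shown below. The thesis does not specify which AIF360 pre-processing algorithm it uses, so Reweighing is chosen here purely as a representative example; the column names (married, hired) and the toy data are likewise hypothetical, not taken from the thesis.

        # Illustrative sketch: a pre-processing bias-mitigation pipeline
        # combining IBM AIF360 with a Scikit-learn classifier, in the spirit
        # of the pipeline described in the abstract. Reweighing and all
        # column names below are assumptions for illustration only.
        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from aif360.datasets import BinaryLabelDataset
        from aif360.metrics import BinaryLabelDatasetMetric
        from aif360.algorithms.preprocessing import Reweighing

        # Toy recruiting data: 'married' is the protected attribute,
        # 'hired' is the binary label (1 = favorable outcome).
        df = pd.DataFrame({
            'married':    [1, 1, 0, 0, 1, 0, 1, 0],
            'experience': [5, 3, 4, 2, 6, 5, 2, 3],
            'hired':      [1, 1, 1, 0, 1, 0, 0, 0],
        })
        dataset = BinaryLabelDataset(df=df, label_names=['hired'],
                                     protected_attribute_names=['married'],
                                     favorable_label=1, unfavorable_label=0)

        privileged = [{'married': 1}]
        unprivileged = [{'married': 0}]

        # Measure group bias before mitigation (0.0 means parity).
        metric = BinaryLabelDatasetMetric(dataset,
                                          unprivileged_groups=unprivileged,
                                          privileged_groups=privileged)
        print('Statistical parity difference (before):',
              metric.statistical_parity_difference())

        # Pre-processing step: Reweighing computes instance weights that
        # balance favorable outcomes across the two marital-status groups.
        rw = Reweighing(unprivileged_groups=unprivileged,
                        privileged_groups=privileged)
        dataset_transf = rw.fit_transform(dataset)

        # Train an ordinary Scikit-learn classifier on the reweighted data.
        clf = LogisticRegression()
        clf.fit(dataset_transf.features, dataset_transf.labels.ravel(),
                sample_weight=dataset_transf.instance_weights)

    Reweighing leaves the feature values untouched and only attaches per-instance weights, which is why any Scikit-learn estimator that accepts sample_weight can consume the mitigated dataset without further changes.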
    Reference: [1] Acharyya, Rupam, et al. "Detection and Mitigation of Bias in Ted Talk Ratings." arXiv preprint arXiv:2003.00683 (2020).
    [2] Angwin, Julia, et al. "Machine Bias." ProPublica, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed: 2020-03-13.
    [3] Bellamy, Rachel KE, et al. "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias." arXiv preprint arXiv:1810.01943 (2018).
    [4] Calders, Toon, Faisal Kamiran, and Mykola Pechenizkiy. "Building classifiers with independency constraints." 2009 IEEE International Conference on Data Mining Workshops. IEEE, 2009.
    [5] Chouldechova, Alexandra, and Aaron Roth. "The frontiers of fairness in machine learning." arXiv preprint arXiv:1810.08810 (2018).
    [6] d'Alessandro, Brian, Cathy O'Neil, and Tom LaGatta. "Conscientious classification: A data scientist's guide to discrimination-aware classification." Big Data 5.2 (2017): 120-134.
    [7] Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. 2012.
    [8] Polli, Frida. "Using AI to Eliminate Bias from Hiring." Harvard Business Review, 2019. https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring, accessed: 2020-03-18.
    [9] Kamishima, Toshihiro, et al. "Fairness-aware classifier with prejudice remover regularizer." Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg, 2012.
    [10] Lohia, Pranay K., et al. "Bias mitigation post-processing for individual and group fairness." ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.
    [11] Raghavan, Manish, and Solon Barocas. "Challenges for Mitigating Bias in Algorithmic Hiring." Brookings Institution, https://www.brookings.edu/research/challenges-for-mitigating-bias-in-algorithmic-hiring/, accessed: 2020-04-30.
    [12] Mehrabi, Ninareh, et al. "A survey on bias and fairness in machine learning." arXiv preprint arXiv:1908.09635 (2019).
    [13] Peña, Alejandro, et al. "Bias in Multimodal AI: Testbed for Fair Automatic Recruitment." arXiv preprint arXiv:2004.07173 (2020).
    [14] Peng, Andi, et al. "What you see is what you get? The impact of representation criteria on human bias in hiring." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. Vol. 7. No. 1. 2019.
    [15] Qin, Chuan, et al. "Enhancing person-job fit for talent recruitment: An ability-aware neural network approach." The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. 2018.
    [16] Raghavan, Manish, et al. "Mitigating bias in algorithmic hiring: Evaluating claims and practices." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020.
    [17] Silberg, Jake, and James Manyika. "Notes from the AI frontier: Tackling bias in AI (and in humans)." McKinsey Global Institute (2019): 4-5.
    [18] Society for Human Resource Management. 2016 Human Capital Benchmarking Report. 2016. https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Documents/2016-Human-Capital-Report.pdf.
    [19] Mahoney, Trisha, Kush R. Varshney, and Michael Hind. "AI Fairness: How to Measure and Reduce Unwanted Bias in Machine Learning." 2020. https://krvarshney.github.io/pubs/MahoneyVH2020.pdf, accessed: 2020-04-30.
    [20] Xue, Songkai, Mikhail Yurochkin, and Yuekai Sun. "Auditing ML Models for Individual Bias and Unfairness." arXiv preprint arXiv:2003.05048 (2020).
    [21] Zhang, Brian Hu, Blake Lemoine, and Margaret Mitchell. "Mitigating unwanted biases with adversarial learning." Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 2018.
    [22] Zhang, Yukun, and Longsheng Zhou. "Fairness Assessment for Artificial Intelligence in Financial Industry." arXiv preprint arXiv:1912.07211 (2019).
    [23] Zhong, Ziyuan. "A Tutorial on Fairness in Machine Learning." Towards Data Science, https://towardsdatascience.com/a-tutorial-on-fairness-in-machine-learning-3ff8ba1040cb, accessed: 2020-03-28.
    Description: Master's thesis
    National Chengchi University
    In-service Master's Program, Department of Computer Science (資訊科學系碩士在職專班)
    103971004
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0103971004
    Data Type: thesis
    DOI: 10.6814/NCCU202100290
    Appears in Collections: [In-service Master's Program, Department of Computer Science] Theses

    Files in This Item:

    File: 100401.pdf (1509 KB, Adobe PDF)


    All items in 政大典藏 (NCCU Institutional Repository) are protected by copyright, with all rights reserved.

