Why Do Data Science Projects Go Wrong?

Understanding the sources of unfairness in data-driven decision

Poonam Rao
3 min read · Oct 17, 2020
Image Source: https://www.infoclutch.com/installed-base/artificial-intelligence-big-data-both-together/

Abstract

This article shares my takeaways, best practices, and steps to mitigate unfairness and bias when designing machine learning algorithms.

Data science projects go wrong because of flawed models, insufficiently or incorrectly trained algorithms, or bias that emerges in new and unanticipated contexts. Fairness is a human decision, not a mathematical one, grounded in shared ethical beliefs. While machine learning does not make decisions based on feelings and emotions, it does inherit many human biases, and these lead to disparate impact. In an era where consequential decisions are algorithm-based, it is imperative that those decisions are fair and that bias is not perpetuated without users' knowledge. This calls for reexamining what we mean by "discrimination" and how we measure fairness.

It is unlikely that all forms of bias can be entirely eliminated, so decisions need to be made about what kinds and degrees of bias are tolerable in a given context, or whether algorithmic approaches should be used at all. Regulatory bodies should constantly question the potential legal, social, and economic effects and liabilities when determining which decisions can be automated with minimal, tolerable risk. If bias appears to have occurred, notice should be given to the affected populations and a comment period opened for responses.

Best Practices

  • Understanding the most affected audience.
  • Comparing outcomes for different groups and running simulation tests before deployment.
  • Evaluating outcomes in their cultural context; analyzing whether error rates are higher for certain groups; establishing thresholds for measuring and correcting bias, especially for protected groups (see the group-wise error-rate sketch after this list).
  • Ensuring diversity in design and execution. Training data should be diverse and reliable, and design teams should be diverse enough to pick up nuances and predict outcomes in different cultural contexts; if they are not, take steps to make these scenarios salient and understandable to designers.
  • Visualizing datasets, recognizing that ML models extend beyond technological constructs into sociological concepts and political considerations.
  • Identifying unfair paths and relations among the relevant variables in the data-generation mechanism.
  • Increasing human involvement in design and monitoring.
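
To make the group-comparison practice concrete, here is a minimal sketch of a group-wise error-rate check. The data, column names, and tolerance threshold are illustrative assumptions on my part, not values from any real project.

```python
# A minimal sketch of comparing error rates across groups.
# The column names, toy data, and tolerance below are illustrative only.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         y_true_col: str, y_pred_col: str) -> pd.Series:
    """Return the misclassification rate for each group."""
    errors = df[y_true_col] != df[y_pred_col]
    return errors.groupby(df[group_col]).mean()

# Toy data: two groups, one of which the model serves less accurately.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1,   0,   1,   0,   1,   0,   1,   0],
    "y_pred": [1,   0,   1,   0,   0,   0,   1,   1],
})

rates = error_rates_by_group(df, "group", "y_true", "y_pred")
gap = rates.max() - rates.min()
print(rates)
print(f"Error-rate gap between groups: {gap:.2f}")

# Flag when the gap exceeds a tolerance chosen for the application context.
TOLERANCE = 0.10  # illustrative threshold, to be set per use case
if gap > TOLERANCE:
    print("Disparity exceeds tolerance - investigate before deployment.")
```

The threshold itself is a policy choice, not a technical one, which is exactly why it should be set with the affected audience and cultural context in mind.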

Key Takeaways on Machine Learning Algorithms

  • ML is not inherently unfair, but learning algorithms are designed to pick up statistical patterns, and when they are trained on datasets shaped by historical prejudice they also pick up biased correlations, which can render their decisions unfair.
  • Inference of absent attributes; incomplete data or under/over-representation of parts of the sample population; and classification inaccuracies can all lead to unfair outcomes when what the classifier learned on the general population does not transfer faithfully to minorities.
  • Learning and applying multiple classifiers increases complexity, and a separate classifier for a minority group, acting on protected attributes, might itself be considered objectionable.
  • Automated decisions can favor statistically dominant groups, for whom decision-making accuracy tends to be higher.
  • Statistical patterns that hold for the majority might be invalid for minority groups; the simulation sketched after this list illustrates how this plays out with a single shared model.
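
The last point can be shown with a small simulation. The sketch below is my own toy example, assuming a single logistic regression model shared across an over-represented majority group and an under-represented minority group whose feature-outcome relationship differs; none of the numbers come from the article.

```python
# Toy simulation: one classifier fit mostly on a majority group carries a much
# higher error rate for a minority group whose feature-outcome relationship differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_major, n_minor = 5000, 250          # minority group is under-represented
x_major = rng.normal(size=(n_major, 1))
x_minor = rng.normal(size=(n_minor, 1))

# The sign of the feature-outcome relationship is reversed for the minority group.
y_major = (x_major[:, 0] > 0).astype(int)
y_minor = (x_minor[:, 0] < 0).astype(int)

X = np.vstack([x_major, x_minor])
y = np.concatenate([y_major, y_minor])
group = np.array(["major"] * n_major + ["minor"] * n_minor)

clf = LogisticRegression().fit(X, y)   # one model for everyone
pred = clf.predict(X)

for g in ("major", "minor"):
    mask = group == g
    acc = (pred[mask] == y[mask]).mean()
    print(f"{g}: accuracy = {acc:.2f}")
# Typical result: near-perfect accuracy for "major", near-chance or worse for "minor".
```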

Mitigation Steps While Designing Algorithms

  • Establishing documentation/technical standards, algorithm certification and audits.
  • Developing regular, thorough audits of data collection, along with responses from developers, the public, and those impacted, to detect and deter bias.
  • Establishing regulatory bodies dedicated to overseeing algorithms.
  • Implementing regulatory sandboxes, safe harbors.
  • Establishing governance/guidelines for data, technical robustness, human oversight, fairness, privacy, transparency, accountability, diversity and environmental/societal well-being.
  • Updating nondiscrimination and civil rights laws to interpret/redress disparate impacts.
  • Increasing collaboration and reviews between scientists, business leaders, policy makers, and the public.
  • Developing robust open-source software tools for bias analysis (a short example follows this list).
  • Determining intervention plans in case bad outcomes are predicted during development or deployment.
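
As an example of open-source tooling for bias analysis, the sketch below assumes the Fairlearn library's MetricFrame API; the toy labels, predictions, and group memberships are illustrative only and are not drawn from the article.

```python
# A hedged sketch of a bias audit with the open-source Fairlearn library
# (pip install fairlearn). All data here is a made-up toy example.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(audit.by_group)        # per-group accuracy and selection rate
print(audit.difference())    # largest between-group gap for each metric
```

Tools like this do not decide what gap is acceptable; they make the gaps visible so that the governance and oversight steps above have something concrete to act on.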

References

https://towardsdatascience.com/deepmind-is-using-this-old-technique-to-evaluate-fairness-in-machine-learning-models-f33bce98196e

https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/


Poonam Rao

Exec Director, StratEx - I bring to the table a blend of data science, finance, and strategy management skills, with 20+ years of experience in insurance and fintech.