Not Fair! Testing AI Bias and Organizational Values

Just because a machine learning system is biased doesn’t mean that it isn’t useful. If the bias reflects our goals as an organization, then it may not matter that the result is suboptimal.
Fairness means considering the needs of all stakeholders and balancing those needs against organizational goals. Doing so requires a clear statement of those goals and the ability to objectively test results against them.
This presentation posits fairness as a goal in developing machine learning systems, and describes how to make that goal objective so that a testing plan can be set up and executed.


Peter Varhol


Peter Varhol is a well-known writer and speaker on software and technology topics, having authored dozens of articles and spoken at a number of industry conferences and webcasts. He has advanced degrees in computer science, applied mathematics, and psychology, and is Managing Director at Technology Strategy Research, consulting with companies on software development, testing, and machine learning. His past roles include technology journalist, software product manager, software developer, and university professor.

Gerie Owen


Gerie Owen is a QE Architect at Medullan. She is a Certified Scrum Master and a conference presenter and author on technology and testing topics. She enjoys analyzing and improving test processes, mentoring new QA leads, and bringing a cohesive team approach to testing. Gerie has written many articles on technology, including Agile and DevOps topics. She chooses her presentation topics based on her experiences in technology, what she has learned from them, and how she would improve them.