The Artificial Intelligence “Black-box” Carla Vieira @carlaprvieira Illustration: Hanne Mostard

About me Carla Vieira Information Systems – USP Artificial Intelligence Evangelist Community Manager perifaCode @carlaprvieira carlaprv@hotmail.com

Tech Conferences

data bias privacy ethics law

We need to talk less about Artificial Intelligence hype … … and more about how we are using this technology.

#1 Google Photos

#2 Gender Shades Article http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

“Whether AI will help us reach our aspirations or reinforce the unjust inequalities is ultimately up to us.” Joy Buolamwini

#3 Tweet

#4 Google’s Algorithm

• 46% false positives for African American authors
• African American authors are 1.5 times more likely to be labelled “offensive”
https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf

#5 COMPAS Software

https://www.research.ibm.com/artificial-intelligence/trusted-ai/diversity-in-faces/

Artificial Intelligence needs to learn from the real world. Creating a smart computer is not enough; you need to teach it the right things. https://about.google/stories/gender-balance-diversity-important-to-machine-learning/?hl=pt-BR

Gender Gap in Artificial Intelligence “Only 22% of AI professionals globally are female, compared to 78% who are male.” (The Global Gender Gap Report 2018 - p.28)

Bias Human Bias Technology

Even though these decisions affect humans, ML models are often made so complex to optimize task performance that they become unintelligible to humans: black-box models.

INPUT BLACK BOX OUTPUT

JUSTICE MATH

“This new law is a complete shame for our democracy.” Louis Larret-Chahine, Co-founder of Predictice https://www.artificiallawyer.com/2019/06/04/france-bans-judge-analytics-5-years-in-prison-for-rule-breakers/

https://edition.cnn.com/2019/05/14/tech/san-francisco-facial-recognition-ban/index.html

How to open this black-box? EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TRANSPARENCY TRUST

XAI aims to create a new suite of ML techniques that produce more interpretable models while maintaining a high level of prediction performance.

Accuracy vs. Interpretability trade-off
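The trade-off above can be sketched with a toy example (the data and both models below are hypothetical illustrations, not from the talk): a one-line rule a human can read versus a flexible “memorizer” that scores better here but offers no human-readable explanation.

```python
# Toy sketch of the accuracy-vs-interpretability trade-off (made-up data).
data = [  # (feature1, feature2, label) - hypothetical toy points
    (1.0, 1.0, 0), (2.0, 2.0, 0), (4.0, 1.0, 1),
    (5.0, 2.0, 1), (2.0, 5.0, 1), (4.5, 0.2, 0),
]

def transparent_rule(x1, x2):
    # Fully interpretable: "predict 1 when feature1 >= 3".
    return 1 if x1 >= 3 else 0

def memorizer(x1, x2):
    # 1-nearest-neighbour over the data: accurate on these points, but
    # its only "explanation" is the entire dataset - opaque to a human.
    nearest = min(data, key=lambda p: (p[0] - x1) ** 2 + (p[1] - x2) ** 2)
    return nearest[2]

def accuracy(model):
    return sum(model(x1, x2) == y for x1, x2, y in data) / len(data)

print(accuracy(transparent_rule))  # ~0.67: simple, but misses two points
print(accuracy(memorizer))         # 1.0 on these points, yet unexplainable
```

The point of the sketch: gaining accuracy by adding flexibility (here, memorizing every point) is exactly what erodes interpretability.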

Explainability

Pre-modelling explainability
Goal: understand/describe the data used to develop models
Methodologies:
• Exploratory data analysis
• Dataset description standardization
• Dataset summarization
• Explainable feature engineering

Explainable modelling
Goal: develop inherently more explainable models
Methodologies:
• Adopt an explainable model family
• Hybrid models
• Joint prediction and explanation
• Architectural adjustments
• Regularization

Post-modelling explainability
Goal: extract explanations to describe pre-developed models
Methodologies:
• Perturbation mechanism
• Backward propagation
• Proxy models
• Activation optimization
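One post-modelling methodology listed above, the perturbation mechanism, can be sketched in a few lines. The `black_box` function below is a hypothetical stand-in for a trained model we can only query: we nudge each input feature and watch how much the prediction moves.

```python
# Hypothetical black box: a toy scoring function standing in for a
# trained model whose internals we cannot inspect.
def black_box(features):
    income, debt, age = features
    return 0.6 * income - 0.8 * debt + 0.1 * age

def perturbation_importance(model, x, delta=1.0):
    """Estimate each feature's influence by perturbing it by `delta`
    and measuring how much the model's output changes."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        scores.append(abs(model(perturbed) - base))
    return scores

# Feature order: income, debt, age. Debt moves the score the most.
print(perturbation_importance(black_box, [5.0, 2.0, 30.0]))
```

Tools such as LIME and SHAP build on this same idea, adding principled sampling and weighting instead of a single fixed perturbation.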

Post-modelling explainability The proposed taxonomy of the post-hoc explainability methods including the four aspects of target, drivers, explanation family, and estimator.

Post-modelling explainability First, a perturbation model is used to obtain perturbed versions of the input sequence. Next, associations between the input and the predicted sequence are inferred using a causal inference model. Finally, the obtained associations are partitioned and the most relevant sets are selected.

http://www.portaltransparencia.gov.br/download-de-dados

If we want AI to really benefit people, we need to find a way to get people to trust it.

https://serenata.ai/ https://brasil.io/home https://colaboradados.github.io/

Less talk about machines taking our jobs, and more talk about what technology can actually achieve…

The choices we are making today about Artificial Intelligence are going to define our future.

Thank you! Carla Vieira @carlaprvieira carlaprv@hotmail.com

Useful links − AI NOW − Racial and Gender bias in Amazon Rekognition − Diversity in faces (IBM) − Google video – Machine Learning and Human Bias − Visão Computacional e Vieses Racializados − Machine Bias on Compas − Machine Learning Explainability Kaggle − Predictive modeling: striking a balance between accuracy and interpretability

Useful links − Racismo Algorítmico em Plataformas Digitais: microagressões e discriminação em código − Metrics for Explainable AI: Challenges and Prospects − The Mythos of Model Interpretability − Towards Robust Interpretability with Self-Explaining Neural Networks − The How of Explainable AI: Post-modelling Explainability