The human brain has mastered the art of simplifying information about its surroundings into abstract representations. By building representations of things, contexts, people, intentions, senses and other impressions, the brain can process information efficiently. These representations enable human beings to make conscious and unconscious decisions.
A comparable kind of context reduction is found in Machine Learning and AI. Just as with human beings, the process of context creation bears pitfalls. For example, the data used to train Machine Learning models can introduce bias: the learning algorithm encodes bias from the training data into the trained model. When such a model is then queried, it replicates the bias of its training dataset.
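A minimal sketch can make this concrete. The following toy example uses a hypothetical, invented dataset of historical loan decisions in which applicants from group "A" were mostly approved and applicants from group "B" were mostly denied. A deliberately simple per-group majority-vote "model" is enough to show the effect: the trained model faithfully reproduces the historical skew.

```python
from collections import Counter

# Hypothetical toy dataset of (group, outcome) pairs.
# The historical decisions favor group A over group B.
training_data = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

def train(data):
    """'Train' a per-group majority-vote model: for each group,
    remember the most common historical outcome."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(training_data)
# The model simply encodes and replicates the skew of its training data:
print(model["A"])  # approve
print(model["B"])  # deny
```

Real Machine Learning models are far more complex than a majority vote, but the mechanism is the same: whatever regularities exist in the training data, including unwanted ones, are what the model learns to reproduce.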
The How Else Initiative focuses on understanding the underlying concept networks and problem domains related to technology, sustainability and human needs. This sub-project of How Else aims at reducing bias in Machine Learning.
Our mission is to provide a platform to analyze, understand and mitigate biases as they arise in the context of machine learning.
Read media releases, news, tweets and research papers about bias in Machine Learning applications.
We are just getting started. Help spread the word and raise public awareness of this subject and of this initiative. We are looking to grow exposure in the trade press and with the scientific community, industry, NGOs, governments and other stakeholders.
Browse and collaborate on the collection of Mental Models and Ethics concepts related to Machine Learning.
Browse and collaborate on the collection of Biases.
Browse and collaborate on the collection of Mitigations and Countermeasures.
Browse and collaborate on the collection of Datasets and Data Sources used for Machine Learning.
Browse and collaborate on the collection of Machine Learning Frameworks.