Artificial intelligence is now clearly part of our society. Governments, hospitals, and numerous companies increasingly rely on predictive algorithms to support their decision-making.
But how do we ensure that these algorithms are reliable and inclusive? What biases lie hidden in machines whose choices are sometimes impossible for a human to trace? How can you be sure their predictions are not discriminatory? When can you trust their decisions? And when do those decisions genuinely add value to an organization?
We reflected together on how to shape reliable and inclusive AI models, drawing inspiration and support from experts and from people with lived experience.