How do machine learning algorithms perform against the biases of their creators? Is it possible that Data Scientists are spending all day toying with models in order to 'prove' their own prejudice?
In short, yes - it is very possible. Is this a flaw of Advanced Analytics as a whole? Definitely not.
Where Data Science professionals earn their money is in their ability to look at data, spot patterns and make valuable recommendations. Doctorates and Master's degrees in numerate disciplines are generally accepted as a prerequisite for entry into the field of Data Science. The responsibility rests with the individuals to use their education and training to spot and mitigate the impact of problems such as confirmation bias, selection bias and outliers. If your Advanced Analytics team fails to do this consistently, it's not because Data Science has failed - you probably need a new team.
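To make this concrete, here is a minimal sketch (pure Python, synthetic numbers, illustrative thresholds) of two routine checks a Data Scientist might run: flagging outliers by z-score, and measuring class imbalance that can hint at selection bias. It also shows why judgment matters: a single extreme value inflates the standard deviation, so a conventional cutoff of 3 would mask it here.

```python
from statistics import mean, stdev
from collections import Counter

def zscore_outliers(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean.

    Note: one extreme value inflates the standard deviation, so the usual
    cutoff of 3 can fail to flag it; the threshold is a judgment call.
    """
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

def dominant_class_share(labels):
    """Share of the most frequent label; a very high value may signal
    selection bias in how the sample was gathered."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

salaries = [48_000, 52_000, 50_000, 49_500, 51_000, 250_000]  # one extreme value
print(zscore_outliers(salaries))  # [250000]
print(dominant_class_share(["approved"] * 90 + ["rejected"] * 10))  # 0.9
```

Automated checks like these only surface candidates; deciding whether a flagged point is an error, a rare-but-real case, or evidence of a skewed sample is exactly the human responsibility the paragraph above describes.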
Ultimately a business has to make a decision on what insights to act upon. This is a shared responsibility. If a business is blindly following the recommendations of an Advanced Analytics team then the first recommendation should be to trim the fat and appoint the Lead Data Scientist as CEO!
The panel, which included a mix of policy researchers, technologists and journalists, discussed how big data, while enhancing our ability to make evidence-based decisions, may inadvertently set rules and processes that are inherently biased and discriminatory. The rules, in this case, are algorithms: sets of mathematical procedures coded to achieve a particular goal. Critics argue these algorithms may perpetuate biases and reinforce built-in assumptions.
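The critics' point can be shown with a toy example (entirely synthetic data and a hypothetical "model"): if an algorithm is fit to historically biased decisions, it reproduces the disparity even though group membership carries no real signal.

```python
from collections import defaultdict

# Synthetic history of past decisions: (group, approved?).
# Group "A" was approved 80% of the time, group "B" only 30%.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train_rate_model(records):
    """'Train' by memorising each group's historical approval rate.

    This stands in for any model that learns from past outcomes: the
    built-in assumption (past decisions were fair) goes unexamined.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        approvals[group] += label
    return {g: approvals[g] / totals[g] for g in totals}

model = train_rate_model(history)
print(model)  # {'A': 0.8, 'B': 0.3}: the past disparity becomes the prediction
```

Nothing in the code is malicious; the bias enters through the training data, which is precisely why the panel's concern is about rules and processes rather than individual intent.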