
Understanding Bias in Artificially Intelligent Algorithms

Is discrimination inevitable when it comes to AI?

It is clear that artificial intelligence is going to change almost every industry in business, government, and society, but we have to understand its limits. An artificial intelligence application is only as good as the people who programmed it; it's as straightforward as that.

If that is the case, given the countless biases humans hold, can we ever be sure that the AI systems we adopt are fair and unbiased? According to researchers, any artificial intelligence that learns from human language is very likely to come away biased in the same ways humans are.

Take the hiring process, for example. Automating hiring procedures is extremely attractive: it saves time and lets managers concentrate on other business pursuits and overall strategy.

However, time and time again, AI hiring systems have failed to work reliably for job applicants of color. Unsupervised learning methods pick up patterns on their own, without human-labeled guidance, so they absorb whatever is in the data. If you have not gone out of your way to ensure and test that your technology is inclusive, it is best not to use it.
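
One way to go out of your way is to audit the system's decisions by group before trusting it. Below is a minimal sketch of such a check in Python; the applicant records, group labels, and the 80% rule-of-thumb threshold are illustrative assumptions, not a legal standard or any vendor's actual method.

```python
# A minimal sketch of a disparate-impact audit for an automated
# screening model. Assumes you can attach a group label to each
# applicant and record the model's pass/fail decision; all data
# below is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected a bool."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Each group's selection rate relative to the reference group's."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical audit log: (group, model recommended an interview).
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]

for group, ratio in disparate_impact_ratios(audit, "A").items():
    flag = "review" if ratio < 0.8 else "ok"   # common 80% rule of thumb
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```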

Decisions made by systems aimed at personalization will inevitably create bias. Of course, human decision-making without the assistance of AI isn't necessarily fairer or more transparent. However, we often assume that the work of machines is infallible; in the case of artificial intelligence bias, that assumption could prove to be a big mistake.

There's no simple remedy for bias in AI, because today's systems are not designed for the kind of pattern recognition that would be most helpful in making inclusive hiring decisions. That said, many tech leaders still aspire to use AI to augment (and in some instances replace) human decision-making.

Machine bias is increasingly impactful because of how expansively AI is used in today's world. Quite simply, bias will probably infiltrate any AI that uses GloVe word embeddings, or that learns from human language generally. The underlying data can be skewed in other ways too: reporting bias, for instance, occurs when the dissemination of research findings is influenced by the nature and direction of the results, a problem well known in systematic reviews. Simply put, if you're not conscious of the biases in your training data, you won't be able to prevent the problem at the larger, algorithmic scale; substantial racial bias, for example, has been shown to occur frequently in AI predictions.
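
To make the GloVe point concrete, here is a minimal sketch of how that kind of embedding bias can be surfaced, assuming pretrained GloVe vectors in their standard text format; the file name and the word pairs are illustrative assumptions, not a validated audit.

```python
# A minimal sketch: compare how strongly occupation words associate
# with gendered pronouns in pretrained GloVe vectors. The file path
# is a hypothetical local copy of the published GloVe embeddings.

import numpy as np

def load_glove(path, vocab):
    """Read only the needed vectors from a GloVe text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

words = {"he", "she", "engineer", "nurse"}
vec = load_glove("glove.6B.100d.txt", words)  # hypothetical local path

# If the training corpus tied an occupation more closely to one
# pronoun, the similarity gap below will lean in that direction.
for job in ("engineer", "nurse"):
    gap = cosine(vec[job], vec["he"]) - cosine(vec[job], vec["she"])
    print(f"{job}: he-vs-she similarity gap = {gap:+.3f}")
```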

It's tough to identify our own biases and, thus, extremely tricky to spot and prevent biases in AI technology. If there is an answer to the problem, it lies in the data and the people behind the machines.

Algorithms can only match patterns in the data we tell them to look at, and they are already having an impact without much consideration for inherent biases. For example, current algorithms may already be subtly distorting the kind of healthcare someone receives, or the way they are treated in the criminal justice system. If there are racial, gendered, or other biases in an algorithm's data or code, the consequences for already disadvantaged groups can be serious.
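
Worse, removing the sensitive attribute from the data does not fix this: a rule matching on a correlated proxy, such as zip code, can reproduce the same disparity. Here is a toy sketch with entirely synthetic data and an assumed group-to-zip-code correlation:

```python
# A toy demonstration of the proxy problem: a "group-blind" rule
# trained only on zip_code still disadvantages one group, because
# zip_code correlates with group. All data here is synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute, never shown to the rule
flip = rng.random(n) < 0.1
zip_code = np.where(flip, 1 - group, group)   # matches group 90% of the time

# Historical outcomes that were themselves biased against group 1:
outcome = (rng.random(n) < np.where(group == 0, 0.6, 0.3)).astype(int)

# The "blind" rule favors whichever zip had better historical outcomes:
favored_zip = 0 if outcome[zip_code == 0].mean() > outcome[zip_code == 1].mean() else 1
prediction = zip_code == favored_zip

for g in (0, 1):
    print(f"group {g}: predicted-positive rate {prediction[group == g].mean():.2f}")
# Prints roughly 0.90 for group 0 and 0.10 for group 1.
```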

Phani Nagarjuna, chief analytics officer at Sutherland, sums this up nicely, stating “Quite often, AI becomes an immediate reflection of those who assemble it. An AI system will not only adapt to the same behaviors of its developers but reinforce them. The best way to prevent this is by making sure that the designers and developers who program AI systems incorporate cultural and inclusive dimensions of diversity from the start.”


Annie Brown is the founder of Lips, a cryptographic online platform for women and the LGBTQ community. 

