You are in an Algorithmic Prison. And the funny thing is, you don’t even know what it is.


Asha, one of the finest employees in her organisation, fell into a financial crisis a year ago, for reasons outside her control. She missed some EMI payments and her credit score suffered. This is making it difficult for her to get a new job and is worsening her financial situation, which in turn makes it even harder for her to find a job.

This isn’t true, at least not yet. But we are not far from this reality: a reality where algorithms start judging you and taking more and more decisions for you. And before you know it, you will be in an Algorithmic Prison.

We are increasingly seeing machine learning algorithms used to predict multiple facets of life. Predicting whether it will rain tomorrow is one thing; predicting whether a loan applicant will default is an entirely different proposition. Algorithms are deciding whether you should get the loan, whether you are fit for a job, and whether you are eligible for that insurance. For a company, the opportunity lost by turning away a risky applicant may not be worth much; for the individual, the decision can be life changing, with severe negative repercussions.

To make matters worse, these systems are connected: if you are denied one service, it is likely you will be deemed unfit for many similar opportunities.

The Bias

The algorithms study multiple data points and past behavioural patterns, and produce probabilistic predictions. If the past was biased, the predictions will very likely reflect that bias, possibly depriving certain regions, castes and religions of opportunities. While we morally preach equality for all, algorithms have no moral values. At least not as of now.

Moreover, the algorithms make predictions by segmenting you into buckets. Just because people like you from the same region, company or race have, on average, defaulted, it doesn’t mean you are going to default too.
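The bucketing problem can be sketched in a few lines. This is a toy illustration with entirely hypothetical regions and default rates, not any real scoring model: a naive predictor assigns every applicant their group’s historical average, so a person with a spotless record is rejected purely because of their bucket.

```python
# Hypothetical historical default rates per region, inherited from biased past data.
historical_default_rate = {
    "region_a": 0.40,  # a region the past has treated badly
    "region_b": 0.05,
}

def predicted_default_risk(applicant):
    """Naive bucket model: every applicant inherits their region's average."""
    return historical_default_rate[applicant["region"]]

def approve_loan(applicant, threshold=0.20):
    """Approve only if the predicted risk is below the cutoff."""
    return predicted_default_risk(applicant) < threshold

# Two applicants with identical personal records, different regions.
asha = {"name": "Asha", "region": "region_a", "missed_payments": 0}
ravi = {"name": "Ravi", "region": "region_b", "missed_payments": 0}

print(approve_loan(asha))  # False: rejected solely because of her bucket
print(approve_loan(ravi))  # True
```

Nothing about Asha’s own behaviour enters the decision; the model only sees the bucket, which is exactly how past bias gets replayed onto individuals.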

The Polarised Views

If you consume a certain kind of content, the engines show you more of the same, in anticipation of higher click-throughs. This feeds the confirmation bias of your opinions and creates polarised views. We have already seen the impact of social media on election campaigns, and there are companies working hard to put those analytics at the service of political parties. Media is your main source of information, and what it shows you next is largely driven by algorithms. The movie Inception? It is already here, just not in the form shown in the movie.
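The feedback loop behind “show more of what you clicked” can be sketched as follows. The catalogue and topic names here are hypothetical, and real engines are far more sophisticated, but the loop is the same: one early click tilts the feed, the user clicks what is shown, and the feed never widens again.

```python
# Hypothetical content catalogue, keyed by topic.
catalogue = {
    "politics_left": ["L1", "L2", "L3"],
    "politics_right": ["R1", "R2", "R3"],
    "sports": ["S1", "S2", "S3"],
}

def recommend(click_history):
    """Naive engine: recommend from the topic the user has clicked most."""
    counts = {}
    for topic in click_history:
        counts[topic] = counts.get(topic, 0) + 1
    favourite = max(counts, key=counts.get)
    return catalogue[favourite]

# One early click seeds the loop...
history = ["politics_left"]
for _ in range(3):
    shown = recommend(history)   # engine shows the favourite bucket
    history.append("politics_left")  # user clicks what was shown

print(shown)  # ['L1', 'L2', 'L3']: the same bucket, every time
```

With click-through as the only objective, the engine has no incentive to show anything outside the bucket, which is the mechanism behind the polarisation described above.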

The Justice

Most of these algorithms keep learning from data and make decisions on their own. This makes the systems highly opaque, and it is difficult for a human to reason about why a particular decision was made.

We have a proper judiciary system in place. But if an algorithm denies you a job or a loan, who is to blame? If it discriminates against a race or a religion, can you explain how it made the decision? The justice system has to evolve. For all you know, that too will be driven by algorithms.

Privacy — Huh?

You have thought at least once that the Alexas, Siris and Googles of the world are “listening” to you. We have little idea how companies are tracking our behavioural data for the purpose of providing the best recommendations and services possible.

Monika, a friend, is privacy-aware and has enabled the highest security settings her apps have to offer. But some services are part of our lives now, and choosing complete privacy is no longer possible. We all have to stay socially connected, and she has to use Maps frequently and share her location. Privacy — well, yeah!!!

Do the right thing

AI is the new hype, and we see new prediction and recommendation engines emerging every day. Every system is built for a reason, and the reasons vary from something as simple as generating more profit to something as serious as national security. AI is changing our world entirely, but its consequences may reach far beyond the original purpose.

We at CoffeeBeans Consulting are a bunch of engineers who do not treat data as just a thing. We are working on multiple AI products, from a recommendation engine to a recruitment platform.

We understand that most of this data is about people: their behaviour and their identity. We value and respect it with humanity, while consciously taking responsibility for the world we want to live in.

An algorithm, a piece of software, is not good or bad in itself; what matters is how we use it. It is on us, the engineers and organisations in this domain, to make ethical choices, take responsibility and do the right thing.

CoffeeBeans helps small, medium and large businesses unlock the true potential of technology and AI to solve some of their most pressing challenges.
