Sonal Makhija
Apparently, AI can do everything: change the world, eliminate human error, automate boring tasks, essentially make us 'humans' irrelevant. That we have no common understanding of what is 'artificial' or what is 'intelligent' is largely swept under the carpet, given the hype surrounding AI and the way social media accelerates and reifies everything (or is equally quick to demonise it). This is not in any way to dismiss the power of AI and what it can do. Yet my interest here lies in the presumption that AI can eliminate human biases and errors.
Picture: Pixabay
A recent Reuters article reported that Amazon's experimental resume-screening engine discriminated against women: the algorithm taught itself which words to favour from the resumes of past successful applicants at Amazon. Those ideal applicants, not surprisingly, were mostly men, as is common in the tech industry. So, far from eliminating bias, the AI generated discriminatory practices that preferred men, and possibly men of a certain race with certain experience and qualifications.
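To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not Amazon's actual system; the data, the keywords and the naive scoring rule are invented for illustration. It shows how a scorer "trained" on historical hires picks up a negative weight for a word like "women's" simply because it appears mostly on rejected resumes, echoing what the Reuters report described.

```python
# Illustrative sketch only: a naive keyword scorer "trained" on a
# hypothetical, historically skewed hiring record. Because past hires
# skew male, words correlated with women's resumes acquire negative
# weight, and the scorer reproduces the bias it was trained on.

from collections import Counter

# Hypothetical historical data: (words on the resume, was hired)
history = [
    ({"java", "football", "captain"}, True),
    ({"python", "chess", "captain"}, True),
    ({"java", "rowing"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's", "volleyball"}, False),
]

# "Training": weight each word by how often it appears on hired
# versus rejected resumes.
weights = Counter()
for words, hired in history:
    for word in words:
        weights[word] += 1 if hired else -1

def score(resume_words):
    """Sum the learned word weights; a higher score looks more 'promising'."""
    return sum(weights[w] for w in resume_words)

# Two equally qualified candidates; only one mentions "women's".
print(score({"python", "chess", "captain"}))             # scores 2
print(score({"python", "chess", "captain", "women's"}))  # scores 0
```

Nothing in this toy model is malicious; the penalty falls out of the historical data alone, which is precisely the point.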
How, then, can we ensure that AI guards against these same biases and enables the diversity, racial, gender, linguistic, ethnic or professional, that is key to organisations? Since algorithms learn from the data they are fed, and that data is historical, AI will carry our biases and prejudices into the future. The goal of such a system is to accurately replicate what has been done in the past. In doing so, AI cannot make 'better' decisions; it will reproduce past fallacies and stereotypes. The patterns that emerge from the data will do essentially that: recognise successful candidates, all of whom happen to belong to a particular category. As tech-anthropologist Genevieve Bell recognises, AI will inherit our biases and prejudices and be just as 'human'. After all, isn't the goal that AI be as human as it can be, so much so that we cannot tell whether we are talking to a machine or a person?

The fear of AI replacing humans, then, is not what we should concern ourselves with. Our worry should instead be who it leaves behind: will we be leaving women aside, including women of particular racial, age and linguistic backgrounds? How we change the future while relying on historical data is a tricky question, given we do not want to repeat past mistakes and biases. Will ensuring diversity among those who create the algorithms change what AI becomes? Bell, in particular, challenges us to ask what an AI is supposed to do and what its purpose is. What goes into the data, what data is fed in and what the determining factors are, she argues, are important questions to ask. As Bell says, these systems replicate our cultural and societal biases, and the ideal human they are designed for is rarely us; what we need to do is question, interrogate, stay aware of the biases and demand greater transparency.
Picture: Pixabay
This ideal human does not exist. Back in 2003, when I was ethnographically mapping how women navigate urban spaces in Mumbai for a research project, we discovered that women access spaces differently from men, and that their access is determined by who they are and where they come from. Moreover, the ideal 'human' (implicitly an able-bodied young man) that architects design spaces for does not exist. It is this 'ideal human' that architects imagine using their spaces, an image that often fails to match those who actually use them: pregnant women, children, the old, the disabled, all of whom fall short of the imagined ideal. Similarly, when we talk of AI, Bell warns us to guard against this unconscious bias, this imagined ideal human, and the false belief that AI has the power to eliminate our foibles and inequities. Let's not assume the accuracy and objectivity of AI as 'superhuman'. Anthropologists Sarah Pink, Minna Ruckenstein and Robert Willim have argued that we need to rethink the accuracy and reliability of Big Data, given how little insight we have into how datasets are created. On the contrary, AI will replicate what we do and how we do it, and that includes our biases, that is, if we let it.
Sonal Makhija is an anthropologist and lawyer who specialises in sensory anthropology. She has an MSc from the London School of Economics & Political Science and has recently submitted her doctoral thesis for pre-examination at the University of Helsinki. Her interests range from questions relating to AI, digital culture and the digital humanities to sensory engagements with climate change.