AI tools such as ChatGPT appear to magnify some of humanity's worst qualities, and fixing those tendencies will be no easy task.
Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you'll get more of it if you live in a facility than if you receive care at home.
At least, that is what I would believe if I accepted the sexist, racist, and misleading, ableist pronouncements from today's new artificial intelligence systems. It has been less than a year since OpenAI released ChatGPT, and mere months since its GPT-4 update and Google's release of a competing AI chatbot, Bard. The creators of these systems promise they will make our lives easier, removing drudge work such as writing emails, filling out forms, and even writing code. But the bias programmed into these systems threatens to spread more prejudice into the world. AI-facilitated biases can affect who gets hired for what jobs, who gets believed as an expert in their field, and who is more likely to be targeted and prosecuted by police.
For some people, the word bias is synonymous with prejudice, a bigoted and closed-minded way of thinking that precludes new understanding. But bias also implies a set of fundamental values and expectations. For an AI system, bias may simply be the set of rules or assumptions that allows the system or agent to achieve a goal.
Like all technologies, AI reflects human bias and values, but it also has an unusually great capacity to amplify them. This means we must be purposeful about how we build AI systems so that they amplify the values we want them to, rather than the ones accidentally fed into them. We have to ask questions about the source material that trains them, including books, social media posts, news and academic articles, and even police reports and patient information. We must also examine the frameworks into which that data is placed: What is the system doing with that data? Are some patterns or relationships between certain words or phrases given more...