
AI couldn’t write this

With the constant additions and alterations to artificial intelligence models, the impact of new technology on people and society is as relevant as ever. People look for decisive contests pitting human cognition against artificial and algorithmic processes, sometimes, as John Oliver puts it, “to demonstrate the technology that may make you obsolete.”

But another opinion circulating more recently is that there are some judgments even the most advanced technology cannot replace: the human expertise of anthropologists and technologists can tell us that frequently used and increasingly implemented AI systems carry ethical problems. A more pressing concern than losing jobs to robotic replacements is data collection that skews AI toward biases reflecting society as it is, not as it should be.

Healthcare, hiring, and credit scoring are full of examples where data collection practices need regulation as their reach accelerates.

“Even though these systems are created and seem to change really quickly, if we can do a good enough job keeping people safe from harm, we will have a deliberate product that will withstand the process of changing technology,” Friedler said. 

That product is drafted in the “Blueprint for an AI Bill of Rights,” a regulatory proposal from a collaborative at the White House. When Dr. Sorelle Friedler visited and spoke last week, Ostrove was full of curious and knowledgeable students of technology. Friedler emphasized that AI should be used as a “pattern-matching tool,” not a decision-maker. The principles outlined in the proposed Bill of Rights will directly shape how practitioners work, and protective measures will be crucial to keeping fields like healthcare human-oriented.

Health and care also come up in Dr. Ruha Benjamin’s work on inequality in Science, Technology, and Human Values. Her discussion at the College last month drew on that work as well as “Assessing risk, automating racism,” connecting barriers to fair and effective healthcare to the way technological advances favor the demographic groups that have historically been positioned to create these systems. While it is easy to say the system got away from us, all we can do to fix it is reconnect the process with the people.

The “ghost in the machine” is real in the overlooked work of flagging and reporting inappropriate content, where human judgment is the only way to decide what users of a website shouldn’t have to see. Keeping online spaces safe is a collective human job, even though the workers behind the scenes rarely get credit. Importantly, it is not just up to people in technology-based jobs to address the social impact of constant technological accumulation. Both speakers called for people from diverse disciplines to become involved in, or at least aware of, the countless data points they provide every day.

A class on AI is changing what my classmates and I believe we can do about equality in Big Tech. It takes concentrated effort to learn how even the smallest, seemingly inconsequential actions feed data to systems that construct an image of what society should look like, all based on our inputs. Content generation with AI is becoming more prevalent and accessible through art platforms like DALL-E and writing tools like ChatGPT. As our use of technology is constant, so should be our consideration of its social impact.


~ Molly George `23
