Blog 5: Data & Society Databites Talk

Published on:

Red Teaming Generative AI Harm

Databite Video:
Databite No. 161: Red Teaming Generative AI Harm

Databite Review

In this Databite, the speakers discuss the importance of red teaming AI and how the practice can be adapted to better serve society. The speakers all have backgrounds in technology, ranging from an OpenAI program manager to a Silicon Valley tech executive.

Red teaming is the attempt to get something to fail, or to find possible issues, in order to fix them before the public encounters them. In the video, the speakers argue that red teaming matters because "best practices" are never enough on their own; testing them is essential. They use a good analogy: the Titanic had protocols and many people reviewing the plans to make sure it was safe, yet those safeguards failed to save anyone. Thorough testing involves thinking through how particular people would actually use something, and possibly even inviting public participation. They also argue that human red teaming is important for bringing human judgment to issues that would otherwise be overlooked.

One of the speakers, Tarleton Gillespie, discusses how red teaming can have negative consequences. These can arise from unethical labor conditions, which develop easily given the mental health toll red teaming can take on workers. He also points out that public perception is very different from the perception of specific users. This shows that appealing to "society" as a whole is not a simple standard to follow, since reactions to AI output vary widely.

Red teaming is often done in reaction to a product's launch, not before it, as the speakers in the video recommend. Reactive red teaming can let issues slip through and becomes less about shaping the model and eliminating structural bias. Proactive red teaming, by contrast, allows for more thoughtful testing rather than trying to patch something up after the fact.

Red teaming sits within both the social and technological spheres, and exists because of the dramatic increase in AI production and use. Data, truth, and health are all at stake. The speakers frame the issue broadly, explaining how the public, users, and companies are all affected by the way AI is handled. They propose broadening red teaming's scope, including conducting it pre-deployment, in order to better handle bias that might not be eliminable post-deployment. Red teaming is worth discussing because companies will have more success with it if the proper ways to do it are debated openly. Input from the public matters so that real-world harms of AI are not missed. Red teaming can help build tools for tackling the harms AI can cause, and is not necessarily about confronting companies head-on.

This topic is important because AI can have a major cumulative impact across many small interactions. It can slowly lead people to believe things that are not true, and it can amplify biases that harm people's mental health. These harms stem from how models are trained, and they could potentially be caught before the public is exposed to them, which is exactly what red teaming aims to do. People place too much trust in AI's ability to help them, which can cause problems and even danger in certain fields.

My Question

I would ask the speakers whether red teaming is a good indicator of whether a company is trustworthy and ethical. Can companies actually cause more harm by having their employees do red teaming? I ask because the idea of using unethical labor to fix unethical AI seems contradictory to me, and I want to know more about how companies can end up hurting their employees by having them engage in the process. Red teaming seems straightforward enough, but it raises the question of who is doing the labor, and under what conditions.