Blog 6: Implications of a Tech-Focused Society
Published on:
The addictive AI companion and how it can shape people’s lives
Case Study:
Addictive Intelligence
This case study talks about AI companions and how they can negatively affect people's lives. It also shows how AI companions can worsen people's communication skills with other humans, and it explores how AI companions could be modified to prevent those harms. This is a very interesting topic because AI companions can be seen as progress towards better work-life balance, but they bring a plethora of ethical questions with them.
How could companies develop emotionally engaging companions that also prevent harmful dependencies? I think a companion could be trained to interact positively and to avoid reinforcing negative behaviors. It could also have hard-coded responses, like a message telling the user to talk to a licensed professional or to reach out to someone else. Being emotionally engaging in an ethical way would mean distinguishing positive input from negative input, which requires a lot of human training data. A good ethical guideline would be transparency, so that people are constantly reminded that they are talking to an AI and not a real human. This could mean requiring a short tutorial before using the AI that walks through how to use it safely, or keeping text on screen that says the companion is not necessarily accurate.
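As a rough illustration of the hard-coded safety message and the constant transparency reminder described above, here is a minimal Python sketch. The keyword list, the message wording, and the wrap_reply function are all my own assumptions for the example, not anything the case study specifies.

```python
# Hypothetical guardrail layer: every reply carries a transparency reminder,
# and messages that contain distress keywords get a hard-coded safety note.
DISTRESS_KEYWORDS = {"hopeless", "self-harm", "no one cares"}
TRANSPARENCY_NOTE = "Reminder: I am an AI companion, not a person, and I may be inaccurate."
SAFETY_NOTE = "This sounds serious. Please consider talking to a licensed professional or someone you trust."

def wrap_reply(user_message: str, model_reply: str) -> str:
    """Attach the transparency reminder to every reply, plus a safety
    message when the user's text matches a distress keyword."""
    parts = [model_reply, TRANSPARENCY_NOTE]
    if any(keyword in user_message.lower() for keyword in DISTRESS_KEYWORDS):
        parts.insert(0, SAFETY_NOTE)
    return "\n\n".join(parts)
```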
There is a big difference between addiction to these AI companions and addiction to things like gaming or social media. Addiction to social media or games is based on content that already exists: a post someone made, a game someone created. An AI companion generates new material almost instantaneously, providing a much faster feedback loop and making it easier for users to keep engaging. This is worrying because it makes the companion feel like a real person you have a personal connection with. You also feel privy to "new" information, and feel like someone has helped you.
If people do form a personal connection with a model, it's important to make sure that the benefits outweigh the possible detriments. If someone ends up having better interactions with people and a better outlook on life, that could outweigh the time they spend with the AI. I think a good intervention strategy would be to check in on the user's emotions fairly often and see whether the current interactions are becoming harmful. If they are, the companion could forcibly change the topic and suggest the user talk to someone in real life, or go outside if they are stuck on something. This could help keep the conversations beneficial and reduce possible harm, but it's impossible to make sure the user is telling the truth about their emotions.
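To make the check-in idea concrete, here is a small sketch of what it might look like in code. The every-ten-turns interval, the 1-to-5 mood scale, and the intervention wording are assumptions I chose for the example.

```python
# Hypothetical periodic emotion check-in with a topic-change intervention.
CHECK_IN_EVERY_N_TURNS = 10

def maybe_check_in(turn_count: int) -> str | None:
    """Every N turns, ask the user how the conversation is making them feel."""
    if turn_count > 0 and turn_count % CHECK_IN_EVERY_N_TURNS == 0:
        return "Quick check-in: on a scale of 1 to 5, how is this conversation making you feel?"
    return None

def intervene_if_harmful(self_reported_mood: int) -> str | None:
    """If the self-reported mood is low, change the topic and point the user
    toward real-life support. The user may not answer honestly, which is the
    limitation noted above."""
    if self_reported_mood <= 2:
        return ("Let's switch topics for now. It might help to step outside or "
                "talk this over with someone you trust in person.")
    return None
```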
So, how can companies build better economic models that lead to healthier AI interactions? Incentivizing companies to keep interactions shorter and wrap them up sooner could promote healthier use by signaling that long sessions aren't actually helpful. They could also offer different subscription levels, where the lower tiers respond more slowly; paying for faster, more real-time responses might itself remind people that it's still an AI and not a real human. Measuring whether the AI is actually making someone's life easier or better would also give companies a good way to know what to change.
If I were developing regulations for AI companions, I would address usage limits by figuring out which demographic most commonly uses them and seeing what can be done for that specific group. If a lot of minors are using a companion, I would add some sort of age verification before granting access to the full model. There could also be a time limit so that people don't talk to the companion for long uninterrupted stretches; like some video games I've seen, it could tell the user to take a break after an hour or two and go outside, talk to someone, or stretch.
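Here is a rough sketch of the age gate and the break reminder together, in the same spirit as the "take a break" prompts some games show. The age cutoff and the one-hour threshold are assumptions I picked for illustration.

```python
# Hypothetical usage-limit checks: an age gate for the full model and a
# break reminder after an hour of continuous use.
import time

MINIMUM_AGE = 18                # assumed cutoff for full-model access
BREAK_AFTER_SECONDS = 60 * 60   # remind the user after one uninterrupted hour

def has_full_access(verified_age: int) -> bool:
    """Gate the full companion behind age verification; younger users would get a limited mode."""
    return verified_age >= MINIMUM_AGE

def break_reminder(session_start: float) -> str | None:
    """After an hour of continuous chatting, nudge the user to step away."""
    if time.time() - session_start >= BREAK_AFTER_SECONDS:
        return "You've been chatting for a while. Consider taking a break: stretch, go outside, or talk to someone in person."
    return None
```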
After thinking about this case study, I have some questions. Should speech laws for humans apply the same way to AI companions, even though a user can get the response they want out of an AI far more easily than out of a human? If a user convinces an AI to tell them to do something harmful, who do we really hold accountable?
These are relevant questions because I think a lot of responsibility lies with the user when it comes to AI use. However, if there aren't proper tools in place to prevent bad outcomes, the line becomes a lot blurrier. Do we blame the person who used the AI and got what they wanted, or the system that allowed it to happen?
This case study and blog post were critical in developing my ability to think through regulations and where responsibility falls when it comes to people's AI use. Thinking through how companies could modify AI companions so that people don't get hurt by using them was more complex than I originally expected. If companies want as much user engagement and usage as they can get, how do they simultaneously focus on keeping their users safe and happy? Ethics in technology is a lot less straightforward than most people assume, full of complex issues where we want to respect autonomy but also remain objective and transparent.
