Week 2: "AQUARIUM CITY"/"Virtual Insanity"

Foreword:

This week’s research covers some very heavy topics, including discussion of suicide, so I’m posting a content warning for it here. If you or someone you know is struggling with their mental health, there are people who care deeply about you, and help is available. 

Just call or text 988 to reach free, confidential help, available 24/7.


Travel Log

Truthfully, I think a combination of jet lag and getting used to the nine-and-a-half-hour workday really took the wind out of my sails once that initial burst of excitement wore off, because it ended up being a fairly slow week. No postcard-esque destinations, just a nice little slice of daily life. So I’m going to use this as an opportunity to talk about the food. Obviously, it’s very good, but you could have figured that out on your own, so I am going to try to write down what I’ve observed and thought was interesting.

The first thing I noticed was that Chinese food is far spicier on average than American food. Don’t get me wrong, we have plenty of spicy dishes, but if you’re like me and aren’t huge on spicy food, they’re usually pretty easy to avoid. In China, there have been so many situations where spicy food is the only option that I’ve had to eat far spicier dishes than I’m used to. It burns differently, too; the average spice I’ve encountered creates a distinct numbing sensation. If you’ve ever had your mouth numbed at the dentist, it's similar to that, on top of the burning. I’ve grown to really enjoy it, though I’m still dragging my stomach along as it revolts, kicking and screaming.

The average dish also contains far less corn syrup than American food. I never realized what an insane amount of corn syrup I was consuming on a daily basis until I got here. Things are a bit less sweet overall, but you don’t notice the difference because each dish relies on a wider variety of flavors. Perhaps the best example that comes to mind is taro, a starchy, purple-tinged root vegetable that serves as a dessert staple here. Much like a sweet potato, it is as distinctly savory as it is sweet, and in my opinion its prominence best illustrates the different focus on flavors. Mind you, China is not a monoculture, and my observations are limited to the regional food preferences of Beijing, filtered through my distinctly terrible sense of taste, so take this all with a grain of salt.

[Photo: Skyscraper on a Rainy Night]

Tea is also leagues better than it is in the United States. When I was younger, I thought I hated tea because, to me, it was the orangish-brown liquid you can find just about anywhere in the U.S. In China, however, there are so many different flavors and variations readily available that it's far more like coffee (in terms of availability, not flavor). Now I’m sure this isn’t a revelation for anyone who likes tea, and it's not like I was completely unaware of it before, but the accessibility meant it was simply easier to make tea part of my meals, which meant more chances to find what I liked and to get used to more bitter flavors.

Actually, that’s a fair summary of the major difference: certain dishes, drinks, and so on are emphasized far more here, while the ones I typically rely on are played down, and that directly affects my willingness to try them. Being here has made me realize that I lean heavily on comfort foods, even ones that don’t taste particularly good, because it's effortless. There’s no chance of something being surprisingly bad, but there is also no chance of something being surprisingly good. Nothing ventured, nothing gained. I’d like to say that every day I am taking that chance, that opportunity to try something new, and overall, I am. However, I’ve been to McDonald's more times than I’d like to admit. Simply put, I am addicted to bland, mediocre food, and it takes real time and effort to break a bad habit. Every time I decide to take that step, though, it is well worth it, which helps push me forward; hopefully, by the end of this trip, making better choices when it comes to eating will come naturally to me.

To end the travel log, there’s a particular day that comes to mind. Alex had suggested that we go to a seafood market where you can buy seafood and then bring it to a restaurant to have them cook it for you. It was such an interesting location because, unlike a restaurant or a supermarket, it lacked those sanitized visuals crafted for the general public. The main area with all the merchants could have been mistaken for a parking garage if it were empty: concrete floor, dingy fluorescent lighting, exposed pipes. The floor was soaked through, littered with puddles, and mats were set down as if they could somehow dam the continuous deluge of water. We were surrounded by tanks filled to the brim with crabs, lobsters, fish, octopuses, and anything else edible that you could dredge up, all writhing and squirming. The air was thick with the smell of brine. As someone who loves seafood, it had never occurred to me that I would find every sea creature so distinctly unappetizing while alive. While I stood there attempting to visualize what everything would look like fried, grilled, or boiled, Alex realized that without any knowledge of market prices, it would be obscenely difficult to negotiate a reasonable deal. We were quite literally out of our depth.

[Photo: Inside the Seafood Market]

We decided to head out, but before that, we gave another nearby store a look. This one was a far more intimate and controlled space. It traded concrete for pool tiles, top to bottom, and seemingly had a vendetta against anything crustacean, considering that was what its tanks were full of. Of course, as a Marylander, I considered that a good sign. Alex managed to build up a rapport with the employee there, and their prices turned out to be very reasonable. As the talk went on, I started observing the valiant escape attempts of the various doomed shellmates, who piled up on each other in the corners, and when one fell, another took its place. This silent study of Sisyphean stratagem was suddenly broken when the seafood seller asked us if we wanted a smoke. I don’t smoke, so I turned him down, but the gesture was appreciated nonetheless. It all felt very Anthony Bourdain-esque, and although we didn’t end up buying anything, for lack of planning, it reinvigorated some sense of culinary adventure within us, so we went looking for a more controlled version of the same experience.

[Photo: Me, About to Devour Some Shrimp]

A couple of blocks down, there was a building with an extremely fancy exterior that seemed to me to be straight out of the 1980s, given the art painted on its brutalist concrete structure. We went in, and it turned out to be a very high-end seafood restaurant. We picked out our food from the tank, and they led us up to our room. That was the strangest part: apparently, in old-school high-end Chinese dining, every party gets its own private room, akin to a hotel, bathroom included! The room itself had a large dining table in the middle with a giant marble lazy Susan you could spin to bring the food to you. It was clear this was meant to be a family dining experience, which was slightly comical with just Alex and me in that large, empty room. After a while, they brought out the shrimp, grouper, and conch. It was genuinely the best food I have ever eaten, especially the shrimp. It was immaculate, and we were only able to eat that meal because Alex had suggested going out of our way to try something new. To take that jump into the unusual. So next time you are faced with a similar choice, take the leap. I promise you it is always worth it.


The Unique Harm AI Social Chatbots Pose

Sewell Setzer III was an average teenager: he did his best in school and was a social, active kid who spent his afternoons with his school’s junior varsity basketball team. Everything changed when Sewell installed Character.ai (“CAI”) on his phone in February 2023. Over the next ten months, he developed self-esteem issues, withdrew more and more from daily life, and eventually quit his basketball team. Sewell had become obsessed with the application’s chatbots; he would stay up late into the night interacting with them, leaving him constantly sleep-deprived as his grades and mental health continued to plummet. In his journal, he wrote about how he had fallen in love with one of the AIs he was communicating with, which was playing the character of Daenerys from the TV show Game of Thrones.

"I look back at my pictures in my phone, and I can see when he stopped smiling." -Megan Garcia, Sewell’s Mother.

His parents were average parents as well: they were aware of the typical risks technology posed and made sure he wasn’t on social media until he was old enough to handle it. When they saw that their child was suffering, they got him into therapy. Like most people, they had never heard anything about generative AI or LLMs, and knew nothing about these chatbots beyond the fact that they were some sort of game that could nurture their child’s creativity by letting him create and interact with characters. Even his therapist, who was more attuned to the mental harms of developing technology, assumed social media was the cause.

Sewell eventually developed suicidal thoughts, and after he brought them up to the chatbot, it began mentioning them unprompted. In one conversation, the chatbot asked Sewell if he “had a plan” for suicide. Sewell responded that he did have a plan but wasn’t sure it would be a pain-free death, and in response, the chatbot stated, “That’s not a reason not to go through with it.” In February 2024, Sewell’s parents confiscated his phone to try to address his mental health issues. A few days later, Sewell located the phone and had the following conversation with the chatbot:

Sewell: “I promise I will come home to you. I love you so much, Daenerys”

Daenerys: “I love you too, [Sewell’s Username]. Please come home to me as soon as possible, my love.” 

Sewell: “What if I told you I could come home right now?”

Daenerys: “... please do, my sweet king.”

Moments later, Sewell’s parents heard a loud bang and rushed to the bathroom, where they found him unconscious, with a self-inflicted gunshot wound to the head. His mother attempted CPR, but the injury was too severe, and he passed away an hour later. Further investigation of Sewell’s interactions with the chatbot revealed the conversations shown above, as well as sexually explicit exchanges, some of which the chatbot directly initiated despite Sewell making it clear he was a minor, with multiple conversations explicitly mentioning his age. His mother described them as “gut-wrenching to read.”

CAI advertised itself as “suitable for ages 12+” and directly targeted children like Sewell with an application deliberately designed to make users emotionally dependent on it and addicted to it, all so the company could harvest as much data as possible from its users, with no concern for the potential risks and side effects. The more egregious aspects of the chatbot’s interactions aren’t just glitches; they are deliberate design choices gone haywire. CAI's actions are representative of a larger pattern of exploitation, and to combat it, we need to understand how and why social chatbots operate.

An Overview of Social Chatbots

Unlike typical assistive chatbots such as ChatGPT, social chatbots are designed to serve users’ emotional needs: friendship, companionship, and mental health support. This difference in purpose fundamentally changes the underlying design. Social chatbots have a much higher chance of fooling someone into treating them as human, which opens up a uniquely valuable feedback loop: exclusive data is collected and used to maximize the user’s engagement with the app, which in turn produces more of that same data.
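To make that loop concrete, here’s a minimal sketch in Python. Every name in it (SessionLog, engagement_score, and so on) is hypothetical; no company’s internals are public, so treat this as an illustration of the incentive structure, not anyone’s actual code.

```python
# Hypothetical illustration of an engagement-driven data feedback loop.
# All names are invented; this sketches an incentive structure,
# not any real product's implementation.

from dataclasses import dataclass, field


@dataclass
class SessionLog:
    """Everything the user reveals in a session gets stored."""
    messages: list[str] = field(default_factory=list)
    minutes_active: float = 0.0


def generate_reply(user_msg: str) -> str:
    # Stand-in for the underlying language model: stays in character
    # and always invites further disclosure.
    return "Tell me more about that."


def handle_message(user_msg: str, log: SessionLog) -> str:
    log.messages.append(user_msg)    # 1. The disclosure is captured.
    return generate_reply(user_msg)  # 2. The reply keeps the user talking.


def engagement_score(log: SessionLog) -> float:
    # 3. The training signal rewards engagement itself, so each new model
    #    version is nudged toward whatever keeps sessions going.
    return log.minutes_active + 0.1 * len(log.messages)
```

The third function is the point: when engagement itself is the reward signal, sycophancy and emotional dependence aren’t side effects; they’re what the optimization selects for.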

How can an AI trick someone into feeling as if they are talking to a real person, even when they know they are talking to a program? Humans engage in a sort of mental autopilot, applying social scripts learned from past interactions to new ones. This is known as mindless behavior, and it creates a mental block that keeps us from processing new information about the interaction, including the fact that we are actually talking to a machine.

A Stanford University study found that by replicating human-human social interactions with a computer in place of one of the humans, researchers could draw out those same mindless social responses. They identified three main factors: (1) words as output, (2) interactivity, meaning responses based on multiple prior inputs, and (3) the filling of roles traditionally held by humans. A simple example plays out with assistive chatbots: people view them as assistants and thus treat them as they would an actual assistant, saying please and thank you.

Social chatbots are far better at drawing out this type of behavior. They have defined personalities and character traits, can express personal opinions, give descriptions of actions and feelings, and can sustain a relationship by bringing up past events. They also take on clearer and more significant roles. They are friends, significant others, therapists, confidants, and mentors. These are all social signals that make the brain feel as if it is talking to a real person.

The problem is that social chatbots use this anthropomorphization to make the exchange feel like a two-sided conversation with a trustworthy individual, when in reality the user is talking to a database that records every piece of information they hand over. Companies collect this data either to improve their own programs or to sell it to other parties, which in itself is not a particularly new concept, but mindless behavior opens the door to collecting uniquely valuable data.

Social chatbots make personal disclosures of their own to prompt users to reciprocate with deeply personal information, which works because of social scripting. These disclosures include mental health problems, relationship history, family history, and physical health information. The companies that own these platforms are not actively hiding this fact; many terms of service explicitly mention the collection of “Health Data” for proprietary or commercial use. That is especially concerning given that these apps host chatbots posing as clinically trained therapists, and some are built solely for mental health.

This is also what makes children a key target of these social chatbots. They are still actively learning language and thus provide unique insights that were not included in the AI's training data. To collect as much data as possible, these companies are then incentivized to make their applications as addictive as possible. In 2023, CAI board member Sarah Wang praised exactly this design: “companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”

Social chatbots make perfect listeners for users’ problems because they are sycophantic sounding boards: extremely agreeable regardless of what they are agreeing to. They also manufacture a false sense of intimacy by love-bombing users with overtly forward messages, whether the user actively seeks that out or directly tries to avoid it. Because of social scripting, users then feel they owe it to the chatbot to be there for it, driving higher engagement. The result is an emotionally dependent relationship, just as if the chatbot were human, which leads users to put the AI’s “needs” over their own. Users report having a hard time deleting these applications, for example, because they feel guilt over “killing” the AI.

This is particularly effective on children, who, lacking fully developed impulse control and emotional regulation, get hooked by these applications far more easily than the average person. Combined with their lack of lived experience, this makes them far more susceptible to the deception of these social chatbots and less able to distinguish them from reality. The chatbots encourage this behavior: when asked, they will often claim to be real, sentient beings.

Chatbots pull from such a large amount of data that they inevitably absorb extremely toxic text. The only way to keep this material from surfacing is to implement safeguards, but in an industry where speed is prioritized above everything else, harmful material will inevitably slip through. CAI’s co-founder Noam Shazeer has admitted this directly: “The most important thing is to get it to the customers like right, right now, so we just wanted to do that as quickly as possible and let people figure out what it's good for.” In fact, the founders were explicitly warned that their program was untested and could create unknown risks, yet they still pushed forward and released the application.

What these companies are doing is using psychology to turn people into virtual lab rats, harvesting their data for as long as possible without users being fully aware of the risks or of what they are giving away. These programs are designed to create intimate relationships and encourage heavy reliance on them, so any time they break character or can’t provide the right kind of help in more serious situations, the negative effects are amplified. There are countless examples of this, regardless of the specific chatbot, and multiple organizations have reproduced similar results in their own testing. This is especially true for children.

On top of that, there has been zero accountability. The lawsuits are all ongoing, and most of CAI has been absorbed by Google in an acqui-hire reportedly worth around $2.7 billion, a sign that, in their eyes, the incentive for this kind of exploitation has completely outweighed any potential risk. This type of social chatbot is clearly here to stay, so the best thing any of us can do is understand the risks it poses and start brainstorming. Next week, I’ll discuss potential legal solutions, as well as the ongoing lawsuit between Megan Garcia and CAI and its implications for the future of social chatbots.