Week 5: "Remember When"/"Land of Confusion"
Travel Log:
At the start of this week, it was still hot enough outside to fry an egg on the pavement, so after my personal scrambling last week, I did the sensible thing and stayed in my wonderful air-conditioned apartment. Drank lots of water, did a lot of sleeping, and did a bit of laundry. I’ll spare you the exciting play-by-play and just get to the point, which is that this was another much-needed, slightly slow week. A breather before the sprint that the latter half of my time in Beijing will almost certainly be. However, at the end of the week, I got a really cool opportunity to visit a co-worker’s university campus, Beijing Normal University. Yes, that's its actual name. The school is very highly rated. The name is just a quirk of translation: "normal" comes from the old "normal school" tradition, signaling that the university's roots are in training teachers.
It was mid-afternoon when I got there; the sun was blazing overhead, and though the forecast had said rain, the only moisture present was the humidity coating the air. It was a long walk from the station to the university, and the umbrella hanging from my arm sashayed with each slow, relaxed step forward I took. As I wiped the sweat from my forehead, the university’s south gate came into view, and a steely look of determination crossed my face. I had come here on a mission. Over the past month, I had been on a grand journey, struggling towards one simple goal: Get a Chinese SIM card.
See, when the last set of William & Mary acolytes got here, the SIM cards flowed like water. All you had to do was go to this place called the Pearl Market, and with the right amount of haggling you could buy a prepaid card for dirt cheap. To perhaps nobody’s surprise, this led to those cards being tied to a bunch of criminal activity, which then led to the government cracking down on their sale. So by the time Alex and I arrived, that particular well had all but dried up. The usual carriers also proved to be a problem because I would have to cancel the SIM card in person at the store, which, to make a long story short, didn’t work for a variety of reasons. For (roughly) 45 days and nights, I (sort of) searched high and low for a solution. Eventually, while Alex and our co-worker Jennie were eating lunch at the university, Jennie pointed out that the university had a store specifically for foreign students that sold SIM cards which canceled automatically. Finally, a reprieve from the digital desert I was stuck in.
Why was this such a big deal? Well, two main reasons: 1. Without a SIM card, I had no access to the Internet on the go, so no GPS, no translator, and no way to pay for anything that required scanning a QR code (which was extremely common), and 2. Without it, I had no Chinese phone number, which a whole lot of apps, services, and functions are tied to. Basically, without the SIM card, I was completely rudderless, but with it, I could finally go anywhere and do anything entirely on my own. In short, if you ever take a trip to China, there is only one bit of sage-like advice I have for you: GET A SIM CARD AT THE AIRPORT OR GET AN E-SIM, OR YOU WILL REGRET IT.
With that hurdle finally out of the way, the afternoon sun started to feel softer on my skin, and I felt like I could finally just enjoy that sense of exploration with nothing hanging over my head. And I was already presented with a great opportunity: I was in the middle of a Chinese university! The first thing that hits you is how dense the campus is. Beijing has, at least from my impression, a metric ton of universities, so how do they fit them all in? They pack in and they build up. In the center of campus was a colossal skyscraper that must have gone up 20 to 25 floors, smack-dab among buildings that weren’t even half its size. Apparently, this behemoth was a multi-purpose library and office space. I didn’t even realize it was part of the university until we were right up next to it. In front of that building was another goliath. To my eye, it looked like a modern take on ancient Egyptian architecture: a giant temple of brutalism, with an awning the same color as the building shading the ground below.
The other thing that was unmistakable was that it was graduation at the university. What a bittersweet feeling it was being there. My college graduation was only a little over a year ago, but getting to see people going through those final motions still felt nostalgic. That exact same scramble of friends, loved ones, and significant others getting a million and one pictures around campus to commemorate the occasion. You could see the look of pride on their faces, feel the excitement in the air, and experience that sense of completion through osmosis. Everyone was in such a rush to get one last look at everything before leaving it all behind, on to bigger and better things. I was somewhere so completely removed from that hot day in Southern Maryland, and yet the sensations were identical. So many memories came flooding back at once; small moments and minor details that had already slipped my mind were suddenly back in view. It made me realize that I didn’t stop and appreciate my own graduation enough. I was so focused on packing that I skipped a lot of goodbyes I really should have said. Just like now, I felt more like a passive observer of what was happening around me than someone actively engaged with all of it. At the end of the day, though, things never go as smoothly as we'd hope, and we never get as much time as we’d like. But it's that very same longing after the fact that makes those blissful memories more vibrant. Without the pressure of time, it'd be even easier to sleepwalk through life, never really appreciating all those little details. You can't have sunshine without rain.
It seems the weather agreed with my sentiment, because as we were leaving, the skies opened up, and that sunny day turned into a sudden, torrential downpour. Hours of lugging around that stupid umbrella finally paid off, and the sense of vindication was sensational. See, Alex, Jennie, and I were headed to a bar in a sort of open-air, mall-like complex: a giant pit in the ground for all the water to flow into, and we were going down into it. It's perhaps fitting that the bar was themed after an American dive bar, because, frankly, at that point it felt like I was deep-sea diving, and the presence of a pirate ship only helped cement that fact. Anyway, once we got to the bar, the bridge above us provided shelter from the rain, and we sat down and had a couple of drinks as the sound of the rain mixed with the bar's R’n’B Greatest Hits playlist. A good way to end the day: I was able to let the sights soak in while the rest of me dried off.
I Trick Therefore I Am:
Here’s a headline that found its way to my metaphorical desk the other day: “Google’s DeepMind CEO says there are bigger risks to worry about than AI taking our jobs.” (link) If you just read that headline, you would probably think he’s being dismissive of the very real risk AI poses to our very fleshy job market. Which, of course, he is, but when you actually read beyond the headline, it reveals something far more bizarre. Demis Hassabis talks about how his main concern is what happens when an artificial general intelligence (“AGI”) falls into the hands of bad actors.
Now, Hassabis is not the first notable figure to raise concerns about this technology; one of OpenAI’s co-founders has stated, “We’re definitely going to build a bunker before we release AGI.” (link) This topic has generated enough interest that an MIT-published research paper last year went over the laundry list of concerns this technology could create. (link) Combine that with a constant flood of articles and key figures who for years have been sounding the alarm over this exact concept: “THE END IS NEAR, AGI IS IMMINENT!” they shout. Well, let me be the first to inform you that no, it is not imminent. We can take a rain check on this particular apocalypse.
A survey of 445 AI researchers conducted by the Association for the Advancement of Artificial Intelligence found that 76% of respondents believed that scaling up current technology was either “unlikely” or “highly unlikely” to produce an AGI. (link) LLMs, even if they do act as a base for this kind of technology, cannot simply be made big enough to suddenly have a brain; real technological hurdles need to be crossed for an AGI to even be possible. There’s no set time frame for when that will happen, so any prediction comes with enormous uncertainty. In one survey of 1,712 experts, half put a 50% chance on an AGI existing by 2048. (link) In another, half of 31 superforecasters put that same 50% chance at 2070. (Id.) The odds of an AGI being made in the next 10 years, let alone 5, are incredibly low. So why would the CEO of Google’s AI division stoke these fears? Surely that can’t be good for business... right?
Here’s an even better question: What even is AGI anyway? Notice how I didn’t give a definition, but entirely on your own, you created something to fill in that space, like a game of Mad Libs on a long car ride. Don’t worry, this isn’t some “Gotcha!” moment, because even experts can’t agree on what constitutes AGI. For New York Times columnist Ezra Klein and former White House AI Special Advisor Ben Buchanan, it was “systems that can replace human beings in cognitively demanding jobs.” (link) Well, that is great news, readers, because under that definition, you can do a live demo of your very own AGI right now! Just turn on your phone and open the calculator app. Behold the technology of the future, and the best part is you didn’t even have to wait an entire decade to get it (don’t say you never got anything out of this blog).
Clearly, that definition is terrible, so how about one from somebody at the forefront of this technology, like Dario Amodei, CEO of Anthropic (an AI company with billions of dollars invested in it)? Well, firstly, Amodei rightly criticizes the term AGI as being tied to a “lot of sci-fi baggage and hype.” However, his replacement term, “powerful AI,” also suffers from being a vague idea of things that might possibly maybe happen. Here’s an exact quote: “an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently.” (link) So basically, it’ll look a lot like today’s technology except for all the parts that won’t, and it will be “smarter than a Nobel Prize winner across most relevant fields.” Wonderful, that really clears things up. Admittedly, I’m being overly uncharitable to a fairly nuanced description (which you should read), but I’m doing it to make a point about the lack of clarity. There is no singular definition for what qualifies as an AGI.
Lastly, let’s hear from Microsoft AI CEO Mustafa Suleyman, whose bar for AGI, in his own words, is “see[ing] if [an] AI can make $1 million.” (link) Finally, a bit of unintentional honesty. Suleyman was referring to an AI being given $100k and then taking the necessary steps to invest that money into a 10x return, but it shows what the actual end goal currently is: return on investment. Over the past week, the Director asked me to look over the plethora of Western media about AI, and it gave me an epiphany. AI, as a concept, has all of its marketing legwork already done for it. Books, TV shows, movies, games, toys, etc., all selling this mythic idea of AI. It is an idea so deeply ingrained in human culture that it has existed for thousands of years, with “automatons,” animated statues that could think and feel, appearing in Homer’s Iliad, composed sometime around the 8th century BCE. When I mentioned AGI, it probably made you think of Skynet, HAL 9000, AM, Replicants, or any of the other thousands upon thousands of pre-existing examples of AI in media. The more they are able to make you think about all those fictional examples, the less work they have to do to sell their product. The truth is, AGI is a marketing term that plays whatever role it needs to. Whether it conjures up the Jetsons or the Terminator, it makes you think about technology far more advanced than what is currently available to us, and that leads to a significant influx of capital.
Of course, that’s not to say that the technology they have isn’t incredible; it is. The problem is that the actual technology isn’t what is being sold to the general public or our policymakers; it's the illusion of something greater. They are selling the cultural concept of AI, and much like any great magic trick, it requires misdirection. Ignore the LLM behind the curtain and focus on AGI, the Great and Terrible. Not truly a technology, but a vague idea that is everything you’ve imagined and more. Without clear measurements, every new development can be framed as “getting closer to AGI.” With the “imminent” threat of some sort of evil AGI, it only makes perfect sense that your government should throw the weight of its resources behind getting there first, with as little regulation as possible and as much spending as necessary. That is part of why we're seeing an industry run rampant. Development of this technology is being portrayed as a power struggle, and thus any restrictions can be depicted as shackles that will simply stop the good guys from getting it first. This week, I wanted to write about the benefits AI can provide (of which there are many), but while doing that research, I noticed this consistently dishonest framing. I could not in good conscience write to you about those benefits without first tackling it. It is going to be exceptionally difficult to have a genuine discussion about regulating and specifically tailoring currently available technology to maximize the good it can provide to society when all eyes are pointed at a pie in the sky. So this week, I’ve played the role of Penn (though at this point in the blog, you probably wish I were Teller) and attempted to point out the truth behind the trick. Remember, people who offer easy solutions to complex problems are never giving you the full story, and just because a topic may seem outside your field does not mean you need to take their word as gospel, especially when that word's clearest effect is a direct benefit to them.
As always, thanks for reading,
Logan Smith