If you want to understand the state of the video game industry, there’s no better place to get a snapshot than the Game Developers Conference. The annual event held in San Francisco unites game professionals around the world to show off what they’re working on, share ideas with other developers, and diagnose the problems facing the industry. It’s also a battle royale for tech companies hungry to shape the future of development. In 2023, the show was flooded with Web3 startups selling big visions for NFTs and the Metaverse. When that buzz died down in 2024, AI fervor took its place, as companies like Inworld bought prime booth space in hopes of forcing a revolution.
Two years later, not much has changed. AI was still a major focus of this year’s GDC; major players like Nvidia and Google made their presence known throughout the halls of the Moscone Center. With well-attended panels on the subject and tons of startups pushing new tools on the show floor, it was undeniable that the generative AI craze has yet to loosen its grip on a vulnerable gaming industry in need of change. What was less clear, though, is how exactly the tech will shape game development. A hodgepodge showcase where no one seemed to have the same vision of the future showed why the topic of generative AI in gaming is so difficult to understand — and why so many developers say they’re avoiding the mess altogether.
Even before GDC’s show floor opened on Wednesday, AI was a dominant focus of the conference. Banners advertising AI tools were strewn about the Moscone Center, while the expo center’s West Hall was lined with booths selling practical use cases for the tech in game development. That charge was led by Google, which roped off a part of the West Hall’s second floor (which hosted organizations like the indie-focused nonprofit Day of the Devs in previous years) to showcase games being developed with Gemini components.
Each of the projects on display showcased a very different way of using AI. One demo was for a rudimentary top-down shooter that featured a voiced AI helper that would give players tips as they played. Every few seconds, a robotic voice butted in to tell players where a boss was on-screen, like a non-stop GPS. Another demo I tried was entirely focused on showing off AI-powered NPCs populating a very basic fantasy town setting. I could talk to any character I saw, typing up prompts that they would respond to, after a few seconds of generation, with chatbot-like text that struggled to give the townsfolk distinct personalities. Like a lot of attempts at AI NPCs, my conversations weren’t always coherent. When I walked into a tavern and saw someone sitting at a table in front of a fully cooked chicken, I asked the barkeeper if he could make me a quail. He refused and said the tavern doesn’t cook fowl.
A more creative use case came from an upcoming roguelike called You vs. Zombies. The project utilizes generative AI to let players create a custom hero that the game adapts around. By answering a few prompts about my hero, I created the Baja Blaster: an over-caffeinated hero with a soda bottle-shaped head and skin the color of Mountain Dew. My stats were determined based on that (low health, high speed), as were my spells (which included one called Mountain Due-Over). The flavor text between the action made references to my soda intake, even cracking a Voltage pun, while the game generated a health-conscious boss to act as a foil to my hero.
Once the show floor opened on Wednesday, attendees were met with a smattering of booths pushing very different AI tools and use cases. Nunu.ai showed off a QA automation system where developers could quickly build bug tests using AI, not dissimilar to how some web developers use Selenium to craft routine test scripts. Arcade AI, on the other hand, showcased a full AI game engine where developers could generate an entire environment, create assets, and even put together game logic from prompts. If that sounds too good to be true, it is. I tried a demo of a wave-defense first-person shooter that was quickly mocked up in the tool, which played like a student project from a design summer camp. (Though that didn’t seem to bother the professionals: I overheard the team talking to someone from Epic during my demo, who was curious about whether the tool could be integrated into existing game engines.)
3D model generation? AI agents to help out in programs like Blender? Coding support? Every booth I visited had a completely different idea for generative AI with varying degrees of practicality. There were AI-powered games too, but those didn’t leave the strongest impressions either. One title by developer Gamecury AI was a mystery game starring Sherlock Holmes, in which I communicated with Watson via written or vocal prompts. The settings menu allowed players to change which AI model was generating the dialogue on the fly. It was a bizarre tech showcase filled with slow voice acting, flat text responses, and inconsistent character logic. When I asked Watson if he’d make me tea, he happily agreed and asked if I wanted some biscuits too. Then he let me know that, actually, he can’t do any of that. Thanks, my dear Watson.
The myriad of use cases made it hard to parse just how far along the tech actually is at this point. The games I demoed, for instance, felt like small-scale proofs of concept rather than something you’d actively play outside a conference setting. That was also the case at GDC 2024, where Ubisoft and Nvidia showcased a demo that had players chatting with AI-powered NPCs. The flood of low-quality tools and games drowned out more down-to-earth applications that perhaps could fit into a development workflow. Tech companies believe that generative AI can do anything, and they’re throwing as many darts as possible in hopes that one sticks.
While images of the Google booth elicited some social media dunking during the week, there did seem to be a genuine interest in the tech at the show. A Google-hosted panel about DeepMind ended up being quite popular; Game File reported that the talk reached capacity, with at least 100 people turned away at the door. The curiosity is there, even if the sales pitches haven’t evolved much in two years.
You can’t blame consumers for being baffled by it all. As the buzzwords have become more invasive over the past few years, we’ve seen skeptical players getting their wires crossed as they try to parse the difference between generative AI and regular old AI in the classic video game sense. You can blame that on social media witch hunts — some proponents of the tech that I spoke to at GDC did just that — but the reality is that the tech industry is sowing that confusion itself. It’s not so dissimilar to the way that Web3, NFTs, and the Metaverse all fused into one nebulous topic in the early 2020s that poisoned the well for unrelated tech. There’s an education problem, but how is anyone supposed to learn when the top-down messaging is so garbled? If tech companies are going to keep pushing the tech, maybe it’s time for those companies to better understand what they’re trying to sell.
Maybe that’s why some developers I spoke to at the show were so quick to distance themselves from the tech entirely, rather than get into its nuances. When I broached the topic of AI with the writers of Zero Parades, the latest CRPG from Disco Elysium studio ZA/UM, they were quick to tell me that the studio isn’t using generative AI whatsoever, as if they knew the question was coming. They went as far as to say that Zero Parades doesn’t use any AI, period, pointing out that they don’t even have regular old AI baked into characters. Other studios I talked to similarly brushed past the topic as quickly as they could; after one interview, a PR person even told me that they could send me proper email statements about how the studio isn’t using AI. When the microphones were on, guards tended to go up.
But more casual conversations I had throughout the week were more nuanced. A few developers I spoke to voiced frustration with how difficult it is to actually talk about the tech and where it could be useful, characterizing it as a potentially valuable tool. Those conversations are lost when companies like Nvidia try to sell all-encompassing visions of AI that don’t give much thought to how it will impact jobs or take the artistry out of games. That’s been happening for years now, and it’s only getting more convoluted. Maybe Nunu.ai is on to something with its very focused idea of how AI can assist a small part of the QA process. But we can’t really have those conversations when AI-voiced NPCs keep interjecting over everything.
A lot of people at GDC would have you believe that AI is the future of gaming. I’m just not sure that they’re all living in the same present.