There’s a place for AI in NPCs, but developers will have to know how to implement it correctly, or it will be a disaster.
LLMs can be trained on specific characters and backstories, or even “types” of characters.
If they are trained correctly, they will stay in character and react in more ways than any scripted character ever could.
But if the devs are lazy and just hook it up to ChatGPT with a simple prompt telling it to “pretend” to be some character, then it’s going to be terrible, like you say.
Now, this won’t work very well for games that are trying to tell a story, like Baldur’s Gate… instead, it’s better suited to open-world games where the player interacts with random characters that don’t need to follow specific scripts.
Even then it won’t be everything. Just because an LLM can say something “in-character” doesn’t mean it will line up with its in-game actions. So additional work will be needed to tie actions to the proper kind of responses.
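One way to do that tying (a minimal sketch; the function and action names here are hypothetical, not from any real game or API) is to ask the model for structured output and have the game validate the proposed action against what the engine actually supports before executing it:

```python
import json

# Hypothetical whitelist of actions the game engine can actually perform.
ALLOWED_ACTIONS = {"greet", "trade", "refuse", "attack"}

def parse_npc_reply(raw: str, fallback_line: str = "..."):
    """Validate a model reply of the form {"say": str, "action": str}.

    Returns (line, action). If the reply is malformed, fall back to a
    scripted line; if the action isn't one the engine supports, keep the
    dialogue but substitute a safe default action.
    """
    try:
        reply = json.loads(raw)
        line = str(reply["say"])
        action = str(reply["action"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return fallback_line, "greet"
    if action not in ALLOWED_ACTIONS:
        return line, "greet"  # keep the dialogue, discard the invalid action
    return line, action

# A well-formed reply passes through; a malformed one falls back.
print(parse_npc_reply('{"say": "Fine wares, traveler!", "action": "trade"}'))
print(parse_npc_reply('I cannot comply with that request.'))
```

The point of the validation layer is that the LLM never drives the game state directly; it only proposes, and the scripted systems dispose.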
If a studio is able to do it right, this has game-changing potential… but I’m sure we’ll see a lot of rushed work before anyone pulls it off well.
I think the issue is that games are games; an example that springs to mind is Caves of Qud’s Markov-chain generated books. I don’t mind them, but once I realized what they were, I stopped reading them. Unless it’s written by a developer, it doesn’t matter. They might as well be empty, unopenable items, like books in Dwarf Fortress, which get a description of their contents but no actual text from the passage.
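For anyone who hasn’t seen what’s under the hood of those books: a word-level Markov chain picks each word based only on the one before it, so the output is locally plausible but carries no authored meaning. This is just a rough illustration of the technique, not Caves of Qud’s actual implementation:

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Walk the chain: each next word depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break  # dead end: the last word never appeared mid-corpus
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the old salt marsh hides the old ruin and the salt hides the marsh"
chain = build_chain(corpus)
print(generate(chain, "the", 8))
```

Every adjacent word pair in the output did occur somewhere in the source text, which is exactly why it reads fine at a glance and says nothing once you look closely.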
Even random dialogue is interesting in games not only to “immerse” the player, but to receive messages and information from the developers; if it is randomly generated, it has no purpose. The game would only be improved by its absence.