It’s not an article about LLMs not using dialects. In fact, they have learned said dialects and will use them if asked.
What they did was, ask the LLM to suggest adjectives associated with sentences - and it would associate more aggressive or negative adjectives with African dialect.
Seems like not a bias by AI models themselves, rather a reflection of the source material.
All (racial) bias in AI models is actually a reflection of the training data, not of the modelling.
Actually, much of that description, perpetuated by dystopian novels, is pretty far off the mark - and it’s the kind of mischaracterization that makes it harder to fight back against authoritarian governments.
The fact is, the vast majority of people in authoritarian states live ordinary lives, just like everywhere else. That’s part of what makes these governments so resilient. If everyone in there lived a nightmare, they wouldn’t last for decades, they’d collapse at the first sign of instability. After all, there are a lot more people than government officials.
For example, a canny authoritarian government won’t disappear anyone who steps out of line. Instead, they’d provide a “safe, legitimate” way to step out of line, that’s well regulated and doesn’t pose a threat to the government, but serves as an outlet. And most people will be satisfied with it. That’s both more subtle, and more effective, that instilling fear in everyone’s heart.