• Melody Fwygon@lemmy.one · 11 months ago

    I don’t like the idea of a prompt being subtly manipulated like this to “force” inclusion. Instead, the training data should be augmented and the AI re-trained on a more inclusive dataset.
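
    For illustration, here’s a rough sketch of what “augmenting the training data” could look like: oversample underrepresented values of a labeled attribute so the model sees a flatter distribution. The attribute name and data shape here are made up, and real augmentation would involve more than duplication.

    ```python
    import random
    from collections import defaultdict

    def rebalance(samples, attribute):
        """Oversample so every value of `attribute` appears equally often."""
        buckets = defaultdict(list)
        for sample in samples:
            buckets[sample[attribute]].append(sample)
        target = max(len(group) for group in buckets.values())
        balanced = []
        for group in buckets.values():
            balanced.extend(group)
            # Pad smaller groups with random duplicates up to the largest group.
            balanced.extend(random.choices(group, k=target - len(group)))
        random.shuffle(balanced)
        return balanced

    # e.g. rebalance(training_metadata, attribute="skin_tone")
    ```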

    The prompt given by the user shouldn’t be prefixed or suffixed with additional words, sentences, or phrases, except to remind the AI what it is not allowed to generate.

    Instead of forcing “inclusivity” on the end user in such a manner, we should allow the user to pick skin tone preferences in an easy-to-understand way and let the AI process that signal as part of its natural prompt.
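
    Something like this rough sketch, where the only things ever added to the user’s text are their own explicit preference and the disallowed-content reminder (the names and prompt format here are made up):

    ```python
    # The one fixed addition: a reminder of what the service disallows.
    DISALLOWED_REMINDER = "Do not generate: <whatever the service disallows>."

    def build_prompt(user_text, tone_preference=None):
        parts = [user_text]
        if tone_preference:
            # e.g. "prefer lighter skin tones", chosen explicitly in the UI
            parts.append(f"Skin tone preference: {tone_preference}.")
        parts.append(DISALLOWED_REMINDER)
        return " ".join(parts)
    ```

    Nothing gets injected behind the user’s back; the preference is only there because they put it there.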

    Obviously, where specific characters are concerned, the default skin tone of the character named in the prompt should be encoded and respected. If multiple versions of that character exist, the AI should take the user’s skin tone selection into account and pick the closest matching version.

    If the prompt requests a skin tone alteration of a character, that request should obviously be honored too, and executed with the requested skin tone rather than the one selected in the settings. For example, I can select “Prefer lighter skin tones” in the UI and still ask the AI to generate a “darker skinned version” of a typically fairer-skinned character.
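
    Put together, the precedence would look something like this rough sketch (the tone scale, the parser, and all names are made up):

    ```python
    # Coarse ordinal tone scale, purely for illustration.
    TONE_SCALE = {"light": 0, "medium": 1, "dark": 2}

    def extract_tone_request(prompt):
        """Hypothetical parser: return a tone only if the prompt explicitly asks for one."""
        lowered = prompt.lower()
        for tone in TONE_SCALE:
            if f"{tone}er skinned" in lowered or f"{tone} skinned" in lowered:
                return tone
        return None

    def closest_version(versions, preference):
        """Pick the canon version whose tone is nearest the preferred tone."""
        return min(versions, key=lambda v: abs(TONE_SCALE[v] - TONE_SCALE[preference]))

    def resolve_skin_tone(prompt, canon_versions=None, ui_preference=None):
        explicit = extract_tone_request(prompt)
        if explicit:                      # 1. an explicit request in the prompt wins
            return explicit
        if canon_versions:                # 2. a named character's canon versions
            if ui_preference:
                return closest_version(canon_versions, ui_preference)
            return canon_versions[0]      # default canon tone listed first
        return ui_preference              # 3. the UI setting alone; may be None

    # resolve_skin_tone("a darker skinned version of X", ["light"], "light") -> "dark"
    ```

    The key point is the order: the prompt always outranks the settings, and the settings only fill in when the prompt is silent.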

    Instead of focusing on forcing “diversity” into prompts that didn’t ask for it, let’s just make sure the AI has the full range of human traits available to pull from.

    • flora_explora@beehaw.org · 11 months ago

      Yes. But this would probably cause friction with the general public: the AI would then output the full range of human traits, while people would still expect very narrow default outputs. And thinking more about it, what is the full range of human traits anyway? Does such a thing exist? Can we access it? Even if we only looked at the societies the AI is present in, not everyone is actually documented well enough for the AI to be trained on them. That’s partially the cause of the racist bias in AI in the first place, isn’t it? Because white cishet able-bodied people are proportionally much more frequently depicted in media.

      If you gave the AI a prompt, e.g. “a man with a hat”, what would you expect a good AI to produce? There are a myriad of choices to make, and a machine, i.e. the AI, will not be able to make all of them by itself. Will the result be a black person? Visibly queer or trans? In a wheelchair?

      I guess the problem really is that there is no default output for anything. But when people draw something, they do have a default option ready in their mind because of societal biases and personal experiences. So I would probably draw a white cishet man with a boring hat if I were to execute that prompt, because I’m unfortunately heavily biased, like we all are. And an AI, trained on our biases, would draw the same.

      But to repeat the question from before: what would we expect a “fair” and “neutral” AI to draw? This is really tricky. In the meantime, your solution is probably a good one, i.e. training the AI on more diverse data.

      (Oh, and I ignored the whole celebrity or known people thingy; your solution there is definitely the way to go.)