A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger posed by generative AI and how readily it can be put to nefarious uses.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, producing dramatic footage of law enforcement leading McCorkle, still in his work uniform, out of the theater in handcuffs.

  • KillerTofu@lemmy.world (+15/−52) · 2 months ago

    How was the model trained? Probably on existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t erase the exploitation of the real children it all traces back to.

    So no, you are making a false equivalence with your video game metaphors.

    • fernlike3923@sh.itjust.works (+56/−1) · 2 months ago

      A generative AI model doesn’t need the exact thing it creates to be in its training data. It most likely just combined ordinary nudity with pictures of children.

      • finley@lemm.ee (+9/−19) · 2 months ago

        In that case, the images of children were still used without their permission to create the child porn in question.

        • MagicShel@programming.dev (+31/−4) · 2 months ago

          That’s not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

          Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.

          • finley@lemm.ee (+6/−16) · 2 months ago

            Why does it need to be “nuanced” to be valid or correct?

        • fernlike3923@sh.itjust.works (+7/−2) · 2 months ago

          That’s a separate issue from the AI model being trained on CSAM. I’m currently neutral on this topic, so I’d recommend replying to the main thread.

            • fernlike3923@sh.itjust.works (+15/−2) · edited · 2 months ago

              There’s no CSAM in the training dataset, just pictures of children/people that are already publicly available. That makes this a question about the copyright side of AI rather than illegal training material.

        • CeruleanRuin@lemmings.world (+8/−6) · 2 months ago

          Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.

    • macniel@feddit.org (+26/−1) · 2 months ago

      Can you or anyone verify that the model was trained on CSAM?

      Besides, an LLM doesn’t need explicit content to draw from in order to create a naked child.

      • KillerTofu@lemmy.world (+5/−27) · 2 months ago

        You’re defending the generation of CSAM pretty hard here, with a vague “but no child we know of was involved” as your defense.

        • macniel@feddit.org (+15/−2) · 2 months ago

          I just hope that the models aren’t trained on CSAM. That would make generating stuff they can fap to ““ethically reasonable,”” as no children would be involved. And I hope that those who have those tendencies can be helped one way or another that doesn’t involve chemical castration or incarceration.

    • Diplomjodler@lemmy.world (+13/−1) · 2 months ago

      While I wouldn’t put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don’t think that is how this stuff works.

    • grue@lemmy.world (+3/−8) · 2 months ago

      But the AI companies insist the outputs of these models aren’t derivative works in any other circumstances!