A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger of an increasingly ubiquitous technology being put to nefarious use.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage: officers led McCorkle, still in his work uniform, out of the theater in handcuffs.

  • emmy67@lemmy.world · 2 months ago

    But you do know, because corn dogs as depicted in the picture don’t exist, so there couldn’t have been photos of them in the training data, yet it was still able to create one when asked.

    Yeah, except Photoshop and artists exist. And a quick Google image search will find them. 🙄

    • ContrarianTrail@lemm.ee · 2 months ago

      And this proves that AI can’t generate simulated CSAM without first having seen actual CSAM how, exactly?

      To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
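
      For illustration, here’s a rough sketch of that doodle-to-photo step using the open-source diffusers library. It’s a minimal sketch under assumptions: the model name, file names, and prompt are placeholders, not a claim about any specific tool.

      ```python
      # Minimal img2img sketch: turn a rough doodle into a cleaner image.
      # Model, paths, and prompt are illustrative placeholders.
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      doodle = Image.open("doodle.png").convert("RGB").resize((512, 512))
      result = pipe(
          prompt="a photorealistic corn dog on a plate",
          image=doodle,
          strength=0.75,       # how far the model may stray from the doodle
          guidance_scale=7.5,  # how strongly the prompt steers the output
      ).images[0]
      result.save("refined.png")
      ```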

      • emmy67@lemmy.world · 2 months ago

        I wasn’t the one attempting to prove that, though I think it’s definitive.

        You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

        To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.

        To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, many of them, where CSAM is in the training data. Don’t believe me?

        Here’s a link to it

        • ContrarianTrail@lemm.ee · 2 months ago

          You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

          I don’t understand how you could possibly imagine that pic somehow proves your claim. You’ve made no effort to explain yourself; you just keep dodging my questions when I ask. A shitty Photoshop of a “corn dog” has nothing to do with how the image I posted was created. It’s a composite of a corn cob and a dog.

          Generative AI, just like a human, doesn’t rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn’t in its training data.
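
          As a concrete illustration, here’s a minimal text-to-image sketch with the open-source diffusers library (the model name and prompt are just examples, not a claim about how the image I posted was made): a single prompt can combine concepts that never co-occurred in the training data.

          ```python
          # Minimal text-to-image sketch: compose two familiar concepts
          # ("dog" and "corn") into an image the model has never seen.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          image = pipe("a dog whose body is a golden corn cob, photo").images[0]
          image.save("corn_dog.png")
          ```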

          Because we have actual instances, many of them, where CSAM is in the training data.

          If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that’s a whole other discussion to be had. That, however, is not what we’re talking about here.

          • emmy67@lemmy.world · 2 months ago

            Generative AI, just like a human, doesn’t rely on having seen an exact example of every possible image or concept

            If a human has never seen a dog before, they don’t know what it is or what it looks like.

            If it’s the same as a human, it won’t be able to draw one.

            • ContrarianTrail@lemm.ee · 2 months ago

              And you continue to evade the questions challenging your argument.

              How was the first ever painting of a dragon created? You couldn’t possibly draw something you’ve never seen before, right?

              • emmy67@lemmy.world · 2 months ago

                Once again you’re showing the limits of AI. A dragon exists in fiction; it exists in the mind of someone drawing it. In AI there is no mind, so the concept cannot independently exist.

                • ContrarianTrail@lemm.ee · 2 months ago

                  AI is not creating images in a vacuum. There is a person using it, and that person does have a mind. You could come up with a brand-new mythical creature right now; let’s call it an AI-saurus. If you ask the AI to create a picture of an AI-saurus, it won’t be able to, because it has no idea what one looks like. What you could do, however, is describe it to the AI, and it’ll output something that more or less resembles what you had in mind. Whatever flaws you see, you can correct with a new, modified prompt, and you keep doing this until it produces something that matches the idea you had in mind. AI is like a police sketch artist: the outcome depends on how well you describe the subject. The artist doesn’t need to have seen the subject; they have a basic understanding of human facial anatomy, and you’re filling in the blanks. This is what generative AI does as well.
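
                  That describe-and-correct loop is easy to sketch in code. Again, this is a minimal sketch: diffusers is just a stand-in, and the AI-saurus prompt is made up.

                  ```python
                  # Human-in-the-loop refinement, like the sketch-artist analogy:
                  # the person keeps amending the description until it matches.
                  import torch
                  from diffusers import StableDiffusionPipeline

                  pipe = StableDiffusionPipeline.from_pretrained(
                      "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
                  ).to("cuda")

                  prompt = "an AI-saurus: a bipedal reptile with circuit-board scales"
                  while True:
                      pipe(prompt).images[0].save("attempt.png")  # inspect attempt.png
                      fix = input("What should change? (empty line to accept) ")
                      if not fix:
                          break
                      prompt += ", " + fix  # fold the correction into the description
                  ```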

                  The people creating pictures of underage kids with AI are not asking it to produce CSAM. It would most likely refuse and may even report them. Instead, they describe what they want the output to look like, arriving at the same end result by a different route.

                  • emmy67@lemmy.world · 2 months ago

                    You’re right, it’s not. It needs to know what things look like, which, once again, it isn’t going to without knowing what those things look like. Sorry dude, either CSAM is in the training data and it can do this, or it’s not. But I’m pretty tired of this. Later, fool