A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, a case that highlights how readily generative AI can be turned to nefarious ends.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, producing dramatic footage of law enforcement leading McCorkle, still in his work uniform, out of the theater in handcuffs.

  • emmy67@lemmy.world · 2 months ago

    Once again you’re showing the limits of AI. A dragon exists in fiction; it exists in the mind of someone drawing it. In AI there is no mind, so the concept cannot independently exist.

    • ContrarianTrail@lemm.ee · 2 months ago

      AI is not creating images in a vacuum. There is a person using it, and that person does have a mind. You could come up with a brand-new mythical creature right now; let’s call it AI-saurus. If you asked the AI to create a picture of AI-saurus, it wouldn’t be able to, because it has no idea what one looks like. What you could do instead is describe it to the AI, and it’ll output something that more or less resembles what you had in mind. Whatever flaws you see, you can correct with a new, modified prompt, and you keep doing this until it produces something that matches the idea you had in mind (the code sketch after this comment illustrates the loop). AI is like a police sketch artist: the outcome depends on how well you manage to describe the subject. The artist doesn’t need to know what the subject looks like; they have a basic understanding of human facial anatomy, and you’re filling in the blanks. This is what generative AI does as well.

      The people creating pictures of underage kids with AI are not asking it to produce CSAM. It would most likely refuse to do so and may even report them. Instead, they describe what they want the output to look like, arriving at the same end result by a different route.
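
      A minimal sketch of that refinement loop, assuming the Hugging Face diffusers library, a CUDA GPU, and a Stable Diffusion checkpoint; the checkpoint ID, prompts, and file names here are illustrative assumptions, not anything from the article or this thread:

      ```python
      # A sketch of the "describe, look, refine" loop described above.
      # Assumes the `diffusers` library and a CUDA GPU; the checkpoint
      # ID, prompts, and file names are illustrative only.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
      ).to("cuda")

      # The model has never seen an "AI-saurus", so each prompt describes
      # it in terms of things the model *has* seen, refined round by round.
      prompts = [
          "a reptilian creature with metallic scales, full body, studio photo",
          "a bipedal reptilian creature with metallic scales and glowing "
          "blue circuit patterns along its spine, full body, studio photo",
          "a bipedal reptilian creature with metallic scales, glowing blue "
          "circuit patterns along its spine, and a fan-shaped tail, full body",
      ]

      for round_num, prompt in enumerate(prompts, start=1):
          image = pipe(prompt).images[0]
          image.save(f"ai-saurus_round{round_num}.png")
          # In real use, the person inspects each image and edits the next
          # prompt by hand; this fixed list just stands in for that feedback.
      ```

      Each round tightens the description of a creature the model has never seen, which is the police-sketch dynamic described above.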

      • emmy67@lemmy.world · 2 months ago (edited)

        You’re right, it’s not. But it needs to know what things look like, and, once again, it’s not going to without having seen them. Sorry dude, either CSAM is in the training data and it can do this, or it’s not. But I’m pretty tired of this. Later, fool.