• Thorny_Insight@lemm.ee · 5 months ago

    placing the burden on the legal system to prove it to the contrary.

    That’s how it should be. Everyone is innocent until proven otherwise.

    • Stovetop@lemmy.world · 5 months ago

      Right, but what I am suggesting is that laws should be worded to criminalize any sexualized depiction of children, not just ones with a real victim. It is no longer as simple as it once was to prove that a photograph or video is actual CSAM with a real victim, which makes it easier for real abuse to avoid detection.

      • Thorny_Insight@lemm.ee · 5 months ago

        This same “think about the children” argument is used when advocating for things such as banning encryption as well, which in its current form enables the easy spreading of such content, AI-generated or not. I do not agree with that. It’s a slippery slope despite the good intentions. We’re not criminalizing fictional depictions of violence either, and I don’t see how this is any different. I don’t care what people are jerking off to as long as they’re not hurting anyone, and I don’t think you should either. Banning it hasn’t gotten rid of actual CSAM, and it sure won’t work for AI-generated content either. No one benefits from the police running after people creating and sharing fictional content.

        • Stovetop@lemmy.world · 5 months ago

          I think you’re painting a false equivalency. This isn’t about surveillance or incitement or any other pre-crime hypotheticals, but simply about adjusting what material is considered infringing in light of new developments that can prevent justice from being served in actual cases of abuse.

          How do you prove what is fictional versus what is real? Unless there is some way to determine with near-100% certainty that a given image or video is AI-generated and not real, or even that an AI-generated image wasn’t trained on real images of abuse, you invite scenarios where real images of abuse get passed off as “fictional content,” making it easier for predators to victimize more children.