Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”

  • lepinkainen@lemmy.world · 13 hours ago

    This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people’s privacy, and it still got shouted down.

    I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

    EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 and Apple is always the bad guy on Lemmy

    • lka1988@lemmy.dbzer0.com · 23 hours ago

      I have 5 kids. I’m almost certain my photo library of 15 years has a few completely innocent pictures where a naked infant/toddler might be present. I do not have the time to search 10,000+ pics for material that could be taken completely out of context and reported to authorities without my knowledge. Plus, I have quite a few “intimate” photos of my wife in there as well.

      I refuse to consent to a corporation searching through my device on the basis of “well just in case”, as the ramifications of false positives can absolutely destroy someone’s life. The unfortunate truth is that “for your security” is a farce, and people who are actually stupid enough to intentionally create that kind of material are gonna find ways to do it regardless of what the law says.

      Scanning everyone’s devices is a gross overreach and, given the way I’ve seen Google and other large corporations handle reports of actually-offensive material (i.e. they do fuck-all), I have serious doubts over the effectiveness of this program.

      • Ledericas@lemm.ee · 16 hours ago

        I’m not surprised if they’re also using AI, which is very error-prone.

    • Noxy@pawb.social · 1 day ago

      “it had a ridiculous amount of safeties to protect people’s privacy”

      The hell it did, that shit was gonna snitch on its users to law enforcement.

      • lepinkainen@lemmy.world · 13 hours ago

        Nope.

        A human checker would get a reduced-quality copy only after multiple CSAM matches, and police were only to be called if the human checker verified a positive match.

        Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked.
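
        A rough sketch of the flow being described, purely for illustration: the threshold value, function names, and behaviour below are made up, not Apple’s actual implementation.

        ```python
        # Toy version of "multiple matches, then human review, then maybe a report".
        # Nothing is escalated below the threshold, and a report is only filed if a
        # human confirms the matches, so a false positive dies at the review step.
        MATCH_THRESHOLD = 30  # illustrative number, not Apple's real value

        def human_review(reduced_quality_copy: bytes) -> bool:
            """Stand-in for a human checker looking at a low-resolution derivative."""
            return False  # placeholder decision

        def handle_matches(matches: list[bytes]) -> None:
            if len(matches) < MATCH_THRESHOLD:
                return  # below the threshold, nothing is escalated
            confirmed = [m for m in matches if human_review(m)]
            if confirmed:
                print(f"report filed for {len(confirmed)} confirmed matches")
            else:
                print("no report: reviewer did not confirm any match")

        handle_matches([b"low-res copy"] * 3)  # below threshold, nothing happens
        ```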

    • Natanael@infosec.pub · 1 day ago

      Apple had it report suspected matches, rather than warning locally

      It got canceled because the fuzzy hashing algorithms turned out to be so insecure it’s unfixable (easy to plant false positives)
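
      For anyone unfamiliar with the term, here is a minimal sketch of one common perceptual (“fuzzy”) hash, an average hash. It is not Apple’s NeuralHash, but it shows the relevant property: visually similar images land within a few bits of each other, which is also what makes crafted near-collisions feasible. Filenames below are hypothetical.

      ```python
      # Minimal average-hash sketch, illustrative only. Requires Pillow.
      from PIL import Image

      def average_hash(path: str, size: int = 8) -> int:
          """Downscale to size x size grayscale, then set one bit per pixel
          depending on whether it is brighter than the mean."""
          img = Image.open(path).convert("L").resize((size, size))
          pixels = list(img.getdata())
          mean = sum(pixels) / len(pixels)
          bits = 0
          for p in pixels:
              bits = (bits << 1) | (1 if p > mean else 0)
          return bits

      def hamming(a: int, b: int) -> int:
          """Number of differing bits between two hashes."""
          return bin(a ^ b).count("1")

      # Two images "match" when their hashes differ in only a few bits:
      # h1 = average_hash("original.jpg")
      # h2 = average_hash("crafted_lookalike.jpg")
      # print("match" if hamming(h1, h2) <= 5 else "no match")
      ```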

      • lepinkainen@lemmy.world · 13 hours ago

        They were not “suspected”; they had to be matches to actual CSAM.

        And after that, a reduced-quality copy was shown to an actual human, not an AI as in Google’s case.

        So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.

        • Natanael@infosec.pub · 11 hours ago

          Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.

          https://scaling-attacks.net/

          Also, even if those attacks were prevented, they’re still going to look through basically your whole album if you trigger the alert.
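
          To make the idea concrete, here is a minimal sketch of a nearest-neighbour scaling attack, the simplest case (the attacks documented at the link above also cover the smarter resamplers real systems use). Everything here is a toy stand-in: the attacker overwrites only the source pixels the downscaler will sample, so the full-size image looks untouched while the downscaled copy shows the planted content.

          ```python
          # Toy nearest-neighbour scaling attack, for illustration only.
          import numpy as np

          def nearest_indices(src_len: int, dst_len: int) -> np.ndarray:
              """Source index sampled for each destination pixel (pixel-centre mapping)."""
              return np.floor((np.arange(dst_len) + 0.5) * src_len / dst_len).astype(int)

          def downscale_nearest(img: np.ndarray, dst_h: int, dst_w: int) -> np.ndarray:
              rows = nearest_indices(img.shape[0], dst_h)
              cols = nearest_indices(img.shape[1], dst_w)
              return img[np.ix_(rows, cols)]

          def plant(cover: np.ndarray, target: np.ndarray) -> np.ndarray:
              """Overwrite only the pixels that downscaling to target's size will keep."""
              out = cover.copy()
              rows = nearest_indices(cover.shape[0], target.shape[0])
              cols = nearest_indices(cover.shape[1], target.shape[1])
              out[np.ix_(rows, cols)] = target
              return out

          cover = np.zeros((512, 512), dtype=np.uint8)     # stand-in for an innocent photo
          target = np.full((64, 64), 255, dtype=np.uint8)  # stand-in for planted content
          crafted = plant(cover, target)                   # only ~1.5% of pixels changed
          assert np.array_equal(downscale_nearest(crafted, 64, 64), target)
          ```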

          • lepinkainen@lemmy.world · 3 hours ago

            And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.

            No cops are called, no accounts closed

            • Natanael@infosec.pub · 2 hours ago

              The scaling attack specifically can make a photo sent to you look innocent to you and malicious to the reviewer; see the link above.

      • Clent@lemmy.dbzer0.com · 20 hours ago

        The official reason they dropped it was security concerns. The more likely reason was the massive outcry that occurs when Apple does these questionable things. Crickets when it’s Google.

        The feature was re-added as a child safety feature called “Communication Safety”, which is optional on child accounts and will automatically block nudity sent to children.

    • Modern_medicine_isnt@lemmy.world · 1 day ago

      Overall, I think this needs to be done by a neutral 3rd party. I just have no idea how such a 3rd party could stay neutral. Same with social media content moderation.