• Limiting Parole: A new law pushed by Louisiana Governor Jeff Landry cedes much of the power of the parole board to an algorithm that prevents thousands of prisoners from early release.
  • Immutable Risk Score: The risk assessment tool, TIGER, does not take into account efforts prisoners make to rehabilitate themselves. Instead, it focuses on factors that cannot be changed.
  • Racial Bias: Civil rights attorneys say the new law could disproportionately harm Black people in part because the algorithm measures factors where racial disparities already exist.
  • sem@lemmy.blahaj.zone · 3 days ago

    In fascism, the cruelty is the point. Using an algorithm to be cruel is an attempt to diffuse responsibility and dodge accountability. We can’t let society keep going in this direction, but how to oppose the allure of cruelty?

    • Zwuzelmaus@feddit.org · 3 days ago

      how to oppose the allure of cruelty?

      Value life itself, and health.

      Life and health are higher values than money. That’s where the need for public health insurance comes from. That’s where laws against murder etc. come from.

      But you can’t take this understanding for granted anymore. You have to think about it, and teach, and learn, and promote, and discuss it everywhere in society.

      Note that this does not depend on any political party. You are allowed :) to discuss values with people from both sides, and beyond — and you actually should.

      • taladar@sh.itjust.works · 3 days ago

        But that is the entire point of responsibility diffusion. Everyone knows murder is wrong, but what about 100 people each making 1% of the decisions that lead to a loss of life? What about 10,000 each making 0.01%? At which point does an individual feel sufficiently detached from the consequences of their actions to make choices that would go beyond their own morals if responsibility were concentrated on them alone?

    • Komodo Rodeo@lemmy.world · 3 days ago

      “Our system has determined that you’re no longer eligible for support, as your submission of Form 792B was 0:56 late. Please resubmit your application at your earliest convenience; our clients are important to us, and your well-being is our top priority. We’re currently experiencing a high volume of document submissions, and the processing time may take as long as 3 months. Please ensure that you have an adequate supply of life-sustaining medication during this time, as no other assistance will be available until completion of your file.”

    • technocrit@lemmy.dbzer0.com · 3 days ago

      Cruelty is a major foundation of capitalism. There is probably no way to violently enforce the valuation of capital over people without cruelty.

      If humanity wants to live in a cruelty-free society, or indeed to continue living on a habitable planet, then capitalism needs to end.

  • Komodo Rodeo@lemmy.world · 3 days ago

    “A Louisiana law cedes much of the power of the parole board to an algorithm…”

    The people responsible for this imbecilic sloughing off of responsibility need to be held to account. The stakes are too high to let them claim that “the brain warden got it wrong, but it’s out of our hands.” These are the types of situations that warrant standing directly in front of someone, forcing them to justify the damage they’ve caused, and making them explain how they’re going to rectify their fuck-up and compensate the affected parties. What I’m hearing is alarmingly similar to news about current problems with medical insurance and the labour market.

    • taladar@sh.itjust.works · 3 days ago

      Our age will eventually (assuming our species survives) be known as the age of responsibility diffusion. Companies, bureaucracies, AI: they are all just different mechanisms for achieving the same thing, detaching people from any sense of responsibility for the outcomes of their horrible choices.

    • piefood@piefed.social · 3 days ago

      If the responsibility is being offloaded to “the algorithm,” then doesn’t that mean they have less responsibility? Shouldn’t they be paid less if they have less responsibility?

      Something tells me they won’t see the logic in that though.

      • Komodo Rodeo@lemmy.world · 3 days ago

        “If you don’t pay me the big bucks for my experience and expertise, who else will tell the robot how to destroy your life?”

  • aleq@lemmy.world · 3 days ago

    Automating this system with some kind of algorithm is not right, but the framing here is weird — a nearly blind 70-year-old can still do damage.

    • entwine413@lemm.ee · 3 days ago

      I know for a fact they’ve released “harmless old men” who went out almost immediately and killed someone.

    • technocrit@lemmy.dbzer0.com · 3 days ago

      The angle makes complete sense if you understand it: one reason that “AI” automation is bad is that it labels blind 70-year-olds as dangerous.

      • entwine413@lemm.ee · 3 days ago

        Blind 70-year-olds can still be dangerous. Being blind and old doesn’t prevent that.

        They’re not saying that offloading the responsibility to an algorithm is good; they’re saying it’s weird to assume a person is harmless based on nothing but two attributes.

      • thebestaquaman@lemmy.world · 3 days ago

        I agree on a general basis that it’s bad that these kinds of decisions are offloaded to an AI. A human should be the one to consider whether the blind 70-year-old is dangerous, because they definitely can be.

        Operating a vehicle or a weapon requires neither eyesight nor a clear mind if you don’t intend to do it safely.