• mox@lemmy.sdf.org

    Using a tool like this to hide sections of code presented for review places a lot of trust in the automation. If Mallory were to discover a blind spot in the semantic diff logic, she could slip in a small change for eventual use in an exploit, and it would never be seen by another human.

    For example, consider this part of the exploit used in the recent xz backdoor, sketched below. In case you don’t spot the problem: the fix was a one-character deletion.
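
    Here is a simplified approximation of that change to xz’s CMake build (reconstructed from memory, not the verbatim commit): a Landlock feature check whose embedded test program is broken by a single stray character.

    ```cmake
    include(CheckCSourceCompiles)

    # Simplified sketch of xz's Landlock feature check, not the verbatim commit.
    check_c_source_compiles("
            #include <linux/landlock.h>
            #include <sys/syscall.h>
            #include <unistd.h>

            int main(void)
            {
                return syscall(SYS_landlock_create_ruleset, (void *)0, 0, 0);
            }
            .
    " HAVE_LINUX_LANDLOCK)
    ```

    The lone `.` before the closing quote is part of the C source string, so the test program never compiles, `HAVE_LINUX_LANDLOCK` is never defined, and the sandbox is silently skipped on every build. The upstream fix simply deleted that character.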

    Rather than hiding code from review, a tool that used semantic understanding to highlight code a human is likely to overlook (and which should therefore be reviewed more carefully) could conceivably help catch such things. A toy sketch of that idea follows.
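
    As a rough illustration (the function name and heuristics here are hypothetical, not any existing tool’s API), such a reviewer aid might scan the added lines of a diff for patterns humans reliably miss, such as invisible Unicode characters or punctuation-only lines like the one above:

    ```python
    import unicodedata

    # Zero-width and bidirectional-control characters: they render as nothing
    # (or reorder text) and are a classic way to sneak a change past a reviewer.
    INVISIBLES = {
        "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",
        "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
    }

    def flag_suspicious_lines(diff_text: str) -> list[tuple[int, str, str]]:
        """Return (line_no, reason, line) for added lines worth a closer look."""
        flagged = []
        for i, line in enumerate(diff_text.splitlines(), start=1):
            # Only inspect lines the diff adds, skipping the '+++' file header.
            if not line.startswith("+") or line.startswith("+++"):
                continue
            body = line[1:]
            if any(ch in INVISIBLES for ch in body):
                flagged.append((i, "invisible/bidi Unicode character", line))
            elif any(unicodedata.category(ch) == "Cf" for ch in body):
                flagged.append((i, "Unicode format character", line))
            elif body.strip() in {".", ",", ";"}:
                # A lone '.' added to xz's build script is what disabled the
                # Landlock check; punctuation-only lines deserve a second look.
                flagged.append((i, "punctuation-only line", line))
        return flagged

    # Example: the middle line mimics the stray '.' from the xz commit.
    for hit in flag_suspicious_lines("+    return 0;\n+\t.\n+}"):
        print(hit)
    ```

    Heuristics like these would obviously miss a determined attacker, but unlike a tool that collapses “uninteresting” hunks, they fail in the safe direction: at worst a reviewer looks at one extra line.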

    • FizzyOrange@programming.dev

      > If Mallory were to discover a blind spot in the semantic diff logic

      This is a very big stretch, IMO. That xz change wasn’t actually the exploit; it was just used to make the exploit harder to detect. And it was added by people with commit access, so it didn’t even have to go through code review.

      On top of that, code review is not magic. It’s easy for bugs hiding in plain sight to slip past it (if that weren’t the case, Linux would be bug-free!).

      Can you think of an actually realistic example?