• 1 Post
  • 16 Comments
Joined 1 year ago
cake
Cake day: June 21st, 2023





  • It’s frustrating, because I have pronouns after my name and I dislike hexbear… a lot. It would be a good idea to have users give pronouns and attach them automatically.

    Their behaviour has made me constantly check whether people with pronouns after their names are part of hexbear before engaging in any threads, because of the stress of dealing with them :/. Sometimes I do engage anyway and immediately regret it /shrug

    It is depressing, because normally using pronouns like this indicates trans supportiveness, so I feel better about conversing with people who have them in their names. Hexbear has ruined this with their behaviour around all other topics, and sometimes trans topics too.

    Just hope Jerboa gets instance-blocking features soon ;p, then I can block them on both my lemmy accounts.


  • Something that might be useful long term is training an AI model to identify CSAM and releasing the weights so admins can use it to check images. The main problem is finding a way to do this without storing those kinds of images or videos :/

    My understanding is that right now, the main mechanisms rely on several central databases of perceptual hashes of known CSAM material. The problem is that this ends up being a whack-a-mole solution, and at least in theory governments could use these databases to censor copyrighted or more generally “unapproved” content, though I imagine such a db would lose trust quickly, and I’m not aware of this being an issue in practice.
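
    To make the perceptual-hash idea concrete, here is a minimal, self-contained sketch of an “average hash” (aHash), one of the simplest perceptual hashing schemes. It assumes the image has already been downscaled to an 8x8 grayscale grid; the function names and the tiny synthetic “images” are illustrative, not any real database’s scheme.

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel of an 8x8 grayscale grid,
    set when the pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    """Number of differing bits; small distance means perceptually similar."""
    return bin(h1 ^ h2).count("1")

# Two synthetic near-identical 8x8 "images": a brightness gradient, and the
# same gradient uniformly brightened by 3. They hash to the same value,
# which is the point of perceptual (rather than cryptographic) hashing.
img_a = [[10 * (r + c) for c in range(8)] for r in range(8)]
img_b = [[10 * (r + c) + 3 for c in range(8)] for r in range(8)]
```

    Real systems use more robust schemes (DCT-based hashes, PhotoDNA, etc.), but the matching step is the same idea: compare hashes with a distance threshold instead of requiring exact equality.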

    One potential solution is “opportunistic training”: when new CSAM material gets identified and submitted to the FBI or these databases by various server admins, a small amount of training is done on the AI weights before the image or video is deleted and only a perceptual hash remains. Likewise, if a picture is reported as “known CSAM” by these dbs, you do the same thing with that image before it gets deleted.

    To avoid false positives, you also train the AI on general non-CSAM content.

    Ideally this process would be fully automated so no-one has to look at that shit - over time, you’d theoretically get a neural net capable of identifying CSAM reliably, with few or no false positives or false negatives. Admins could also try some kind of distributed training, where each contributes weight deltas from local training, or each builds up LoRA-style improvement modules that people combine, reducing the bandwidth needed to share modifications.
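
    The “combine weight deltas from each admin” step could look something like the averaging below. This is a hypothetical sketch, not any existing federation protocol: it assumes each admin ships a dict mapping parameter names to lists of floats (the change their local training made), and the merged model is the shared base plus the average of the contributed deltas.

```python
def merge_deltas(base, deltas):
    """Apply the element-wise average of several admins' weight deltas to a
    shared base model. `base` and each delta are dicts of
    parameter-name -> list of floats; admins may omit parameters."""
    merged = {name: list(weights) for name, weights in base.items()}
    for name, weights in merged.items():
        updates = [d[name] for d in deltas if name in d]
        if not updates:
            continue  # no admin touched this parameter
        avg = [sum(vals) / len(updates) for vals in zip(*updates)]
        merged[name] = [w + u for w, u in zip(weights, avg)]
    return merged
```

    Averaging deltas like this is the core of federated-averaging-style schemes; the appeal here is that admins only exchange small updates (or LoRA-style low-rank modules), never the training images themselves.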


  • “Normal” is a social construct that hardly anyone actually fits. Most people have at least some major traits that diverge from the average.

    The reason people dislike the use of “normal” is that it usually carries the connotation that being outside whatever is described as “normal” is bad, and describing a group as “abnormal” is usually meant as an insult and used to dehumanise.

    I’m not ashamed of being trans regardless of whether it’s “““normal””” ^.^, and I don’t think being whatever our society deems “““normal””” is even desirable - though as I said before, most people are likely outside society’s definition of a “““normal””” person in at least a couple of categories.