  • Most of the time I just copy/paste the terminal output and say ‘it didn’t work’ and it’ll come back with ‘I’m sorry, I meant [new command]’.

    It isn’t something that I’d trust to run unattended terminal commands (yet), but it is very good when you’re just like ‘Hey, I want to try to install pihole today, how do I install and configure it’, or ‘Here’s my iptables entry, why can’t I connect to this service’ … ‘Ok, give me the commands to update the entry to do whatever it was you just said’.





  • I think a lot of the issue is the actual term. Defederation sounds like a lofty thing that we’re inflicting on a server. It’s just a block. Like when you block a person or community on this instance: they can still type messages and they’re still on the instance, but you can’t see them.

    If I’m running an instance, then defederation is basically me choosing to insert an entry onto your personal block list. You may like a certain type of humor and I think it’s annoying. You may like Popping videos but I find them gross. I can choose, on my own, to block those things, and my blocking Popping videos or dead baby joke communities is my personal choice.

    But if I choose to add those items to YOUR block list, then suddenly I’m in the wrong. It isn’t up to me to say you can’t like Popping videos (even if I find them gross) and I can’t tell you that you can’t read those dead baby jokes that you really laugh at (even if I think they’re offensive).

    So why even allow a feature like defederation? Because there is some content that we ALL wouldn’t mind having blocked. It’s unanimous that nobody wants spam in their feed, no matter their position on Popping videos or dead baby jokes. People don’t want to see CSAM in their feed. Nobody wants random private data about people posted in their feed. In THOSE very limited cases, the ability of the instance admin to add an item to your block list is a positive feature. You only need a small group of people (moderators and admins) to detect and block abusive material, and the benefit of their work is shared by every single person on the instance.

    Instead, we have people advocating that we use defederation to impose their personal (or their group’s) viewpoint on every other person on the same instance. This would be like me using my power to block spam instances in order to decide that you can’t watch those Popping videos that you love so much. Suddenly this formerly useful tool is being used by others to curate what you’re allowed to see on social media.

    As far as Facebook goes, I imagine a lot of people would want to see Facebook content via Lemmy. There will be instances that don’t defederate, and those instances will see most of the user growth because they offer users both Fediverse and Facebook content… any instances that block Facebook will simply have a slightly different Fediverse with fewer people and less content.

    The average user simply doesn’t care about joining the battle against the corporate overlords; they’re looking for the app that lets them see funny videos the easiest. Having all of the motivated ideological users in their own isolated bubble will ensure that Meta’s section of the Fediverse can more easily be taken over by EEE (embrace, extend, extinguish). Meta will be the only developer building features for the version of ActivityPub used in their network, so it will likely be adopted faster. Not having people developing FOSS versions of ActivityPub extensions, apps, and tools that directly compete with Meta will create friction for people who want to transition away from Meta services and ensure their continued market dominance.

    Federate with them, develop better tools and features, and then take their users away. Providing a better social network is how you beat Meta.

    TL;DR

    1. Defederation isn’t the tool for this kind of ideological splintering; and

    2. Not federating with Meta services will ensure that they get all of the benefits of an open protocol without any competition for their userbase.


  • It still doesn’t change the very basic math: Meta has billions of users, while the existing Fediverse, across all services, still numbers in the millions.

    A social network is only as strong as the size of its network. If you’re trying to get an average person to join an instance, are they going to want to join one with access to a few million people, or one that can contact most of the planet?

    Cutting an instance off from the largest userbase of any service on the Internet is suicide.

    There are guaranteed to be instances that do not defederate from Meta, so users looking to escape Meta will move to those independently owned instances: they can get off of Meta services without losing contact with the users and groups they were already following.

    It is disheartening to see how often defederation is offered as a solution to any given problem or grievance. This mindset ensures that the network will be an ideologically fragmented mess instead of a single open social network.


  • If we could ensure 100% compliance with a Meta blockade then I’d be for it.

    However, that isn’t going to happen, and any instances that do federate with Meta will be the part of the Fediverse that is visible to billions of people. Those instances will become the dominant instances for people who want to get away from Meta but still access Fediverse services. Lemmy, as it stands now, is only a few million people at most. We simply do not have the weight to throw around on this issue.

    It is inevitable that commercial interests will join the Fediverse, and the conversation should be about how we deal with that inevitability rather than attempting to use defederation as a tool to ‘fix’ every issue.




  • It seems inevitable that some kind of ID system will be needed online. Maybe not a real ID linked to your person, but some sort of hard-to-obtain credential. That way, getting a credential banned is inconvenient, and posts without an ID signature can be filtered easily.
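
    As a toy illustration of the signature idea, here is a sketch in Python using the third-party `cryptography` package. The key-registration flow and names here are my own assumptions for illustration, not an existing Fediverse mechanism:

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Hypothetical registry of credentials the instance has issued.
    registered_keys: dict[str, Ed25519PublicKey] = {}

    def register(account: str) -> Ed25519PrivateKey:
        """Issue a hard-to-obtain credential; the private half stays with the user."""
        key = Ed25519PrivateKey.generate()
        registered_keys[account] = key.public_key()
        return key

    def post_is_trusted(account: str, body: bytes, signature: bytes) -> bool:
        """Filter rule: unsigned posts or posts from unknown keys are dropped."""
        pub = registered_keys.get(account)
        if pub is None:
            return False
        try:
            pub.verify(signature, body)
            return True
        except InvalidSignature:
            return False

    # Usage: a registered account passes, a forged signature does not.
    alice_key = register("alice")
    msg = b"hello fediverse"
    assert post_is_trusted("alice", msg, alice_key.sign(msg))
    assert not post_is_trusted("mallory", msg, b"\x00" * 64)
    ```

    Banning the credential then just means deleting its entry from the registry, and everything it signed can be filtered retroactively.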

    It used to be that spam was fairly easy for a human to detect; it may have been hard to automate that detection, but a person could generally tell what was a bot and what wasn’t. Large language models (like GPT-4) can make spam accounts appear to hold real conversations, just like a person.

    The large-scale use of such systems provides the ability to influence people en masse. How do you know you’re talking to people and not GPT-4 instances arguing for a specific interest? The only real way to solve this is to create some sort of system where posting has a cost associated with it, similar to how cryptocurrencies use a proof-of-work system to ensure that the transactional network isn’t spammed.

    Having to perform computationally heavy cryptography, using a key that is registered to your account, prior to posting would massively increase the cost of such spamming operations. Imagine if your PC had to solve a problem that took 5+ seconds before your post went through. It wouldn’t be terribly inconvenient for you, but for a person trying to post from 1,000 different accounts it would be a crippling limitation that would be expensive to overcome.
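
    To make ‘expensive to post, cheap to verify’ concrete, here’s a minimal hashcash-style proof-of-work sketch in Python. It’s purely illustrative: the difficulty constant, message format, and function names are my assumptions, not part of ActivityPub or any real Fediverse protocol, and a real system would also bind the puzzle to the account’s registered key and a timestamp so solutions can’t be reused.

    ```python
    import hashlib
    import time

    # Leading zero bits required of the hash; tune so one post costs a few
    # seconds on a typical PC. (Illustrative value, not from any real protocol.)
    DIFFICULTY = 22

    def solve_pow(message: str, difficulty: int = DIFFICULTY) -> int:
        """Brute-force a nonce so sha256(message:nonce) has `difficulty` leading zero bits."""
        target = 1 << (256 - difficulty)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify_pow(message: str, nonce: int, difficulty: int = DIFFICULTY) -> bool:
        """Checking a solution is a single hash, so the server's cost stays negligible."""
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

    if __name__ == "__main__":
        post = "account-pubkey-goes-here:hello fediverse"  # hypothetical message format
        start = time.time()
        nonce = solve_pow(post)
        print(f"solved in {time.time() - start:.1f}s with nonce {nonce}")
        assert verify_pow(post, nonce)
    ```

    One legitimate post costs one puzzle; a spammer driving 1,000 accounts pays that cost 1,000 times over, while verification stays a single hash on the server side.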

    That would fight spam effectively, but it wouldn’t do much to filter content.