oh, I often do this too, with emails: I compose one for a long time, and it changes a lot along the way
Curiously, there seems to be a psychological factor behind this: when we’re composing emails, we are focused on a single mail. Email composition boxes are often bigger and wider than those on social networks, and they often appear as fullscreen textareas (separated from the mail being replied to, if there is one). That’s possibly why this seems easier to do with email composing. A tip? Notepad apps (such as Noto, Sketchbook, Joplin or even mainstream ones such as Google Keep) can mimic an email composition box. My previous reply to you was initially written in Noto, until I transferred it to PC. Perhaps this could help if you wish to apply the same habit to the fediverse.
that, or what reddit does: replace the username with “deleted”
In a sense, yeah, replacing the username with a “deleted” placeholder (automatically, when a person chooses to delete their own account) is also an interesting solution. However, there are some things I forgot to mention in my previous replies, one of which is the GDPR’s “right to be forgotten”, which could pose a legal obstacle for such a solution if, like Reddit, content is restored against the user’s will. For context: when people left Reddit for the fediverse/Lemmy, Reddit undid many of the deletions, both to astroturf its own platform (make it appear to have a larger userbase than it actually does) AND to train corporate AI on all the posts and comments. This probably has led, or will lead, to legal issues if people eventually find a contradiction in Reddit’s ToS legalese and decide to sue Reddit on the basis of GDPR rights.
At the end of the day, it’s a complicated matter, because there seems to be no easy solution that respects both the community AND the user behind the content while complying with certain laws out there, especially when things can change unexpectedly in the future (e.g. corporate AI managing to haunt the fediverse) and lead people to decide on nuking entire posts.
@Fletcher Not only is it a gold mine for scrapers (AI-purposed or whatnot), but even content deleted from the fediverse (and, by extension, Lemmy) continues to appear out there (e.g. in Google Search), be it through federated instances or through direct scraping.
I have a personal example of that: I deleted my Lemmy account. Still, much of my content lingers on Google and other search engines, via instances I had never seen before.
However, it’s not because the fediverse is open: it’s because of how the Web (or, at least, the clearnet) works. If someone can access something, it can become available for others to access. When even DRM-protected, paywalled content still ends up openly accessible somewhere, it’s no surprise fediverse content can, too. Everything done on the clearnet ends up in many places simultaneously, outlasting any deletion: the Internet Archive is a common place to find such digital ghosts.
While it seems ominous, it is thanks to this very nature that a lot of important and/or useful content can still be accessed (e.g. certain scientific papers and studies that were politically removed by a government, certain old/ancient games that fell into corporate/market oblivion, certain books from long-gone publishers).
To quote Cory Doctorow: “Scraping against the wishes of the scraped is good, actually”. The problem isn’t scraping, but the intentions of whoever uses the scraped content, particularly if that “whoever” is a corporation (such as Google or Microsoft).
Problem is: in the eyes of a webmaster, well-intentioned scraping is indistinguishable from corporate scraping. They’re all broad GETs (i.e. akin to the “all the things” meme), perhaps differing in scale, distribution and frequency, but broad GETs nonetheless (see the sketch below). People have been setting up Anubis (the libre PoW challenge) or Cloudflare (the MitM corporation) to fend off AI crawling, but that also makes their sites prone to oblivion when, say, their server disappears forever one day, taking all their content to the realm of /dev/null: much of it unique, useful content, gone because no archiving tool (e.g. the Internet Archive) could reach it.
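To illustrate what I mean by “broad GETs looking the same”, here’s a minimal, hypothetical sketch in Python (the URL and the User-Agent strings are made up): at the HTTP level, an archivist’s request and a corporate crawler’s request are the same plain GET, and the only hint of intent is a freely spoofable header.

```python
# Hypothetical sketch: why a webmaster can't tell "good" scraping from "bad".
# Both an archivist and a corporate crawler arrive as the same broad GET.
import urllib.request

def fetch(url: str, user_agent: str) -> bytes:
    # The only self-declared signal of intent is the User-Agent header,
    # which any client can set to anything it likes.
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

page_url = "https://example.com/some-post"  # hypothetical URL

# An archivist preserving content and a corporate crawler harvesting training
# data send requests that differ only in that one header string:
archive_copy = fetch(page_url, "FriendlyArchiveBot/1.0")
corp_copy = fetch(page_url, "BigCorpAICrawler/2.0")

print(archive_copy == corp_copy)  # the server served the same content to both
```

So anything a site does to block the second request (Anubis, Cloudflare, etc.) inevitably blocks the first one too.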
IMO, you’re not wrong, but scraping isn’t wrong per se, either.