(They/Them)
This is my main lemmy account.
Admin of lemmy.cloudhub.social
I can also be found elsewhere on the fediverse at @jax@cloudhub.social
Not at all! I agree, and COVID didn’t help at all. I do want to try and be accurate though :p
It's possible that I estimated the timeline wrong 😅
I’ve added a note to the blog, thanks!
I should probably look into how to do that on my instance. pict-rs always seemed like a bit of a security nightmare.
Glad I could provide some insight! It’s not something I see talked about too much even on Reddit. Let me know if you have any questions or things I could flesh out more in the article!
I’m still relatively new to ActivityPub and federated systems in general. Though I’ve had my Lemmy and Mastodon instances for 8+ months now, I don’t use them as much as I was expecting, sadly. Running your own instance can be very isolating, and any content you put directly on your instance probably won’t gain much traction (at least on Mastodon; Lemmy seems to fare a bit better).
It’s one of a handful of blogs that I’ve run over the last couple of years; the other one that’s still online is HomeLab.Blog. I actually meant to run a federated blog platform like WriteFreely, but they don’t have a production Docker image, and I saw that Ghost is planning on adding ActivityPub support.
This article might be more appropriate on that blog and an article about my experience with Federated systems might be more on-topic on this one. Oops.
Yeah, this seems like old news - cookies can be stolen, and FIDO doesn’t change that unless you’re prompting the hardware token for validation with every request (which isn’t feasible for most things, though it might be a good idea for sensitive actions).
What’s everyone’s workflows with these systems? Do you catalog both physical and digital documents? Where do you store the documents?
I disabled pict-rs around the time of the CSAM attacks and have yet to bother enabling it again.
Uhh… what?? When did that happen? I thought pict-rs was a requirement too…
Huh, do you have your Lemmy config documented somewhere? I keep running into issues with it and I’m not sure which component exactly is failing, but it’s annoying. I’m currently using this Helm chart: ananace/lemmy. It works, but I don’t have pict-rs set up in HA either.
They store the secrets in a file? Gross. What a poor way of handling that. Pretty sure environment variables would be more secure, especially in Kubernetes.
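For what it’s worth, here’s a minimal sketch of the env-var approach in Kubernetes - the names and values are hypothetical, and I’m assuming Lemmy’s `LEMMY_DATABASE_URL` override here:

```yaml
# Hypothetical sketch: injecting secrets as environment variables
# from a Kubernetes Secret instead of mounting a config file.
apiVersion: v1
kind: Secret
metadata:
  name: lemmy-secrets              # hypothetical name
type: Opaque
stringData:
  LEMMY_DATABASE_URL: postgres://lemmy:changeme@postgres:5432/lemmy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy
spec:
  selector:
    matchLabels:
      app: lemmy
  template:
    metadata:
      labels:
        app: lemmy
    spec:
      containers:
        - name: lemmy
          image: dessalines/lemmy:0.19.3   # pin to whatever version you run
          envFrom:
            - secretRef:
                name: lemmy-secrets        # each key becomes an env var
```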
Yeah, I used to host a Matrix instance - could do that for this one too.
The issue is more about setting up the Kubernetes manifests and templating them. I usually use the chart’s built-in Postgres and Redis config, though using an operator would make it more scalable for sure.
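For charts that bundle their dependencies, that’s usually just a couple of toggles in values.yaml. A rough sketch, assuming Bitnami-style subchart conventions (key names vary per chart):

```yaml
# Sketch of values.yaml toggles for bundled dependencies.
# Key names assume Bitnami-style conventions; check the chart's docs.
postgresql:
  enabled: true               # deploy the bundled Postgres subchart
  auth:
    username: app
    database: app
redis:
  enabled: true               # deploy the bundled Redis subchart
  architecture: standalone    # single node; an operator scales better
```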
I’m using Authentik for auth, but I do also like Keycloak.
I’ve seen that around, but I prefer to run my own services instead of relying on a ready-built system like that. I find they usually don’t offer many customization options.
I think both of the ones I mentioned have `docker-compose` files, which I think I can convert with `kompose convert`? I guess from there I would follow your steps and then start parameterizing it once it’s running properly.
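If it helps, `kompose convert -f docker-compose.yml` writes one manifest per compose service, and the output looks roughly like this (service name and image are hypothetical):

```yaml
# Rough sketch of the Deployment kompose emits for a compose service
# named "app" (hypothetical); it also generates matching Services.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    io.kompose.service: app    # kompose labels resources by service name
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  template:
    metadata:
      labels:
        io.kompose.service: app
    spec:
      containers:
        - name: app
          image: example/app:latest   # taken from the compose file
          ports:
            - containerPort: 8080
```

From there, parameterizing is mostly lifting the image tag, ports, and env into values.yaml.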
Thanks! I think I’ll start trying out PixelFed tomorrow.
That’s actually super helpful! I haven’t done much custom Helm charting, and was kinda lost on where to start. That really helps break the process down, and the tip about skipping state to start is very wise.
Yeah, that’s the pain point - building and maintaining the charts.
Also, I know the charts likely wouldn’t have to be super complex, but I’m used to working with Bitnami’s charts that are massively complex - I just don’t have the time to go that in-depth.
Oh, I know I could get them to run with enough work. I just don’t have that much time to spend on initial implementation and upkeep of the charts.
I’m using FluxCD, which I believe can deploy plain Kubernetes manifests, but that still requires a decent amount of overhead to keep up to date.
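It can - here’s a minimal sketch of a Flux Kustomization pointed at a directory of plain manifests (repo URL, names, and path are hypothetical):

```yaml
# Minimal Flux sketch: sync plain Kubernetes manifests from a Git repo.
# Repo URL, names, and path are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/example/homelab
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true                  # remove resources deleted from the repo
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./manifests/apps       # directory of plain YAML
```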
Just because it’s not public-facing doesn’t mean that it’s not an issue. It might be less of an issue, but it is still a massive vulnerability.
All it takes is one misconfiguration or one vulnerable system for an attacker to use this as a jumping-off point to burrow into other systems. Especially if this system has elevated access to sensitive locations within your network.
Your best bet is going to be a 4U chassis. You can get 2U chassis with consumer PSUs, but they are going to be more expensive and very limited in terms of parts that will work. You can easily find 4U chassis that support regular ATX internals with proper mounting holes for the PSU and mobo standoffs.
There are some small Supermicro servers that use Xeon-D (I think? Very low-power Xeons that are passively cooled), but you’re pretty locked in to the vendor with those.
Do not use external drives for this. TrueNAS doesn’t support it, and you’ll be limiting your speeds to that of the USB bus, which is not nearly as fast. There’s no point going with SSDs if you’re using external drives.
I don’t think it’s worth the devs’ time to implement E2EE for DMs; there are a lot of other things that need to be fixed first. Not only that, but if it’s implemented in Lemmy, it’d make Lemmy DMs non-interoperable with other federated platforms such as Mastodon or KBin - which I’m not sure works right now, but in theory would be possible.
Also, yes, that is generally the case.
Yeah for sure! I like to post about both the positive and negative experiences. I find things like that to be a valuable learning tool.
From a security perspective, it’s important to understand the systems you’ve implemented and test that they are working as expected. I think in that example if I had tested user sign-up sooner I could have caught the configuration issue.
It’s also important to have good observability into your system, both metrics and logs: metrics help detect when something weird is happening (increased resource usage could point to ransomware or crypto mining), and logs let you track down what happened and see which systems are impacted.
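As a concrete example, a Prometheus rule along these lines would catch the kind of sustained CPU spike that crypto mining causes (assuming node_exporter metrics; the threshold and duration are hypothetical and need tuning):

```yaml
# Sketch of a Prometheus alerting rule for sustained, unexplained CPU
# usage, based on node_exporter metrics; tune threshold and duration.
groups:
  - name: anomaly-detection
    rules:
      - alert: SustainedHighCpu
        expr: |
          avg by (instance) (
            1 - rate(node_cpu_seconds_total{mode="idle"}[5m])
          ) > 0.9
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "CPU on {{ $labels.instance }} above 90% for 30 minutes"
```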
From a technical controls standpoint, it’s good practice to segregate your applications from other systems and control planes like IPMI and switching/routing admin interfaces. It’s also good to try to limit holes in your firewall. In this cluster, I have Cloudflare Tunnels set up so that I don’t have to open ports to access web servers, and I get access to their WAF tooling. You could do something similar with a VPS running WireGuard, CrowdSec, and a reverse proxy.
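The tunnel side is just a small config file. A sketch, with hypothetical hostnames, tunnel ID, and in-cluster service addresses:

```yaml
# Sketch of a cloudflared config.yml: route public hostnames to
# internal services without opening any inbound firewall ports.
# Tunnel ID, hostnames, and service URLs are hypothetical.
tunnel: 00000000-0000-0000-0000-000000000000
credentials-file: /etc/cloudflared/creds.json
ingress:
  - hostname: lemmy.example.social
    service: http://lemmy.lemmy.svc.cluster.local:8536   # Lemmy's default port
  - hostname: blog.example.social
    service: http://ghost.blog.svc.cluster.local:2368
  - service: http_status:404   # required catch-all rule
```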