• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • I can imagine a containerized service that only runs, say, SSH, which in turn only runs a forced command like borg serve: https://borgbackup.readthedocs.io/en/stable/usage/serve.html

    And set up the container with the --storage-opt option to limit space usage. That would make it harder to misuse the space or CPU, or to break out into the host server.
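
    A minimal sketch of both ideas, assuming a key-pinned borg serve and a Docker host whose storage driver supports per-container size limits (the paths, key, image name, and size are made-up examples):

    ```
    # authorized_keys on the backup container: pin this key to borg serve
    # (repo path and key are hypothetical)
    command="borg serve --restrict-to-path /backups/client1",restrict ssh-ed25519 AAAA... client1

    # Cap the container's writable layer; --storage-opt size= needs a
    # supporting backend (e.g. overlay2 on xfs mounted with pquota)
    docker run -d --name borg-ssh \
      --storage-opt size=500G \
      -p 2222:22 \
      my-borg-ssh-image
    ```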

    You could go one step further and set up something like a Tailscale/Headscale network, only allow access over that, and limit the ACLs on the tailnet to just the SSH port. That should shield it from the Internet at large and also allow an absolute minimum of access to the other side.
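
    A hedged sketch of that rule in Tailscale's policy-file syntax (the tags are made-up names):

    ```
    // Only tagged backup clients may reach the backup host, and only on SSH
    {
      "acls": [
        { "action": "accept", "src": ["tag:backup-client"], "dst": ["tag:backup-host:22"] }
      ]
    }
    ```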

    I wonder if you could run the Tailscale client within the container? Having it all together would make it actually usable.
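
    It looks doable: Tailscale publishes a container image, and tailscaled has a userspace-networking mode that avoids needing /dev/net/tun inside the container. A rough sketch of an entrypoint, where the auth key and hostname are placeholders:

    ```
    # Start tailscaled without a TUN device, then join the tailnet
    tailscaled --tun=userspace-networking --state=/var/lib/tailscale/tailscaled.state &
    tailscale up --authkey "${TS_AUTHKEY}" --hostname borg-backup

    # Hand off to sshd as the container's main process
    exec /usr/sbin/sshd -D
    ```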

    I’m also looking at some of the distributed file systems out there; if one supports “m of n” retrieval of the data, you could use it to keep the encrypted backups stored on multiple machines at once, with more resilience.
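
    Tahoe-LAFS is one existing system that works this way, via erasure coding: any “shares.needed” of “shares.total” shares are enough to reconstruct a file. A hypothetical tahoe.cfg fragment:

    ```
    # Any 3 of 10 shares recover the data, so up to 7 storage
    # nodes can disappear without losing anything
    [client]
    shares.needed = 3
    shares.happy = 7
    shares.total = 10
    ```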


  • It would be marginally risky, but considering how many people have large storage arrays, a “mutual backup compact” between two folks, where each can run backups to the other’s array, would get you an affordable offsite backup for catastrophes.

    I see a bunch of people with 10TB of data and 30TB arrays; if two of them got together, both would be reasonably safe from a total array failure.
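
    With Borg the pact is cheap to set up, since archives are encrypted client-side and the other party never sees plaintext. A hedged sketch, with made-up host and paths:

    ```
    # Initialize an encrypted repo on the friend's array, then push to it
    borg init --encryption=repokey-blake2 "ssh://me@friend.example/~/backups/mine"
    borg create --stats --compression zstd \
        "ssh://me@friend.example/~/backups/mine::{hostname}-{now}" /home /etc
    ```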


  • You could look into AWS S3 Glacier, specifically the Deep Archive tier. If you have 20TB stored, that’s about $20/month (YMMV), which isn’t wonderful, but that’s a lot of data, so it’s understandable.
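
    The arithmetic, for the curious: Deep Archive runs roughly $0.00099 per GB-month, so 20,000 GB × $0.00099 ≈ $20/month, before retrieval fees, which are where Glacier-class storage gets you. Uploading straight into that class is one flag on the CLI (bucket and file names are hypothetical):

    ```
    aws s3 cp backup-2024-06.tar.zst.gpg \
        s3://my-backup-bucket/offsite/ \
        --storage-class DEEP_ARCHIVE
    ```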

    Being a cheapskate: anything I can get back some other way, or that isn’t crucial, lives on a RAID array with snapshots; everything else is either encrypted Duplicati backups to Google Drive (Windows) or encrypted Borg backups to BorgBase (Linux).
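
    For the Linux side, a typical Borg-to-BorgBase run looks something like this (the repo ID is a placeholder; BorgBase gives you the real URL):

    ```
    export BORG_REPO='ssh://abc123@abc123.repo.borgbase.com/./repo'
    borg create --stats --compression zstd "::{hostname}-{now}" /home /etc
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
    ```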

    BorgBase is very reasonably priced, and if you already have a large Google Drive quota from one of their other services, backups are a good use of it.