I’m thinking about starting a self-hosting setup, and my first thought was to install k8s (k3s probably) and containerise everything.

But I see most people on here seem to recommend virtualizing everything with proxmox.

What are the benefits of using VMs/proxmox over containers/k8s?

Or really I’m more interested in the reverse: are there reasons not to just run everything with k8s as the base layer? Since it’s more relevant to my actual job, I’d lean towards ramping up on k8s unless there’s a compelling reason not to.

      • csm10495@sh.itjust.works · 1 year ago

        The basics can be useful there. The whole idea with k8s is to be able to run applications across multiple hosts in a given fleet. Your cluster can be that fleet! :)
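
        For a concrete sketch of what “running across a fleet” means, here is a minimal, hypothetical Deployment manifest; the scheduler is free to place the replicas on any nodes in the cluster. The name and image are illustrative:

        ```yaml
        # Minimal illustrative Deployment: 3 replicas spread across the cluster's nodes
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: demo-app            # hypothetical name
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: demo-app
          template:
            metadata:
              labels:
                app: demo-app
            spec:
              containers:
                - name: demo-app
                  image: nginx:alpine
        ```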

        • tiwenty@lemmy.world · 1 year ago

          Also k8s is in high demand in the sector, so those are good skills that could be turned into $$

          • kiddblur@lemm.ee · 1 year ago

            I get why too. I’m a full stack (including devops) software engineer, and docker/k8s is just completely opaque to me. I’m not sure why, but I really just can’t wrap my head around it. Thankfully my current company has a devops team that takes care of it, but jeez

            • tiwenty@lemmy.world · 1 year ago

              Tbh this stuff isn’t really intuitive. But, as was my case for instance, it’s something that can be “easily” learnt as a hobbyist like us. And when you understand those concepts, at least at an abstract level, my stance is that you become a better dev/ops/sysadmin :) I strongly advise anyone in the field to at least play a little with Docker/containers to grasp what it is.

  • SilentMobius@lemmy.world · 1 year ago

    Personally I always use containers unless there is a good reason to use a VM, and those reasons do exist. Sometimes you want a whole, fully functional OS complete with a custom kernel; in that situation a VM is a good idea. Sometimes a utility only comes packaged as a VM image.

    But absent a good reason, containers are just better in the majority of cases.

  • ChojinDSL@discuss.tchncs.de · 1 year ago

    Containers, unless you have a specific need for a VM.

    With a VM you have to reserve resources exclusively. If you give a VM 2 GB of RAM, then that’s 2 GB of RAM you can’t use for other things, even if the guest OS is using less.

    With containers, you only need as many resources as the process inside the container requires at the time.
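
    To illustrate the difference: in a hypothetical Compose file, a memory limit is an optional ceiling rather than a reservation, so RAM the process isn’t using stays available to the host. Service name and limit are made up:

    ```yaml
    # Illustrative compose fragment; not a real deployment.
    services:
      nextcloud:
        image: nextcloud:latest
        mem_limit: 2g          # hard cap, not a reservation like a VM's 2 GB
        restart: unless-stopped
    ```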

  • twei@feddit.de · 1 year ago

    Why not use both? I have PVE installed on all of my hosts and then use k3s/docker in VMs. If there ever is anything you don’t want to or just can’t deploy as a container (e.g. opnsense, hassio, truenas, windows [for whatever reason you might have]), you can just spin it up as a VM and not worry about adding and maintaining another physical machine

    • chaosratt@lemmy.computing.zone · 1 year ago

      If you are using PVE for Linux “VMs”, those probably aren’t actually VMs but LXC containers. And if you are running Docker in one of those, you’ve got containers in your containers.

      Welcome to the club.

      • twei@feddit.de · 1 year ago

        My brother in Christ, how would one confuse a VM with an LXC in Proxmox? They couldn’t be more clearly labelled as different things than they already are. But don’t let this distract you from the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer’s table.

  • FancyGUI@lemmy.fancywhale.ca · 1 year ago

    I’d suggest looking into k8s. It’s definitely a bit more complex at the start, but there’s so much more power once you get into the details. With VMs you don’t share the base OS layer or the hardware, and you have to pre-define the resources you need per app in a more constrained manner, while containers can move freely in their little sandbox and pick up whatever they need.

    It is also much easier to manage replicas, upgrades, scale and a bunch of other things once you are using containers and an orchestrator like Kubernetes. Let me know if you need any help/insights. I’ve been trying to post more videos/answers about things that could be complicated.
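
    As a sketch of the replicas/upgrades point, these are roughly the fields of a Deployment spec involved; the values here are illustrative:

    ```yaml
    # Illustrative fragment of a k8s Deployment spec
    spec:
      replicas: 4                # scale by changing this number
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1      # upgrade one pod at a time
          maxSurge: 1
    ```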

  • Nitrousoxide@lemmy.fmhy.ml · 1 year ago

    What I did is install Proxmox on the bare metal, then set up a VM in which I put the containers.

    Proxmox itself stays (almost) completely stock. The only change I’ve made was to add the NUT client package so it can gracefully shut down if my NUT server indicates the UPS is running out of power during an outage.

    In your VMs you can do whatever. Set up OMV, or a stock Ubuntu or Debian VM, and install your services on the VM or use Docker/Podman. Set up Fedora CoreOS or IoT VMs and host all your services in Podman containers.

    The great thing about Proxmox is you can do snapshot backups, which take mere moments to complete. Then pass those off to a NAS where they can survive an irreparable loss of your Proxmox server.

    You can also spin up new VMs as needed just to fuck around with new tech or a new way of setting up your home lab. It gives you a ton of flexibility and makes backing stuff up way easier.

    Another great thing you can do: if three years down the line you’re looking to replace your server hardware with something newer or more powerful, you can just add the new device as a node to the cluster. Then you can migrate all your existing VMs over to the new hardware and decommission the old one with little to no downtime on anything.

    • karlthemailman@sh.itjust.works · 1 year ago

      This is my exact setup as well. Proxmox with one beefy VM dedicated just to Docker, and then a few other VMs for non-Docker workloads (e.g. Home Assistant, Pi-hole, Jellyfin). I could probably run those in Docker as well, but they worked better as VMs when I set them up.

      • Foreverwinter@lemmy.world · 1 year ago

        Appreciate your take on this and specifically mentioning that you have a VM for Home Assistant. That was a lightbulb moment for me as I like how easy it is to manage updates as an OS install rather than in a Docker container. If I ever get around to rebuilding my server architecture I’m definitely going to do this!

  • Hizeh@hizeh.com · 1 year ago

    I think it depends on your scale. For homelab stuff, Docker is awesome IMO.

  • redcalcium@c.calciumlabs.com · 1 year ago

    Container processes are just ordinary Linux processes, so they don’t need extra overhead (CPU and RAM reservation) to run, which means your machine can run more of them. If you have a machine with 32 GB of RAM, you can probably run 15 VMs with 2 GB of RAM each, where the actual app inside each VM might only consume about 50% of the VM’s RAM; or you can run them all as containers and together they would consume just 15 GB of RAM, leaving you extra capacity to run more containers. I found this ideal for self-hosting, because all the apps are your own personal apps, so interprocess isolation is not as important as it is in a public cloud.

    • lemmyvore@feddit.nl · 1 year ago

      I’ve always been unclear on why people choose to run VMs. I would think you’d want to try Docker first, LXC second, and a VM only as a last resort, e.g. if you need to emulate a different architecture. But if the stuff you need to run has been ported to your server’s architecture, why add the overhead?

  • widowhanzo@poptalk.scrubbles.tech · 1 year ago

    I have a pretty low-power server at home (Pentium G4560), and the previous one was an even slower J3160, so I don’t want to unnecessarily hog the CPU with a VM, and the few services I need at home run perfectly fine in containers.

    I run pihole, unbound, wireguard, plex, unifi controller in containers, and I run some additional services directly on the host (samba, transmission).
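
    For instance, one of those services (Pi-hole) could be sketched as a Compose service like this; the host ports and volume path are illustrative, so check the image’s documentation before using it:

    ```yaml
    # Illustrative sketch only; verify against the pihole/pihole image docs.
    services:
      pihole:
        image: pihole/pihole:latest
        ports:
          - "53:53/udp"
          - "53:53/tcp"
          - "8080:80/tcp"       # web UI on a hypothetical host port
        volumes:
          - ./pihole/etc:/etc/pihole
        restart: unless-stopped
    ```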

    I have a Windows VM on my Windows PC for work, so it’s isolated from my main rig (various VPN clients and work files etc), and if I needed some Linux stuff on my Windows PC I’d also run a VM, but more VMs also mean more updating and patching, which is much easier with containers.

  • GustavoM@lemmy.world · 1 year ago

    VMs if you have enough RAM and/or need to run something on a non-compatible system (like pfsense on ARM). Containers for everything else.

  • Brad Ganley@toad.work · 1 year ago

    I, personally, haven’t done a whole lot of VM work but I do run a metric ass-ton of containers. I can spool up servers in docker compose on absolutely dogshit hardware and have it run serviceably. Also, the immutability of the container OS is really nice for moving things around and/or getting them set up quickly.

  • Midou@kbin.projectsegfau.lt · 1 year ago

    If it’s relevant to your actual job, learning k8s will benefit you more. Generally I’d prefer to keep the bare-metal OS as clean as possible to avoid breaking anything during upgrades and such, and keep the containers and normal apps on separate VMs that can communicate with each other. k8s is mostly good if you’ve got a lot of servers and want to manage them all at once through a single “orchestrator”; for self-hosting stuff in your home it’s kinda overkill, but it can still be used to manage things. So imo go for k8s since it can be used on home servers, it’s just kinda like using a nuclear bomb to kill a wasp.

  • terribleplan@lemmy.nrd.li · 1 year ago

    If everything you want to run makes sense to do within k8s it is perfectly reasonable to run k8s on some bare-metal OS. Some things lend themselves to certain ways of running them better than others. E.g. Home Assistant really does not like to run anywhere but a dedicated machine/VM (at least last time I looked into it).

    Regardless of k8s, it may make sense to run some sort of virtualization layer just to make management easier. One panel from which you can access all of the machines in your k8s cluster at a console level can be pretty nice, and a Proxmox cluster gives you this. You can make a VM on a host that takes up basically all of the available RAM/CPU on it. Proxmox specifically has some built-in niceties with Gluster (which I’ve never used; I manage Gluster myself on bare metal) which could even be useful inside a k8s cluster for PVCs and the like.

    If you are willing to get weird (and experimental), look into Rancher’s Harvester. It’s an HCI platform (similar to Proxmox or vSphere) that uses k8s as its base layer and even manages VMs through k8s APIs. I played with it a bit and it was really neat, but I opted for a bare-metal Ubuntu install for my lab (and actually moved from RKE2 to k3s to Nomad to Docker Compose with some custom management/clustering over the course of a few years).

  • donalonzo@lemmy.world · 1 year ago

    VMs are often set up imperatively and can be quite easy and familiar for most people, but they can be harder or more time-consuming to reproduce, depending on the type of update or error to be fixed. They have their own kernel, can have window managers and graphical interfaces, and can therefore be a bit resource-heavy.

    Containers are declarative and quite easy to reproduce, but can be harder to set up, as you’ll have to work by trial and error from the CLI. They also run on your computer’s kernel and can be extremely slimmed down.
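
    As an example of the declarative side: the whole runtime definition can live in one file that reproduces the same setup on any machine. The service name and image here are hypothetical:

    ```yaml
    # docker-compose.yml: `docker compose up -d` reproduces this anywhere
    services:
      app:
        image: ghcr.io/example/app:1.2.3   # pinned tag for reproducibility
        volumes:
          - ./data:/data                   # state kept outside the container
        restart: unless-stopped
    ```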

    They are both powerful; it depends on how you want to maintain and interface with them, how resource-efficient you want them to be, and how much you’re willing to learn if necessary.