Say you have a script or something that gets run in cron/task scheduler and it needs a password… say to ssh to a raspberry pi elsewhere in your house.
How do you save that password in a way that automation can access it?
Some ideas:
- Plaintext file. Not a fan because it's sitting unencrypted on the box somewhere.
- Environment variable. Not a fan because it's still unencrypted somewhere on the box, readable to someone (albeit likely the same user or an admin).
- A secrets manager. If I use something locally like Hashicorp Vault or Infisical, I can get to a point where a CLI/API call gets the password. Though in this case I still need a vault password/secret to get my password, so I fall back to needing one of the above to get this to work.
If the secrets manager is easily available, the secret to get into the secrets manager is available as well, which leaves a feeling of security by obscurity.
If someone breaks into my system via SSH/etc. then they can get the passwords either way.
… How do people normally do this? I'm not sure I actually get anything out of a secrets manager if it's local and I have the disk itself encrypted before login.
What actually makes sense at a personal/home scale?
(Edit: I know an SSH key is probably better for getting to the Raspberry Pi, but the question is still the same idea.)
If you want to get fancy: systemd credentials. It can store the secrets encrypted on disk and seal the encryption key with the TPM chip. The encrypted secret is decrypted (non-interactively) and made available only to a specific systemd service. The process itself doesn't need any special systemd integration; it just sees a plain text file containing the secret, backed by a tmpfs that's not visible to other processes.
Depending on which TPM PCRs you bind to, you can choose how secure you want it to be. A reasonable/usable configuration would be something like binding to PCRs 7 and 14. With that setup, the TPM will not unseal the key if the system is booted into any other OS (i.e. anything signed with a different UEFI Secure Boot key). But if you really want to lock things down, you can bind to additional PCRs so that changing any hardware, boot order, BIOS setting, etc. will prevent the TPM from unsealing the key.
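A minimal sketch of that setup, assuming systemd 250+ with a TPM2; the credential name (pi-password) and service (backup-to-pi.service) are made up:

```sh
# Encrypt the secret once, sealing the key against TPM2 PCRs 7 and 14
echo -n 'hunter2' | sudo systemd-creds encrypt --with-key=tpm2 \
  --tpm2-pcrs=7+14 --name=pi-password - /etc/credstore.encrypted/pi-password.cred

# Hand the decrypted credential to exactly one service via a drop-in
# (systemctl edit backup-to-pi.service):
#   [Service]
#   LoadCredentialEncrypted=pi-password:/etc/credstore.encrypted/pi-password.cred

# Inside the service, the plaintext appears on a private tmpfs:
#   cat "$CREDENTIALS_DIRECTORY/pi-password"
```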
TIL, thanks!
Yep, this is a root of trust problem. Your choice will ultimately come down to how much you want to invest and how much inconvenience you’ll put up with, measured against how secure you want it to be.
Personally, I go for full disk encryption and then just store things on the filesystem in secure (to the OS) ways: file permissions, users and groups, etc. Most other things boil down to that, though something like Vault adds a layer of access control in that you can seal it off in the case of a breach (if you care) and manage granular authz permissions in a centralized place, leaving only authn to your distributed tools.
There's probably some ideal system out there, like Vault but with a plugin that can ping your phone for quick verification, but I haven't seen that. Personally I'd love something like it.
I was thinking the same thing: unless you're running an enterprise workload where you need to handle secrets the way industry does, filesystem encryption should be OK for the most part.
I guess at the end of the day there is also a root of trust. In an enterprise setting, a system giving out certs could be compromised and give out certs to the wrong people/machines. In a home setting, the machine being compromised has a similar effect.
Funny enough, I thought of using a USB stick or something as a physical security key, using that for a vault, then having secrets in the vault… but then realized I’d have to leave it plugged into the server, making it so anyone with server access would get the password anyways.
Makes me think that everything is security by obscurity at some level. The more obscure, the more 'secure'.
It's kind of like how an SSH key is generally considered more secure, but if I used password authentication and had a file with a 512-character random password, it would be more or less the same thing. Either way, we have the key in a file.
Yes, there has to be a minimal level of trust between the server and devices/users. Your level of security defines at what point the computer decides "yep, that's good enough, I can trust this is the real user accessing me." A true, perfectly secure system has no access; it's a black box that nothing can interact with, because it can only trust itself.
At some point you have to trust yourself not to mess up too badly. You are the weak point in security, since I'm assuming you're the only one who's accessing the system right now.
I personally use plaintext password files, with appropriately managed permissions (only the owner can see or read the file). As long as the user login is secure, and root/admin access is secure, I feel comfortable that no one but me can access the credentials. To manage remote access to the system, I use hardware (YubiKey) to store my SSH keys, with a PIN code lock that wipes the keys if entered incorrectly 5 times. I don't have any government agencies coming after me (as far as I know), so no one has a practical way to extract the keys if the device were stolen off me, and the PIN retry limit prevents brute forcing. I trust myself to manage these hardware keys appropriately.
*Edit: to add to the "appropriately managed" bit there, each subsystem (home automation, file server, media hosting, etc.) should be properly containerized or isolated (using different user accounts) so that if one service is compromised, the others are still somewhat protected.*
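A minimal sketch of that kind of lockdown, with made-up names (a backupjob user, a .pi-password file, a pi.lan host):

```sh
# dedicated system user that owns the credential and runs the cron job
sudo useradd --system -m -d /home/backupjob -s /usr/sbin/nologin backupjob

# empty file owned by that user, readable only by them; put the password in it afterwards
sudo install -o backupjob -g backupjob -m 600 /dev/null /home/backupjob/.pi-password

# the job runs as backupjob and reads the file itself, e.g. in its crontab:
#   0 * * * *  sshpass -f /home/backupjob/.pi-password  rsync -a /data/ pi@pi.lan:/backup/
```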
Physical access to your server is endgame. If an attacker can physically mess with your system, you’ve lost, and that can only be fixed externally with home security improvements. A skilled attacker doesn’t need your ssh tokens to gain access if they can plug a keyboard into the server itself. I’ve also seen a demo of a neat little kit the feds use to seamlessly move a computer power cord from the wall to a portable battery pack, so they can simply walk out, with the device still powered up, and do what they want to it back at home base (it’s used mainly for raids on various computer fraudsters, but still, it exists, and can be used on you). I trust myself not to do stupid stuff that gets me targeted by a hacker group, or raided by the FBI.
Something less targeted, like a burglary (not focused on your server), can be protected against by disk encryption. I don't use any disk encryption, but I probably should. I like the idea other users wrote down here, of using a TPM module to store disk encryption keys, so it can detect if the hardware or OS changed between boots and deny decryption. I'd also take it one step further and encrypt the data/password files with a method that requires someone to log in and type a password. That way, if someone were to steal the device and power it up elsewhere, the passwords are still safe until you OK it, essentially authorizing unexpected reboots, at the cost of having to log into the server every time it starts (not fun if you're doing maintenance). If you do this bit right, you don't have to trust yourself to do anything but remember the password.
Sorry about the length there, but security and access management is a complicated topic, so it requires a lot of talking. Hopefully it helped!
I think you hit the nail on the head with the true security being black box. The moment I need access, I’m making a hole.
Environment variables. If someone's in my network, which has no open ports to the internet, I've got plenty more problems.
Even on a dev machine, think about how many env vars a normal dev has; plenty to lose.
A secrets management tool for self-hosting at my level would bring more complexity for little gain.
Bash scripts etc. can be uploaded to an open repo without sharing secrets, which is what I want.
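A rough sketch of that pattern, with made-up names (PI_PASSWORD, backup.env, pi.lan):

```sh
#!/usr/bin/env bash
# backup.sh - safe to push to a public repo; the secret comes from the environment
set -euo pipefail
: "${PI_PASSWORD:?set PI_PASSWORD before running}"
SSHPASS="$PI_PASSWORD" sshpass -e rsync -a /data/ pi@pi.lan:/backup/

# The actual value lives in an untracked, mode-600 env file, e.g.
#   ~/.config/backup.env:  export PI_PASSWORD='hunter2'
# and cron sources it first:
#   0 * * * *  . "$HOME/.config/backup.env" && /home/user/bin/backup.sh
```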
Hashicorp Vault + Vault Config Operator + external-secrets. I have a simple chart that can add credentials to different apps, which mostly gets used in Argo CD with its multi-chart functionality, plus a simple bash script to create the Vault policies, which use the Kubernetes auth backend.
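Roughly what that bash script does, as a sketch with a made-up app name (myapp):

```sh
# policy that lets one app read only its own secrets
vault policy write myapp - <<'EOF'
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
EOF

# bind the policy to that app's Kubernetes service account via the k8s auth backend
vault write auth/kubernetes/role/myapp \
  bound_service_account_names=myapp \
  bound_service_account_namespaces=myapp \
  policies=myapp \
  ttl=1h
```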
I use ssh keys for all my remote machines, set passwords automatically with ansible, and store them with pass.
https://www.passwordstore.org/
EDIT:
Just to clarify, ansible can use pass as a password store, so in the ansible playbooks you can write which password you want to retrieve from pass.
You can also call pass from any shell script by writing $(pass <target_password>)
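A minimal sketch of both uses, with a made-up entry name (homelab/pi):

```sh
# store the password once (gpg-encrypted under ~/.password-store)
pass insert homelab/pi

# from any shell script
PI_PASSWORD="$(pass homelab/pi)"

# from an Ansible playbook/inventory, via the passwordstore lookup plugin:
#   ansible_password: "{{ lookup('community.general.passwordstore', 'homelab/pi') }}"
```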
Ansible also comes with its own secrets manager, ansible-vault, which you can also use to store your secrets in an encrypted file.
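For reference, a minimal sketch (the vars file name is made up):

```sh
# encrypt a vars file in place; you're prompted for the vault password
ansible-vault encrypt group_vars/all/secrets.yml

# supply the vault password when running the playbook
ansible-playbook site.yml --ask-vault-pass
```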
Yeah, I've been meaning to look into it.
Just went with pass because it’s what I’m used to, and it’s pretty straightforward. But definitely next on my to-do list.
I generate a unique key pair (or token) for each service that I want to access from the host machine. I see no issue with storing that dedicated private key locally in plaintext (obviously in a folder where only the required user can read it, and I exclude it from backup and versioning). I use one dedicated user per externally accessible service.
Should the machine itself become compromised, that would indicate that my personal master key and master password have been compromised or that someone gained physical access. That would require me to restart from a blank page anyway.
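A sketch of what that looks like for one service, with made-up names (id_pi_backup, pi.lan):

```sh
# one dedicated key pair per service, no passphrase so the cron job can use it
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_pi_backup -C 'backup job -> pi'

# install only this key on the Pi, then reference it explicitly in the job
ssh-copy-id -i ~/.ssh/id_pi_backup.pub pi@pi.lan
rsync -a -e 'ssh -i ~/.ssh/id_pi_backup' /data/ pi@pi.lan:/backup/
```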
Ideally, a secrets manager that you can unlock once and then give access to the secrets. You can either unlock it at boot by entering the password, or, if you have a TPM, encrypt the main key with the TPM so that when you boot up and all Secure Boot checks pass, the key is decrypted and the secrets manager is unlocked.
You can also store the secrets on another machine that only exposes, say, the Vault API. Like a dedicated Raspberry Pi just for this function, where remote access is disabled and everything; it only serves secrets. That way, you can trust the Vault logs to know when each secret was accessed and find anomalies.
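If you go that route, a minimal sketch of turning the access log on (paths and the secret name are assumptions):

```sh
# on the dedicated secrets box: log every request to an audit file
vault audit enable file file_path=/var/log/vault_audit.log

# every read then leaves a trail, so unexpected access stands out
vault kv get secret/homelab/pi
```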
If someone breaks in via SSH or whatever, your security already fell apart. That box can no longer be trusted for anything, especially if they breached the root account. Doesn't matter how big the fortress is: if they're inside, it's game over, and it's time to evaluate the damage and clean up.
Yeah, I haven't gotten past using plaintext secrets in separate files (e.g. Home Assistant's secrets file). I do the same for any Python scripts that need secrets, like Slack auth tokens and stuff.
I haven’t really gotten around to looking into secrets management in my homelab, and I know I really should. As much as I have a lot of faith in my Nginx and Authelia config, it only takes one hole for someone to get in and get to those secrets files, especially if that hole is a security flaw in something like Home Assistant - one of the very few services I can access publicly, without using my Wireguard VPN.
- For Kubernetes stuff, I use external-secrets-operator. It can be paired with self-hosted stuff, like Hashicorp Vault.
- For machines, LUKS disk encryption, and then files in plain text with minimal permissions: mode 400.
- For my laptop, KeePass.
In my opinion, for home self-hosted stuff you don't have to go for complex solutions. In the industry, the problem is that secrets need to be served to different systems, by different people, with some kind of audit log. Unless you are working with lots of people, environment variables are OK. Your GitHub/GitLab may have all scripts with variables, and your disk may have a .env file with mode 400. If each machine or container has a single responsibility, there should be no secret leaks among them.
For example, let's say your WordPress instance gets pwned. It should only have the secrets it needs (like its DB credentials), so your wikimedia instance is still fine.
On a completely unrelated side note: I like to see parallelisms between the SOLID principles of OOP development and system administration.
A container may have one responsibility. Or a service config (like nginx) may be closed to modification but open to extension, to avoid some automated client breaking something elsewhere, etc.
Sometimes I like to think about system administration as some kind of very high level development.
To mods: I have no problem deleting this comment if it doesn't fit this community.
I haven't played around with it much, so this is a pretty uninformed suggestion, but Bitwarden has a secrets manager, currently in beta. You can create projects, then grant human or service accounts access to secrets. Looks promising, but I have to say I'm a bit of a BW fanboy, so I concede some bias.
Two more options you might consider:
- secret-tool - like a vault that unlocks when a user logs in to their session. This shifts the problem to keeping the user's login credentials secure, but depending on your setup that might be preferable. Just be aware that once unlocked, any process could access the vault in theory (I wish they'd add access controls…)
- podman secrets - so you can securely provide secrets to containers. You can set these once securely, then nothing except processes in the container can get them. (Sketches of both below.)
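Minimal sketches of both, with made-up names:

```sh
# secret-tool (libsecret): stored in the keyring that unlocks at login
secret-tool store --label='pi password' service pi-backup user backup
PI_PASSWORD="$(secret-tool lookup service pi-backup user backup)"

# podman secrets: only processes inside the container can read it
printf '%s' 'hunter2' | podman secret create pi-password -
podman run --rm --secret pi-password alpine cat /run/secrets/pi-password
```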
This is likely too late, but a reasonable moment to say this server happens to be Windows based.
… for backup reasons.
(The tool used for online backup only allows home versions of Windows and local drives)
One day if I build a new one, I might start with a Linux base, though that kind of requires this one to be on its last leg before I get to that point. It’s running a processor/mobo that are 14ish years old… so maybe I should think more about it.
In that case I'll also mention that PowerShell has a secure string type that allows you to load secrets from an encrypted file or user input. I believe it's secured by the user's login/session, like secret-tool. They even remain encrypted in memory so they can't be snooped on.
I’m on Kubernetes with ArgoCD gitops.
I use sealed-secrets, so all my secrets are in git, encrypted. The encryption key is in my KeePass vault.
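Roughly what that workflow looks like, with made-up names:

```sh
# turn a regular Secret into a SealedSecret that's safe to commit
kubectl create secret generic pi-password \
  --from-literal=password='hunter2' --dry-run=client -o yaml \
  | kubeseal --format yaml > pi-password-sealed.yaml

git add pi-password-sealed.yaml   # Argo CD syncs it; the controller decrypts it in-cluster
```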
Not ideal, but a few scattered ssh-agents that I unlock on reboot, and Zabbix to tell me if they're not running.
deleted by creator
The problem is that would be so annoying/impractical. In an optimal world, yeah a person checking a prompt and approving could make sense, but in practice that would also mean that the MFA prompt would have to ask for the password anyways. (Or the password would be on the phone with the same problem as on the computer).
Can you imagine having to type a password on an hourly schedule or something? If the password was cached, we have the same problem again.
Is cert based auth an option?
Yeah… but I think it's overkill. The root cert would be on the same box somewhere nearby. Compromising the host has the same issue as plaintext.