- cross-posted to:
- technology@lemmy.world
AFAIK every NAS just uses unauthenticated connections to pull containers; I’m not sure how many even let you log in (which raises the limit to a whopping 40 per hour).
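For anyone who does have a Hub account, a quick sketch of authenticating so pulls count against the per-account limit instead of the anonymous per-IP one (nothing here is NAS-specific; the username is a placeholder):

```
# Log in to Docker Hub; use an access token instead of your password
# (created under Account Settings -> Security on hub.docker.com).
docker login -u your-dockerhub-username

# Subsequent pulls are authenticated and count against your account's limit.
docker pull ubuntu:24.04
```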
So hopefully systems like /r/unRAID handle the throttling gracefully when clicking “update all”.
Anyone have ideas on how to set up a local docker hub proxy to keep the most common containers on-site instead of hitting docker hub every time?
and now I don’t sound so fucking stupid for setting up local image caches on my self-hosted gitlab server.
Fortunately linuxserver’s main hosting is no longer Docker Hub.
Would you be able to share more info? I remember reading their issues with docker, but I don’t recall reading about whether or what they switched to. What is it now?
They run their own registry at lscr.io. You can essentially prefix all your existing linuxserver image names with lscr.io/ to pull them from there instead. It’s actually a redirect service around ghcr to provide them analytics. There’s more info in their FAQ.
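For example, with one of their images (jellyfin here is just an arbitrary pick; any linuxserver image works the same way):

```
# Instead of pulling from Docker Hub:
docker pull linuxserver/jellyfin:latest

# ...add the lscr.io/ prefix to pull via their own registry:
docker pull lscr.io/linuxserver/jellyfin:latest
```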
Forgejo gives you a registry built-in.
Also is it just me or does the docker hub logo look like it’s giving us the middle finger?
How long did this take after getting an Oracle CEO?
Did they really? Oh my god, please tell me you’re joking, that a company as modern as Docker got a freaking Oracle CEO. They pulled a Jack Barker. Did he bring his conjoined triangles of success?
A “jack barker” 🤣
Use a service that’s not Docker hub
Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!
Do you have a good resource for how one can go about this?
I’ve only done my “is it even possible” research so far, but these look promising:
https://medium.com/@amandubey_6607/docker-registry-caching-a2dfefecfff5
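For a rough idea, here’s a minimal sketch along the lines of that article, using the official registry image in proxy mode (container name, port, and volume name are just placeholders):

```
# Run the open-source registry as a pull-through cache for Docker Hub.
# REGISTRY_PROXY_REMOTEURL switches it from a plain registry to mirror mode.
docker run -d --name registry-mirror \
  --restart=always \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v registry-cache:/var/lib/registry \
  registry:2
```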
Much appreciated <3
Same here. I’ve been building a bootstrap script, and each time I test it, it tears down the whole cluster and starts from scratch, pulling all of the images again. Every time I hit the Docker pull limit after 10 - 12 hours of work, I treat that as my “that’s enough work for today” signal. I’m going to need to set up a caching system ASAP or the hours I work on this project are about to suddenly get a lot shorter.
Instead of using a sort of Docker Hub proxy, you can also use GitHub’s container registry (ghcr.io) or Quay. If the project allows it, you can easily switch to these alternatives. Alternatively, you can build the Docker image yourself from the source; it’s usually not a difficult process, as most of it is automated. Or what I personally would probably do is just update the image a day later if I hit the limit.
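Building from source is often just a clone and a build, assuming the project ships a Dockerfile (the repo URL and names below are made up for illustration):

```
# Hypothetical repo; substitute the real project.
git clone https://example.com/someproject/someproject.git
cd someproject

# Build a local image from the bundled Dockerfile...
docker build -t someproject:local .

# ...then reference someproject:local in your compose file
# instead of the Docker Hub image name.
```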
https://distribution.github.io/distribution/ is an open-source implementation of a registry.
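Running it as a plain private registry is a one-liner, and pushing to it is just a re-tag (localhost:5000 is allowed over plain HTTP by default; a remote registry would need TLS or an insecure-registries entry):

```
# Start a local registry using the distribution project's official image.
docker run -d --name registry -p 5000:5000 \
  -v registry-data:/var/lib/registry \
  registry:2

# Re-tag an existing image and push it into the local registry.
docker tag alpine:3.20 localhost:5000/alpine:3.20
docker push localhost:5000/alpine:3.20

# Later, pull it back without touching Docker Hub.
docker pull localhost:5000/alpine:3.20
```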
You could also self-host something like GitLab, which bundles this, or Sonatype Nexus, which can serve as a repository for several kinds of artifacts, including container images.
Gitea, and therefore Forgejo, also has container registry functionality; I use that for private builds.
Codeberg has Woodpecker CI
Jumping on the forgejo love train
oh, that’s good to know, Forgejo seems way nicer for self-hosting than the limited GitLab open source core.
> local docker hub proxy
Do you mean a Docker container registry? If so, here are a couple options:
- Use the official Docker registry: https://www.docker.com/blog/how-to-use-your-own-registry-2/
- Self-host forgejo or gitea and use the included package registry, which is automatically enabled. Details: https://forgejo.org/docs/latest/user/packages/
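A minimal sketch of pushing to one of those self-hosted registries (git.example.com and the image names are placeholders; Forgejo/Gitea expose the registry on the instance’s own domain, namespaced by user or org):

```
# Log in with a Forgejo/Gitea access token as the password.
docker login git.example.com

# Images live under your user or organization namespace.
docker tag myapp:latest git.example.com/myuser/myapp:latest
docker push git.example.com/myuser/myapp:latest
```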
I think they mean pull-through cache. https://shipyard.build/blog/how-to-docker-registry-pull-through-cache/
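Once a cache like the registry:2 proxy sketched earlier in the thread is running, pointing the daemon at it is one config key (paths assume a stock Linux install):

```
# Tell the Docker daemon to try the mirror first and fall back to Docker Hub.
# Note: this overwrites /etc/docker/daemon.json; merge by hand if you already
# have one.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["http://localhost:5000"]
}
EOF
sudo systemctl restart docker
```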
Is there a project that acts like a registry, proxying requests with a TTL, that you can also push images to?
Almost all of them. Forgejo already handles containers, for example.
A pull-through cache / proxy is what you’re looking for.
Artifactory is mandatory in some industries because it will keep all the versions of the images forever, so that you can build your projects reliably without an internet connection.
I think most self-hosted Git+CI/CD platforms have a container registry as a feature, but I’m not aware of a service that is just a standalone registry.
It’s easy to overlook because of the generic name, but this is pretty much that: https://hub.docker.com/_/registry
Edit: forgot there’s jfrog artifactory as well
If only they used a distributed protocol like IPFS, we wouldn’t be in this situation.