Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
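To give a feel for what "add your own" means in practice, here's roughly the shape a format plugin takes. The names and signatures below are illustrative only, not the actual plugin ABI (that lives in the repo):

    // Illustrative sketch only -- not the real plugin ABI.
    // A format plugin has to answer two questions: "is this request
    // for my format?" and "how does it map to a stored artifact?"
    pub struct ArtifactRef {
        pub name: String,
        pub version: String,
        pub content_type: String,
    }

    pub trait FormatPlugin {
        /// Protocol identifier, e.g. "conda" or "vcpkg".
        fn format(&self) -> &str;
        /// Map a client request path to an artifact reference,
        /// or None if this plugin doesn't handle the path.
        fn resolve(&self, path: &str) -> Option<ArtifactRef>;
        /// Produce format-specific index metadata for a set of artifacts.
        fn index(&self, artifacts: &[ArtifactRef]) -> Vec<u8>;
    }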
Why I built it:
Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s was a few Compaq Pentium IIs my dad brought home when his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is software that's open for everyone to see and use, and that actually runs well on whatever you've got.
Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.
The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
The AI story (I'm going to be honest about this):
I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.
AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.
Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.
Try it:
git clone https://github.com/artifact-keeper/artifact-keeper.git
cd artifact-keeper
docker compose up -d
Then visit http://localhost:30080
Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/
I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.
Part of the reason we pay the big license fee is so we have someone to turn to when it inevitably breaks because we’ve used it in a way nobody has before. In Jan last year we were using 30TB of artifact storage in S3. That’s 140TB today.
Where do you get your CVE data? Would built artifacts have their CVEs updated after the fact? Do you have blocking policies on artifacts based on CVEs, licenses, artifact age, etc?
I still need to add some e2e testing on those policies. Here's a demo where you can add a policy: https://demo.artifactkeeper.com/security/policies. Again, that one needs a proper series of end-to-end tests, but it was designed with this in mind :) I really want a staging area and promotion of packages after scans.
On my list of things to do.
It's a great start. What I can say is that granularity of CVEs in policies will become important for larger consumers. We have about 4.5 million artifacts, so even getting CVSSv3 10s blocked was a challenge, let alone 9.8.
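To make the granularity point concrete, here's a toy sketch (my own naming, not Artifact Keeper's actual policy model) of the kind of rule large consumers end up needing: a CVSS threshold plus explicit, reviewed per-CVE waivers, because at millions of artifacts a blanket "block everything above X" is unworkable without an escape hatch:

    use std::collections::HashSet;

    struct CvePolicy {
        /// Block anything at or above this CVSSv3 score...
        block_at_cvss: f32,
        /// ...unless the CVE has an explicit, reviewed waiver.
        waived: HashSet<String>,
    }

    impl CvePolicy {
        fn blocks(&self, cve_id: &str, cvss: f32) -> bool {
            cvss >= self.block_at_cvss && !self.waived.contains(cve_id)
        }
    }

    fn main() {
        let policy = CvePolicy {
            block_at_cvss: 9.8,
            // Hypothetical CVE ID, for illustration only.
            waived: HashSet::from(["CVE-2023-99999".to_string()]),
        };
        assert!(policy.blocks("CVE-2024-00001", 10.0));
        assert!(!policy.blocks("CVE-2023-99999", 10.0)); // waived
        assert!(!policy.blocks("CVE-2024-00002", 9.1));  // under threshold
    }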
Edit: for anyone reading this who's interested, the project: http://github.com/asfaload/asfaload (looking for feedback!)
Should have info on the CVE. Please leave some issues on the repository if you want to see more information on the actual dashboard/UI :)
Thanks for the feedback!
I'm curious whether AGPL shouldn't be more common (even though it's not a silver bullet), but MIT projects with foreseeable monetization needs for long-term survival never cease to show up, despite so much FOSS drama in the last couple of years.
My graduate research focused on common computer security misconceptions — one of the biggest being that open source is inherently insecure. The algorithms and systems we trust most are the ones open to public scrutiny. AES was selected through an open competition where every candidate was published for the world to attack. TLS, SHA-256, RSA — their security comes from transparency, not obscurity. I believe the same applies to software infrastructure.
Could a bigger player take this and run a competing service? Sure, MIT allows that. But I'd rather have the code out there being used, audited, and improved than restrict it to protect a business model I don't even have yet. If someone like AWS wraps this in a managed service, that honestly means I built something worth wrapping — and the open version still exists for anyone who wants to self-host.
I've thought about the Canonical model — paid support around a free product — and I might go there someday. But I don't have years of production use behind this yet. We all start somewhere. Right now I'd rather focus on making the software good and building a community around it than optimizing a license for a monetization strategy that doesn't exist.
AGPL is a valid choice and I respect projects that use it. But for me, MIT is a statement about what I actually care about — the code being out there for everyone.
I agree that the permissive extreme is indeed the most likely to attract major usage. On the other hand, its freedom is more fragile. All is well with each project striking its preferred balance on that axis.
BTW, if there's interest, I'd love to collaborate and integrate Packj [1] audit for malware scans.
1. Packj (https://github.com/ossillate-inc/packj) detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static+dynamic code analysis to scan for indicators of compromise (e.g., spawning of shell, use of SSH keys, network communication, use of decode+eval, etc). It also checks for several metadata attributes to detect bad actors (e.g., typo squatting).
I have been playing with the idea of using a single git repository to host packages: Java packages as an Ivy repository, and JavaScript packages as simply the contents of node_modules.
Does anybody do something similar?
* I say this as an engineer who has supported an authentication platform for a SaaS company for years, and I know that no two IdPs have implemented SAML the same way.
Now that you've implemented it, was there a reason you didn't go for such an approach, so that you'd have less to worry about as someone hosting something like this?
I think the approach of multi-format, multi-UI, and new (to you) programming language isn't optimal even with AI help. Any mistake that is made in the API design or internal architecture will impact time and cost since everything will need to be refactored and tested.
The approach I'm trying to take for my own projects is to create a polished vertical slice and then ask the AI to replicate it for other formats / vertical slices. Are there any immediate use cases to even use and maintain a UI?
So a few comments on the code:
- the feature list claims rate limiting, but the code seems unused outside of unit tests... if so, why wasn't this dead code detected?
- should probably follow Google/Buf style guide on protos and directory structure for them
- besides protos, we probably need to rely more on the OpenAPI spec as well for code generation to save on AI costs; I see the OpenAPI spec was only used as task input for the AI?
- if the AI isn't writing a postgres replacement for us, why have it write anything to do with auth as well? perhaps have setup instructions to use something like Keycloak or the Ory system?
Re: the vibe coding angle - the thing I keep running into is that standard scanners are tuned for human-written code patterns. Claude code is structurally different. More verbose, weirdly sparse on the explicit error handling that would normally trigger SAST rules. Auth code especially - it looks textbook correct and passes static analysis fine, but edge cases are where it falls apart. Token validation that works great except for malformed inputs, auth checks that miss specific header combinations, that kind of thing.
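To make that concrete, a toy example (mine, not from this codebase) of the pattern: both versions look reasonable and would sail through a typical SAST rule set, but the first panics on a malformed header:

    // Toy illustration of the failure mode, not code from the project.
    #[allow(dead_code)]
    fn naive_bearer(header: &str) -> &str {
        // Fine for "Bearer <token>" -- panics on a bare "Bearer".
        header.split_once(' ').unwrap().1
    }

    fn hardened_bearer(header: &str) -> Option<&str> {
        let (scheme, token) = header.split_once(' ')?;
        // Scheme must match case-insensitively, token must be non-empty.
        (scheme.eq_ignore_ascii_case("bearer") && !token.is_empty()).then_some(token)
    }

    fn main() {
        assert_eq!(hardened_bearer("Bearer abc123"), Some("abc123"));
        assert_eq!(hardened_bearer("Bearer"), None);    // naive_bearer panics here
        assert_eq!(hardened_bearer("Basic abc"), None); // wrong scheme rejected
    }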
The policy engine sounds flexible enough that people could add custom rules for AI-specific patterns? That'd be the killer feature tbh.
Would be cool if this could also support the existing Artifactory S3 backend format, so you could just point this at your existing Artifactory S3 bucket and migrate your DB over to it.
Congrats on launching!
I've opened a discussion for this here. Please jump on GitHub and add some comments, and maybe we can get this added :)
I think it's cool that the OSS version has everything, but I hope you're considering adding an actual enterprise tier for paid support, because in my past experience that's the killer feature large enterprises care about.
If your OSS service becomes a mission-critical service (which an artifact repository usually is), a large org will have to invest in a team that can operate and own it anyway.
If throwing some money at the vendor takes away some of the responsibility (= less time spent by in-house team on ops) then paying for an enterprise support SLA is a feature, not a bug.
It would be great to see more competition in the space even though my current team isn’t working with this problem!
Will your stuff be really open source?
My recommendation for testing out hands-free agentic coding: know that it is not fully hands-free. I find myself babysitting a lot of terminals going at once, like having a bunch of interns or junior developers.
It is important to plan, plan, plan.
I want to eventually switch and play with self hosted models but for most agentic stuff Claude is killing it in terms of results.
These tools can’t architect clean solutions that cut out massive chunks of code, and they can’t talk to users and decide whether what they’re building makes sense. For that, we need a human touch.
But coding agents grant insane leverage if they’re just told when they got it wrong and given a chance to get it right.
I think if you follow a few rules whenever making changes, and use all the latest tools (linters, security checkers, end-to-end frameworks, and any other helpful tooling), you can really make stuff happen.
I've spent quite a long time looking at artifact storage, both for work and for personal use and this project literally scratches that itch. So featureful (assuming they're not placeholders ;) ) and yes, Claude Code, but still - the proof will be in whether it works (and how clean the codebase feels - you're making it sound promising :D ).
Very excited to try this - well done :)
I've read the main readme, so excuse me if these are covered already, but key features and/or opportunities:
- backend supporting Azure (Nexus has this under Pro, though the community edition does at least support S3)
- a clear, navigable S3 structure that could be sorted by a human if needed, like the on-disk backend of Nexus 2 used to have, not like Nexus' current organisation/obfuscation (which would be understandable but for...)
- maintenance routines that actually work (Nexus' are a joke, with very limited features for both cleanup and the task set, leaving ever-growing detritus)
- automatically taking the latest from upstreams is a big problem in the npm world; it would be a perfect fit to introduce this kind of staging concept and a window on upstream (proxied) repos
- needs RESTful APIs and deep links to artifacts for ease of integration
- we end up proxying other sources of files in a web proxy since there's no easy "pass through" via Nexus where we don't want to copy the current files into our DB or S3 but just want to pass the latest to the consumer; a direct proxy feature with URL remapping would be cool
Things I'd have to play around with to understand what it does currently:
- whether it has proper proxy and group support; composition is completely essential
- whether the caching there is sensible (Nexus does a poor job when bad states get cached, though it's a hard problem)
- efficient (Maven) metadata generation (Nexus is abysmally slow)
- whether RBAC is clear over the repo structures (Nexus does OK here, except everything is repo-level AND the initial setup is very painful)
- P2 consumption looks to be a supported format, but P2 hosting was, I think, nerfed after Nexus v2.11, and some clients still use that
- RPMs added ("yum" to Nexus), but as with repo hierarchies, I'd need to be assured they can be nested and will correctly produce merged repomd.xml and the like so they function properly
Other comments:
- having the security scanning in an open source tool would be amazing
- it would be very hard to get clients to trust this without either a community and review process or a company (that "can be sued") behind it. I know it's very early days, but it's a bit chicken-and-egg: if I can't use this with clients, I won't use it for anything. Not that I am a valuable customer by myself, but I influence the decisions of clients who then need that support
Their security comes from transparency and years of public audit, not obscurity. The same principle applies to software. I see the legal argument for wanting a vendor to sue, and I've thought about something like Canonical's model for Ubuntu — offering paid support around a free product. But I don't have years of production use behind this yet. We all start somewhere. So for now, this stays open and free for everyone to use, and for me and others to maintain.
CLI with journal of instructions, TUI?
Fedora recently moved to managing packages in Forgejo, a fork of Gitea (itself a fork of Gogs, a clone of the old GitHub UI). https://news.ycombinator.com/item?id=45670055
Forgejo has an artifact registry for DEBs, RPMs, and APKs, and a container registry for OCI containers.
Any type of artifact can be stored in an OCI container image registry. Any type of artifact can be signed/attested to with a short-lived signing key from sigstore.dev's infrastructure or a self-hosted Rekor instance.
Native container tools like bootc store host system images as OCI container images.
From https://news.ycombinator.com/item?id=44991636 :
> bootc-image-builder, ublue-os/image-template, ublue-os/akmods, ublue-os/toolboxes w/ quadlets and systemd
There are streaming container standards to boot containers that haven't finished downloading yet, and container snapshot artifacts too; Seekable OCI, eStargz, Nydus: https://news.ycombinator.com/item?id=45270468
...
Forgejo can mirror git repos regularly or manually.
"Tell HN: GitHub will delete your private repo if you lose access to the original" re: `git clone --mirror` https://news.ycombinator.com/item?id=34603593
Python Packaging User Guide > Package index mirrors and caches > Existing projects: https://packaging.python.org/en/latest/guides/index-mirrors-...
> [ Cache, Mirror, Proxy ]
> [ mod_cache_disk (Apache), nginx_pypi_cache, pulp-python, ]
Pulp (Red Hat) mirrors and proxies a number of different types of packages. https://github.com/pulp
pulp_container, pulp_ostree, pulp_ansible, pulp_rpm, pulp_deb, pulp_npm, pulp_maven, pulp_r
pulp-operator for HA (avoiding a SPOF) with k8s: https://github.com/pulp/pulp-operator
From https://news.ycombinator.com/item?id=44320936 re: cosign, Sigstore, TUF, SLSA; you have to pass this to get docker to check container image signatures
DOCKER_CONTENT_TRUST=1
- integrate with Forgejo
- mirror git repos
- consider pulp's modular approach and deployment operator
- consider OCI for future packaging formats
- what SLSA recommends: check TUF, Sigstore, Trusted Publisher (OIDC), and GPG .asc signatures
And then content-addressable networking might also avoid some of the overhead and wasteful redundancy of checking the hash of each file in each signed package manifest.
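For what that could look like: a minimal sketch of naming blobs by digest (this assumes the sha2 crate; it's my illustration, not code from any of the projects above). A signed manifest then commits to addresses, and a store that has already verified a blob under that address never needs to re-hash it per manifest:

    use sha2::{Digest, Sha256};

    // Name a blob by its content: same bytes, same address, verified once.
    fn content_address(blob: &[u8]) -> String {
        format!("sha256:{:x}", Sha256::digest(blob))
    }

    fn main() {
        let blob = b"artifact bytes";
        // A signed manifest only needs to commit to this address; any
        // store holding a blob under it can skip per-file re-checking.
        println!("{}", content_address(blob));
    }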
On the other hand, it also shows that it took three weeks, so why should I use this instead of building a custom toolchain myself that is optimised for what I need and actually use? Trimming away the 45+ formats to the 5 or so that matter to my project. It raises the question - is 'enterprise' software doomed in favour of a proliferation of custom built services where everybody has something unique, or is the real value in the 'support' packages and SLAs? Will devs adopt this and put 'Artifact Keeper' on their CV, or will they put 'built an artifact toolchain with Claude'?
But then again, kudos to you for building something that can (and probably should) eat the lunch of the enterprise-grade tools that are simply unaffordable to small business, individual contractors, and underfunded teams. Truth be told, I'm not going to build my own, so this is certainly something I want to put in a sandbox and try out, and also this is inspirational and may finally convince me that I should give Claude a fair go if it's capable of being guided to create high quality output.
It doesn't use the 'unsafe' keyword anywhere, but that's not necessarily an indicator. It uses unsafe-libyaml, which is what it sounds like (a hacky port of libyaml) but is no longer maintained (archived on GitHub in March 2024); there may be better choices. An SBOM would highlight these dependencies better than me doing random searches through the code.
I'm not sure I'd have put a default in the OIDC callback to localhost, that's about the only thing I've seen in a quick 5-minute skim through. I do like the comments and the lack of emojis :-)
I too would like to know the process, if OP is willing to share.
I think adding this to your workflow helps, but you have to keep end-to-end testing in mind, because some changes can break things real fast.
My process is pretty plain, outside of paying Anthropic too much money a month. The only extra thing I am using is beads currently. I was using speckit and ralph-loop, but as of last week they don't seem to be needed. I think Anthropic is baking some of these tools into Claude Code.
The only extra stuff I am doing now is beads. https://github.com/steveyegge/beads
I was using speckit and ralph-loop, but I think Anthropic baked in that ralph-loop. It's basically a dumb while-true loop until you break on the condition.
Trust it not to leak credentials? No, that's something that is never taken for granted.
Trust it to hold a full history of uploaded binaries? That depends on the value of the releases. For incubator work, or web projects, or even App Store apps where releases are handed to those stores to manage, maybe there should be enough trust. I just wouldn't use it for code where I want access to many stable versions, and I wouldn't put it publicly on the web either; not that I would do so with Sonatype Nexus without vendor support and many safeguards. I think it'll earn trust over time, once folk are convinced to use it for real workloads.
There are a lot of forms of trust.
If you find an existing full-blown Artifactory alternative that is open source, let me know.