- I'm not sure this kind of product is really a foot in the door for creating new customers. Someone unwilling to create an actual account because they have no money, or because they just don't want to enter their card details, is not someone who's going to become a six-figures-per-year customer, which is the level at which those providers take notice.
- The free tier of AWS is actually quite generous. For my own needs I spend less than $10/year total, spread across dozens of accounts.
- If one wants to learn AWS, they MUST learn that there are no hard spend limits, and the only way to actually learn that is to be bitten by it as early as possible. It's better to overspend $5 at the beginning of the journey than to overspend $5k when going to prod.
- The main appeal of a local cloud is actually to make things easier and iterate faster, because you don't have to deal with the whole security layer. Since everything is local, you focus on using the services, period. If you relied on actual dev accounts instead, you'd first need to make sure everything is secure. With a local cloud you can skip all that. But then, if you decide to go live, you have to pay down this security debt, and more often than not that breaks things that "work on my computer".
- Localstack has actual support from AWS; that's why they have so many features and are able to keep up with service releases. I doubt this FOSS alternative will get that.
Localstack does have IAM emulation as part of the paid product. I'm intrigued to see how well this does at the same thing.
When you're running hundreds of integration test suites per day in CI pipelines, the free tier is irrelevant. You need fast, deterministic, isolated environments that spin up and tear down in seconds, not real AWS calls that introduce network latency, eventual consistency flakiness, rate limits, and costs that compound with every merge request.
It'd be great to just use AWS, but in practice it doesn't happen. Even if billing doesn't, limits plus no notion of namespacing will hit you very quickly in CI. It's also not practical to give every dev an AWS account; I did it with 200 people and it was OK, but it always caused management pain. The free tier also doesn't cover organizations.
> they MUST learn that there are no hard spend limits, and the only way to actually learn it, is to be bitten by it as early as possible
This is a bizarre take. "The best way to learn fire safety is to get burned." You can understand AWS billing without treating surprise charges as a rite of passage.
Security for dev accounts is not a big deal, just give each developer an individual account and set up billing alerts.
If your only focus is spending, yes.
Otherwise, a "not a big deal" dev account can quickly become the door to your whole org for hackers
RDS databases, DynamoDB, and S3? Much less so.
That's my point: I'm not the one setting it up and using it, it's the devs using it
And I'm not expecting them to know how to navigate a cloud provider securely.
So it's either setting the dev account with all the required guardrails in place, or using "local cloud" on their computer
If you want to use that for unit testing, then I think it would be better to mock the calls to AWS services. That way you test only your implementation, in an environment you control.
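One way to do that kind of mocking (a minimal sketch; `upload_report` and its S3 usage are hypothetical examples, not code from the project under discussion) is to inject the client and substitute a stdlib `unittest.mock.Mock` in tests, so only your own logic is exercised:

```python
from unittest.mock import Mock

# Hypothetical application code: the S3 client is injected,
# so tests can swap it out without touching AWS at all.
def upload_report(s3_client, bucket: str, key: str, body: bytes) -> str:
    s3_client.put_object(Bucket=bucket, Key=key, Body=body)
    return f"s3://{bucket}/{key}"

# Unit test: a Mock stands in for the boto3 client.
def test_upload_report():
    fake_s3 = Mock()
    uri = upload_report(fake_s3, "reports", "2024/q1.csv", b"a,b\n1,2\n")
    fake_s3.put_object.assert_called_once_with(
        Bucket="reports", Key="2024/q1.csv", Body=b"a,b\n1,2\n"
    )
    assert uri == "s3://reports/2024/q1.csv"

test_upload_report()
```

The trade-off is the one mentioned elsewhere in this thread: the mock only verifies that you called the SDK the way you *think* it works, not that the real service would accept it.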
If you want to use that for local development, then I think it would be better to provision a test environment (using Terraform or any other IaC tool). That way you don't run the risk of a bug slipping into prod because the emulator has a different behaviour than the real service.
The aws endpoint coverage is impressive for moto [1], which my team almost migrated to, but we liked our support contract with LocalStack.
Although I love localstack and am grateful for what they have done, I always thought that an open, community-driven solution would be much more suitable and would open a lot of doors for AWS engineers to contribute back. I'm certain that it's in their best interest to do so (especially as many of their popular products have local versions)
It’s a no-brainer to me as AI adoption continues to increase: local-first integration testing is a must and teams that are equipped to do so will be ahead of everyone else
[1] https://docs.aws.amazon.com/prescriptive-guidance/latest/pat...
Anyway, does anyone know of similar stuff for GCP? So far https://github.com/goccy/bigquery-emulator has helped me a lot in emulating BigQuery behaviour, but I can't find an emulator for the whole GCP environment.
No pull-requests, no real issues, it smells like it was auto-generated which is disappointing. Makes it harder to trust if you're going to test with "real data", how do we know it won't be sent elsewhere?
> how do we know it won't be sent elsewhere?
In the past, open source meant that you trusted, in theory, that someone else would notice and report these things. These days, though, you can just load up your LLM of choice and ask it to do a security audit. There are some unreliable ways to cheat this and LLM audits aren't magical, but it would be pretty hard to subvert this kind of audit.
There is no "this is the core, then we add S3, then we add RDS, then we add ..." history to view, and that seems both unnatural and surprising. Over half the commits are messing around with GitHub Actions and documentation.
So I’m shocked cloud providers haven’t just done this themselves, given how feasible it is with the right harness
Mentions CLAUDE.md and didn't even bother deleting it.
Whether their concerns are driven by curiosity, ethics, philosophy, or something else entirely is really immaterial to the question itself.
Shit code can be written with AI. Good code can also be written with AI. The question was only really asked to confirm biases.
Using an LLM as a tool and guiding it with care is different from tossing it a one-sentence prompt to copy localstack, expecting the bot to rewrite it for you, then pushing a thousand files in one go with typos in half the commit messages.
Longevity of products comes from the effort and care put into them. If you barely invest enough of either to even look at the output, you end up in the graveyard of "Show HN" slop: just a temporary project that fades away quickly.
The commits are sloppy and careless and the commit messages are worthless and zero-effort (and often wrong): https://github.com/hectorvent/floci/commit/1ebaa6205c2e1aa9f...
There are no code commits. The commits are all trying to fix ci.
The release page (changelog) links to invalid, wrong, useless, or otherwise unrelated code changes.
Not clearly stating that it was AI-written, and trying to hide the claude.md file.
The feature table has clearly not been reviewed, e.g. "Native binary" = "Yes" while Localstack is "No". There is no "native" binary; it's a packed JVM app, so Localstack is just as "native". "Security updates: Yes" .. entirely unproven.
I'll happily use it for personal development stuff if I ever decide to try cloud stuff in my free time, but it's hardly an alternative to established projects like LocalStack for serious business needs.
Not that any of it should matter to the people behind this project of course, they can run and make it in whatever way they want. They stand to lose nothing if I can't convince my boss and they probably shouldn't care.
So by the time you’re ready to push to staging you should be past the point of wanting to emulate AWS and instead pushing to UAT/test/staging (whatever your naming convention) AWS accounts.
Ideally you would have multiple non-production environments in AWS, and if your teams are well staffed, then your dedicated Cloud Platform / DevOps team should be locking developers out of these non-prod environments in the same way as they do production.
Bonus points if you can spin up ephemeral environments automatically for feature branches via CI/CD. But that’s often impractical / not pragmatic for your average cloud-based project.
But you can’t have every dev tweaking staging at the same time as they work. How can you debug things when the ground is shifting beneath you?
Ideally every dev has their own AWS account to play with, but that can be cost prohibitive.
A good middle ground is where 95% of work is done locally using emulators and staging is used for the remaining 5%.
One of the first things I do when building a new component is create a docker compose environment for it.
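For AWS-backed components, that compose file can bundle the emulator alongside the app. A minimal sketch (the service names are illustrative; the LocalStack image, its `SERVICES` variable, and edge port 4566 are its documented defaults, but swap in your emulator of choice):

```yaml
# docker-compose.yml — hypothetical layout for local-only development
services:
  aws-local:
    image: localstack/localstack   # or any other AWS emulator image
    ports:
      - "4566:4566"                # single edge port for all emulated services
    environment:
      - SERVICES=s3,sqs            # start only what this component uses
  app:
    build: .
    environment:
      - AWS_ENDPOINT_URL=http://aws-local:4566
    depends_on:
      - aws-local
```

`docker compose up` then gives every developer the same throwaway environment, and `docker compose down` wipes it.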
DIY mocks alone can get you somewhat there, but that relies on the developer having intimate knowledge of the AWS SDK under test, and it's very easy to mock the inputs and outputs wrong. I'd rather defer that to an emulation layer that does the mimicry better than my guess-and-check with 30m between attempts when my CloudFormation deployments ultimately fail...
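A common way to wire code up to such an emulation layer (sketched here; the helper name is mine, and it assumes you'd splat the dict into `boto3.client(**client_kwargs("s3"))`) is to make the endpoint an environment override, so the same code hits the emulator locally and real AWS in prod:

```python
import os

def client_kwargs(service: str) -> dict:
    """Build kwargs for a boto3 client; AWS_ENDPOINT_URL selects a local emulator."""
    kwargs = {"service_name": service}
    endpoint = os.environ.get("AWS_ENDPOINT_URL")  # e.g. http://localhost:4566
    if endpoint:
        kwargs["endpoint_url"] = endpoint
        # Emulators generally accept dummy credentials.
        kwargs["aws_access_key_id"] = "test"
        kwargs["aws_secret_access_key"] = "test"
        kwargs["region_name"] = os.environ.get("AWS_REGION", "us-east-1")
    return kwargs
```

With the variable unset, you get plain kwargs and normal AWS resolution; with it set, every client transparently targets the emulator, and no test code has to know the difference.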
At that speed you can treat it as disposable: fresh instance per test run, no shared state, no flaky tests from leftover S3 objects. That was never practical with LocalStack's cold start.
I recently discussed this with an adjacent org that didn't use a local environment at all outside of junit mocks for unit testing, and their deployment pipelines take over 45m per commit. Ridiculous.
Hiding bad system design behind another Docker container will not push you in the right direction, but the opposite.
In addition, this is definitely vibe-coded (50k LOC in one week), so I don't see how one can trust it at all.
However, there is a dedicated Dockerfile for creating a native image (Java-speak for "binary") that shouldn't require a JVM. I haven't tested running the binary myself, so it's possible there are dependencies I'm not aware of, but I'm pretty sure you can just grab the binary out of the container image and run it locally if you want to.
It'll produce a Linux image of course, if you're on macOS or Windows you'd have to create a native image for those platforms manually.
Downloading JDK, setting up the correct env variables, or running Docker, all this is just pain, compared to single binary approach.
On my Mac Docker runs Linux virtualized. It’s a resource hog.
Compare that with simple native binary.
Also localhost and presumably this are good for validating your logic before you throw in roles, network and everything else that can be an issue on AWS.
Confirm it runs in this, and 99% of the time the issue when you deploy is something in the AWS config, not your logic.
Exactly, especially when people are starting out and don't have a clear understanding of the inner workings of the system for whatever reason. Jobs are getting harder to find nowadays, and if you make one mistake during learning, you either pay or the learning stops.
I currently work with several AWS serverless stacks that are challenging or even impossible to integration-test locally. While Localstack provides a decent solution, it seems like a service that AWS should offer to enhance the developer experience. They'd also be in the best position to keep it current.
One issue is that local emulation runs into some big political rocks as soon as it gets good. To start with, the emulator is good enough but covers only a tiny surface of what people want, e.g. k8s and S3. Resistance here is about customers experiencing issues caused by gaps in fidelity vs the real environment, and the subsequent pain for the emulator product team. OK, fine.
But then you get customers who take your emulator and use it in places where AWS can't go, e.g. airgapped environments. They start asking for more serious features. But wait! Another team in the hyperscaler was already trying to solve this, for far more than zero dollars. Azure Stack. Azure Local. AWS Snowball. Now there are VPs shooting at you because you are, in their view, cannibalizing their revenue.
You might try to avoid this war by emphasizing the dev sandbox aspect, selling to developers only and making sure that you only talk about APIs and stuff. Problem is, the API surface is 90% of why the cloud is useful (the other ten percent being the assertion that you don't have to think about it, which is an increasingly untrue proposition, as the reams of SREs will tell you). So now you have an emulator for the most valuable part of Cloud, in the hands of people who know how to use it and are strongly incentivized and capable of making it better, all running locally. It's a very small step to making that commercial and wiping huge chunks of revenue out, as your VP will tell you as they sign your pink slip.
Talking to devs, the most common thing I hear re: emulation is a desire to be able to let rip on any service and not fear a giant bill. Since all clouds have budget tools, I wonder why this isn't possible today? Maybe there's a weakness in the planning tools rather than the post-use budgeting ones?
AWS don't want that support nightmare.
Great to see Localstack offset a bit thanks to ... AI driven shift left infrastructure tooling? This is a great trend.
You should build your software around abstractions and interfaces which are portable enough to work locally and in AWS or any other cloud and not just AWS specific APIs.
For example, IAM/S3/SQS policy evaluations can have profound impact on an application running but an abstraction wouldn’t help much here (assuming the developer is putting any thought into securing things). There just isn’t an alternative to these. If you’re rolling out an application using AWS-proprietary services, you have to get into vendor-specific functionality.
The only functional use of a tool like this to me would be to learn how to use AWS so that I can work for people who want me to use AWS. Would that not be to Amazon's benefit?
It could encourage more development and adoption and lead to being a net-positive for the revenue.
The myopia among us "online people" is assuming the number of voices here and elsewhere correlates to revenue.
It does not.