OpenAI models coming to Amazon Bedrock: Interview with OpenAI and AWS CEOs
94 points by translocator 2 hours ago | 28 comments
https://aws.amazon.com/bedrock/openai/
https://www.aboutamazon.com/news/aws/bedrock-openai-models
jasobake 2 hours ago
As someone who works at big tech and spends countless hours in meetings hoping to get some small feature coordinated for deployment across two teams, I can't imagine the amount of meetings and 6-pagers that were involved in running these models on Bedrock's hardware.
reply
33MHz-i486 56 minutes ago
At this level they just decide and spin up a SWAT team to execute it in a couple of weeks without politicking. The bureaucratic reviews are just for the lower levels, to keep them busy with feature scraps while they mostly do operations.
reply
o10449366 3 minutes ago
Lol, spinning up SWAT teams because someone high up decides "drop everything, this is my pet priority now" is politicking. It looks good for the leaders; meanwhile it's the engineers pulling the all-nighters and dealing with maintaining systems that are operationally compromised from day 0, because there's no proper planning/scoping involved beyond "Big Man says this needs to be done in 2 weeks."
reply
giancarlostoro 53 minutes ago
Depends on how it's implemented, but Amazon already added gpt-oss-20b, so if the model is similar enough to the OSS variant of GPT, it might not have been as complicated as you might think.
reply
londons_explore 37 minutes ago
I imagine there's lots of custom kernels and optimization...
OpenAI hasn't been publishing innovations for quite a while.
reply
spindump8930 57 minutes ago
Remember that models on different inference platforms won't necessarily give exactly the same results, adding another axis of non-determinism to development. Quantization, custom model-serving silicon, batching, or other inference optimizations can mean the hosted model performs differently from the original provider's :/
This paper isn't the exact same scenario, since it covers an auditable open-weight Llama model, but it shows the symptoms of this: https://arxiv.org/pdf/2410.20247
reply
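The quantization point above can be sketched with a toy example (all numbers invented; this is not any provider's real serving pipeline): when two candidate tokens have nearly tied scores, even a small loss of numeric precision can flip a greedy-decoding choice.

```python
# Toy illustration of quantization-induced non-determinism across hosts.
# The "logits" and the quantization step are made up for the example.

def quantize(x, step=0.05):
    """Round a value to a coarse grid, mimicking low-precision storage."""
    return round(x / step) * step

# Hypothetical next-token scores where two candidates are nearly tied.
logits = {"dog": 0.999, "cat": 1.021, "bird": 0.30}

# Full precision: "cat" narrowly wins the greedy pick.
full_precision_pick = max(logits, key=logits.get)

# After quantization, "dog" and "cat" collapse to the same value,
# and the tie-break lands on "dog" instead.
quantized = {tok: quantize(v) for tok, v in logits.items()}
quantized_pick = max(quantized, key=quantized.get)

print(full_precision_pick, quantized_pick)  # cat dog
```

Greedy decoding is the simplest case; with sampling, temperature, and batching effects layered on top, the divergence between two deployments of the "same" model only gets harder to reason about.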
bossyTeacher 2 minutes ago
Anyone who has used GPT-x via OpenAI vs Microsoft has experienced this very clearly.
reply
2001zhaozhao 17 minutes ago
The market might be increasingly hard on AI startups in general as enterprises adopt providers like Amazon Bedrock and refuse to sign other deals.
reply
nijave 47 minutes ago
This would be a nice compliance win: one less sub-processor, and all our data is already on AWS, so there's less worry about sending it off somewhere else.
reply
throw03172019 2 hours ago
OpenAI frontier models coming to Bedrock soon?
reply
karmasimida 2 hours ago
> Starting today, @awscloud and OpenAI are bringing the latest OpenAI models to Amazon Bedrock, launching Codex on Amazon Bedrock, and launching Amazon Bedrock Managed Agents, powered by OpenAI (all in limited preview). AWS and OpenAI will continue to bring the latest advances to Amazon Bedrock—so the models and agents you build with today continue to benefit from new breakthroughs as they arrive.
reply
echelon 49 minutes ago
This doesn't mean you have the raw model weights, right? That's still entirely hidden / opaque?
replyYou can just run "air gapped" inference?
Is this only of interest to enterprise customers already on AWS (who want "air gapped" behavior)? Is there any other use case for this?
This will be more expensive than calling OpenAI directly, right?
reply
kube-system 52 seconds ago
A lot of companies already have data processing agreements and compliance sign-off for using AWS.
If this ends up similar to Claude on Bedrock, it's the same price.
reply
londons_explore 35 minutes ago
This is for people who don't trust openAI with their data, but do trust Amazon.
But it's also for devs in a company who already have a blanket agreement with Amazon but would face an uphill battle signing an agreement with OpenAI.
We will see if this changes the equation, but it feels like OpenAI is pretty far behind and playing catch up on all fronts. Though to be honest, "pretty far behind" is like 2-8 weeks in the AI world, so it may not matter a ton, it's mostly perception. And for me and my information bubble, perception of OpenAI is rock-bottom due to Sam Altman. From appearing unethical to appearing unhinged with demands from fabs and everything else, I'm not a fan.
What OP is referring to is Anthropic aligning with corporate terms and conditions early, positioning themselves to be effectively resold by AWS rather than requiring orgs to procure them directly. This is huge in the enterprise world, because the processes for getting broad approval are generally far smaller and shorter for "just another AWS service" than for a whole new vendor.
reply