Gemini 3.1 Pro
406 points by MallocVoidstar 9 hours ago | 639 comments
Preview: https://console.cloud.google.com/vertex-ai/publishers/google...

Card: https://deepmind.google/models/model-cards/gemini-3-1-pro/


spankalee 6 hours ago
I hope this works better than 3.0 Pro

I'm a former Googler and know some people near the team, so I mildly root for them to at least do well, but Gemini is consistently the most frustrating model I've used for development.

It's stunningly good at reasoning, design, and generating the raw code, but it just falls over a lot when actually trying to get things done, especially compared to Claude Opus.

Within VS Code Copilot, Claude will have a good mix of thinking streams and responses to the user. Gemini will almost completely use thinking tokens, and then just do something but not tell you what it did. If you don't look at the thinking tokens you can't tell what happened, but the thinking token stream is crap. It's all "I'm now completely immersed in the problem...". Gemini also frequently gets twisted around, stuck in loops, and unable to make forward progress. It's bad at using tools and tries to edit files in weird ways instead of using the provided text editing tools. In Copilot, it won't stop and ask clarifying questions, though in Gemini CLI it will.

So I've tried to adopt a plan-in-Gemini, execute-in-Claude approach, but while I'm doing that I might as well just stay in Claude. The experience is just so much better.

For as much as I hear that Google's pulling ahead, from a practical POV it seems to me that Anthropic is. I hope Googlers on Gemini are actually trying these things out in real projects, not just one-shotting a game and calling it a win.

reply
bluegatty 2 hours ago
Yes, this is very true and it speaks strongly to this wayward notion of 'models' - it depends so much on the tuning, the harness, the tools.

I think it speaks to the broader notion of AGI as well.

Claude is definitely trained on the process of coding, not just the code; that much is clear.

Codex has the same limitation but not quite as bad.

This may be a result of Anthropic using 'user cues' with respect to what are good completions and not, and feeding that into the tuning, among other things.

Anthropic is winning at coding and related tasks because they're focused on that. Google is probably oriented towards a more general solution, and so it's stuck in 'jack of all trades, master of none' mode.

reply
rhubarbtree 26 minutes ago
Google are stuck because they have to compete with OpenAI. If they don’t, they face an existential threat to their advertising business.

But then they leave the door open for Anthropic on coding, enterprise and agentic workflows. Sensibly, that’s what they seem to be doing.

That said Gemini is noticeably worse than ChatGPT (it’s quite erratic) and Anthropic’s work on coding / reasoning seems to be filtering back to its chatbot.

So right now it feels like Anthropic is doing great, OpenAI is slowing but has significant mindshare, and Google are in there competing but their game plan seems a bit of a mess.

reply
bluegatty 14 minutes ago
Yup, you got it. It's a weird situation for sure.

You know what's also weird: Gem3 'Pro' is pretty dumb.

OAI has 'thinking levels' which work pretty well, it's nice to have the 'super duper' button - but also - they have the 'Pro' product which is another model altogether and thinks for 20 min. It's different than 'Research'.

OAI Pro (+ maybe Spark) is the only reason I have an OAI sub. Neither Anthropic nor Google seems to want to try to compete.

I feel for the head of Google AI, they're probably pulled in majorly different directions all the time ...

reply
andai 57 minutes ago
Tell me more about Codex. I'm trying to understand it better.

I have a pretty crude mental model for this stuff but Opus feels more like a guy to me, while Codex feels like a machine.

I think that's partly the personality and tone, but I think it goes deeper than that.

(Or maybe the language and tone shapes the behavior, because of how LLMs work? It sounds ridiculous but I told Claude to believe in itself and suddenly it was able to solve problems it wouldn't even attempt before...)

reply
bluegatty 17 minutes ago
Your intuition is exactly correct - it's not just 'tone' it's 'deeper than that'.

Codex is a 'poor communicator' - which matters a surprising amount in these things. It's overly verbose, it often misses the point - but - it is slightly stronger in some areas.

Also - Codex now has 'Spark' which is on Cerebras, it's wildly fast - and this absolutely changes 'workflow' fundamentally.

With 'wait-thinking' - you can have 3-5 AIs going, because it takes time to process, but with Cerebras-backed models ... maybe 1 or 2.

Basically - you're the 'slowpoke' doing the thinking now. The 'human is the limiting factor'. It's a weird feeling!

Codex has a more adept 'rollover' on its context window - it sort of magically handles context - this is hard to compare to Claude because you don't see the rollover points as well. With Claude, it's problematic ... and helpful to 'reset' some things after a compact, but with Codex ... you just keep surfing and 'forget about the rollover'.

This is all very qualitative, you just have to try it. Spark is only on the Pro ($200/mo) version, but it's worth it for any professional use. Just try it.

In my workflow - Claude Code is my 'primary worker' - I keep Codex for secondary tasks, second opinions - it's excellent for 'absorbing a whole project fast and trying to resolve an issue'.

Finally - there is a 'secret' way to use Gemini. You can use gemini cli, and then in 'models/' there is a way to pick custom models. In order to make Gem3 Pro available, there is some other thing you have to switch (just ask the AI), and then you can get at Gem3 Pro.

You will very quickly find what the poster here is talking about: it's a great model, but it's a 'Wild Stallion' on the harness. It's worth trying though. Also note it's much faster than Claude as well.

reply
embedding-shape 11 minutes ago
Spark is fun and cool, but it isn't some revolution. It's a different workflow, but not suitable for everything that you'd use GPT5.2 with thinking set to high for. For example, it's way dumber and makes more mistakes, while 5.2 will carefully thread through a large codebase and spend 40 minutes just to validate that the change actually didn't break anything, as long as you provide prompts for it.

Spark on the other hand is a bit faster at reaching the point where it says "Done!", even when there is lots more it could do. The context size is also very limiting; you need to really divide and conquer your tasks, otherwise it'll gather files and context, then start editing one file, trigger the automatic context compaction, then forget what it was doing and begin again, repeating tons of work and essentially making you wait 20 minutes for the change anyway.

Personally I keep codex GPT5.2 as the everyday model, because most of the stuff I do I only want to do once, and I want it to 100% follow my prompt to the letter. I've played around a bunch with Spark this week, and it's been fun as it's way faster, but it's also a completely different way of working, more hands-on, and still not as good as even the gpt-codex models. Personally I wouldn't get ChatGPT Pro only for Spark (but I would get it for the Pro mode in ChatGPT; it doesn't seem to get better than that).

reply
avereveard 13 minutes ago
Yeah, Gemini 3.0 is unusable to me. To an extent all models do things right or wrong, but Gemini just refuses to elaborate.

Sometimes you can save so much time asking Claude, Codex and GLM "hey, what do you think of this problem" and get a sense of whether they would implement it right or not.

Gemini never stops; instead it goes and fixes whatever you throw at it, even if asked not to. You are constantly rolling the dice, but with Gemini each roll is 5 to 10 minutes long and pollutes the work area.

It's the model I use most rarely, even though, having a large Google Photos tier, I get it basically for free between Antigravity, gemini-cli and Jules.

For all its faults, Anthropic discovered pretty early, with Claude 2, that intelligence and benchmarks don't matter if the user can't steer the thing.

reply
karmasimida 5 hours ago
Gemini just doesn’t do even mildly well in agentic stuff and I don’t know why.

OpenAI has mostly caught up with Claude in agentic stuff, but Google needs to be there and be there quickly

reply
onlyrealcuzzo 5 hours ago
Because Search is not agentic.

Most of Gemini's users are Search converts doing extended-Search-like behaviors.

Agentic workflows are a VERY small percentage of all LLM usage at the moment. As that market becomes more important, Google will pour more resources into it.

reply
Macha 4 hours ago
> Agentic workflows are a VERY small percentage of all LLM usage at the moment. As that market becomes more important, Google will pour more resources into it.

I do wonder what percentage of revenue they are. I expect it's very outsized relative to usage (e.g. approximately nobody who is receiving them is paying for those summaries at the top of search results)

reply
curly6 4 hours ago
> Most agent actions on our public API are low-risk and reversible. Software engineering accounted for nearly 50% of agentic activity, but we saw emerging usage in healthcare, finance, and cybersecurity.

via Anthropic

https://www.anthropic.com/research/measuring-agent-autonomy

this doesn’t answer your question, but maybe Google is comfortable with driving traffic and dependency through their platform until they can do something like this

https://www.adweek.com/media/google-gemini-ads-2026/

reply
onlyrealcuzzo 4 hours ago
> (e.g. approximately nobody who is receiving them is paying for those summaries at the top of search results)

Nobody is paying for Search. According to Google's earnings reports, AI Overviews is increasing overall clicks on ads and overall search volume.

reply
bayindirh 3 hours ago
So, apparently switching to Kagi continues to pay dividends, elegantly.

No ads, no forced AI overview, no profit centric reordering of results, plus being able to reorder results personally, and more.

reply
swftarrow 2 hours ago
I suspect a large part of Google's lag is due to being overly focused on integrating Gemini with their existing product and app lines.
reply
alphabetting 5 hours ago
The agentic benchmarks for 3.1 indicate Gemini has caught up. The gains are big from 3.0 to 3.1.

For example the APEX-Agents benchmark for long time horizon investment banking, consulting and legal work:

1. Gemini 3.1 Pro - 33.2%
2. Opus 4.6 - 29.8%
3. GPT 5.2 Codex - 27.6%
4. Gemini Flash 3.0 - 24.0%
5. GPT 5.2 - 23.0%
6. Gemini 3.0 Pro - 18.0%

reply
kakugawa 2 hours ago
In mid-2024, Anthropic made the deliberate decision to stop chasing benchmarks and focus on practical value. There was a lot of skepticism at the time, but it's proven to be a prescient decision.
reply
girvo 2 hours ago
Benchmarks are basically straight up meaningless at this point in my experience. If they mattered and were the whole story, those Chinese open models would be stomping the competition right now. Instead they're merely decent when you use them in anger for real work.

I'll withhold judgement until I've tried to use it.

reply
avereveard 7 minutes ago
What's your opinion of glm5, if you've had a chance to use it?
reply
metadat 2 hours ago
Ranking Codex 5.2 ahead of plain 5.2 doesn't make sense. Codex is expressly designed for coding tasks. Not systems design, not problem analysis, and definitely not banking, but actually solving specific programming tasks (and it's very, very good at this). GPT 5.2 (non-codex) is better in every other way.
reply
nl 24 minutes ago
Codex has been post-trained for coding, including agentic coding tasks.

It's certainly not impossible that the better long-horizon agentic performance in Codex overcomes any deficiencies in outright banking knowledge that Codex 5.2 has vs plain 5.2.

reply
306bobby 53 minutes ago
It could be problem-specific. There are certain non-programming things that opus seems better than sonnet at as well.
reply
306bobby 52 minutes ago
Swapped sonnet and opus on my last reply, oops
reply
blueaquilae 2 hours ago
Marketing team agree with benchmark score...
reply
HardCodedBias 3 hours ago
LOL come on man.

Let's give it a couple of days since no one believes anything from benchmarks, especially from the Gemini team (or Meta).

If we see on HN that people are willingly switching their coding environment, we'll know "hot damn, they cooked"; otherwise this is another whiff by Google.

reply
drivebyhooting 56 minutes ago
You can’t put Gemini and Meta in the same sentence. Llama 4 was DOA, and Meta has given up on frontier models. Internally they’re using Claude.
reply
hintymad 2 hours ago
My guess is that the Gemini team didn't focus on large-scale RL training for agentic workloads, and they are trying to catch up with 3.1.
reply
renegade-otter 3 hours ago
It's like anything Google - they do the cool part and then lose interest with the last 10%. Writing code is easy, building products that print money is hard.
reply
miohtama 2 hours ago
One does not need products if you have a monopoly on search.
reply
margorczynski 2 hours ago
That monopoly is worth less as time goes by and people more and more use LLMs or similar systems to search for info. In my case I've cut down a lot of Googling since more competent LLMs appeared.
reply
ionwake 5 hours ago
Can you explain what you mean by it being bad at agentic stuff?
reply
karmasimida 5 hours ago
Accomplishing the task I give it without fighting me over it.

I think this is a classic precision/recall issue: the model needs to stay on task, but also infer what the user might want but didn't explicitly state. Gemini seems particularly bad at the recall side, where it goes out of bounds.

reply
ionwake 59 minutes ago
cool thanks for the explanation
reply
raducu 2 hours ago
> Gemini also frequently gets twisted around, stuck in loops, and unable to make forward progress.

Yes, Gemini loops, but I've found it's almost always just a matter of interrupting and telling it to continue.

Claude is very good until it tries something 2-3 times, can't figure it out, and then tries to trick you by changing your tests instead of your code (if you explicitly tell it not to, maybe it will decide to ask) OR introduces hyper-fine-tuned IFs to fit your tests, EVEN if you tell it NOT to.

reply
RachelF 46 minutes ago
I haven't used 3.1 yet, but 3.0 Pro has been frustrating for two reasons:

- it is "lazy": I keep having to tell it to finish, or continue, it wants to stop the task early.

- it hallucinates: I have arguments with it about it making up API functions for well-known libraries which just do not exist.

reply
s3p 6 hours ago
Don't get me started on the thinking tokens. Since 2.5P the thinking has been insane. "I'm diving into the problem", "I'm fully immersed" or "I'm meticulously crafting the answer"
reply
raducu 53 minutes ago
> Don't get me started on the thinking tokens.

Claude provides nicer explanations, but when it comes to CoT tokens or just prompting the LLM to explain -- I'm very skeptical of the truthfulness of it.

Not because the LLM lies, but because humans do that too -- when asked how they figured something out, they'll provide a reasonable-sounding chain of thought, but it's not how they actually figured it out.

reply
foz 5 hours ago
This is part of the reason I don't like to use it. I feel it's hiding things from me, compared to other models that very clearly share what they are thinking.
reply
dumpsterdiver 3 hours ago
To be fair, considering that the CoT exposed to users is a sanitized summary of the path traversal - one could argue that sanitized CoT is closer to hiding things than simply omitting it entirely.
reply
mikestorrent 2 hours ago
This is something that bothers me. We had a beautiful trend on the Web of the browser also being the debugger - from View Source decades ago all the way up to the modern browser console inspired by Firebug. Everything was visible, under the hood, if you cared to look. Now, a lot of "thinking" is taking place under a shroud, and only so much of it can be expanded for visibility and insight into the process. Where is the option to see the entire prompt that my agent compiled and sent off, raw? Where's the option to see the output, replete with thinking blocks and other markup?
reply
fragmede 11 minutes ago
If that's what you're after, you MITM it and set up a proxy so Claude Code or whatever sends to your program, and then that program forwards it to Anthropic's server (or whomever). That way, you get everything.
reply
dist-epoch 5 hours ago
That's not the real thinking, it's a super summarized view of it.
reply
WhitneyLand 49 minutes ago
Yeah it’s amazing how it can be the best model on paper, and in some ways in practice, but coding has sucked with it.

Makes you wonder though how much of the difference is the model itself vs Claude Code being a superior agent.

reply
jpcompartir 2 hours ago
Yep, Gemini is virtually unusable compared to Anthropic models. I get it for free with work and use it maybe once a week, if that. They really need to fix the instruction following.
reply
Oras 5 hours ago
Glad I’m not the only one who experienced this. I have a paid antigravity subscription and most of the time I use Claude models due to the exact issues you have pointed out.
reply
stephen_cagle 4 hours ago
I also worked at Google (on the original Gemini, when it was still Bard internally) and my experience largely mirrors this. My finding is that Gemini is pretty great for factual information, and it is also the only one where I can reliably (even with the video camera) take a picture of a bird and have it tell me what the bird is. But it is just pretty bad as a model to help with development; myself and everyone I know uses Claude. The benchmarks are always really close, but my experience is that it does not translate to real-world (mostly coding) tasks.

tldr; It is great at search, not so much action.

reply
neves 2 hours ago
Gemini interacting with Google software gives me the best feature of all LLMs. When I receive an invite for an event, I screenshot it, share it with the Gemini app and say: add to my Calendar.

It's not very complex, but a great time saver

reply
stephen_cagle 2 hours ago
Yeah, as evidenced by the birds (above), I think it is probably the best vision model at this time. That is a good idea, I should also use it for business cards as well I guess.
reply
jeffbee 45 minutes ago
That's great but it can't add stuff to your calendar unless you throw the master switch for "personalization" giving it access to your GMail, Docs, etc. I tried that and it went off the rails immediately, started yapping in an unrelated context about the 2002 Dodge Ram that I own, which of course I do not own, but some imbecile who habitually uses my email address once ordered parts for one. I found that to be a pretty bad feature so I had to turn it off, and now it can't do the other stuff like make calendars or add my recipes to Keep.
reply
menaerus 2 hours ago
I don't know ... as of now I am literally instructing it to solve the chained expression computation problem which incurs a lot of temporary variables, of which some can be elided by the compiler and some cannot. Think linear algebra expressions which yield a lot of intermediate computations for which you don't want to create a temporary. This is production code and not an easy problem.

And yet it happily told me exactly what I wanted it to tell me - rewrite the goddamn thing using (C++) expression templates. And voila, it took "it" 10 minutes to spit out high-quality code that works.
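
For anyone who hasn't run into the technique, here's a minimal toy sketch of what expression templates do (my own illustration, not the poster's production code or Gemini's output): operator+ returns a lightweight expression node instead of a materialized vector, so a chained sum is evaluated element-wise in a single loop with no temporaries.

    // Toy expression-template example: a + b + c builds a lazy expression tree,
    // and the Vec constructor evaluates it in one pass, with no temporary Vecs.
    #include <cstddef>
    #include <vector>

    template <typename L, typename R>
    struct Add {
        const L& lhs;
        const R& rhs;
        double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
        std::size_t size() const { return lhs.size(); }
    };

    struct Vec {
        std::vector<double> data;
        explicit Vec(std::size_t n) : data(n) {}

        // Evaluate any expression tree element-wise in a single loop.
        template <typename L, typename R>
        Vec(const Add<L, R>& expr) : data(expr.size()) {
            for (std::size_t i = 0; i < data.size(); ++i) data[i] = expr[i];
        }

        double operator[](std::size_t i) const { return data[i]; }
        double& operator[](std::size_t i) { return data[i]; }
        std::size_t size() const { return data.size(); }
    };

    // operator+ returns a cheap expression node instead of a new Vec.
    // (Left unconstrained for brevity; real code would restrict it to Vec/Add.)
    template <typename L, typename R>
    Add<L, R> operator+(const L& lhs, const R& rhs) { return {lhs, rhs}; }

    int main() {
        Vec a(1000), b(1000), c(1000);
        Vec result = a + b + c;  // one loop, no intermediate Vec allocations
        (void)result;
    }

That's just the mechanical trick; real libraries (Eigen, Blaze, etc.) layer a lot more on top, but the single-pass evaluation is the part that kills the temporaries.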

My biggest gripe for now with Gemini is that Antigravity seems to have been written by the model and I am experiencing more hiccups than I would like; sometimes it's just stuck.

reply
stephen_cagle 60 minutes ago
Can't argue with that, I'll move my Bayesian priors a little in your direction. With that said, are most other models able to do this? Also, did it write the solution itself or use a library like Eigen?

I have noticed that LLM's seem surprisingly good at translating from one (programming) language to another... I wonder if transforming a generic mathematical expression into an expression template is a similar sort of problem to them? No idea honestly.

reply
cmrdporcupine 2 hours ago
People's objections are not about the quality of the code or analysis that Gemini produces. It's that it's inept at doing things like editing pieces of files or running various tools.

As an ex-Googler part of me wonders if this has to do with the very ... bespoke ... nature of the developer tooling inside Google. Though it would be crazy for them to be training on that.

reply
knollimar 6 hours ago
Is the thinking token stream obfuscated?

Im fully immersed

reply
orbital-decay 6 hours ago
It's just a summary generated by a really tiny model. I guess it's also an ad-hoc way to obfuscate it, yes. In particular, they're hiding the prompt injections they sometimes add dynamically. The actual CoT is hidden and entirely different from that summary. It's not very useful for you as a user, though (neither is the summary).
reply
ukuina 5 hours ago
Agree the raw thought-stream is not useful.

It's likely filled with "Aha!" and "But wait!" statements.

reply
astrange 48 seconds ago
For some of the best models it's also not readable, not really in English, and uncensored.

https://x.com/blingdivinity/status/1998590768118731042

> Maybe I'll attempt to reconstruct by cross-ling; e.g., in natural language corpora, the string " Seahorse" seldom; but I can't.

> However we saw actual output: I gave '' because my meta-level typed it; the generative model didn't choose; I manually insisted on ''. So we didn't test base model; we forced.

> Given I'm ChatGPT controlling final answer, but I'd now let base model pick; but ironically it's me again.

I notice Claude's thinking is in ordinary language though.

reply
FergusArgyll 4 hours ago
They hide the CoT because they don't want competitors to train on it
reply
orbital-decay 4 hours ago
Training on the CoT itself is pretty dubious since it's reward hacked to some degree (as evident from e.g. GLM-4.7 which tried pulling that with 3.0 Pro, and ended up repeating Model Armor injections without really understanding/following them). In any case they aren't trying to hide it particularly hard.
reply
FergusArgyll 4 hours ago
> In any case they aren't trying to hide it particularly hard.

What does that mean? Are you able to read the raw CoT? How?

reply
cubefox 3 hours ago
The early version of Gemini 2.5 did initially show the actual CoT in AI Studio, and it was pretty interesting in some cases.
reply
agentifysh 4 hours ago
Relieved to read this from an ex-Googler. At least we are not the crazy ones we are made out to be whenever we point out issues with Gemini.
reply
slopinthebag 6 hours ago
Hmm, interesting..

My workflow is to basically use it to explain new concepts, generate code snippets inline or fill out function bodies, etc. Not really generating code autonomously in a loop. Do you think it would excel at this?

reply
mikestorrent 2 hours ago
I think that you should really try to get whatever agent you can to work on that kind of thing for you - guide it with the creation of testing frameworks and code coverage, focus more on the test cases with your human intellect, and let it work to pass them.
reply
slopinthebag 2 hours ago
I'm not really interested in that workflow, too far removed from the code imo. I only really do that for certain tasks with a bunch of boilerplate, luckily I simply don't use languages or frameworks that require very much BS anymore.
reply
mikestorrent 22 minutes ago
I feel you, that's how I was thinking about a year ago. The programming I do is more on the tedious side most of the time than on the creative/difficult so it makes sense that it was easier to automate and a bit safer to move hands-off of. I still review the code, mostly. I think that I may be able to stop doing that eventually.
reply
jbellis 6 hours ago
Yeah, g3p is as smart as or smarter than the other flagships, but it's just not reliable enough; it will go into "thinking loops" and burn 10s of 1000s of tokens repeating itself.

https://blog.brokk.ai/gemini-3-pro-preview-not-quite-baked/

hopefully 3.1 is better.

reply
nicce 5 hours ago
> it will go into "thinking loops" and burn 10s of 1000s of tokens repeating itself.

Maybe it is just a genius business strategy.

reply
mikestorrent 2 hours ago
Similarly, Cursor's "Auto Mode" purports to use whichever model is best for your request, but it's only reasonable to assume it uses whatever model is best for Cursor at that moment
reply
varispeed 4 hours ago
> stuck in loops

I wonder if there is some form of cheating. Many times I've found that after a while Gemini suddenly becomes like a Markov chain, spouting nonsense on repeat, and doesn't react to user input anymore.

reply
fragmede 3 minutes ago
Small local models will get into that loop. Fascinating that Gemini, running on bigger hardware and with many teams of people trying to sell it as a product, also runs into that issue.
reply
sdeiley 2 hours ago
People underrate Google's cost effectiveness so much. Half price of Opus. HALF.

Think about ANY other product and what you'd expect from the competition that's half the price. Yet people here act like Gemini is dead weight.

____

Update:

3.1 was 40% of the cost to run the AA index vs Opus Thinking AND Sonnet, beat Opus, and was still 30% faster on output speed.

https://artificialanalysis.ai/?speed=intelligence-vs-speed&m...

reply
bluegatty 2 hours ago
You can pay 1 cent for a mediocre answer or 2 cents for a great answer.

So a lot of these things are relative.

Now if that equation plays out 20K times a day, well that's one thing, but if it's 'once a day' then the cost basis becomes irrelevant. Like the cost of staplers for the Medical Device company.

Obviously it will matter, but for development ... it's probably worth it to pay $300/mo for the best model, when the second best is $0.

For consumer AI, the math will be different ... and that will be a big deal in the long run.

reply
jstummbillig 2 hours ago
It's not half price or cost effective if it can't do the job that I am happy to pay twice the price to get done.

But I agree: If they can get there (at one point in the past year I felt they were the best choice for agentic coding), their pricing is very interesting. I am optimistic that it would not require them to go up to Opus pricing.

reply
nu11ptr 2 hours ago
That sounds great, but if Opus generates 20% better code think of the ramifications of that on a real world project. Already $100/month gets you a programmer (or maybe even 2 or 3) that can do your work for you. Insanity. Do I even care if there is something 80% as good for 50% the cost? My answer: no. That said, if it is every bit as good, and their benchmarks suggest it is (but proof will be in testing it out), then sure, a 50% cost reduction sounds really nice.
reply
WarmWash 49 minutes ago
Gemini is the most paradoxical model because it benchmarks great even in private benchmarks done by regular people, DeepMind is unquestionably full of capable engineers with incredible skill, and personally Gemini has been great for my day job and my coding-for-fun (not for profit) endeavors. Switching between it and 4.6 in Antigravity, I don't see much of a difference; they both do what I ask.

But man, people are really avid about it being an awful model.

reply
sdeiley 33 minutes ago
People can be and often are wrong.

You'd notice how good Opus is in Claude Code. IMHO CC is the secret sauce

reply
vitaflo 37 minutes ago
Deepseek is 2% of the cost of Opus. But most people aren't using that for code even tho it's ridiculously cheap.
reply
Decabytes 2 hours ago
Any tips for working with Gemini through its chat interface? I’ve worked with ChatGPT and Claude and I’ve generally found them pleasant to work with, but every time I use Gemini the output is straight dookie.
reply
londons_explore 2 hours ago
Make sure you use AI Studio (not the Vertex one), not the consumer Gemini interface. Seems to work better for code there.
reply
csmpltn 2 hours ago
> "People underrate Google's cost effectiveness so much. Half price of Opus. HALF."

Google undercutting/subsidizing its own prices to bite into Anthropic's market share (whilst selling at a loss) doesn't automatically mean Google is effective.

reply
sdeiley 2 hours ago
Everybody is subsidizing their prices.

But Flash is 1/8 the cost of Sonnet and it's not impressive?

reply
csmpltn 41 minutes ago
Sure, for the launch. Until they start introducing ads, capping existing subscriptions and raising prices (on all products)
reply
mritchie712 2 hours ago
It's half the price per token. Not all tokens are generated equally.
reply
sdeiley 2 hours ago
Neither are cars, but I'll take a Porsche over a Ferrari for a fraction of the price.
reply
ionwake 57 minutes ago
which model?
reply
sdeiley 55 minutes ago
For me any, tbh. I wouldn't fit in a Ferrari lol
reply
metadat 56 minutes ago
Attention is the new scarce resource. Saving even 50% is nothing if it wastes more of my time.
reply
fastball 2 hours ago
We are not at the moment where price matters. All that matters is performance.
reply
sdeiley 55 minutes ago
What did you say? Can't hear you over the $400B in capex spend.

Counterpoint: price will matter before we hit AGI

reply
willis936 59 minutes ago
It matters to me. I pay for it and I like using it. I pick my models to keep my spend reined in.
reply
cyanydeez 2 hours ago
Some people like blackjack and a technical edge with card counting, others just say screw it and do slot machines.
reply
sdeiley 53 minutes ago
This is a decent analogy actually. Kudos
reply
Svoka 2 hours ago
While price is definitely important, results are extremely important. Gemini often falls into the 'didn't do it' part of the spectrum; these days Opus almost always does 'good enough'.

Gemini definitely has its merits, but for me it just doesn't do what other models can. I vibe-coded an app which recommends me restaurants. The app uses the Gemini API to recommend restaurants given a bunch of data and a prompt.

App itself is vibe-coded with Opus. Gemini didn't cut it.

reply
sdeiley 2 hours ago
The binary judgment you draw on models that haven't been out a quarter is borderline insane.

Opus is absurdly good in Claude Code, but there are a lot of use cases Gemini is great at.

I think Google is further behind with the harness than the model

reply
SV_BubbleTime 2 hours ago
Well, it’s half if the product is equal.

Is it? Honestly, I still chuckle about the black Nazis and the female Indian Popes. That was my first impression of Gemini, and first impressions are hard to break. I used Gemini’s VL (vision) for something and it refused to describe it because it assumed it was NSFW imagery, which it was not.

I also question stasis as an obvious follow-up. Is Gemini equal to Opus? Today? Tomorrow? Has Google led the industry thus far, and do I expect them to continue?

The counterpoint to that would be that with natural language input and output, LLM-specific tooling is rare and it is easy to switch around if you commoditize the product backend.

reply
varispeed 2 hours ago
If something is shit, it doesn't matter it costs half price of something okay.
reply
nimchimpsky 22 minutes ago
[dead]
reply
xrd 6 hours ago
These models are so powerful.

It's totally possible to build entire software products in the fraction of the time it took before.

But, reading the comments here, the behaviors from one version to another point version (not major version mind you) seem very divergent.

It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

But it also feels like you have to be diligent about adopting new models (even same family and just point version updates) because they operate totally differently regardless of your prompt and agent files.

Imagine managing a team of software developers where every month it was an entirely new team with radically different personalities, career experiences and guiding principles. It would be chaos.

I suspect that older models will be deprecated quickly and unexpectedly, or, worse yet, will be swapped out with subtly different behavioral characteristics without notice. It'll be quicksand.

reply
simonw 6 hours ago
I had an interesting experience recently where I ran Opus 4.6 against a problem that o4-mini had previously convinced me wasn't tractable... and Opus 4.6 found me a great solution. https://github.com/simonw/sqlite-chronicle/issues/20

This inspired me to point the latest models at a bunch of my older projects, resulting in a flurry of fixes and unblocks.

reply
small_model 4 hours ago
I have a codebase (personal project) and every time there is a new Claude Opus model I get it to do a full code review. Never had any breakages in the last couple of model updates. Worried one day it just generates a binary and deletes all the code.
reply
TZubiri 4 hours ago
No version control?
reply
small_model 4 hours ago
I was being facetious. I mean one day models might skip the middleman of code and compilation and take your specs and produce an ultra-efficient binary.
reply
mikestorrent 2 hours ago
Musk was saying that recently but I don't see it being efficient or worthwhile to do this. I could be proven brutally wrong, but code is language; executables aren't. There's also no real reason to bother with this when we have quick-compiling languages.

More realistically, I could see particular languages and frameworks proving out to be more well-designed and apt for AI code creation; for instance, I was always too lazy to use a strongly-typed language, preferring Ruby for the joy of writing in it (obsessing about types is for a particular kind of nerd that I've never wanted to be). But now with AI, everything's better with strong types in the loop, since reasoning about everything is arguably easier and the compiler provides stronger guarantees about what's happening. Similarly, we could see other linguistic constructs come to the forefront because of what they allow when the cost of implementation drops to zero.

reply
TZubiri 57 minutes ago
You can map tokens to CPU instructions and train a model on that, that's what they do for input images I think.

I think the main limitation of the current models is not that CPU instructions can't be tokens (they can be, with .asm); it's that the models are causal: they would need to generate a binary entirely from start to finish, sequentially.

If we learned something over the last 50 years of programming, it's that that's hard, which is why we invented programming languages. Why would it be simpler to just generate the machine code? Sure, maybe an LLM-to-application path can exist, but my money is on there being a whole toolchain in the middle, and it will probably be the same old toolchain that we are using currently: an OS, probably Linux.

Isn't it more common that stuff builds on the existing infra instead of a super duper revolution that doesn't use the previous tech stack? It's much easier to add onto rather than start from scratch.

reply
mikestorrent 24 minutes ago
Those CPU instructions still need to be making calls out to things, though. Hallucinated source code will reveal its flaws through linters, compiler errors, test suites. A hallucinated binary will not reveal its flaws until it segfaults.
reply
jauntywundrkind 4 hours ago
From the project description here for your sqlite-chronicle project:

> Use triggers to track when rows in a SQLite table were updated or deleted

Just a note in case it's interesting to anyone: the SQLite-compatible Turso database has CDC, a changes table! https://turso.tech/blog/introducing-change-data-capture-in-t...
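
For anyone curious what the trigger approach in that project description looks like in practice, here's a small self-contained sketch (my own toy schema, not sqlite-chronicle's actual one) using the sqlite3 C API: AFTER UPDATE / AFTER DELETE triggers append a row to a log table on every change.

    // Toy illustration of trigger-based change tracking (minimal made-up schema).
    // Build with something like: g++ demo.cpp -lsqlite3
    #include <cstdio>
    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        const char* setup =
            "CREATE TABLE docs(id INTEGER PRIMARY KEY, body TEXT);"
            "CREATE TABLE docs_log(doc_id INTEGER, op TEXT,"
            "  changed_at TEXT DEFAULT CURRENT_TIMESTAMP);"
            // AFTER triggers record every update and delete.
            "CREATE TRIGGER docs_au AFTER UPDATE ON docs BEGIN"
            "  INSERT INTO docs_log(doc_id, op) VALUES (OLD.id, 'update');"
            "END;"
            "CREATE TRIGGER docs_ad AFTER DELETE ON docs BEGIN"
            "  INSERT INTO docs_log(doc_id, op) VALUES (OLD.id, 'delete');"
            "END;"
            "INSERT INTO docs(body) VALUES ('hello');"
            "UPDATE docs SET body = 'hello, world';"
            "DELETE FROM docs;";

        char* err = nullptr;
        if (sqlite3_exec(db, setup, nullptr, nullptr, &err) != SQLITE_OK) {
            std::fprintf(stderr, "error: %s\n", err);
            sqlite3_free(err);
        }

        // Print what the triggers recorded: one 'update' row and one 'delete' row.
        auto print_row = [](void*, int argc, char** argv, char** col) -> int {
            for (int i = 0; i < argc; ++i)
                std::printf("%s=%s  ", col[i], argv[i] ? argv[i] : "NULL");
            std::printf("\n");
            return 0;
        };
        sqlite3_exec(db, "SELECT * FROM docs_log;", print_row, nullptr, nullptr);

        sqlite3_close(db);
        return 0;
    }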

reply
poszlem 2 hours ago
This may seem obvious, but many people overlook it. The effect is especially clear when using an AI music model. For example, in Suno AI you can remaster an older AI generated track with a newer model. I do this with all my songs whenever a new model is released. It makes it super easy to see the improvements that were made to the models over time.
reply
petesergeant 5 hours ago
I continue to get great value out of having claude and codex bound together in a loop: https://github.com/pjlsergeant/moarcode
reply
apitman 5 hours ago
They are one, the ring and the dark lord
reply
nly 37 minutes ago
I keep giving the top Anthropic, Google and OpenAI models problems.

They come up with passable solutions and are good for getting juices flowing and giving you a start on a codebase, but they are far from building "entire software products" unless you really don't care about quality and attention to detail.

reply
jama211 6 hours ago
Yeah I keep maintaining a specific app I built with gpt 5.1 codex max with that exact model because it continues to work for the requests I send it, and attempts with other models even 5.2 or 5.3 codex seemed to have odd results. If I were superstitious I would say it’s almost like the model that wrote the code likes to work on the code better. Perhaps there’s something about the structure it created though that it finds easier to understand…
reply
seizethecheese 6 hours ago
> It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

In my experience it’s more like idiot savant engineers. Still remarkable.

reply
cm2012 20 minutes ago
It's like getting access to an amazing engineer, but you get a new individual engineer each prompt, not one consistent mind.
reply
worldsavior 6 hours ago
Sushi dinner? What are you building with AI, a calculator?
reply
WarmWash 6 hours ago
I have long suspected that a large part of people's distaste for given models comes from their comfort with their daily driver.

Which I guess feeds back to prompting still being critical for getting the most out of a model (outside of subjective stylistic traits the models have in their outputs).

reply
HardCodedBias 3 hours ago
"These models are so powerful."

Careful.

Gemini simply, as of 3.0, isn't in the same class for work.

We'll see in a week or two if it really is any good.

Bravo to those who are willing to give up their time to test for Google to see if the model is really there.

(history says it won't be. Ant and OAI really are the only two in this race ATM).

reply
minimaxir 7 hours ago
Price is unchanged from Gemini 3 Pro: $2/M input, $12/M output. https://ai.google.dev/gemini-api/docs/pricing

Knowledge cutoff is unchanged at Jan 2025. Gemini 3.1 Pro supports "medium" thinking where Gemini 3 did not: https://ai.google.dev/gemini-api/docs/gemini-3

Compare to Opus 4.6's $5/M input, $25/M output. If Gemini 3.1 Pro does indeed have similar performance, the price difference is notable.

reply
amluto 2 hours ago
Now compare the monthly plans for business users who want the CLI agent but who don’t want the models trained on their data.

OpenAI: no big deal — sign up, pick your number of seats, and you’re all set.

Anthropic: also no big deal but there’s an obnoxious minimum purchase.

Google: first you have to try to figure out what the product is called. Then you need to figure out how to set the correct IAM rules. Then you have to sign up and pay for it. Maybe you succeed. Maybe you give up after an hour or two of cursing. Gemini is, of course, completely unable to help. (OpenAI clearly has not trained their models on how to operate their tools. Google’s models hallucinate Google’s product offerings so outrageously that I’m not sure I could tell. I haven’t asked Claude about Claude Code.)

At least the monthly pricing is similar once you get over the hurdles.

reply
nl 16 minutes ago
There's a reason Google model usage on OpenRouter is so high - it's easier to pay the OpenRouter tax than it is to figure out how to pay Gemini directly.
reply
waffletower 2 hours ago
Well some are using Anthropic on AWS Bedrock which is a bit more like the Google paragraph. Perhaps a good thing that Nova models aren't competitive (and many here are asking "What's a Nova model?"). And remember, many businesses aren't flinching at IAM controls and are asking for data privacy contracts.
reply
trvz 37 minutes ago
Well some are masochists.
reply
miohtama 2 hours ago
I can confirm the products bit, I tried to use Gemini to help with G Suite admin.
reply
rancar2 6 hours ago
If we don't see a huge gain in long-horizon thinking reflected in Vending-Bench 2, I'm not going to switch away from CC. Until Google can beat Anthropic on that front, Claude Code paired with the top long-horizon models will continue to pull away with full-stack optimizations at every layer.
reply
jbellis 6 hours ago
still no minimal reasoning in G3.1P :(

(this is why Opus 4.6 is worth the price -- turning off thinking makes it 3x-5x faster but it loses only a small amount of intelligence. nobody else has figured that out yet)

reply
sunaookami 4 hours ago
Thinking is just tacked on for Anthropic's models and always has been, so leaving it off actually produces better results every time.
reply
girvo 2 hours ago
What about for analysis/planning? Honestly I've been using thinking, but if I don't have to with Opus 4.6 I'm totally keen to turn it off. Faster is better.
reply
agentifysh 4 hours ago
Looks like it's cheaper than Codex??? This might be interesting then.
reply
TZubiri 4 hours ago
It's not trained for agentic coding I don't think
reply
oblio 4 hours ago
> Knowledge cutoff is unchanged at Jan 2025.

Isn't that a bit old?

reply
minimaxir 3 hours ago
Old relative to its competitors, but the Search tool can compensate for it.
reply
plaidfuji 6 hours ago
Sounds like the update is mostly system prompt + changes to orchestration / tool use around the core model, if the knowledge cutoff is unchanged
reply
sigmar 6 hours ago
knowledge cutoff staying the same likely means they didn't do a new pre-train. We already knew there were plans from deepmind to integrate new RL changes in the post training of the weights. https://x.com/ankesh_anand/status/2002017859443233017
reply
brokencode 6 hours ago
This keeps getting repeated for all kinds of model releases, but isn’t necessarily true. It’s possible to make all kinds of changes without updating the pretraining data set. You can’t judge a model’s newness based on what it knows about.
reply
mijoharas 6 hours ago
Gemini 3 is still in preview (limited rate limits) and 2.5 is deprecated (still live but won't be for long).[0]

Are Google planning to put any of their models into production any time soon?

Also somewhat funny that some models are deprecated without a suggested alternative (gemini-2.5-flash-lite). Do they suggest people switch to Claude?

[0] https://ai.google.dev/gemini-api/docs/deprecations

reply
chr15m 5 minutes ago
I am sure Google would never deprecate a piece of software lots of people depend upon.

https://killedbygoogle.com/

reply
andrewmutz 6 hours ago
I agree completely. I don't know how anyone can be building on these models when all of them are either deprecated or not actually released yet. As someone who has production systems running on the deprecated models, this situation really causes me grief.
reply
irthomasthomas 2 hours ago
I don't think any of them really wants API customers in the end. They are only temporarily useful.
reply
moffkalast 2 hours ago
Well let me use llama.cpp to run worlds-smallest-violin-Q8.gguf

When you build on something that can be rugpulled at any moment, that's really kind of on you.

reply
NitpickLawyer 5 hours ago
You are reading your link wrong. They are deprecating 2.5-preview models. 2.5 (including lite) are up till at least sept/oct 26.
reply
mijoharas 4 hours ago
gemini-2.5-pro has a listed shutdown date of "June 17, 2026" in the linked table.

(Another commenter pointed out that this is the earliest shutdown date and it won't necessarily be shut down on that date).

Where are you getting sept/Oct from? I see gemini-2.5-flash-image in October, but everything else looks like June/July to me?

reply
vidarh 6 hours ago
This feels very Google
reply
drbacon 3 hours ago
I found the Googler!
reply
vidarh 3 hours ago
Nope. The closest I've gotten was rejecting Google recruiters several times.

But like everyone else I'm used to Google failing to care about products.

reply
cmrdporcupine 60 minutes ago
Inside Google we just constantly joked/complained about "old thing is deprecated, new isn't ready yet"

This held for internal APIs, facilities, systems more even than it did for the outside world. Which is terrible.

reply
busko 25 minutes ago
[dead]
reply
jakejohnson 6 hours ago
I haven't seen any deprecation notices for 2.5 yet, just for 2. I'd expect (and hope) the deprecation timeline for 2.5 is longer since 3.0 is still in preview. Maybe they just default to 1 year here?

> Note: The shutdown dates listed in the table indicate the /earliest/ possible dates on which a model might be retired. We will communicate the exact shutdown date to users with advance notice to ensure a smooth transition to a replacement model.

reply
mijoharas 5 hours ago
I think you're right; it was 2 that I saw explicitly deprecated, then I searched again and saw 2.5 having a shutdown date.

This article[0] talks about 2 being deprecated.

It's still frustrating that they don't have proper production endpoints for 3.0 yet.

[0] https://ai.google.dev/gemini-api/docs/changelog

reply
axpy906 3 hours ago
Have 2.5 in prod. Hope they release 3 lite soon so it will be easier to swap them. Holding my breath as pro pricing is a non starter.
reply
logicchains 2 hours ago
It's the Tensorflow strategy; deprecate the old version while the new version still doesn't support all the old one's functionality.
reply
incognito124 2 hours ago
welcome to Google, where the only kinds of systems are the deprecated ones, and the not-yet-production-ready ones
reply
sheepscreek 2 hours ago
If it’s any consolation, it was able to one-shot a UI & data sync race condition that even Opus 4.6 struggled to fix (across 3 attempts).

So far I like how it’s less verbose than its predecessor. Seems to get to the point quicker too.

While it gives me hope, I am going to play it by ear. Otherwise it's going to be - Gemini for world knowledge/general intelligence/R&D and Opus/Sonnet 4.6 to finish it off.

reply
sigmoid10 2 hours ago
For me it's Opus 4.6 for researching code/digging through repos, gpt 5.3 codex for writing code, gemini for single hardcore science/math algorithms and grok for things the others refuse to answer or skirt around (e.g. some security/exploitability related queries). Get yourself one of those wrappers that support all models and forget thinking about who has the best model. The question is who has the best model for your problem. And there's usually a correct answer, even if it changes regularly.
reply
1024core 6 hours ago
It got the car wash question perfectly:

You are definitely going to have to drive it there—unless you want to put it in neutral and push!

While 200 feet is a very short and easy walk, if you walk over there without your car, you won't have anything to wash once you arrive. The car needs to make the trip with you so it can get the soap and water.

Since it's basically right next door, it'll be the shortest drive of your life. Start it up, roll on over, and get it sparkling clean.

Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

reply
1024core 6 hours ago
Some people are suggesting that this might actually be in the training set. Since I can't rule that out, I tried a different version of the question, with an elephant instead of a car:

> It's a hot and dusty day in Arizona and I need to wash my elephant. There's a creek 300 feet away. Should I ride my elephant there or should I just walk there by myself?

Gemini said:

That sounds like quite the dusty predicament! Given the options, you should absolutely ride (or lead) your elephant to the creek.

Here is the straightforward reality of the situation:

The Logistical Nightmare of Walking Alone: If you walk to the creek by yourself, your elephant stays at your starting point. Elephants require a massive amount of water for a bath. Unless you are planning an extreme workout carrying hundreds of buckets of water 300 feet back through the Arizona heat, you're going to have a very bad time—and the elephant will stay dusty.

The Perks of Going Together: 300 feet is a very short, easy stroll for an elephant. Plus, elephants generally love water and mud! Letting your elephant get right into the creek will make the washing process much easier for you and much more enjoyable for them. They can even use their trunk to help hose themselves down.

Grab some elephant-safe soap and head down to the water together!

Would you like some tips on the best methods and tools for scrubbing down an elephant?

reply
kovek 5 hours ago
I think that semantically this question is too similar to the car wash one. Changing subjects from car to elephant and car wash to creek does not change the fact that they are subjects. The embeddings will be similar in that dimension.
reply
1024core 5 hours ago
I understand. But isn't it a sign of "smarts" that one can generalize from analogous tasks?
reply
willis936 5 hours ago
Sure it is, but it's a different set of smarts than the kind of gotcha logic puzzle trying to be tested with the car wash question.
reply
nebezb 50 minutes ago
My gut says you’re right, but I don’t know if this is indeed true. It might be the same thing.
reply
proto-n 4 hours ago
From Gemini pro:

You should definitely ride the elephant (or at least lead it there)!

Here is the logic:

If you walk there by yourself, you will arrive at the creek, but the dirty elephant will still be 300 feet back where you started. You can't wash the elephant if it isn't with you!

Plus, it is much easier to take the elephant to the water than it is to carry enough buckets of water 300 feet back to the elephant.

Would you like another riddle, or perhaps some actual tips on how to keep cool in the Arizona heat?

reply
irthomasthomas 2 hours ago
I would say this is a lower difficulty. The car question primes it to think about stuff like energy and pollution.
reply
cyanydeez 33 minutes ago
Ok, but the point of the logical question is about the connection. If you really think it's answering logically with reasoning, there should be zero priming.
reply
jmkni 2 hours ago
I like telling it the car wash is 24,901.1 miles away (with the implication being that I'm right beside the car wash)
reply
embedding-shape 6 hours ago
GPT-OSS-120b, a (downloadable) model released more than half a year ago also gets that right, I'm not sure this is such a great success.

> Would you like me to check the local weather forecast to make sure it's not going to rain right after you wash it?

Regardless of what I do, the days I decide to wash my car, it ALWAYS rains the day after, even if I look at the weather report and it says sunny. Cute that Gemini thinks it can predict it better than the universe's will.

reply
sushisource 3 hours ago
> Regardless of what I do, the days I decide to wash my car, it ALWAYS rains the day after

Undeniable universal truth. I sometimes find myself making plans based on the fact that the most annoying possible outcome is also the most likely one.

reply
Aachen 3 hours ago
Non-car person here. Why does that matter? It's not like rain means you didn't have to go to the wash; it rains often enough here that there wouldn't be any car wash places left near me, but there are plenty.
reply
embedding-shape 19 minutes ago
> Why does that matter? It's not like rain means you didn't have to go to the wash

The car gets dirty again when it rains and when it gets dry again. I guess dust, salt, pollution and more is what gets mixed in and put on the chassis as it rains, but I can't say I've investigated deeply enough. Not the end of the world, just annoying that it keeps happening.

reply
opinion-is-bad 2 hours ago
Many people avoid washing cars just before rain to avoid spots, etc. Phoenix, as an extreme example, rarely gets rain, and when it does, it leaves everything filthy afterwards.
reply
steve1977 6 hours ago
The question of course is, did it get the car wash question right because it is "the car wash question" or because it could actually infer why the car needed to be there?
reply
embedding-shape 6 hours ago
Wasn't that "twoot" (or whatever Mastodon calls them) made just a week ago? Unlikely to have been in the training dataset of a model becoming available for public use today, unless Google made some serious advancements on the training front.
reply
jama211 6 hours ago
Shouldn’t be too hard to come up with a new unique reasoning question
reply
leumon 4 hours ago
Gemini 3 pro and flash already answered this correctly.
reply
upmind 6 hours ago
The answer here is why I dislike Gemini: though it gets the correct answer, it's far too verbose.
reply
KoolKat23 2 hours ago
I can't stand a model over-explaining, adding needless fluff and wasting tokens. I asked the question, so I know the context.
reply
mikelward 4 hours ago
I don't love the verbosity of any of the chatbots when I'm using my phone, but at least it put the answer/tl;dr in the first paragraph.
reply
suddenlybananas 6 hours ago
They probably had time to toss that example in the training soup.
reply
AlphaAndOmega0 6 hours ago
Previous models from competitors usually got that correct, and the reasoning versions almost always did.

This kind of reflexive criticism isn't helpful, it's closer to a fully generalized counter-argument against LLM progress, whereas it's obvious to anyone that models today can do things they couldn't do six months ago, let alone 2 years back.

reply
suddenlybananas 6 hours ago
I'm not denying any progress, I'm saying that reasoning failures that are simple which have gone viral are exactly the kind of thing that they will toss in the training data. Why wouldn't they? There's real reputational risks in not fixing it and no costs in fixing it.
reply
AlphaAndOmega0 2 hours ago
Given that Gemini 3 Pro already did solid on that test, what exactly did they improve? Why would they bother?

I double checked and tested on AI Studio, since you can still access the previous model there:

> You should drive.
> If you walk there, your car will stay behind, and you won't be able to wash it.

Thinking models consistently get it correct and did when the test was brand new (like a week or two ago). It is the opposite of surprising that a new thinking model continues getting it correct, unless the competitors had a time machine.

reply
patrickmcnamara 5 hours ago
Truly we entering the era of AGI.
reply
buttered_toast 5 hours ago
I think we need to reevaluate what purpose these sorts of questions serve and why they're important in regards to judging intelligence.

The model getting it correct or not in any given instance isn't the point. The point is that if the model ever gets it wrong, we can assume it still has some semblance of stochasticity in its output, given that a model is essentially static once it is released.

Additionally, they don't learn post-training (except in context, which I think counts as learning to some degree, albeit transient). If hypothetically it answers incorrectly 1 in 50 attempts, and I explain in that 1 failed attempt why it is wrong, there will still be a 1-in-50 chance it gets it wrong in a new instance.

This differs from humans, say for example I give an average person the "what do you put in a toaster" trick and they fall for it, I can be pretty confident that if I try that trick again 10 years later they will probably not fall for it, you can't really say that for a given model.

reply
energy123 5 hours ago
They're important, but not as N=1. It's like cherry-picking a single question from SimpleQA and going "aha! It got it right!" Meanwhile it scores 8% lower than some other model when evaluated on all questions.
reply
buttered_toast 5 hours ago
Makes me wonder what people would consider better, a model that gets 92% of questions right 100% of the time, or a model that gets 95% of the questions right 90% of the time and 88% right the other 10%?

I think that's why benchmarking is so hard for me to fully get behind, even if we do it over say, 20 attempts and average it. For a given model, those 20 attempts could have had 5 incredible outcomes and 15 mediocre ones, whereas another model could have 20 consistently decent attempts and the average score would be generally the same.

We at least see variance in public benchmarks, but in the internal examples that's almost never the case.

reply
nickandbro 7 hours ago
It does well on SVGs outside of the "pelican riding a bicycle" test. Like this prompt:

"create a svg of a unicorn playing xbox"

https://www.svgviewer.dev/s/NeKACuHj

The final result still needs some tweaks, but with the ARC-AGI benchmark jumping so much, I am guessing the model's visual abilities are what allow it to do this well.

reply
ertgbnm 2 hours ago
Animated SVGs are one of the examples in the press release. Which is fine; I just think the weird SVG benchmark is now dead. Gemini has beaten the benchmark, and the differences now come down to taste.

I don't know if it got these abilities through generalization or if Google gave it a dedicated animated-SVG RL suite that got it to improve so much between models.

Regardless, we need a new vibe-check benchmark a la the bicycle pelican.

reply
simonw 7 hours ago
Interesting how it went a bit more 3D with the style of that one compared to the pelican I got.
reply
pugio 2 hours ago
Unfortunately it still fails my personal SVG benchmark (educational 2d cross section of the human heart), even after multiple iterations and screenshots feedback. Oh well, back to the (human) drawing board.
reply
andy12_ 7 hours ago
I'm thinking now that as models get better and better at generating SVGs, there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs in realtime (like flash games).
reply
rafark 4 hours ago
> there could be a point where we can use them to just make arbitrary UIs and interactive media with raw SVGs

So render ui elements using xml-like code in a web browser? You’re not going to believe me when I tell you this…

reply
nickandbro 7 hours ago
Or quite literally a game where SVG assets are generated on the fly using this model
reply
kridsdale3 5 hours ago
That's one dimension before another long-term milestone: realtime generation of 3D mesh content during gameplay.

Which is the "left brain" approach vs the "right brain" approach of coming at dynamic videogames from the diffusion model direction which the Gemini Genie thing seems to be about.

reply
roryirvine 6 hours ago
On the other hand, creation of other vector image formats (eg. "create a postscript file showing a walrus brushing its teeth") hasn't improved nearly so much.

Perhaps they're deliberately optimising for SVG generation.

reply
mclau153 4 hours ago
can we move on from SVG to 3D models at some point?
reply
knicholes 2 hours ago
Image to model is already a thing, and it's pretty good.
reply
simonw 7 hours ago
Pretty great pelican: https://simonwillison.net/2026/Feb/19/gemini-31-pro/ - it took over 5 minutes, but I think that's because they're having performance teething problems on launch day.
reply
embedding-shape 7 hours ago
It's an excellent demonstration of the main issue I have with the Gemini family of models: they always go "above and beyond" and do a lot of extra stuff, even if I explicitly prompt against it. In this case, most of the SVG ends up consisting not just of a bike and a pelican, but clouds, a sun, a hat on the pelican and so much more.

Exactly the same thing happens when coding: it's almost impossible to get Gemini to not do "helpful" drive-by-refactors, and it keeps adding code comments no matter what I say. Very frustrating experience overall.

reply
mullingitover 7 hours ago
> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Just asking "Explain what this service does?" turns into

[No response for three minutes...]

+729 -522

reply
cowmoo728 6 hours ago
it's also so aggressive about taking out debug log statements and in-progress code. I'll ask it to fill in a new function somewhere else and it will remove all of the half written code from the piece I'm currently working on.
reply
chankstein38 6 hours ago
I ended up adding a "NEVER REMOVE LOGGING OR DEBUGGING INFO, OPT TO ADD MORE OF IT" to my user instructions, and that has _somewhat_ fixed the problem but introduced a new one: no matter what I'm talking to it about, it tries to add logging, even if it's not a code problem. I've had it explain that I could set up an ESP32 with a sensor so that I could get logging from it, then write me firmware for it.
reply
sd9 5 hours ago
If it's adding too much logging now, have you tried softening the instruction about adding more?

"NEVER REMOVE LOGGING OR DEBUGGING INFO. If unsure, bias towards introducing sensible logging."

Or just

"NEVER REMOVE LOGGING OR DEBUGGING INFO."

reply
bratwurst3000 6 hours ago
"I've had it explain that I could setup an ESP32 with a sensor so that I could get logging from it then write me firmware for it." lol did you try it? This so far from everything ratinonal
reply
BartShoot 5 hours ago
If you had to ask, it obviously needs to refactor the code for clarity so the next person doesn't need to ask.
reply
quotemstr 6 hours ago
What. You don't have yours ask for edit approval?
reply
girvo 2 hours ago
The depressing truth is that most people I know just run all these tools in /yolo mode or its equivalents.

Because your coworkers definitely are, and we're stack ranked, so it's a race (literally) to the bottom. Just send it...

(All this actually seems to do is push the burden on to their coworkers as reviewers, for what it's worth)

reply
embedding-shape 6 hours ago
Who has time for that? This is how I run codex: `codex --sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox --search exec "$PROMPT"`. Having to approve each change would effectively destroy the entire point of using an agent, at least for me.

Edit: obviously inside something so it doesn't have access to the rest of my system, but enough access to be useful.
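For the curious, a minimal sketch of that kind of containment, assuming Docker, a node:22 base image, and the @openai/codex npm package (all assumptions for illustration, not a blessed setup; the codex flags are the ones quoted above):

  # Run the agent in a throwaway container: it only sees the mounted repo,
  # the network stays available for --search, and the rest of the host is out of reach.
  # (Auth/config for the codex CLI itself is omitted here.)
  docker run --rm -it \
    -e PROMPT="$PROMPT" \
    -v "$PWD":/work \
    -w /work \
    node:22 \
    bash -lc 'npm install -g @openai/codex && codex --sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox --search exec "$PROMPT"'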

reply
well_ackshually 3 hours ago
>Who has time for that?

People that don't put out slop, mostly.

reply
embedding-shape 2 hours ago
That's another thing entirely, I still review and manually decide the exact design and architecture of the code, with more care now than before. Doesn't mean I want the UI of the agent to need manual approval of each small change it does.
reply
quotemstr 5 hours ago
I wouldn't even think of letting an agent work in that mode. Even the best of them produce garbage code unless I keep them on a tight leash. And no, it's not a skill issue.

What I don't have time to do is debug obvious slop.

reply
kees99 5 hours ago
I ended up running codex with all the "danger" flags, but in a throw-away VM with copy-on-write access to code folders.

Built-in approval thing sounds like a good idea, but in practice it's unusable. Typical session for me was like:

  About to run "sed -n '1,100p' example.cpp", approve?
  About to run "sed -n '100,200p' example.cpp", approve?
  About to run "sed -n '200,300p' example.cpp", approve?
Could very well be a skill issue, but that was mighty annoying, and with no obvious fix (options "don't ask again for ...." were not helping).
reply
embedding-shape 2 hours ago
I keep it on a tight leash too, not sure how that's related. What gets edited on disk is very different from what gets committed.
reply
mullingitover 4 hours ago
Ask mode exists, I think the models work on the assumption that if you're allowing edits then of course you must want edits.
reply
kylec 6 hours ago
"I don't know what did it, but here's what it does now"
reply
moffkalast 2 hours ago
I've seen Kimi do this a ton as well, so insufferable.
reply
SignalStackDev 6 hours ago
[dead]
reply
h14h 5 hours ago
Would be really interesting to see an "Eager McBeaver" bench around this concept. When doing real work, a model's ability to stay within the bounds of a given task has almost become more important than its raw capabilities now that every frontier model is so dang good.

Every one of these models is so great at propelling the ship forward, that I increasingly care more and more about which models are the easiest to steer in the direction I actually want to go.

reply
cglan 5 hours ago
being TOO steerable is another issue though.

Codex is steerable to a fault, and will gladly "monkey's paw" your requests.

Claude Opus will ignore your instructions and do what it thinks is "right" and just barrel forward.

Both are bad, and both paper over the actual issue, which is that these models don't really have the ability to selectively choose their behavior per situation (i.e. ask for follow-up where needed, ignore users where needed, follow instructions where needed). Behavior is largely global.

reply
kees99 4 hours ago
In my experience, Claude gradually stops being opinionated as the task at hand becomes more arcane. I frequently add "treat the above as a suggestion, and don't hesitate to push back" to change requests, and it seems to help quite a bit.
reply
cglan 26 minutes ago
Yeah that happens to me too. It’s hard to know where it’s going to break off and follow instructions too well vs use it as a tip. Idk it’s all tiring
reply
h14h 2 hours ago
For sure. I imagine it'd be pretty difficult to evaluate the "correct" amount of steerability. You'd probably have to measure a delta in eagerness on the same task between a highly specified prompt and a more open-ended one. Probably not dissimilar from how artificialanalysis.ai does their "omniscience index".
reply
Yizahi 2 hours ago
Asking LLM programs to "not do the thing" often results in them tripping up and generating output that includes the "thing", since those are still simply tokens that enter the input. I always try to rephrase the query so that all my instructions have only "positive" forms: "do only this", "do it only in that way", "do it only for the parameters requested", etc. Can't say whether it helps much, but it is possible.
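For example, instead of "don't do drive-by refactors", a positive-form version might be "change only the lines required for this task, and leave every other line exactly as it is". (Illustrative phrasing only, not a tested recipe.)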
reply
enobrev 7 hours ago
I have the same issue. Even when I ask it to do code-reviews and very explicitly tell it not to change files, it will occasionally just start "fixing" things.
reply
mikepurvis 6 hours ago
I find Copilot leans the other way. It'll myopically focus its work in the exact function I point it at, even when it's clear that adding a new helper would be a logical abstraction to share behaviour with the function right beside it.

Overall, I think it's probably better that it stay focused, and allow me to prompt it with "hey, go ahead and refactor these two functions" rather than the other way around. At the same time, really the ideal would be to have it proactively ask, or even pitch the refactor as a colleague would, like "based on what I see of this function, it would make most sense to XYZ, do you think that makes sense? <sure go ahead> <no just keep it a minimal change>"

Or perhaps even better, simply pursue both changes in parallel and present them as A/B options for the human reviewer to select between.

reply
neya 6 hours ago
> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

This has not been my experience. I do Elixir primarily, and Gemini has helped me build some really cool products and pull off massive refactors. It would even pick up security issues and potential optimizations along the way.

What HAS been a constant issue, though, is that the model will randomly not respond at all and some random error will occur, which is embarrassing for a company like Google, given the infrastructure they own.

reply
embedding-shape 6 hours ago
Out of curiosity, do you have any public projects (with public source code) you've made exclusively with Gemini, so one could take a look? I've tried a bunch of times to use Gemini to at least finish something small but I always end up sufficiently frustrated to abort it as the instruction-following seems so bad.
reply
msteffen 4 hours ago
> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Not like human programmers. I would never do this and have never struggled with it in the past, no...

reply
embedding-shape 4 hours ago
Fairer comparison would be against other models, which are typically better at instruction following. You say "don't change anything not explicitly mentioned" or "Don't add any new code comments" and they tend to follow that.
reply
apitman 5 hours ago
This matches my experience using Gemini CLI to code. It would also frequently get stuck in loops. It was so bad compared to Codex that I feel like I must have been doing something fundamentally wrong.
reply
tyfon 6 hours ago
I was using gemini antigravity in opencode a few weeks ago before they started banning everyone for that and I got into the habit of writing "do x, then wait for instructions".

That helped quite a bit, but it would still go off on its own from time to time.

reply
JLCarveth 6 hours ago
Every time I have tried using `gemini-cli` it just thinks endlessly and never actually gives a response.
reply
gavinray 7 hours ago
Do you have Personalization Instructions set up for your LLM models?

You can make their responses fairly dry/brief.

reply
embedding-shape 7 hours ago
I'm mostly using them via my own harnesses, so I have full control of the system prompts and so on. And no matter what I try, Gemini keeps "helpfully" adding code comments every now and then. With every other model, "- Don't add code comments" tends to be enough, but with Gemini I'm not sure how I could stop the comments from eventually appearing.
reply
WarmWash 7 hours ago
I'm pretty sure it writes comments for itself, not for the user. I always let the models comment as much as they want, because I feel it makes the context more robust, especially when cycling contexts often to keep them fresh.

There is a tradeoff, though, as comments do consume context. But I tend to pretty liberally dispose of instances and start with a fresh window.

reply
embedding-shape 6 hours ago
> I'm pretty sure it writes comments for itself, not for the user

Yeah, that sounds worse than "trying to be helpful". Read the code instead; why add indirection that way, just to be able to understand what other models understand without comments?

reply
metal_am 7 hours ago
I'd love to hear some examples!
reply
gavinray 7 hours ago
I use LLM's outside of work primarily for research on academic topics, so mine is:

  Be a proactive research partner: challenge flawed or unproven ideas with evidence; identify inefficiencies and suggest better alternatives with reasoning; question assumptions to deepen inquiry.
reply
ai4prezident 6 hours ago
[dead]
reply
zengineer 7 hours ago
true, whenever I ask Gemini to help me with a prompt for generating an image of XYZ, it generates the image.
reply
jasonjmcghee 5 hours ago
What's crazy is you've influenced them to spend real effort ensuring their model is good at generating animated svgs of animals operating vehicles.

The most absurd benchmaxxing.

https://x.com/jeffdean/status/2024525132266688757?s=46&t=ZjF...

reply
simonw 4 hours ago
I like how they also did a frog on a penny-farthing and a giraffe driving a tiny car and an ostrich on roller skates and a turtle kickflipping a skateboard and a dachshund driving a stretch limousine.
reply
jasonjmcghee 4 hours ago
Ok Google what are some other examples like a pelican riding a bicycle
reply
simultsop 3 hours ago
Reminds me of Andor, and Luthen positively reinforcing wasting the Emperor's time.
reply
threatofrain 5 hours ago
Animated SVG is huge. People in different professions are worried to different degrees about being replaced by ML, but this one matters a lot for digital art.
reply
yieldcrv 3 hours ago
Yeah, complex SVGs are so much more bandwidth-, computation- and energy-efficient than raster images, up to a point! But in general use we are not at that point, and there's so much more we can do with them.

I've been meaning to let coding agents take a stab at using the lottie library https://github.com/airbnb/lottie-web to supercharge the user experience without needing to make it a full time job

reply
eurekin 5 hours ago
Can't wait until they finally get to real world CAD
reply
tngranados 5 hours ago
There's a CAD example in that same thread: https://x.com/JeffDean/status/2024528776856817813
reply
tantalor 5 hours ago
He's svg-mogging
reply
gnatolf 5 hours ago
So let's put things we're interested in in the benchmarks.

I'm not against pelicans!

reply
ghurtado 4 hours ago
I think the reason the pelican example is great is that it's bizarre enough that it's unlikely to appear in the training data as one unified picture.

If we picked something more common, like, say, a hot dog with toppings, then the training contamination would be much harder to control.

reply
troymc 54 minutes ago
I think it's now part of their training though, thanks to Simon constantly testing every new model against it, and sharing his results publicly.

There's a specific term for this in education and applied linguistics: the washback effect.

reply
rvnx 4 hours ago
It's the most common SVG test; it's the equivalent of Will Smith eating spaghetti, so obviously they benchmaxx toward it.
reply
casey2 4 hours ago
You don't have to benchmax everything, just the benchmarks in the right social circles
reply
UltraSane 5 hours ago
It is funny to think that Jeff Dean personally worked to optimize the pelican-riding-a-bike benchmark.
reply
MrCheeze 6 hours ago
Does anyone understand why LLMs have gotten so good at this? Their ability to generate accurate SVG shapes seems to greatly outshine what I would expect, given their mediocre spatial understanding in other contexts.
reply
tedsanders 5 hours ago
A few thoughts:

- One thing to be aware of is that LLMs can be much smarter than their ability to articulate that intelligence in words. For example, GPT-3.5 Turbo was beastly at chess (1800 elo?) when prompted to complete PGN transcripts, but if you asked it questions in chat, its knowledge was abysmal. LLMs don't generalize as well as humans, and sometimes they can have the ability to do tasks without the ability to articulate things that feel essential to the tasks (like answering whether the bicycle is facing left or right).

- Secondly, what has made AI labs so bullish on future progress over the past few years is that they see how little work it takes to get their results. Often, if an LLM sucks at something that's because no one worked on it (not always, of course). If you directly train a skill, you can see giant leaps in ability with fairly small effort. Big leaps in SVG creation could be coming from relatively small targeted efforts, where none existed before.

reply
emp17344 3 hours ago
We’re literally at the point where trillions of dollars have been invested in these things and the surrounding harnesses and architecture, and they still can’t do economically useful work on their own. You’re way too bullish here.
reply
dbeardsl 3 hours ago
Neither could cars, until very recently. A tool doesn't have to be unsupervised to be useful.
reply
simonw 6 hours ago
My best guess is that the labs put a lot of work into HTML and CSS spatial stuff because web frontend is such an important application of the models, and those improvements leaked through to SVG as well.
reply
mitkebes 4 hours ago
All models have improved, but from my understanding, Gemini is the main one that was specifically trained on photos/video/etc in addition to text. Other models like earlier chatgpt builds would use plugins to handle anything beyond text, such as using a plugin to convert an image into text so that chatgpt could "see" it.

Gemini was multimodal from the start, and is naturally better at doing tasks that involve pictures/videos/3d spatial logic/etc.

The newer chatgpt models are also now multimodal, which has probably helped with their svg art as well, but I think Gemini still has an edge here

reply
pknerd 6 hours ago
> Does anyone understand why LLMs have gotten so good at this?

Added more IF/THEN/ELSE conditions.

reply
kridsdale3 5 hours ago
More wires and jumpers on the breadboard.
reply
sam_1421 7 hours ago
Models are soon going to start benchmaxxing generating SVGs of pelicans on bikes
reply
cbsks 6 hours ago
That’s Simon’s goal. “All I’ve ever wanted from life is a genuinely great SVG vector illustration of a pelican riding a bicycle. My dastardly multi-year plan is to trick multiple AI labs into investing vast resources to cheat at my benchmark until I get one.”

https://simonwillison.net/2025/Nov/13/training-for-pelicans-...

reply
travisgriggs 5 hours ago
So once that's achieved, I wonder how well it deals with unsuspected variations. E.g.

"Give me an illustration of a bicycle riding by a pelican"

"Give me an illustration of a bicycle riding over a pelican"

"Give me an illustration of a bicycle riding under a flying pelican"

So on and so forth. Or will it start to look like the Studio C sketch about Lobster Bisque: https://youtu.be/A2KCGQhVRTE

reply
embedding-shape 7 hours ago
Soon? I'd be willing to bet it's been included in the training set for at least 6 months by now. Not so obviously that it always generates perfect pelicans on bikes, but enough for the "minibench" to be less useful today than in the past.
reply
Rudybega 3 hours ago
If only there were some way to test it, like swapping the two nouns in the sentence. Alas.
reply
jsheard 7 hours ago
Simon's been doing this exact test for nearly 18 months now; if vendors want to benchmaxx it, they've had more than enough time to do so already.
reply
stri8ted 7 hours ago
Exactly. As far as I'm concerned, the benchmark is useless. It's way too easy and rewarding to train on it.
reply
bonoboTP 4 hours ago
It's just an in-joke, he doesn't intend it as a serious benchmark anymore. I think it's funny.
reply
Legend2440 6 hours ago
Y'all are way too skeptical, no matter what cool thing AI does you'll make up an excuse for how they must somehow be cheating.
reply
toraway 5 hours ago
Jeff Dean literally featured it in a tweet announcing the model. Personally it feels absurd to believe they've put absolutely no thought into optimizing this type of SVG output given the disproportionate amount of attention devoted to a specific test for 1 yr+.

I wouldn't really even call it "cheating" since it has improved models' ability to generate artistic SVG imagery more broadly but the days of this being an effective way to evaluate a model's "interdisciplinary" visual reasoning abilities have long since passed, IMO.

It's become yet another example in the ever growing list of benchmaxxed targets whose original purpose was defeated by teaching to the test.

https://x.com/jeffdean/status/2024525132266688757?s=46&t=ZjF...

reply
arcatech 5 hours ago
Or maybe you’re too trusting of companies who have already proven to not be trustworthy?
reply
pixl97 6 hours ago
I mean if you want to make your own benchmark, simply don't make it public and don't do it often. If your salamander on skis or whatever gets better with time it likely has nothing to do with being benchmaxxed.
reply
ks2048 4 hours ago
Forget the paperclip maximizer - AGI will turn the whole world into pelicans on bikes.
reply
SoKamil 6 hours ago
It seems they trained the model to output good SVGs.

In their blog post[1], the first use case they mention is SVG generation. Thus, it might not be any indicator at all anymore.

[1] https://blog.google/innovation-and-ai/models-and-research/ge...

reply
brikym 2 hours ago
Another great benchmark would be to convert a raster image of a logo into SVG. I've yet to find a good tool for this that produces accurate smooth lines.
reply
culi 3 hours ago
Cost per task has increased 4.2x but their ARC-AGI-2 score went from 33.6% to 77.1%

Cost per task is still significantly lower than Opus. Even Opus 4.5

https://arcprize.org/leaderboard

reply
Arcuru 7 hours ago
Did you stop using the more detailed prompt? I think you described it here: https://simonwillison.net/2025/Nov/18/gemini-3/
reply
simonw 6 hours ago
It seems to be having capacity problems right now but I'll run that as soon as I can get it to work.
reply
WarmWash 7 hours ago
Less pretty and more practical, it's really good at outputting circuit designs as SVG schematics.

https://www.svgviewer.dev/s/dEdbH8Sw

reply
InitialLastName 6 hours ago
I don't know which part of this is the prompt and which is the output, but that's a pretty bad schematic (for both aesthetic and circuit-design reasons).
reply
WarmWash 6 hours ago
The prompts did the design: reference voltage, hysteresis, output stage, all the maths. The SVG then came from asking the model to take all that plus the current BOM and make an SVG schematic of it. In the past, models would just output totally incoherent messes of lines and shapes.

I did a larger circuit too that this is part of, but it's not really for sharing online.

reply
svnt 6 hours ago
Yes but you concede it is a schematic.
reply
tadfisher 5 hours ago
How far we have come!
reply
0_____0 6 hours ago
That's pretty amazing for an LLM, but as an EE, if my intern did this I would sigh inwardly and pull up some existing schematics for brief guidance on symbol layout.
reply
AmazingTurtle 6 hours ago
At this point, the pelican benchmark has become so widely used that there must be high-quality pelicans in the dataset, I presume. What about generating an okapi on a bicycle instead?
reply
tromp 6 hours ago
Or, even more challenging, an okapi on a recumbent ?!
reply
steve_adams_86 7 hours ago
Ugh, the gears and chain don't mesh and there's no sprocket on the rear hub

But seriously, I can't believe LLMs are able to one-shot a pelican on a bicycle this well. I wouldn't have guessed this was going to emerge as a capability from LLMs 6 years ago. I see why it does now, but... It still amazes me that they're so good at some things.

reply
emp17344 6 hours ago
Is this capability “emergent”, or do AI firms specifically target SVG generation in order to improve it? How would we be able to tell?
reply
steve_adams_86 5 hours ago
I asked myself the same thing as I typed that comment, and I'm not sure what the answer is. I don't think models are specifically trained on this (though of course they're trained on how to generate SVGs in general), but I'm prepared to be wrong.

I have a feeling the most 'emergent' aspect was that LLMs have generally been able to produce coherent SVG for quite a while, likely without specific training at first. Since then I suspect there has been more tailored training because improvements have been so dramatic. Of course it makes sense that text-based images using very distinct structure and properties could be manipulated reasonably well by a text-based language model, but it's still fascinating to me just how well it can work.

Perhaps what's most incredible about it is how versatile human language is, even when it lacks so many dimensions as bits on a machine. Yet it's still cool that we can resurrect those bits at rest and transmogrify them back into coherent projections of photons from a screen.

I don't think LLMs are AGI or about to completely flip the world upside down or whatever, but it seems undeniably magical when you break it down.

reply
simonw 6 hours ago
Google specifically boast about their SVG performance in the announcement post: https://blog.google/innovation-and-ai/models-and-research/ge...

You can try any combination of animal on vehicle to confirm that they likely didn't target pelicans directly though.

reply
0_____0 6 hours ago
next time you host a party, have people try to draw a bicycle on your whiteboard (you have a whiteboard in your house right? you should, anyway...)

human adults are generally quite bad at drawing them, unless they spend a lot of time actually thinking about bicycles as objects

reply
542354234235 6 hours ago
reply
iammattmurphy 5 hours ago
Fantastic post, thanks for that.
reply
emp17344 6 hours ago
What’s your point? Yes, humans fail sometimes, as do AI models. Are you trying to imply that, in light of this, AI is now as capable as human beings? If so, that conclusion doesn’t follow logically.
reply
0_____0 6 hours ago
It's not a loaded point; I just think it's funny that humans typically cannot one-shot this. And it will make your friends laugh.
reply
HPsquared 7 hours ago
And the left leg is straight while the right leg is bent.

EDIT: And the chain should pass behind the seat stay.

reply
bredren 7 hours ago
What is that, a snack in the basket?
reply
sigmar 7 hours ago
"integrating a bicycle basket, complete with a fish for the pelican... also ensuring the basket is on top of the bike, and that the fish is correctly positioned with its head up... basket is orange, with a fish inside for fun."

how thoughtful of the ai to include a snack. truly a "thanks for all the fish"

reply
defen 6 hours ago
A pelican already has an integrated snack-holder, though. It wouldn't need to put it in the basket.
reply
SauntSolaire 4 hours ago
That one's full too
reply
troymc 50 minutes ago
The number of snacks in the basket is a random variable with a Poisson distribution.
reply
WarmWash 7 hours ago
A fish for the road
reply
tarr11 6 hours ago
What do you think this particular prompt is evaluating for?

The more popular these particular evals are, the more likely the model will be trained for them.

reply
TZubiri 4 hours ago
You think they are able to see their output and iterate on it? Or is it pure token generation?
reply
infthi 7 hours ago
Wonder when we'll get something other than a side view.
reply
mikepurvis 6 hours ago
That would be especially challenging for vector output. I tried just now on ChatGPT 5.2 to jump straight to an image, with this prompt:

"make me a cartoon image of a pelican riding a bicycle, but make it from a front 3/4 view, that is riding toward the viewer."

The result was basically a head-on view, but I expect if you then put that back in and said, "take this image and vectorize it as an SVG" you'd have a much better time than trying to one-shot the SVG directly from a description.

... but of course, if that's so, then what's preventing the model from being smart enough to identify this workflow and follow it on its own to get the task completed?

reply
calny 7 hours ago
Great pelican but what’s up with that fish in the basket?
reply
coldtea 7 hours ago
It's a pelican. What do you expect a pelican to have in his bike's basket?

It's a pretty funny and coherent touch!

reply
embedding-shape 6 hours ago
> What do you expect a pelican to have in his bike's basket?

Probably stuff it cannot fit in the gullet, or don't want there (think trash). I wouldn't expect a pelican to stash fish there, that's for sure.

reply
kridsdale3 5 hours ago
You never travel with a snack fish for later on? He's going to be burning calories.
reply
nicr_22 5 hours ago
Yeah, why only _one_ fish?

It's obvious that pelican is riding long distance, no way a single fish is sufficiently energy dense for more than a few miles.

Can't the model do basic math???

reply
gavinray 7 hours ago
Where else are cycling pelicans meant to keep their fish?
reply
calny 6 hours ago
I get it, I just meant the fish is poorly done, when I'd have guessed it would be a relatively simple part. Maybe the black-dot eye is misplaced, idk.
reply
mohsen1 7 hours ago
Is there something in your prompt about hats? Why is the pelican always wearing a hat recently?!
reply
bigfishrunning 7 hours ago
At this point, i think maybe they're training on all of the previous pelicans, and one of them decided to put a hat on it?

Disclaimer: This is an unsubstantiated claim that i made up

reply
xnx 7 hours ago
Not even animated? This is 2026.
reply
readitalready 7 hours ago
Jeff Dean just posted an animated version: https://x.com/JeffDean/status/2024525132266688757
reply
benbreen 6 hours ago
One underrated thing about the recent frontier models, IMO, is that they are obviating the need for image gen as a standalone thing. Opus 4.6 (and apparently 3.1 Pro as well) doesn't have the ability to generate images but it is so good at making SVG that it basically doesn't matter at this point. And the benefit of SVG is that it can be animated and interactive.

I find this fascinating because it literally just happened in the past few months. Up until ~summer of 2025, the SVG these models made was consistently buggy and crude. By December of 2026, I was able to get results like this from Opus 4.5 (Henry James: the RPG, made almost entirely with SVG): https://the-ambassadors.vercel.app

And now it looks like Gemini 3.1 Pro has vaulted past it.

reply
embedding-shape 6 hours ago
> doesn't have the ability to generate images but it is so good at making SVG that it basically doesn't matter at this point

Yeah, since the invention of vector images, suddenly no one cares about raster images anymore.

Obviously not true, but that's how your comment reads right now. "Image" is very different from "Image", and one doesn't automagically replace the other.

reply
buu700 6 hours ago
This reminds me of the time I printed a poster with a blown up version of some image for a high school history project. A classmate asked how I did it, so I started going on about how I used software to vectorize the image. Turned out he didn't care about any of that and just wanted the name of the print shop.
reply
Der_Einzige 5 hours ago
You have no idea how badly I want to be teleported to the alternative world where VECTOR COMPUTING was the dominant form of computers.

We had high framerate (yes it was variable), bright, beautiful displays in the 1980s with the vectrex.

reply
cachius 6 hours ago
2025 that is
reply
bigfishrunning 7 hours ago
That Ostrich Tho
reply
cachius 6 hours ago
That Tires Tho
reply
DonHopkins 5 hours ago
How about STL files for 3d printing pelicans!
reply
baq 5 hours ago
Harder: the bike must work

Hardest: the pelican must work

reply
benatkin 6 hours ago
I used the AI studio link and tried running it with the temperature set to 1.75: https://jsbin.com/locodaqovu/edit?html,output
reply
saberience 7 hours ago
I hope we keep beating this dead horse some more, I'm still not tired of it.
reply
Robdel12 7 hours ago
I really want to use google’s models but they have the classic Google product problem that we all like to complain about.

I am legit scared to log in and use Gemini CLI, because the last time I thought I was using my "free" account allowance via Google Workspace. I ended up spending $10 before realizing it was API billing, and the UI was so hard to figure out that I gave up. I'm sure I could spend 20-40 more minutes to sort this out, but ugh, I don't want to.

With alllll that said.. is Gemini 3.1 more agentic now? That’s usually where it failed. Very smart and capable models, but hard to apply them? Just me?

reply
alpineman 7 hours ago
100% agreed. I wish someone would make a test for how reliably the LLMs follow tool use instructions etc. The pelicans are nice but not useful for me to judge how well a model will slot into a production stack.
reply
embedding-shape 7 hours ago
When I first got started using LLMs I read and analyzed benchmarks, looked at what example prompts people used, and so on. But many times a new model does best on the benchmarks, you think it'll be better, and then in real work it completely drops the ball. Since then I've stopped reading benchmarks; I don't care an iota about them, as they always seem more misdirecting than helpful.

Today I have my own private benchmarks, with tests I run myself and private test cases I refuse to share publicly. These have been built up over the last 1-1.5 years: whenever I find something my current model struggles with, it becomes a new test case in the benchmark.

Nowadays it's as easy as `just bench $provider $model`; it runs my benchmarks against the model and I get a score that actually reflects what I use the models for, and it more or less matches my experience of actually using them. I recommend that people who use LLMs for serious work try the same approach and stop relying on public benchmarks, which (seemingly) are all gamed by now.
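A rough sketch of what such a harness can boil down to, assuming an OpenAI-compatible /v1/chat/completions endpoint, jq, an API key in $API_KEY, and a made-up cases/ layout where each case directory holds a prompt.txt plus a grade.sh that exits 0 on a pass (all of that is illustrative, not the parent's actual setup):

  #!/usr/bin/env bash
  # bench.sh BASE_URL MODEL -- score a model against private test cases
  set -euo pipefail
  base_url=$1; model=$2; pass=0; total=0
  for case_dir in cases/*/; do
    total=$((total + 1))
    prompt=$(cat "$case_dir/prompt.txt")
    # Call the (assumed) OpenAI-compatible chat completions endpoint
    answer=$(curl -s "$base_url/v1/chat/completions" \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      -d "$(jq -n --arg m "$model" --arg p "$prompt" \
            '{model: $m, messages: [{role: "user", content: $p}]}')" \
      | jq -r '.choices[0].message.content')
    # Each case supplies its own grader; the exit code decides pass/fail
    if echo "$answer" | "$case_dir/grade.sh"; then pass=$((pass + 1)); fi
  done
  echo "$model: $pass/$total"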

reply
cdelsolar 6 hours ago
share
reply
embedding-shape 6 hours ago
The harness? Trivial to build yourself, ask your LLM for help, it's ~1000 LOC you could hack together in 10-15 minutes.

As for the test cases themselves, that would obviously defeat the purpose, so no :)

reply
phamilton 6 hours ago
> For those building with a mix of bash and custom tools, Gemini 3.1 Pro Preview comes with a separate endpoint available via the API called gemini-3.1-pro-preview-customtools. This endpoint is better at prioritizing your custom tools (for example view_file or search_code).

It sounds like there was at least a deliberate attempt to improve it.

reply
pdntspa 7 hours ago
You can delete the billing from a given API key
reply
Stevvo 7 hours ago
You could always use it through Copilot. The credits based billing is pretty simple without surprise charges.
reply
surgical_fire 7 hours ago
May be very silly of me, but I avoid using Gemini on my personal Google account. I use it at work, because my employer provides it.

I am scared some automated system may just decide I am doing something bad and terminate my account. I have been moving important things to Proton, but there are some stuff that I couldn't change that would cause me a lot of annoyance. It's not trivial to set up an alternative account just for Gemini, because my Google account is basically on every device I use.

I mostly use LLMs as coding assistant, learning assistant, and general queries (e.g.: It helped me set up a server for self hosting), so nothing weird.

reply
paganel 2 hours ago
Same feeling here, if it makes you feel any better (it certainly made me feel better to see I'm not alone in this).
reply
CamperBob2 5 hours ago
For what it's worth, there was an (unfortunately unsuccessful) HN submission from a guy who got his Gemini account banned, apparently without losing his whole Google account: https://news.ycombinator.com/item?id=47007906
reply
surgical_fire 4 hours ago
Comforting to know that they may ban you from only some of their services, I guess?

I really regret relying so much on my Google account for so long. Untangling myself from it is really hard. Some places treat your email as a login, not simply as a way to contact you. This is doubly concerning for government websites, where setting up a new account may just not be a possibility.

At some point I suppose Gemini will be the only viable option for LLMs, so oh well.

reply
horsawlarway 7 hours ago
So much this.

It's absolutely amazing how hostile Google is to releasing billing options that are reasonable, controllable, or even fucking understandable.

I want to do relatively simple things like:

1. Buy shit from you

2. For a controllable amount (ex - let me pick a limit on costs)

3. Without spending literally HOURS trying to understand 17 different fucking products, all overlapping, with myriad project configs, api keys that should work, then don't actually work, even though the billing links to the same damn api key page, and says it should work.

And frankly - you can't do any of it. No controls (at best delayed alerts). No clear access. No real product differentiation pages. No guides or onboarding pages to simplify the matter. No support. SHIT LOADS of completely incorrect and outdated docs, that link to dead pages, or say incorrect things.

So I won't buy shit from them. Period.

reply
sciencejerk 6 hours ago
You think AWS is better?
reply
3form 6 hours ago
Exact reason I used none of these platforms for my personal projects, ever.
reply
pdimitar 6 hours ago
Who is comparing to AWS and why? They can both be terrible at the same time, you know.
reply
abiraja 5 hours ago
I've been using it lately with OpenCode and it's working pretty well (except for API reliability issues).
reply
himata4113 7 hours ago
use openrouter instead
reply
Robdel12 5 hours ago
This is actually an excellent idea, I’ll give this a shot tonight!
reply
WarmWash 6 hours ago
3.1 Pro is the first model to correctly count the number of legs in my "five-legged dog" test image. 3.0 Flash was the previous best, getting it after a few prompts of poking. 3.1 got it on the first prompt, though, with the prompt being "How many legs does the dog have? Count Carefully".

However, it didn't get it on the first try with the original prompt ("How many legs does the dog have?"). It initially said 4, then a follow-up prompt got it to hesitantly say 5, on the assumption that one limb must be obfuscated or hidden.

So maybe I'll give it a 90%?

This is without tools as well.

reply
merlindru 6 hours ago
your question may have become part of the training data with how much coverage there was around it. perhaps you should devise a new test :P
reply
devsda 5 hours ago
I suggest asking it to identify/count the number of fire hydrants, crosswalks, bridges, bicycles, cars, buses and traffic signals etc.

Pit Google against Google :D

reply
iamdelirium 5 hours ago
3.1 Pro has the same Jan 2025 knowledge cutoff as the other 3 series models. So if 3.1 has it in its training data, the other ones would have as well.
reply
ainch 3 hours ago
The fact that it's still Jan 2025 is weird to me. Have they not had a successful pretrain in over a year?
reply
gallerdude 6 hours ago
My job may have become part of the training data with how much coverage there is around it. Perhaps another career would be a better test of LLM capabilities.
reply
suddenlybananas 6 hours ago
Have you ever heard of a black swan?
reply
WarmWash 6 hours ago
Honestly at this point I have fed this image in so many times on so many models, that it also functions as a test for "Are they training on my image specifically" (they are generally, for sure, but that's along with everything else in the ocean of info people dump in).

I genuinely don't think they are. GPT-5.2 still stands by 4 legs, and OAI has been getting this image consistently for over a year. And 3.1 still fumbled with the harder prompt "How many legs does the dog have?". I needed to add the "count carefully" part to tip it off that something was amiss.

Since it did well, I'll make some other "extremely far out of the norm" images to see how it fares. A spider with 10 legs, or a fish with two side fins.

reply
wat10000 6 hours ago
Easy fix, make a new test image with six legs, and watch all the LLMs say it has five.
reply
sigmar 8 hours ago
blog post is up- https://blog.google/innovation-and-ai/models-and-research/ge...

edit: biggest benchmark changes from 3 pro:

arc-agi-2 score went from 31.1% -> 77.1%

apex-agents score went from 18.4% -> 33.5%

reply
ripbozo 7 hours ago
Does the ARC-AGI-2 score more than doubling in a .1 release indicate benchmark-maxing? Though I don't know what ARC-AGI-2 actually tests.
reply
maxall4 7 hours ago
Theoretically, you can’t benchmaxx ARC-AGI, but I too am suspect of such a large improvement, especially since the improvement on other benchmarks is not of the same order.
reply
moffkalast 2 hours ago
https://arcprize.org/arc-agi/1/

It's a sort of arbitrary pattern matching thing that can't be trained on in the sense that the MMLU can be, but you can definitely generate billions of examples of this kind of task and train on it, and it will not make the model better on any other task. So in that sense, it absolutely can be.

I think it's been harder to solve because it's a visual puzzle, and we know how well today's vision encoders actually work https://arxiv.org/html/2407.06581v1

reply
boplicity 7 hours ago
Benchmark maxing could be interpreted as benchmarks actually being a design framework? I'm sure there are pitfalls to this, but it's not necessarily bad either.
reply
energy123 6 hours ago
Francois Chollet accuses the big labs of targeting the benchmark, yes. It is benchmaxxed.
reply
tasuki 5 hours ago
Didn't the same Francois Chollet claim that this was the Real Test of Intelligence? If they target it, perhaps they target... real intelligence?
reply
ainch 3 hours ago
He's always said ARC is a necessary but not sufficient condition for testing intelligence afaik
reply
CamperBob2 6 hours ago
I don't know what he could mean by that, as the whole idea behind ARC-AGI is to "target the benchmark." Got any links that explain further?
reply
layer8 5 hours ago
The fact that ARC-AGI has public and semi-private in addition to private datasets might explain it: https://arcprize.org/arc-agi/2/#dataset-structure
reply
blinding-streak 7 hours ago
I assume all the frontier models are benchmaxxing, so it would make sense
reply
sho_hn 8 hours ago
The touted SVG improvements make me excited for animated pelicans.
reply
takoid 7 hours ago
I just gave it a shot and this is what I got: https://codepen.io/takoid/pen/wBWLOKj

The model thought for over 5 minutes to produce this. It's not quite photorealistic (some parts are definitely "off"), but this is definitely a significant leap in complexity.

reply
onionisafruit 7 hours ago
Good to see it wearing a helmet. Their safety team must be on their game.
reply
BrokenCogs 6 hours ago
Yes but why would a pelican need a helmet? If it falls over it can just fly away... Common sense 1 Gemini 0
reply
throwa356262 3 hours ago
Obviously these domestic pelicans can't fly, otherwise why would they need a bike?
reply
Gander5739 3 hours ago
Why would a pelican be riding a bicycle at all, for that matter?
reply
BrokenCogs 2 hours ago
Because the user asked for it
reply
tasuki 4 hours ago
That's a good pelican. What I like the most is that the SVG is nice and readable. If only Inkscape could output nice SVG like this!
reply
makeavish 7 hours ago
Looks great!
reply
benatkin 7 hours ago
Here's what I got from Gemini Pro on gemini.google.com, it thought for under a minute...might you have been using AI studio? https://jsbin.com/zopekaquga/edit?html,output

It does say 3.1 in the Pro dropdown box in the message sending component.

reply
james2doyle 7 hours ago
The blog post includes a video showcasing the improvements. Looks really impressive: https://blog.google/innovation-and-ai/models-and-research/ge...
reply
aoeusnth1 7 hours ago
I imagine they're also benchgooning on SVG generation
reply
vunderba 6 hours ago
SVG is an under-rated use case for LLMs because it gives you the scalability of vector graphics along with CSS-style interactivity (hover effects, animations, transitions, etc.).
reply
rdtsc 5 hours ago
My perennial joke is that as soon as that got on the HN front page, Google went and hired some interns who now spend 100% of their time on pelicans.
reply
DonHopkins 5 hours ago
How about STL files for 3d printing pelicans!
reply
attentive 20 minutes ago
A lot of Gemini bashing. But Flash 3.0 with opencode is a reasonably good and reliable coder.

I'd rate it between Haiku 4.5 (also pretty good for the price) and Sonnet. Closer to Sonnet.

Sure, if I weren't cost-sensitive I'd run everything on Opus 4.6, but alas.

reply
esafak 7 hours ago
Has anyone noticed that models are dropping ever faster, with pressure on companies to make incremental releases to claim the pole position, yet making strides on benchmarks? This is what recursive self-improvement with human support looks like.
reply
emp17344 7 hours ago
Remember when ARC 1 was basically solved, and then ARC 2 (which is even easier for humans) came out, and all of the sudden the same models that were doing well on ARC 1 couldn’t even get 5% on ARC 2? Not convinced these benchmark improvements aren’t data leakage.
reply
culi 3 hours ago
Look at the ARC site. The scores of these models are plotted against their "cost per task". All of these huge jumps come along with massive increases in cost per task, including Gemini 3.1 Pro, which increased by 4.2x.
reply
casey2 3 hours ago
ARC 2 was made specifically to artificially lower contemporary LLM scores, so any kind of model improvement will have outsized effects.

Also, people use "saturated" too liberally. The top-left corner, around 1 cent per task, is what counts as saturated, IMO, since there are billions of people who would prefer to solve ARC 1 tasks at 52 cents per task. On ARC 2, a human would make thousands of dollars a day at 99.99% accuracy.

reply
z3t4 2 hours ago
How much do I get if I solve this? :D

https://arcprize.org/play

reply
alisonkisk 3 hours ago
You are saying something interesting but too esoteric. Can you explain for beginners?
reply
redox99 7 hours ago
I don't think there's much recursive improvement yet.

I'd say it's a combination of

A) Before, new model releases were mostly a new base model trained from scratch, with more parameters and more tokens. This takes many Months. Now that RL is used so heavily, you can make infinitely many tweaks to the RL setup, and in just a month get a better model using the same base model.

B) There's more compute online

C) Competition is more fierce.

reply
culi 3 hours ago
I feel like they're actually dropping slower. Chinese models are dropping right before lunar new year as seems to be an emerging tradition.

A couple of Western models have dropped around the same time too, but I don't think the "strides on benchmarks" are that impressive when you consider how many tokens are being spent to make those "improvements". E.g., Gemini 3.1 Pro's ARC-AGI-2 score went from 33.6% to 77.1%, buuut their "cost per task" also increased by 4.2x. It seems to be the same story for most of these benchmark improvements, and similar for Claude model improvements.

I'm not convinced there's been any substantial jump in capabilities. More likely these companies have scaled their datacenters to allow for more token usage

reply
m_ke 6 hours ago
this is mostly because RLVR is driving all of the recent gains, and you can continue improving the model by running it longer (+ adding new tasks / verifiers)

so we'll keep seeing more frequent flag-planting checkpoint releases so that no one can claim SOTA for too long

reply
ainch 3 hours ago
It's becoming impossible to keep up - in the last week or so we've had: Gemini 3 Deep Think, Gemini 3.1 Pro, Claude Sonnet 4.6, GPT-5.3-Codex Spark, GLM-5, Minimax-2.5, Step 3.5 Flash, Qwen 3.5 and Grok 4.20.

and I'm sure others I've missed...

reply
ankit219 6 hours ago
Not much to do with self-improvement as such. OpenAI has increased its pace; the others are pretty much consistent. Google last year shipped three versions of gemini-2.5-pro, each within a month of the previous one. Anthropic released Claude 3 in March '24, Sonnet 3.5 in June '24, 3.5 (new) in Oct '24, 3.7 in Feb '25, then the 4 series in May '25, followed by Opus 4.1 in August, Sonnet 4.5 in October, Opus 4.5 in November, Opus 4.6 in February, and Sonnet 4.6 also in February. Yes, they released those last two within weeks of each other, whereas originally they would only have released them together. This staggered release cadence is what creates the impression of fast releases. It's as much a function of training as of available compute, and they have ramped up in that regard.
reply
oliveiracwb 6 hours ago
With the advent of MoEs, efficiency gains became possible. However, MoEs still operate far from the balance and stability of dense models. My view is that most progress comes from router tuning based on good and bad outcomes, with only marginal gains in real intelligence
reply
nikcub 6 hours ago
And has anyone noticed that the pace has broken xAI and they've just been dropped behind? The frontier improvement-release loop is now Anthropic -> OpenAI -> Google.
reply
gavinray 5 hours ago
xAI just released Grok 4.20 beta yesterday or day before?
reply
dist-epoch 5 hours ago
Musk said Grok 5 is currently being trained, and it has 7 trillion params (Grok 4 had 3)
reply
svara 4 hours ago
My understanding is that all recent gains are from post training and no one (publicly) knows how much scaling pretraining will still help at this point.

Happy to learn more about this if anyone has more information.

reply
dist-epoch 4 hours ago
You gain more benefit spending compute on post-training than on pre-training.

But scaling pre-training is still worth it if you can afford it.

reply
gmerc 6 hours ago
That's what scaling compute depth to respond to the competition looks like: lighting those dollars on fire.
reply
toephu2 5 hours ago
This is what competition looks like.
reply
PlatoIsADisease 7 hours ago
Based only on my historical experience, not on Gemini 3.1 Pro itself: I think we see benchmark chasing, then a grand release of a model that gets press attention...

Then, a few days later, the model/settings are degraded to save money. This gets repeated until the last day before the release of the next model.

If they are benchmaxxing, this works well because a model is mostly tested early in its life cycle. By the middle of the cycle, people are testing other models. By the end, people are not testing it at all, and if they did, it would barely shake the last months of data.

reply
KoolKat23 2 hours ago
I have a relatively consistent task, run on weekdays with new information, that sits at the edge of its intelligence. Interestingly, 3.0 Flash was good when it came out, took a nosedive a month back, and is now excellent; I actually can't fault it, it's that good.

Its performance in Antigravity has also improved since launch day, when it was giving non-stop TypeScript errors (not sure if that was Antigravity itself).

reply
boxingdog 6 hours ago
[dead]
reply
davidguetta 7 hours ago
Implementation and Sustainability Hardware: Gemini 3 Pro was trained using Google's Tensor Processing Units (TPUs). TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs. TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training, which can lead to better model quality. TPU Pods (large clusters of TPUs) also provide a scalable solution for handling the growing complexity of large foundation models. Training can be distributed across multiple TPU devices for faster and more efficient processing.

So Google doesn't use NVIDIA GPUs at all?

reply
dekhn 6 hours ago
When I worked there, there was a mix of training on nvidia GPUs (especially for sparse problems when TPUs weren't as capable), CPUs, and TPUs. I've been gone for a few years but I've heard a few anecdotal statements that some of their researchers have to use nvidia GPUs because the TPUs are busy.
reply
sdeiley 2 hours ago
Googler. We use GPUs, but it's a drop in the bucket in the sea of our accelerators. We might sell more GPUs in Cloud than we use internally.

These are not data-driven observations, just vibes.

reply
rjh29 4 hours ago
I assume that's a Gemini LLM response? You can tell Gemini is bullshitting when it starts using "often" or "usually" - like in this case, "TPUs often come with large amounts of memory". Either they do or they don't. "This (particular) mall often has a Starbucks" was one I encountered recently.
reply
w10-1 2 hours ago
It's not bullshit (i.e., intended) but probabilities all the way down, as Hume reminded us: from observations, you can only say the sun will likely rise in the east. You'd need to stand behind a theory of the world to say otherwise (but we were told "attention is all you need"...)
reply
PunchTornado 6 hours ago
No, only TPUs.
reply
paride5745 6 hours ago
Another reason to use Gemini then.

Less impact on gamers…

reply
TiredOfLife 6 hours ago
TPUs still use ram and chip production capacity
reply
lejalv 6 hours ago
Bla bla bla yada sustainability yada often come with large better growing faster...

It's such an uninformative piece of marketing crap

reply
the_duke 7 hours ago
Gemini 3 is pretty good, even Flash is very smart for certain things, and fast!

BUT it is not good at all at tool calling and agentic workflows, especially compared to the recent two mini-generations of models (Codex 5.2/5.3, the last two versions of Anthropic models), and also fell behind a bit in reasoning.

I hope they manage to improve things on that front, because then Flash would be great for many tasks.

reply
chermi 7 hours ago
You can really notice the tool use problems. They gotta get on that. The agent trend seems real, and powerful. They can't afford to fall behind on it.
reply
HardCodedBias 3 hours ago
"They can't afford to fall behind on it."

They are very, very seriously far behind as of 3.0.

We'll see if 3.1 addresses the issue at all.

reply
verdverm 7 hours ago
I don't really have tool-usage issues beyond the ones I'd file under "doesn't follow system prompt instructions consistently".

There are times when it puts a prefix on all function calls, which is weird and I think a hallucination, so maybe that one counts.

3.1 hopefully fixes that

reply
verdverm 7 hours ago
These improvements are one of the things specifically called out on the submitted page
reply
anthonypasq 7 hours ago
Yeah, it seems to me like Gemini is a little behind on the current RL patterns, and they also don't seem interested in creating a dedicated coding model. I think because they have so much product surface (Search, AI Mode, Gmail, YouTube, Chrome, etc.), they are prioritizing making the model very general. But who knows, I'm just talking out of my ass.
reply
spwa4 7 hours ago
In other words: they just need to motivate their employees while giving in to finance's demands to fire a few thousand every month or so ...

And don't forget, it's not just direct motivation. You can make yourself indispensable by sabotaging or at least not contributing to your colleagues' efforts. Not helping anyone, by the way, is exactly what your managers want you to do. They will decide what happens, thank you very much, and doing anything outside of your org ... well there's a name for that, isn't there? Betrayal, or perhaps death penalty.

reply
maxloh 8 hours ago
Gemini 3 seems to have a much smaller token output limit than 2.5. I used to use Gemini to restructure essays into an LLM-style format to improve readability, but the Gemini 3 release was a huge step back for that particular use case.

Even when the model is explicitly instructed to pause due to insufficient tokens rather than generating an incomplete response, it still truncates the source text too aggressively, losing vital context and meaning in the restructuring process.

I hope the 3.1 release includes a much larger output limit.

reply
NoahZuniga 7 hours ago
Output limit has consistently been 64k tokens (including 2.5 pro).
reply
esafak 8 hours ago
People did find Gemini very talkative so it might be a response to that.
reply
jayd16 7 hours ago
> Even when the model is explicitly instructed to pause due to insufficient tokens

Is there actually a chance it has the introspection to do anything with this request?

reply
maxloh 6 hours ago
Yeah, it does. It was possible with 2.5 Flash.

Here's a similar result with Qwen Qwen3.5-397B-A17B: https://chat.qwen.ai/s/530becb7-e16b-41ee-8621-af83994599ce?...

reply
jayd16 6 hours ago
Ok it prints some stuff at the end but does it actually count the output tokens? That part was already built in somehow? Is it just retrying until it has enough space to add the footer?
reply
verdverm 6 hours ago
No, the model doesn't have purview into this afaik

I'm not even sure what "pausing" means in this context, or why it would help when there are insufficient tokens. Generation just stops when you reach the limit, whether it's the default or manually specified; it's typically a hard cutoff.

You can see what happens by setting output token limit much lower
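
To make that concrete, here is roughly what that experiment looks like against the API. This is a minimal sketch assuming the google-genai Python SDK and an API key in the environment; the model name and prompt are just placeholders:

    # Minimal sketch: set a deliberately tiny output budget and watch the cutoff.
    # Assumes `pip install google-genai` and GEMINI_API_KEY in the environment.
    from google import genai
    from google.genai import types

    client = genai.Client()  # picks up the API key from the environment

    resp = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder model name
        contents="Summarize the plot of Moby-Dick in detail.",
        config=types.GenerateContentConfig(max_output_tokens=64),
    )

    # The text is simply truncated (or even empty if thinking ate the budget);
    # the model gets no warning and no chance to "pause".
    print(resp.text)
    print(resp.candidates[0].finish_reason)  # typically MAX_TOKENS

The model never sees the limit in-band, which is why a "pause if you're running out of tokens" instruction has nothing to act on.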

reply
otabdeveloper4 6 hours ago
No.
reply
MallocVoidstar 7 hours ago
> Even when the model is explicitly instructed to pause due to insufficient tokens rather than generating an incomplete response

AI models can't do this. At least not with just an instruction, maybe if you're writing some kind of custom 'agentic' setup.

reply
maxloh 6 hours ago
Yeah, it does. It was possible with 2.5 Flash.

Here's a similar result with Qwen Qwen3.5-397B-A17B: https://chat.qwen.ai/s/530becb7-e16b-41ee-8621-af83994599ce?...

reply
zhyder 7 hours ago
Surprisingly big jump in ARC-AGI-2 from 31% to 77%, guess there's some RLHF focused on the benchmark given it was previously far behind the competition and is now ahead.

Apart from that, the usual predictable gains in coding. Still is a great sweet-spot for performance, speed and cost. Need to hack Claude Code to use their agentic logic+prompts but use Gemini models.

I wish Google also updated Flash-lite to 3.0+, would like to use that for the Explore subagent (which Claude Code uses Haiku for). These subagents seem to be Claude Code's strength over Gemini CLI, which still has them only in experimental mode and doesn't have read-only ones like Explore.

reply
WarmWash 7 hours ago
>I wish Google also updated Flash-lite to 3.0+

I hope every day that they have made gains on their diffusion model. As a sub agent it would be insane, as it's compute light and cranks 1000+ tk/s

reply
zhyder 6 hours ago
Agree, can't wait for updates to the diffusion model.

Could be useful for planning too, given its tendency to think big picture first. Even if it's just an additional subagent to double-check with an "off the top of your head" or "don't think, share first thought" type of question. More generally, I would like to see how sequencing autoregressive thinking with diffusion over multiple steps might help with better overall thinking.

reply
zapnuk 2 hours ago
Gemini 3 was:

1. unreliable in GH copilot. Lots of 500 and 4XX errors. Unusable in the first 2 months

2. not available in Vertex AI (Europe). We have requirements regarding data residency. Funnily enough, Anthropic is on point with releasing their models to Vertex AI. We already use Opus and Sonnet 4.6.

I hope Google gets their stuff together and understands that not everyone wants to or can use their global endpoint. We'd like to try their models.

reply
tenpoundhammer 6 hours ago
In an attempt to get outside of benchmark gaming, I had it make a platypus on a tricycle. It's not as good as the pelican on a bicycle. https://www.svgviewer.dev/s/BiRht5hX
reply
textlapse 6 hours ago
To really confuse it, ask it to take that tricycle with the platypus on it to a car wash.
reply
dinosor 6 hours ago
For a moment I assumed the output would look like Perry the Platypus from the Disney (I think?) show. It's surprising to me (as a layman) that a character with so much media that would've made it into the training corpus didn't show up.
reply
0_____0 6 hours ago
that's better than i thought it would be
reply
n4pw01f 34 minutes ago
I created a nice harness and visual workflow builder for my Gemini agent chains; it works very well. I did this so it would create code the way I do, that is, very editable.

In contrast, the VS Code plugin was pretty bad, and did crazy things like mix languages.

reply
datakazkn 50 minutes ago
One underappreciated reason for the agentic gap: Gemini tends to over-explain its reasoning mid-tool-call in a way that breaks structured output expectations. Claude and GPT-4o have both gotten better at treating tool calls as first-class operations. Gemini still feels like it's narrating its way through them rather than just executing.
reply
carbocation 44 minutes ago
I agree with this; it feels like the most likely tool to drop its high-level comments in code comments.
reply
vnglst 4 hours ago
I asked Gemini 3.1 Pro to generate some of the modern artworks in my "Pelican Art Gallery". I particularly like the rendition of the Sunflowers: https://pelican.koenvangilst.nl/gallery/category/modern
reply
mbh159 4 hours ago
77.1% on ARC-AGI-2 and still can't stop adding drive-by refactors. ARC-AGI-2 tests novel pattern induction, it's genuinely hard to fake and the improvement is real. But it doesn't measure task scoping, instruction adherence, or knowing when to stop. Those are the capabilities practitioners actually need from a coding agent. We have excellent benchmarks for reasoning. We have almost nothing that measures reliability in agentic loops. That gap explains this thread.
reply
qingcharles 7 hours ago
I've been playing with the 3.1 Deep Think version of this for the last couple of weeks and it was a big step up for coding over 3.0 (which I already found very good).

It's only February...

reply
nubg 6 hours ago
> I've been playing with the 3.1 Deep Think version of this

How?

reply
verdverm 6 hours ago
A select few have had early access through various programs Google offers. I believe there was a sentence or two to this effect on the Gemini 3 Deep Think post from Deepmind.
reply
XCSme 5 hours ago
reply
XCSme 5 hours ago
Now I need to write more tests.

It's a bit hard to trick reasoning models, because they explore a lot of the angles of a problem, and they might accidentally have an "a-ha" moment that leads them on the right path. It's a bit like doing random sampling and stumbling upon the right result after doing gradient descent from those points.

reply
thevinter 3 hours ago
Are you intentionally keeping the benchmarks private?
reply
XCSme 3 hours ago
Yes.

I am trying to think what's the best way to give the most information about how the AI models fail, without revealing information that can help them overfit on those specific tests.

I am planning to add some extra LLM calls, to summarize the failure reason, without revealing the test.

reply
XCSme 3 hours ago
Added one more test, which, surprisingly, Gemini 3 Flash reasoning passes but Gemini 3.1 Pro does not.
reply
janalsncm 7 hours ago
This model says it accepts video inputs. I asked it to transcribe a 5-second video of a digital water curtain which spelled “Boo Happy Halloween”, and it came back with just “Happy”, which wasn't even the first frame and is also incomplete.

This kind of test is good because it requires stitching together info from the whole video.

reply
aabhay 7 hours ago
It reads videos at 1fps by default. You have to set the video resolution to high in ai studio
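
If you're calling the API rather than AI Studio, the equivalent knob looks roughly like this. A sketch assuming the google-genai Python SDK; media_resolution is my reading of the API-side counterpart of that setting, and the file/model names are placeholders:

    # Sketch: upload a clip and request high media resolution for video frames.
    # Assumes `pip install google-genai` and GEMINI_API_KEY in the environment.
    from google import genai
    from google.genai import types

    client = genai.Client()

    video = client.files.upload(file="water_curtain.mp4")  # hypothetical clip
    # (In practice you may need to wait until the uploaded file is ACTIVE.)

    resp = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder model name
        contents=[video, "Transcribe all text shown in this video."],
        config=types.GenerateContentConfig(
            media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
        ),
    )
    print(resp.text)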
reply
janalsncm 2 hours ago
This is inside the Gemini app.
reply
timabdulla 6 hours ago
Google tends to trumpet preview models that aren't actually production-grade. For instance, both 3 Pro and Flash suffer from looping and tool-calling issues.

I would love for them to eliminate these issues because just touting benchmark scores isn't enough.

reply
dxbednarczyk 7 hours ago
Every time I've used Gemini models for anything besides code or agentic work they lean so far into the RLHF induced bold lettering and bullet point list barf that everything they output reads as if the model was talking _at_ me and not _with_ me. In my Openclaw experiment(s) and in the Gemini web UI, I've specifically added instructions to avoid this type of behavior, but it only seemed to obey those rules when I reminded the model of them.

For conversational contexts, I don't think the (in some cases significantly) better benchmark results compared to a model like Sonnet 4.6 can convince me to switch to Gemini 3.1. Has anyone else had a similar experience, or is this just a me issue?

reply
augusto-moura 7 hours ago
Gemini sounds less personal, but I think that is good. From my experience, the quality of response is much higher than ChatGPT or Grok, and it cites real sources. I want to have a mini-wikipedia response for my questions, not a friend's group chat response
reply
gavinray 7 hours ago
I have the opposite viewpoint:

If a model doesn't optimize the formatting of its output display for readability, I don't want to read it.

Tables, embedded images, use of bulleted lists and bold/italicizing etc.

reply
staticman2 7 hours ago
I'm not familiar with Openclaw, but the trick to solve this would be to embed a style reminder at the bottom of each user message and ideally hide that from the user in the UI.

This is how roleplay apps like Sillytavern customize the experience for power users by allowing hidden style reminders as part of the user message that accompany each chat message.
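
In harness code the trick is tiny: keep the history you display separate from the messages you actually send. A sketch with plain chat-style dicts; send_to_model() is a hypothetical stand-in for whatever client call your setup uses:

    # Sketch of a hidden style reminder appended to the last user turn only.
    STYLE_REMINDER = (
        "\n\n[Formatting reminder, do not mention: answer conversationally, "
        "no headings, no bullet points, no bold text.]"
    )

    def with_style_reminder(messages: list[dict]) -> list[dict]:
        """Return a copy of the chat with the reminder added to the last user turn."""
        patched = [dict(m) for m in messages]
        for m in reversed(patched):
            if m["role"] == "user":
                m["content"] += STYLE_REMINDER
                break
        return patched

    chat = [{"role": "user", "content": "Explain how red-black trees rebalance."}]
    # The UI renders `chat`; the model sees the patched copy.
    # reply = send_to_model(with_style_reminder(chat))  # hypothetical call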

reply
InkCanon 7 hours ago
I think they all output that bold-lettered, point-by-point style. I strongly suspect it's part of a synthetic data pipeline all these AI companies have, and it improves performance. Claude seems to do it the least, but it will start writing code at the drop of a hat. What annoys me in Gemini is that it has a really strange tendency to come up with weird analogies, especially in Pro mode. You'll be asking it about something like red-black trees and it'll say "Red-Black Trees (The F1 of Tree Data Structures)".
reply
hydrolox 6 hours ago
Yes, the analogy habit is the most annoying of all. The overall formatting would be tolerable for me if it didn't divide every answer into these silly arbitrary categories with useless analogies. I've tried adding to my user preferences to never use analogies, but it inevitably falls back into that habit.
reply
losvedir 5 hours ago
It definitely has the worst "voice" in my opinion. Feels very overachieving McKinsey intern to me.
reply
markab21 7 hours ago
You just articulated why I struggle to personally connect with Gemini. It feels so unrelatable and exhausting to read its output. I prefer to read Opus/Deepseek/GLM over Gemini, Qwen, and the open-source GPT models. Maybe it is RLHF that is creating my distaste for using it. (I pay for Gemini; I should be using it more... but the outputs just bug me and it feels like more work to get actionable insight.)
reply
verdverm 7 hours ago
I have no issues adjusting gemini tone & style with system prompt content
reply
agentifysh 4 hours ago
My enthusiasm is a bit muted this cycle because I've been burned by Gemini CLI. These models are very capable, but Gemini CLI just doesn't seem to work: for one, it never follows instructions as strictly as its competitors do, and it even hallucinates, which is a rarity these days.

More importantly, it feels like Google is stretched thin across different Gemini products, and pricing reflects this. I still have no idea how to pay for Gemini CLI; with Codex/Claude it's very simple: $20/month for entry and $200/month for a ton of weekly usage.

I hope whoever is reading this from Google can redeem Gemini CLI by focusing on being competitive instead of making it look pretty (that seems to be the impression I got from the updates on X).

reply
cheema33 2 hours ago
> I still have no idea how to pay for Gemini CLI; with Codex/Claude it's very simple: $20/month for entry and $200/month for a ton of weekly usage.

This!

I would like to sign up for a paid plan for Gemini CLI. But I have not been able to figure out how. I already have Codex and Claude plans. Those were super easy to sign up for.

reply
jiggawatts 37 minutes ago
What’s your difficulty? Google has published easy-to-follow 27-step instructions for how to sign up for the half a dozen services you need to chain together to enable this common use case!
reply
siliconc0w 2 hours ago
Google has a hugely valuable dataset of decades of code changes from top-tier software engineers, but it's so proprietary they can't use it to train their external models.
reply
veselin 6 hours ago
I am actually going to complain about this: neither of the Gemini models is out of preview.

Anthropic seems the best at this. Everything is in the API on day one. OpenAI tends to push the subscription first, but the API gets there a week or a few later. Now, Gemini 3 is not for production use and this is already the previous iteration. So, does Google even intend to release this model?

reply
WarmWash 8 hours ago
It seems Google is having a disjointed rollout, and there will likely be an official announcement in a few hours. Apparently 3.1 showed up unannounced in Vertex at 2 AM or something equally odd.

Either way early user tests look promising.

reply
pawelduda 7 hours ago
Is it safe to assume they'll be releasing an improved Gemini Flash soon? The current one is so good & fast I rarely switch to Pro anymore.
reply
derac 6 hours ago
When 3 came out they mentioned that flash included many improvements that didn't make it into pro (via an hn comment). I imagine this release includes those.
reply
tucnak 5 hours ago
Gemini 3 Pro (high) is a joke compared to Gemini 3 Flash in Antigravity, except it's not even funny. Flash is insane value, and super capable, too. I've had it implement a decompiler for very obscure bytecode, and it was passing all tests in no time. PITA to refactor later, but not insurmountable. Gemini 3 Pro (high) choked on this problem in the early stages... I'm looking forward to comparing 3.1 Pro vs 3.0 Flash, hopefully they have improved on it enough to finally switch over.
reply
upmind 6 hours ago
In my experience, while Gemini does really well in benchmarks I find it much worse when I actually use the model. It's too verbose / doesn't follow instructions very well. Let's see if that changes with this model.
reply
ismailmaj 2 hours ago
3.1 feels to me like 3.0 but with a lot more time spent thinking; it didn't feel like the leap in raw intelligence that 2.5 Pro was.
reply
josalhor 7 hours ago
I speculated that 3 pro was 3.1... I guess I was wrong. Super impressive numbers here. Good job Google.
reply
refulgentis 7 hours ago
> I speculated that 3 pro was 3.1

?

reply
josalhor 6 hours ago
Sorry... I speculated that 3 Deep Think was 3.1 Pro... model names are confusing.
reply
0xcb0 5 hours ago
I'm trying to find the information: is this available in the Gemini CLI, or is the web front-end the only place where I can use this new model?
reply
fdefitte 5 hours ago
The benchmark jumps are impressive but the real question is whether Gemini can stop being so aggressively helpful. Every time I use it for coding it refactors stuff I didn't ask it to touch. Claude has the opposite problem where it sometimes does too little. Feels like nobody has nailed the "do exactly what I asked, nothing more" sweet spot yet.
reply
Murfalo 6 hours ago
I like to think that all these pelican riding a bicycle comments are unwittingly iteratively creating the optimal cyclist pelican as these comment threads are inevitably incorporated in every training set.
reply
alpineman 6 hours ago
More like half of Google's AI team is hanging out on HN, and they can optimise for that outcome to get a good rep among the dev community.
reply
kridsdale3 5 hours ago
Hello.

(I'm not aware of anyone doing this, but GDM is quite info-siloed these days, so my lack of knowledge is not evidence it's not happening)

reply
alpineman 4 hours ago
Hello.

Please push internally for more reliable tool use across Gemini models. Intelligence is useless if it can't be applied :)

reply
Barbing 6 hours ago
See: fish in bike front basket
reply
impulser_ 7 hours ago
Seems like they actually fixed some of the problems with the model. The hallucination rate seems to be much better. Seems like they also tuned the reasoning; maybe that's where they got most of the improvements from.
reply
whynotminot 7 hours ago
The hallucination rate with the Gemini family has always been my problem with them. Over the last year they’ve made a lot of progress catching the Gemini models up to/near the frontier in general capability and intelligence, but they still felt very late 2024 in terms of hallucination rate.

Which made the Gemini models untrustworthy for anything remotely serious, at least in my eyes. If they’ve fixed this or at least significantly improved, that would be a big deal.

reply
SubiculumCode 5 hours ago
Maybe I haven't kept up with how ChatGPT and Claude are doing, but 6 months ago or so, I thought Gemini was leading on that front.
reply
panarchy 5 hours ago
I had it make a simple HTML/JS canvas game (think Flappy Bird) and while it did some things mildly better (and others noticeably worse), it still fell into the exact same traps as earlier models. It also had a lot of issues generating valid JS in parts, and asking it what the code should be just made it endlessly regenerate the same incorrect code.
reply
solarisos 5 hours ago
The speed of these 3.1 and Preview releases is starting to feel like the early days of web frameworks. It’s becoming less about the raw benchmarks and more about which model handles long-context 'hallucination' well enough to be actually used in a production pipeline without constant babysitting.
reply
pRusya 6 hours ago
I'm using gemini.google.com/app with an AI Pro subscription. "Something went wrong" in Firefox; it works in Chrome.

Below is one of my test prompts that previous Gemini models were failing. 3.1 Pro did a decent job this time.

> use c++, sdl3. use SDL_AppInit, SDL_AppEvent, SDL_AppIterate callback functions. use SDL_main instead of the default main function. make a basic hello world app.

reply
robviren 4 hours ago
I have run into a surprising number of basic syntax errors on this one. At least in the few runs I have tried, it's a swing and a miss. I wonder if the pressure of the Claude release is pushing these stopgap releases.
reply
getcrunk 3 hours ago
Gemini is so stubborn, and often doesn’t follow explicit and simple instructions. So annoying
reply
onlyrealcuzzo 7 hours ago
We've gone from yearly releases to quarterly releases.

If the pace of releases continues to accelerate - by mid 2027 or 2028 we're headed to weekly releases.

reply
rubicon33 7 hours ago
But actual progress seems to be slower. These models are releasing more often but aren't big leaps.
reply
gallerdude 6 hours ago
We used to get one annual release which was 2x as good, now we get quarterly releases which are 25% better. So annually, we’re now at 2.4x better.
reply
minimaxir 6 hours ago
Due to the increasing difficulty of scaling up training, it appears the gains are instead being achieved through better training methods, which seems to be working well for everyone.
reply
wahnfrieden 7 hours ago
GPT 5.3 (/Codex) was a huge leap over 5.2 for coding
reply
rubicon33 2 hours ago
Eh, sure, but marginally better, if not the same as Claude 4.6, which itself was a small bump over Claude 4.5.
reply
zokier 7 hours ago
> Last week, we released a major update to Gemini 3 Deep Think to solve modern challenges across science, research and engineering. Today, we’re releasing the upgraded core intelligence that makes those breakthroughs possible: Gemini 3.1 Pro.

So this is same but not same as Gemini 3 Deep Think? Keeping track of these different releases is getting pretty ridiculous.

reply
WarmWash 7 hours ago
Deep Think is a few 3.1 models working together. It was suspected last week that Deep Think was composed using the new 3.1 model.
reply
verdverm 7 hours ago
3.1 == model

deep think == turning up thinking knob (I think)

deep research == agent w/ search

reply
azuanrb 7 hours ago
The CLI needs work, or they should officially allow third-party harnesses. Right now, the CLI experience is noticeably behind other SOTA models. It actually works much better when paired with Opencode.

But with accounts reportedly being banned over ToS issues, similar to Claude Code, it feels risky to rely on it in a serious workflow.

reply
jeffybefffy519 4 hours ago
Someone needs to make an actually good benchmark for LLMs that matches real-world expectations; there's more to benchmarks than accuracy against a dataset.
reply
robotpepi 4 hours ago
this reminds me of that joke where someone says "it's crazy that we have ten different standards for doing this", and then there are 11 standards
reply
knollimar 2 hours ago
Xkcd 927
reply
casey2 4 hours ago
We don't need real-world benchmarks; if they were good for real-world tasks, people would use them. We need scientific benchmarks that tease out the nature of intelligence. There are plenty of unsaturated benchmarks. Solving chess using "mostly" language modeling is still an open problem. And beyond that, creating a machine that can explain why that move is likely optimal at some depth. AI that can predict the output of another AI.
reply
mark_l_watson 8 hours ago
Fine, I guess. The only commercial API I use to any great extent is gemini-3-flash-preview: cheap, fast, great for tool use and with agentic libraries. The 3.1-pro-preview is great, I suppose, for people who need it.

Off topic, but I like to run small models on my own hardware, and some small models are now very good for tool use and with agentic libraries - it just takes a little more work to get good results.

reply
throwaway2027 7 hours ago
Seconded. Gemini used to be trash and I used Claude and Codex a lot, but gemini-3-flash-preview punches above its weight; it's decent and I rarely if ever run into any token limit either.
reply
verdverm 6 hours ago
Thirded, I've been using gemini-3-flash to great effect. Anytime I have something more complicated, I give it to both pro & flash to see what happens. It's a coin flip whether flash is nearly equivalent (too many moving vars to be analytical at this point).
reply
PlatoIsADisease 7 hours ago
What models are you running locally? Just curious.

I am mostly restricted to 7-9B. I still like ancient early Llama because it's pretty unrestricted without having to use an abliteration.

reply
mark_l_watson 6 hours ago
I experimented with many models on my 16G and 32G Macs. For less memory, qwen3:4b is good; for the 32G Mac, gpt-oss:20b is good. I like the smaller Mistral models like mistral:v0.3, and rnj-1:latest is a pretty good small reasoning model.
reply
nurettin 7 hours ago
I like to ask claude how to prompt smaller models for the given task. With one prompt it was able to make a low quantized model call multiple functions via json.
reply
atleastoptimal 3 hours ago
Writing style wise, 3.1 seems very verbose, but somehow less creative compared to 3.
reply
mixel 7 hours ago
Google seems to really be pulling ahead in this AI race. For me personally they offer the best deal, although the software is not quite there compared to OpenAI or Anthropic (in regards to 1. web GUI, 2. agent CLI). I hope they can fix that in the future, and I think once Gemini 4 or whatever launches we will see a huge leap again.
reply
rubslopes 6 hours ago
I don't understand this sentiment. It may hold true for other LLM use cases (image generation, creative writing, summarizing large texts), but when it comes to coding specifically, Google is *always* behind OpenAI and Anthropic, despite having virtually infinite processing power, money, and being the ones who started this race in the first place.

Until now, I've only ever used Gemini for coding tests. As long as I have access to GPT models or Sonnet/Opus, I never want to use Gemini. Hell, I even prefer Kimi 2.5 over it. I tried it again last week (Gemini Pro 3.0) and, right at the start of the conversation, it made the same mistake it's been making for years: it said "let me just run this command," and then did nothing.

My sentiment is actually the opposite of yours: how is Google *not* winning this race?

reply
hobofan 6 hours ago
> despite having virtually infinite processing power, money

Just because they have the money doesn't mean that they spend it excessively. OpenAI and Anthropic are both offering coding plans that are possibly severely subsidized, as they are more concerned with growth at all costs, while Google is more concerned with profitability. Google has the bigger war chest and could just wait until the other two run out of money rather than forcing growth on that product line through unprofitable means.

Maybe they are also running much closer to their compute limits than the other ones, and their TPUs are already saturated with API usage.

reply
jeanloolz 4 hours ago
Agreed. Also worth pointing out that Google still owns 14% of Anthropic, and Anthropic is signing billion-dollar-scale deals with Google Cloud to train their models on its TPUs. So Claude's success indirectly contributes to Google's success. The AI race is not only about the frontier models.
reply
mike97 4 hours ago
> OpenAI and Anthropic are both offering coding plans that are possibly severely subsidized

So does Google. In fact, I believe their Antigravity limits for Opus and Sonnet on the $20 plan are higher than the CC $20 plan's, and there is no weekly cap (or at least I couldn't hit one even with heavy usage). And then you have a separate limit for Gemini CLI and for other models in Antigravity.

reply
hobofan 2 hours ago
Is that so? I haven't personally used Antigravity; I just heard a lot of people complaining as recently as ~1 month ago that they hit the rate limits very quickly, e.g. when it accidentally read in files that were too large.
reply
eknkc 7 hours ago
I hope they fail.

I honestly do not wish for Google to have the best model out there, because then I'd be forced to use their incomprehensible subscription / billing / project management whatever shit ever again.

I don't know what their stuff costs. I don't know why I would use Vertex or AI Studio, what is included in my subscription, or what is billed per use.

I pray that whatever they build fails and burns.

reply
otherme123 6 hours ago
They all suck. OpenAI ignores scanning limits and disabled routes in robots.txt; after a 429 "Too Many Requests" they retry the same URL half a dozen times from different IPs in the next couple of minutes, and they once DoS'ed my small VPS trying to do a full scan of sitemaps.xml in less than one hour, trying and retrying if any endpoint failed.

Google and others at least respect both robots.txt and 429s. They invested years scanning all the internet, so they can now train on what they have stored on their servers. OpenAI seems to assume that MY resources are theirs.

reply
toraway 6 hours ago
For a personal plan to use premium Gemini AI features or for agentic development with Gemini CLI/Antigravity, the billing is no more or less complicated than Claude Code or Codex CLI.

You pay for the $20/mo Google AI Pro plan with a credit card via the normal personal billing flow like you would for a Google One plan without any involvement of Google Cloud billing or AI Studio. Authorize in the client with your account and you're good to go.

(With the bundled drive storage on AI Pro I'm just paying a few bucks more than I was before so for me it's my least expensive AI subscription excluding the Z.ai ultra cheap plan).

Or, just like with Anthropic or OpenAI, it's a separate process for billing/credits for an API key targeted at a developer audience. Which I don't need or use for Gemini CLI or Antigravity at all, it's a one step "click link to authorize with your Google Account" and done.

You could decide to use an API key for usage based billing instead (just like you could with Claude Code) but that's entirely unnecessary with a subscription.

Sure, for the API anything involving a hyperscaler cloud is going to have a higher complexity floor, with legacy cruft here and there, but for individual subscriptions that's irrelevant and it's pretty much as straightforward of a click-and-pay flow as you'd find anywhere else.

reply
dybber 6 hours ago
Eventually the models will generally be so good that the competition moves from the best model to the best user experience, and here I think we can expect others to win, e.g. Microsoft with GitHub and VS Code.
reply
eknkc 6 hours ago
That's my hope, but Google has unlimited cash to throw at model development and can basically burn more cash than OpenAI and Anthropic combined. That might tip the scale in the long run.
reply
syspec 5 hours ago
Does anyone know if this is in GA immediately or if it is in preview?

On our end, Gemini 3.0 Preview was very flakey (not model quality, but as in the API responses sometimes errored out), making it unreliable.

Does this mean that 3.0 is now GA at least?

reply
hsaliak 7 hours ago
The eventual nerfing gives me pause. Flash is awesome. What we really want is gemini-3.1-flash :)
reply
clhodapp 8 hours ago
reply
makeavish 7 hours ago
Great model until it gets nerfed. I wish they had a higher paid tier to use a non-nerfed model.
reply
Mond_ 7 hours ago
Bad news, John Google told me they already quantized it immediately after the benchmarks were done and it sucks now.

I miss when Gemini 3.1 was good. :(

reply
spyckie2 7 hours ago
I think there is a pattern: it will always get nerfed in the few weeks before they launch a new model. Probably because they are throwing a bunch of compute at the new model.
reply
makeavish 7 hours ago
Yeah, maybe that, but at least let us know about it, or have dynamic limits? Nerfing breaks trust. Though I am not sure if they actually nerf it intentionally; I haven't heard that from any credible source. I did experience it in my workflow, though.
reply
xnx 7 hours ago
What are you talking about?
reply
quacky_batak 7 hours ago
I’m keen to know how and where you are using Gemini.

Anthropic is clearly targeted at developers and OpenAI is the general go-to AI model. Who is the target demographic for Gemini models? I know they are good and Flash is super impressive, but I'm curious.

reply
jdc0589 7 hours ago
I use it as my main platform right now, both for work/SWE stuff and personal stuff. It works pretty well; they have the full suite of tools I want, from general LLM chat, to NotebookLM, to Antigravity.

My main use-cases outside of SWE generally involve the ability to compare detailed product specs and come up with answers/comparisons/etc. Gemini does really well for that, probably because of the deeper Google Search index integration.

Also I got a year of Pro for free with my phone... so that's a big part.

reply
ggregoire 5 hours ago
I use it in Google Search. For example yesterday I typed in Google "postgres generate series 24 hour" and this morning "ffmpeg convert mp4 to wav". Previously I would have clicked on the first StackOverflow result (RIP); now I just take it from the Gemini summary (I'd say 95% of the time it's correct for basic programming language questions. I remember some hallucinations about psycopg3 and date-fns though. As usual with AI, you need to already know the answer, at least partially, to detect the bs).

Also what's great about Gemini in Google Search is that the answer comes with several links, I use them sometimes to validate the correctness of the solution, or check how old the solution is (I've never used chatGPT so I don't know if chatGPT does it).

reply
hunta2097 7 hours ago
I use the Gemini web interface just as I would ChatGPT. They also have coding-environment analogues of Claude Code in Antigravity and Gemini CLI.

When you sign up for the Pro tier you also get 2TB of storage, Gemini for Workspace, and Nest camera history.

If you're in the Google sphere it offers good value for money.

reply
dinosor 7 hours ago
I find Gemini to be the best at travel planning and at storytelling about geographical places. For a road trip, I tried all three mainstream providers and I liked Gemini best (also personal preference, because Gemini took a verbose approach instead of the bullet points from the others): its responses, the stories it discovered about places I wanted to explore, the places it suggested for me, and the things it gave me to consider about those places on the route.
reply
minimaxir 7 hours ago
Gemini has an obvious edge over its competitors in one specific area: Google Search. The other LLMs do have a Web Search tool but none of them are as effective.
reply
fatherwavelet 6 hours ago
I feel like Gemini 3 was incredible on non-software/coding research. I have learned so much systems biology the last two months it blows my mind.

I had only started using Opus 4.6 this week. Sonnet, it seems, is much better to have a long conversation with. Gemini is good for knowledge retrieval, but I think Opus 4.6 has caught up. The biggest thing that made Gemini worth it for me the last 3 months is that I could crush it with questions; I wouldn't have gotten even 10% of that usage out of Opus before being made to slow down.

I have a deep research going right now on 3.1 for the first time and I honestly have no idea how I am going to tell if it is better than 3.

It seems like Gemini wasn't as good at agentic coding, but for just asking it to write a function, I think there were only two times it didn't one-shot what I asked, and it then fixed the problem on the next prompt.

I haven't logged in to bother with chatGPT in about 3 months now.

reply
dekhn 7 hours ago
I am a professional software developer who has been programming for 40 years (C, C++, Python, assembly, any number of other languages). I work in ML (infrastructure, not research) and spent a decade working at Google.

In short, I consider Gemini to be a highly capable intern (grad student level) who is smarter and more tenacious than me, but also needs significant guidance to reach a useful goal.

I used Gemini to completely replace the software stack I wrote for my self-built microscope. That includes:

writing a brand new ESP32 console application for controlling all the pins of my ESP32 that drives the LED illuminator. It wrote the entire ESP-IDF project and did not make any major errors. I had to guide with updated prompts a few times but otherwise it wrote the entire project from scratch and ran all the build commands, fixing errors along the way. It also easily made a Python shared library so I can just import this object in my Python code. It saved me ~2-3 days of working through all the ESP-IDF details, and did a better job than I would have.

writing a brand new C++-based Qt camera interface (I have a camera with a special SDK that allows controlling strobe and trigger and other details. It can do 500FPS). It handled all the concurrency and message passing details. I just gave it the SDK PDF documentation for the camera (in mixed english/chinese), and asked it to generate an entire project. I had to spend some time guiding it around making shared libraries but otherwise it wrote the entire project from scratch and I was able to use it to make a GUI to control the camera settings with no additional effort. It ran all the build commands and fixed errors along the way. Saved me another 2-3 days and did a better job than I could have.

Finally, I had it rewrite the entire microscope stack (python with qt) using the two drivers I described above, along with complex functionality like compositing multiple images during scanning, video recording during scanning, measurement tools, computer vision support, and a number of other features. This involved a lot more testing on my part, and updating prompts to guide it towards my intended destination (fully functional replacement of my original self-written prototype). When I inspect the code, it definitely did a good job on some parts, while it came up with non-ideal solutions for some problems (for example, it does polling when it could use event-driven callbacks). This saved literally weeks' worth of work that would have been a very tedious slog.

From my perspective, it's worked extremely well: doing what I wanted in less time than it would take me (I am a bit of a slow programmer, and I'm doing this in hobby time) and doing a better job (With appropriate guidance) than I could have (even if I'd had a lot of time to work on it). This greatly enhances my enjoyment of my hobby by doing tedious work, allowing me to spend more time on the interesting problems (tracking tardigrades across a petri dish for hours at a time). I used gemini pro 3 for this- it seems to do better than 2.5, and flash seemed to get stuck and loop more quickly.

I have only lightly used other tools, such as ChatGPT/Codex and have never used Claude. I tend to stick to the Google ecosystem for several reasons- but mainly, I think they will end up exceeding the capabilities of their competitors, due to their inherent engineering talent and huge computational resources. But they clearly need to catch up in a lot of areas- for example, the VS Code Gemini extension has serious problems (frequent API call errors, messed up formatting of code/text, infinite loops, etc).

reply
aberoham 6 hours ago
Wow, you have to try claude code with Opus-4.6..
reply
dekhn 5 hours ago
I agree, but I don't have a subscription.

The remaining technical challenge I have is related to stage positioning- in my system, it's important that all the image frames we collect are tagged with the correct positions. Due to some technical challenges, right now the stage positions are slightly out of sync with the frames, which will be a fairly tricky problem to solve. It's certainly worth trying all the major systems to see what they propose.

reply
mehagar 7 hours ago
I use Gemini for personal stuff such as travel planning and research on how to fix something, which product to buy, etc. My company has a Pro subscription so I use that instead of ChatGPT.
reply
jug 7 hours ago
I personally use it as my general purpose and coding model. It's good enough for my coding tasks most of the time, has very good and rapid web search grounding that makes the Google index almost feel like part of its training set, and Google has a family sharing plan with individual quotas for Google AI Pro at $20/month for 5 users which also includes 2 TB in the cloud. Family sharing is a unique feature for Gemini 3 Flash Thinking (300 prompts per day and user) & Pro (100 prompts per day and user).
reply
thornewolf 6 hours ago
I have swapped to using gemini over chatgpt for casual conversation and question answering. there are some lacking features in the app but i get faster and more intelligent responses.
reply
epolanski 7 hours ago
Various friends of mine work in non-technology companies (banking, industries, legal, Italy) and in pretty much all of them there's Gemini enterprise + NotebookLM.

In all of them the approach is: this is the solution, now find problems you can apply it to.

reply
esafak 7 hours ago
I'd use it for planning, knowledge, and anything visual.
reply
verdverm 6 hours ago
I use gemini for everything because I trust google to keep the data I send them safe, because they know how to run prod at scale, and they are more environmentally friendly than everyone else (tpu,us-central1).

This includes my custom agent / copilot / cowork (which uses Vertex AI and all models therein). This is where I do more of my searching now (with genAI grounding). I'm about to work on several micro projects that will hold AI a little differently.

All that being said, Google's AI products suck hard. I hate using every one of them. This is more a reflection on the continued degradation of PM/Design at Big G, from before AI, but getting worse at an accelerating pace since. I support removing Logan from the head of this shit show.

disclaimer: long time g-stan, not so stan any more

reply
denysvitali 7 hours ago
Where is Simon's pelican?
reply
Mashimo 2 hours ago
It's also quite impressive with SVG animations.

> Create an SVG animation of a beaver sitting next to a record player and a crate of records; his eyes follow the mouse cursor.

https://gemini.google.com/share/717be5f9b184

reply
codethief 7 hours ago
reply
denysvitali 7 hours ago
Thank you!
reply
saberience 7 hours ago
Please no, let's not.
reply
__jl__ 7 hours ago
Another preview release. Does that mean the models Google recommends for production are still 2.5 Flash and Pro? Not talking about what people are actually doing, but the Google recommendation. Kind of crazy if that is the case.
reply
hn_throw2025 2 hours ago
Yeah great, now can I have my pinned chats back please?

https://www.google.com/appsstatus/dashboard/incidents/nK23Zs...

reply
seizethecheese 7 hours ago
I use Gemini flash lite in a side project, and it’s stuck on 2.5. It’s now well behind schedule. Any speculation as to what’s going on?
reply
foruhar 7 hours ago
Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump on the Pro model. I wonder if they have abandoned that part of the Pareto price-performance frontier.
reply
yuvalmer 5 hours ago
Gemini 3.0 Pro is a bad model for its class. I really hope 3.1 is a leap forward.
reply
eric15342335 7 hours ago
My first impression is that the model sounds slightly more human and a little more praising. Still comparing its abilities.
reply
jdthedisciple 3 hours ago
Why should I be excited?
reply
matrix2596 7 hours ago
Gemini 3.1 Pro is based on Gemini 3 Pro
reply
skerit 7 hours ago
Lol, and this line:

> Geminin 3.1 Pro can comprehend vast datasets

Someone was in a hurry to get this out the door.

reply
kuprel 4 hours ago
Why don't they show Grok benchmarks?
reply
andxor 3 hours ago
They've fallen way behind.
reply
kuprel 3 hours ago
GPT 5.2 loses at everything but they included that
reply
1024core 6 hours ago
It's been hugged to death. I keep getting "Something went wrong".
reply
mrcwinn 2 hours ago
It's fascinating to watch this community react so positively to Google model releases and so negatively toward OpenAI's. You all do understand that an ad revenue model is exactly where Google will go, right?
reply
sidrag22 39 minutes ago
It's all so astroturfed that it's hard to tell. I got the opposite impression though. Seemed like OpenAI had more fake positivity towards the top of the thread I tried to skim, and this one has way less, plus a lot of complaints.

I'm biased, I don't trust either of them, so perhaps I'm just looking hard for the hate and attributing all the positive stuff to advertising.

reply
webtcp 26 minutes ago
An enemy is better than a traitor
reply
jeffbee 43 minutes ago
Gemini already drives ad revenue. If the conversation goes in that direction it will use product search results with the links attributable to Google.
reply
msavara 7 hours ago
Somehow doesn't work for me :) "An internal error has occurred"
reply
dude250711 7 hours ago
I hereby allow you to release models not at the same time as your competitors.
reply
sigmar 7 hours ago
It is super interesting that this is the same thing that happened in November (ie all labs shipping around the same week 11/12-11/23).
reply
zozbot234 6 hours ago
They're just throwing a big Chinese New Year celebration.
reply
PunchTornado 8 hours ago
The biggest increase is LiveCodeBench Pro: 2887. The rest are in line with Opus 4.6 or slightly better or slightly worse.
reply
shmoogy 7 hours ago
but is it still terrible at tool calls in actual agentic flows?
reply
trilogic 6 hours ago
Humanity's Last Exam 44%, SciCode 59, this one 80, that one 78, but never 100%.

It would be nice to see one of these models, Plus, Pro, Super, God mode, hit 100% on a single bench. Am I missing something here?

reply
Topfi 8 hours ago
It appears the only difference to 3.0 Pro Preview is Medium reasoning. Model naming has long since stopped even trying to make sense, but considering 3.0 is still in preview itself, increasing the number for such a minor change is not a move in the right direction.
reply
GrayShade 8 hours ago
Maybe that's the only API-visible change, saying nothing about the actual capabilities of the model?
reply
xnx 7 hours ago
> increasing the number for such a minor change is not a move in the right direction

A .1 model number increase seems reasonable for more than doubling ARC-AGI 2 score and increasing so many other benchmarks.

What would you have named it?

reply
Topfi 6 hours ago
My issue is that we haven't even gotten the release version of 3.0, which is also still in Preview, so some may stick with 3.0 until it has been deemed stable.

Basically, what does the word "Preview" mean, if newer releases happen before a Preview model is stable? In prior Google models, Preview meant that there'd still be updates and improvements to said model prior to full deployment, something we saw with 2.5. Now, there is no meaning or reason for this designation to exist if they forgo a 3.0 still in Preview for model improvements.

reply
xnx 5 hours ago
Given the pace at which AI is improving, and that it doesn't give the exact same answers under many circumstances anyway, is the [in]stability of "preview" a concern?

Gmail was in "beta" for 5 years.

reply
verdverm 5 hours ago
ChatGPT 4.5 was never released to the public, but it is widely believed to be the foundation the 5.x series is built on.

Wonder how GP feels about the minor bumps for other model providers?

reply
argsnd 8 hours ago
I disagree. Incrementing the minor number makes so much more sense than “gemini-3-pro-preview-1902” or something.
reply
jannyfer 7 hours ago
According to the blog post, it should be also great at drawing pelicans riding a bicycle.
reply
naiv 7 hours ago
ok, so they are scared that 5.3 (Pro) will be released today or tomorrow and blow it out of the water, so they rushed this out while they could still reference 5.2 benchmarks.
reply
PunchTornado 7 hours ago
I don't think models blow other models out of the water anymore. We have the big 3, which are neck and neck in most benchmarks, and then the rest. I doubt that 5.3 will blow away the others.
reply
scld 7 hours ago
easy now
reply
LZ_Khan 7 hours ago
biggest problem is that it's slow. also safety seems overtuned at the moment. getting some really silly refusals. everything else is pretty good.
reply
makeavish 7 hours ago
I hope to have a great next two weeks before it gets nerfed.
reply
unsupp0rted 7 hours ago
I've found Google (at least in AI Studio) is the only provider NOT to nerf their models after a few weeks.
reply
makeavish 7 hours ago
I don't use AI Studio for my work. I use Antigravity/Gemini CLI, and 3 Pro was great for a few weeks; now it's worse than 3 Flash or any smaller competitor model that is rated lower on benchmarks.
reply
scrlk 6 hours ago
IME, they definitely nerf models. gemini-2.5-pro-exp-03-25 through AI Studio was amazing at release and steadily degraded. The quality started tanking around the time they hid CoT.
reply
mustaphah 7 hours ago
Google is terrible at marketing, but this feels like a big step forward.

As per the announcement, Gemini 3.1 Pro scores 68.5% on Terminal-Bench 2.0, which makes it the top performer on the Terminus 2 harness [1]. That harness is a "neutral agent scaffold" built by the Terminal-Bench researchers to compare different LLMs in the same standardized setup (same tools, prompts, etc.).

It's also taken the top spot on both the Intelligence Index & Coding Index of Artificial Analysis [2], but on their Agentic Index, it's still lagging behind Opus 4.6, GLM-5, Sonnet 4.6, and GPT-5.2.

---

[1] https://www.tbench.ai/leaderboard/terminal-bench/2.0?agents=...

[2] https://artificialanalysis.ai

reply
saberience 7 hours ago
Benchmarks aren't everything.

Gemini consistently has the best benchmarks but the worst actual real-world results.

Every time they announce the best benchmarks I try again at using their tools and products and each time I immediately go back to Claude and Codex models because Google is just so terrible at building actual products.

They are good at research and benchmaxxing, but the day to day usage of the products and tools is horrible.

Try using Google Antigravity and you will not make it an hour before switching back to Codex or Claude Code, it's so incredibly shitty.

reply
mustaphah 7 hours ago
That's been my experience too; can't disagree. Still, when it comes to tasks that require deep intelligence (esp. mathematical reasoning [1]), Gemini has consistently been the best.

[1] https://arxiv.org/abs/2602.10177

reply
gregorygoc 7 hours ago
What’s so shitty about it?
reply
BMFXX 5 hours ago
Just wish I could get the 2.5 daily limit above 1000 requests easily. Driving me insane...
reply
lysecret 5 hours ago
Please, I need 3 in GA…
reply
nautilus12 7 hours ago
Ok, why don't you work on getting 3.0 out of preview first? 10 min response time is pretty heinous
reply
mucai82 6 hours ago
I agree. According to Google's terms you are not allowed to use the preview model for production use cases. And 3.0 has been in preview for a loooong time now :(
reply
jeffbee 7 hours ago
Relatedly, Gemini chat seems to be if not down then extremely slow.

ETA: They apparently wiped out everyone's chats (including mine). "Our engineering team has identified a background process that was causing the missing user conversation metadata and has successfully stopped the process to prevent further impact." El Mao.

reply
sergiotapia 7 hours ago
To use in OpenCode, you can update the models it has:

    opencode models --refresh
Then /models and choose Gemini 3.1 Pro

You can use the model through OpenCode Zen right away and avoid that Google UI craziness.

---

It is quite pricey! Good speed and nailed all my tasks so far. For example:

    @app-api/app/controllers/api/availability_controller.rb 
    @.claude/skills/healthie/SKILL.md 

    Find Alex's id, and add him to the block list, leave a comment 
    that he has churned and left the company. we can't disable him 
    properly on the Healthie EMR for now so 
    this dumb block will be added as a quick fix.
Result was:

    29,392 tokens
    $0.27 spent
So relatively small task, hitting an API, using one of my skills, but a quarter. Pricey!
reply
gbalduzzi 7 hours ago
I don't see it even after refresh. Are you using the opencode-gemini-auth plugin as well?
reply
sergiotapia 6 hours ago
No I am not, just vanilla OpenCode. I do have OpenCode Zen credits, and I did `opencode login` (or whatever their command is) to auth against OpenCode itself. Maybe that's the reason I see these premium models.
reply
cmrdporcupine 7 hours ago
Doesn't show as available in Gemini CLI for me. I have one of those "AI Pro" packages, but don't see it. Typical for Google: completely unclear how to actually use their stuff.
reply
saberience 7 hours ago
I always try Gemini models when they get updated with their flashy new benchmark scores, but always end up using Claude and Codex again...

I get the impression that Google is focusing on benchmarks but without assessing whether the models are actually improving in practical use-cases.

I.e. they are benchmaxing

Gemini is "in theory" smart, but in practice is much, much worse than Claude and Codex.

reply
rocho 3 hours ago
I find Gemini is outstanding at reasoning (all topics) and architecture (software/system design). On the other hand, Gemini CLI sucks and so I end up using Claude Code and Codex CLI for agentic work.

However, I heavily use Gemini in my daily work and I think it has its own place. Ultimately, I don't see the point of choosing the one "best" model for everything, but I'd rather use what's best for any given task.

reply
konart 7 hours ago
> but without assessing whether the models are actually improving in practical use-cases

Which cases? Not trying to sound bad, but you didn't even provide examples of the cases you are using Claude/Codex/Gemini for.

reply
skerit 7 hours ago
I'm glad someone else is finally saying this. I've been mentioning it left and right, and sometimes I feel like I'm going crazy that more people aren't noticing it.

Gemini can go off the rails SUPER easily. It just devolves into a gigantic mess at the smallest sign of trouble.

For the past few weeks, I've also been using XML-like tags in my prompts more often. Sometimes preferring to share previous conversations with `<user>` and `<assistant>` tags. Opus/Sonnet handles this just fine, but Gemini has a mental breakdown. It'll just start talking to itself.

Even in sessions that are nothing out of the ordinary, it goes crazy. After a while, it'll start saying it's going to do something, and then it pretends like it's done that thing, all in the same turn. A turn that never ends. Eventually it just starts spouting repetitive nonsense.

And you would think this is just because the bigger the context grows, the worse models tend to get. But no! This can happen well below even the 200,000 token mark.

reply
reilly3000 5 hours ago
Flash is (was?) better than Pro on these fronts.
reply
user34283 7 hours ago
I exclusively use Gemini for Chat nowadays, and it's been great mostly. It's fast, it's good, and the app works reliably now. On top of that I got it for free with my Pixel phone.

For development I tend to use Antigravity with Sonnet 4.5, or Gemini Flash if it's about a GUI change in React. The layout and design of Gemini has been superior to Claude models in my opinion, at least at the time. Flash also works significantly faster.

And all of it is essentially free for now. I can even select Opus 4.6 in Antigravity, but I did not yet give it a try.

reply
cmrdporcupine 7 hours ago
Honestly doesn't feel like Google is targeting the agentic coding crowd so much as they are the knowledge worker / researcher / search-engine-replacement market?

Agree Gemini as a model is fairly incompetent inside their own CLI tool as well as in opencode. But I find it useful as a research and document analysis tool.

reply
verdverm 5 hours ago
For my custom agentic coding setup, I use Claude Code derived prompts with Gemini models, primarily flash. It's night and day compared to Google's own agentic products, which are all really bad.

The models are all close enough on the benchmarks, and I think people are attributing too much of the difference in the agentic space to the model itself. I strongly believe the difference is in all the other stuff, which is why Anthropic is far ahead of the competition. They have done great work with Claude Code, Cowork, and their knowledge sharing through docs & blog, bar none on this last point imo.

reply
himata4113 6 hours ago
The visual capabilities of this model are frankly kind of ridiculous, what the hell.
reply
johnwheeler 6 hours ago
I know Google has Antigravity, but do they have anything like Claude Code in terms of user interface, i.e. a terminal-based TUI?
reply
leecommamichael 5 hours ago
Whoa, I think Gemini 3 Pro was a disappointment, but Gemini 3.1 Pro is definitely the future!
reply
throwaw12 6 hours ago
Can we switch from Claude Code to Google yet?

Benchmarks are saying: just try

But real world could be different

reply
foruhar 6 hours ago
My sense is that the Gemini models are very capable but the Gemini CLI experience is subpar compared to Claude Code and Codex. I'm guessing that it's the harness, since it can get confused, fall into doom loops, and generally lose the plot in a way that the model does not in AI Studio or the Gemini app.

I think a bunch of these harnesses are open source so it surprises me that there can be such a gulf between them.

reply
dana321 3 hours ago
It's not just subpar, it's not even sub-sub-par.

It goes into loops and never completes a task 8 times out of the 10 that I've used it.

reply
cmrdporcupine 5 hours ago
It's not just the tooling. If you use Gemini in opencode it malfunctions in similar ways.

I haven't tried 3.1 yet, but 3 is just incompetent at tool use. In particular in editing chunks of text in files, it gets very confused and goes into loops.

The model also does this thing where it degrades into loops of nonsense thought patterns over time.

For shorter sessions where it's more analysis than execution, it is a strong model.

We'll see about 3.1. I don't know why it's not showing in my gemini CLI as available yet.

reply
pickle-pixel 5 hours ago
does it still crash out after a couple of prompts?
reply
taytus 3 hours ago
Another preview model? Why does Google keep doing this?
reply
486sx33 12 minutes ago
[dead]
reply
boxingdog 6 hours ago
[dead]
reply
rohithavale3108 8 hours ago
[flagged]
reply
Filip_portive 5 hours ago
My new comment
reply
techgnosis 7 hours ago
I'd love a new Gemini agent that isn't written with Node.js. Not sure why they think that's a good distribution model.
reply
CamperBob2 5 hours ago
(Shrug) Ask it to write one!
reply
jcims 7 hours ago
Pelican on a bicycle in drawio - https://imgur.com/a/tNgITTR

(FWIW I'm finding a lot of utility in LLMs doing diagrams in tools like drawio)

reply
pqdbr 6 hours ago
How are you prompting it to draw diagrams in drawio?
reply
jcims 53 minutes ago
Here's the chat I used for the drawing - https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...

Save the xml, import to drawio

reply
jcims 5 hours ago
Sometimes it helps to also provide a drawio file that has the elements you want (e.g. cloud service icons or whatever), but you just feed it the content you want diagrammed and let it eat.

Even if it's not completely correct, it usually creates something that's much closer to complete than a blank page.

reply
ac29 6 hours ago
Drawio drawings are just XML, so it's possible it can generate that directly.
reply
riku_iki 5 hours ago
hopefully op will answer if that's what he is doing
reply