There was a passing comment about "when we open up the GitHub repository" in the talk. So it's not open yet, but they've suggested it might be in the future.
I recently (as in, last night) added WebSockets to my backend, push notifications to my iOS frontend, and a notification banner to the webapp. It all kinda just works. Biggest issues have been version-matching across Django/Gunicorn/Amazon Linux images.
The problem is, unless you're ready to waste hours prompting to get something exactly how you want it, instead of spending those few minutes doing it yourself, you start to get complacent about whatever the LLM generated for you.
IMO it leaves you feeling helpless: with the hundreds or thousands of lines of code that have already been generated, you run into the sunk cost fallacy really fast. No matter what people say about building "hundreds of versions", you're spending so much time prompting or spec-writing that getting things exactly right might not feel worth it if it means starting all over again.
It's not as if things are instantaneous with an LLM either; it takes upwards of 20-30 minutes to "Ralph" through all of your requirements and build.
If you start some of it yourself first and you have an idea about where things are supposed to go it really helps you in your thinking process too, just letting it vibe fully in an empty directory leads to eventual sadness.
The way I use LLMs is that I design the main data structures, function interfaces, etc. and ask the LLM to fill them in. Also test cases and assertions.
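A minimal sketch of that workflow, with entirely hypothetical names: the signature, docstring, and assertions are the hand-written part, and only the function body is what you'd hand to the LLM.

```python
# Interface-first workflow sketch. Everything except the body of
# merge_intervals is what I'd write by hand before involving an LLM.

def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping closed intervals; result is sorted by start."""
    # (the body is the part you ask the LLM to fill in)
    merged: list[tuple[int, int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Hand-written assertions pin the contract down before any generation:
assert merge_intervals([(1, 3), (2, 6), (8, 10)]) == [(1, 6), (8, 10)]
assert merge_intervals([]) == []
```

The assertions double as an acceptance test: if the generated body doesn't satisfy them, you reject it without reading it closely.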
That is one of the three uses I give them.
The other two are: infra scripting, which tends to be isolated: "generate a python script to deploy blabla with parameters for blabla...". That saves time.
The third use is exploring alternative solutions, high level, not with code generation, to stimulate my thinking faster and explore solutions. A "better" and more reasoned search engine. But sometimes it also outputs incorrect information, so careful there and verify. But at least it is successful at the "drop me ideas".
For big systems, generating a lot of code leaves me with something I have no real picture of, which, when I get bugs, is going to be more difficult to modify, understand, and track (I don't even know the code, because it outputs too much of it!).
And for designing a system from zero, code-wise it is not good enough, IMHO.
Oh, a fourth thing they do well is code review, that one yes. As long as you are the expert and can quickly discard BS feedback, there is always something valuable.
And maybe finding bugs in a piece of code.
Definitely, for designing from scratch it is not reliable.
To the degree that those same people are now writing 10-100x more code...that is scary, but the doom and gloom is pretty tiring.
It looks very productive at first sight but when you start to find problems it is going to be a lot of fun on a production system.
Because basically you cannot study all the output that the LLM throws line by line if what you want is speed.
Which leaves reliability compromised.
Also, LLMs sometimes throw in a lot of extra and unnecessary code, making things more baroque than if you had sat down and thought about the problem a bit.
Yes, you can deliver code faster with LLMs, maybe. But is it going to be good enough for maintenance and bug fixing?
I am not sure at all.
Any engineer worth their salt will always try to avoid adding code. Any amount of code you add to a system, whether it's written by you or an all-knowing AI, is a liability. If you spend a majority of your work day writing code, it's understandable to want to rely heavily on LLMs.
Where I'd like people to draw the line is at not knowing at all what the X thousand lines of code are doing.
In my career, I have never been in a situation where my problems could be solved by piecing together code from SO. When I say "spend those few minutes doing it yourself" I am specifically talking about UI, but it does apply to other situations too.
For instance, say you had to change your UI layout to something specific. You could try to collect screenshots and articulate what you need changed. If you weren't clear enough, that cycle of prompting with the AI would waste your time; you could've just made the change yourself.
There are many instances where the latter option is going to be faster and more accurate. This would only be possible if you had some idea of your code base.
When you've let an agent take full control of your codebase, you will have to sink time into understanding it. Since clearly everyone is too busy for that, you get stuck in a loop: the only way to make those "last 10%" changes is via the agent.
It is still possible to write code with AI AND educate yourself on what the codebase architecture is. Even better, you can educate yourself on good software engineering and architecture and build that into making better specs. You can understand what the code is doing by having good tests, observability, and actually seeing it work. But if you're after peeping what every character is doing, I am not going to stop you!
In fact, in my experience humans do targeted bug fixing reasonably well, and are better than LLMs currently are at knowing they did not change the structure of other code.
I do not find them reliable enough TBH to leave such delicate tasks in a production system in their hands.
You can already see people running into these issues: they have a spec in mind, they work on the spec with an LLM, and the spec ends up with stuff added to it that wasn't what they were expecting.
And again, I am not against LLMs, but I can be against how they're being used. You write some stuff down, maybe have the LLM scaffold a skeleton for you. You could even discuss with the LLM what classes should be named, what they should do, etc. Just be in the process, so the entire codebase isn't 200% foreign to you by the time it's done.
Also, I am no one's mother; everyone has free will and can do whatever they'd like. If you don't think you have better things to do than to produce 3-5 pieces of abandonware every weekend, then good for you.
I've only ever joined teams with large, old codebases where most of the code was written by people who haven't been at the company in years, and my coworkers commit giant changes that would take me awhile to understand so genAI feels pretty standard to me.
And, as well as noticing actual semantic issues, it's worth noting where they've mixed up abstractions or just allowed a file to grow to an unsustainable size that needs refactoring. You can ask the AI agent to do the refactoring, with some guidance (e.g. split up this file into three files named x, y, z; put this sort of thing in x, ...). This helps you as a human to understand their changes, and also helps the AI. It also makes you feel in control of the overall code design, even though you're no longer writing all the details.
They'll often need a little final tuning afterwards (either by hand or ask the AI again) e.g. move this flag from x to y. As is often the case, it's just like you have an enthusiastic and very fast but quite junior dev working for you.
I've tried fixing some code manually and then re-using the agent, but it removed my fix.
Once you vibe code, you don't look at the code.
Truly one of the statements of all time. I hope you look at the code, even frontier agents make serious lapses in "judgement".
It's sad to think we may be going backwards and introducing more black boxes, our own apps.
Offloading your thinking, typing all the garbled thoughts in your head with respect to a problem in a prompt and getting a coherent, tailored solution in almost an instant. A superpowered crutch that helps you coast through tiring work.
That crutch soon transforms into dependence and before you know it you start saying things like "Once you vibe code, you don't look at the code".
Most of it is rather terrible, but a lot of the times it really doesn't matter. At least most of it scales better than Excel, and for the most part they can debug/fix their issues with more prompts. The stuff that turns out to matter eventually makes it to my team, and then it usually gets rewritten from scratch.
I think you underestimate how easy it is to get something to work well enough with AI.
To the AI optimist, the idea of reading code line by line will seem as antiquated as perusing CPU registers one by one: something you do when needed, but typically you can just trust your tooling to do the right thing.
I wouldn’t say I am in that camp, but that’s one thought on the matter. That natural language becomes “the code” and the actual code becomes “machine language”.
And therein lies the problem
I've worked places where juniors wrote bad code that was accepted because the QA tests passed.
I even had a situation in production where we had memory leaks because nobody tried to use the app for more than 20 minutes, when we knew it would be used 24/7.
We aim for 99% quality when no-one wants it. No-one wants to pay for it.
GitHub is down to one 9 and I haven't heard of them losing many clients; people just cope.
We've reached a level where we have so much ram that we find garbage collection and immutability normal, even desired.
We are wasting bandwidth by using JSON instead of binary because it's easier to read when you have to debug, because it's easier to debug while running than to think before coding.
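As a rough illustration of that JSON-versus-binary overhead (the field names and values here are made up, and the binary layout is just one plausible fixed-width encoding):

```python
# The same three sensor readings, once as JSON and once packed as
# fixed-width binary. Made-up payload, purely to show relative sizes.
import json
import struct

reading = {"temp_c": 21.5, "rpm": 3200, "throttle": 0.42}

as_json = json.dumps(reading).encode()

# "<fIf" = little-endian float32, uint32, float32 = 12 bytes total.
as_binary = struct.pack("<fIf", reading["temp_c"], reading["rpm"], reading["throttle"])

print(len(as_json), len(as_binary))  # JSON is several times larger
```

The binary version is fixed at 12 bytes, while the JSON version spends most of its bytes on field names and punctuation; that's the readability-for-bandwidth trade the comment is describing.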
For me, GUI and Web code falls into "throwaway". I'm trying to do something else and the GUI code development is mostly in my way. GUI (especially phone) and Web programming knowledge has a half-life measured in months and, since I don't track them, my knowledge is always out-of-date. Any GUI framework is likely to have a paroxysm and completely rewrite itself in between points when I look at it, and an LLM will almost always beat me at that conversion. Generating my GUI by creating an English description and letting an AI convert that to "GUI Flavour of the Day(tm)" is my least friction path.
This should be completely unsurprising to everybody. GUI programming is such a pain in the ass that we have collectively adopted things like Electron and TUIs. The fact that most programmers hate GUI programming and will embrace anything to avoid that unwelcome task is pretty obvious application of AI.
> It all kinda just works.
> Can usability test in-tandem.
Man, people say this kind of thing, and I go… really? …because I use Claude code, and an iOS MCP server (1) and hot damn I would not describe the experience as “just works”.
What MCP and model are you using to automate the testing on your device and do automated QA with to, eg. verify your native device notifications are working?
My experience is that Claude is great at writing code, but really terrible at verifying that it works.
What are you using? Not prompts; like, servers or tools or whatever, since obviously Claude doesn’t support this at all out of the box.
(1) - specifically, this one https://github.com/joshuayoes/ios-simulator-mcp
Aside: GL is still a good practical choice for games built by small teams.
Certainly, you could define a console as an NES and then claim "console grade", but I'm guessing they mean "console grade" as in competitive with the renderers in games like Battlefield 6, Elden Ring, Horizon Forbidden West, etc. Filament is not up to those tasks.
Yes, you can make great games without the features of top console game graphics engines. But again, there's no reason for the hyperbole of putting "console grade" in the description then.
That said, this is cool and I would have probably celebrated a similarly fun project in their shoes. Perhaps the real accomplishment here is getting Toyota to employ you to build a new, niche game engine.
They already tried other engines, such as Unity. The team didn't just go off and build something without trying existing solutions first.
I don't know how bloated Godot is, but AFAIK libgodot development started as part of Migeran's automotive AR HUD prototype so I'm surprised to hear it has poor startup time for a car.
If literally the only option was an embedded flutter view, then there was never more than one viable solution and looking into unity etc was wasted effort.
The places where it poses challenges in my experience are high quality typesetting/rich text, responsive UI that requires a wide range of presentations (i.e. different layout structures on mobile vs desktop), and granular control over rendering details.
But for functionality-oriented UI, it's hard to beat its simplicity and scalability, particularly when you want to develop on one platform (e.g. computer) and have everything "just work" identically on other form factors (web/tablet/etc.).
For example, Godot's editor is bootstrapped/built in Godot, and runs natively on Web, Android, and Quest XR among other platforms.
Toyota, assuming they move forward with this, might even become the main corporate sponsor, since Google appears to be disinterested.
for anyone considering cross platform for games: the rendering abstraction is mostly a solved problem now, the real question is how well the framework handles platform specific stuff like photo access, push notifications, and background processing. that's where you'll spend 60% of your debugging time regardless of engine choice.
curious about fluorite's approach to native plugin integration, that's usually where these frameworks either shine or completely fall apart.
for my use case the bottleneck ended up being camera api quirks across ios and android. i'm building a game where the core mechanic revolves around taking photos and the inconsistencies between platforms are wild. nothing to do with rendering at all, just getting reliable camera access with the right permissions flow on both platforms without the app feeling clunky.
https://github.com/google/filament
but if they're targeting embedded systems, maybe they haven't prioritized a public web demo yet. If the bulk of the project is actually in C++, making a web demo probably involves a whole WASM side-quest. I suspect there's a different amount of friction between "I wanna open source this cool project we're doing" and "I wanna build a rendering target we won't use to make the README look better."
They don't want a build process.
The page says that it's "Powered by Google's Filament renderer" which is written in C++, and that Fluorite itself has "a data-oriented ECS (Entity-Component-System) architecture ... written in C++".
Also, although Flutter/Dart applications can run on web (compile to WASM or transpile to Javascript) and so can Filament, the Fluorite FOSDEM page says target platforms are "mobile, desktop, embedded and console platforms" so it's not clear Fluorite even cares about running in browser.
The UI toolkits in game engine usually suck hard, so here they started from a good UI toolkit and made it possible to make relatively performant games.
There's more info at https://www.reddit.com/r/programming/comments/1r0lx9g/fluori...
Makes me wonder if you might eventually see the OG Flutter team move to a shop like Toyota, the same way the original React team moved to Vercel. It's nice to see open source projects be portable beyond the companies that instigated them.
- fancy HDR rendering with reflection planes, atmospheric effects, tone mapping, camera effects, all kinds of animations for doors opening, lights turning on/off, etc.
- content pipelines to get all this data from digital creation tools into packages deployable on target
When everything is said and done, this is the same bread and butter that game engines use, so the industry has pushed to leverage those and spread into these markets. Both Unity and Epic have tried, but not without issues. Either way, it will be open source and quite fun to build stuff on top of.
Bevy is the opposite of an old boring solution. It's a cool engine, but I imagine a manufacturer would like to have long-term support with 15-year timelines. Bevy doesn't offer that, and even trying to have that wouldn't be good for Bevy.
I've been burned by using closed source game engines before. There's just too many edge cases and nuances that come up when debugging physics or graphical issues. I strongly recommend against using this until they become at the very least source-available.
Something about games authored by a giant company that will presumably actually ship in some products: "Hello, human resources?"
Please stop, all this does is introduce new ways for things to break.
Funny how “game engines” are now car parts in 2026.
Can I just have an electric car that's a car and nothing else? Seats, wheels, pedals, mirrors, real buttons, no displays, just an aux jack. I'd buy it; hell, I might even take the risk and pre-order it.
In the US, no. Backup cameras are required by federal law as of 2018. The intent of the law was to reduce the number of children killed by being backed over because the driver couldn't see them behind the car.
I have unusually good spatial skills. I have parallel parked and reverse parked perfectly every single time for over 5 years…
…but no matter what, I cannot see behind my bumper. No mirror on any car points there.
Another was a HUD. Being able to see how fast I'm going, what the speed limit is, and other info; all while keeping my eyes on the road... is safer.
i only have those two data points; but give me an older car with larger windows every. single. time.
Funny how much the conversation diverges from a comment about someone simply not wanting a car shipped with an OS capturing telemetry of even the farts in the right back seat.
I can use my eyes and look around but I can’t see through objects.
The camera and sensors have an incredibly wide view. I only have to get my rear end out a few inches to be able to see everything I couldn’t before. Pray and pull out isn’t very safe.
There was the chip shortage during COVID which held car production back because the automakers couldn't source their chips fast enough. I am waiting to see if the current supply issue for RAM modules will produce a similar effect.
Was there a single mass market consumer car sold in the United States in this millennium that didn’t already have processors and RAM in them?
I would be absolutely shocked if there was a single car for which the relatively recent backup camera requirement required them to introduce processors and RAM for the first time.
There's the yellow composite plug, a 12V input, and a small bit of wire to be cut to rotate image 180 degrees, at the other end of a 30ft cable from the camera. The composite goes into the existing infotainment. There would be a wire from shifter to infotainment that switches the display to external composite video when the gear lever is in reverse. I think it even came with a miniature hole saw in size of the camera module.
$10 and one afternoon later, I could have upgraded a dumb car to have one, complete with auto switch to backup on reverse. No software hacking needed. It's fundamentally an extremely simple thing.
Call me old fashioned but in my opinion, processors/ram/chips/components are a good trade-off versus squished children
It's so silly when they make some "Advanced Technology Package" with a VGA camera and a 2-inches-bigger infotainment screen that's still worse than junk from Aliexpress, and charge $3000 extra for it.
I know it's just a profit-maximizing market segmentation, but I like to imagine their Nokia-loving CEO has just seen an iPad for the first time.
They might as well be complaining about the costs of a rear view mirror, it is nonsense from the start. If a $20 gadget breaks the bank on a $30,000 minimum vehicle, they are a shitty business to start with and we should all be clapping our hands when they go out of business.
You shouldn’t need any dedicated RAM. A decent microcontroller should be able to handle transcoding the output from the camera to the display and provide infotainment software that talks to the CANbus or Ethernet.
And the bare minimum is probably just a camera and a display.
Even buffering a full HD frame would only require a few megabytes.
Pretty sure the law doesn’t require an electron app running a VLM (yet) that would justify anything approaching gigabytes of RAM.
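The "few megabytes" figure checks out with simple arithmetic; here's a back-of-envelope sketch using two common pixel formats (chosen as illustrative examples, not anything a specific camera mandates):

```python
# Raw buffer size for a single 1080p frame in two common pixel formats.
width, height = 1920, 1080

rgb888 = width * height * 3   # 24-bit RGB: 3 bytes per pixel
yuv422 = width * height * 2   # YUV 4:2:2 chroma subsampling: 2 bytes per pixel

print(rgb888 / 1e6, yuv422 / 1e6)  # ~6.2 MB and ~4.1 MB per frame
```

So even double-buffering full HD in the worst of these formats stays well under 16 MB, which is microcontroller-with-external-SDRAM territory, nowhere near gigabytes.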
Tech for cars is “standard-sized”. Not everything revolves around datacenters and tech, the car industry easily predates the computer industry and operates on a lot tighter margins and a lot stricter regulations.
So having a smaller, simpler chip that ultimately costs less physical resources at scale and is simpler to test is better when you’re planning on selling millions of units and you need to prove that it isn’t going to fail and kill somebody. Or, if it does fail and kill somebody, it’s simpler to analyze to figure out why that happened. You’ve also got to worry about failure rates for things like a separate RAM module not being seated properly at the factory and slipping out of the socket someday when the car is moving around.
Now - yes, modern cars have gotten more complex, and are more likely to run some software on Linux rather than an RTOS or ASIC. But the original complaint was that a backup camera adds non-negligible complexity/cost.
For a budget car where that would even make sense, that means you’re expecting to sell at high volume and basically nothing else requires electronics. So sourcing 1GB RAM chips and a motherboard that you can slot them in would be complete overkill and probably a regulatory nightmare, when you could just buy an off-the-shelf industrial-grade microcontroller package that gets fabbed en masse, dozens or hundreds of units to a single silicon wafer.
In practice, you’re not going to tie intimate knowledge of the matrix headlights into the infotainment system, that’s just bad engineering. At most it would know how to switch them on and off, maybe a few very granular settings like brightness or color or some kind of frequency adjustment, not worrying about every single LED, but I can’t imagine a budget car ever exposing all that to the end user. Even if you did, that would be some kind of legendarily bad implementation to require a gigabyte of RAM to manage dozens of LEDs. Like, is it launching a separate node instance exposing a separate HTTPS port for every LED at that point?
Ditto for the satellite radio. That can and probably is a separate module, and that’s more of a radio / AV domain piece of tech that’s going to operate in a world that historically hasn’t had the luxury of gigabytes of RAM.
Sensors - if this is a self-driving car with 3D LIDAR and 360-degree image sensors, the backup camera requirement is obviously utterly negligible.
Remember, we had TV for most of the 20th century, even before integrated circuits even existed, let alone computers and RAM. We didn’t magically lose the ability to send video around without the luxury of storing hundreds of frames’ worth of data.
Yeah, at some point it makes more sense to make or grab a chip with slightly more RAM so it has more market reach, but cars are manufactured at a scale where they actually are drivers of microcontroller technology. We are talking about a few dollars for a chip in a car being sold for thousands of dollars used, or tens of thousands of dollars new.
There is just no way that adding a backup camera is an existential issue for product lines.
So what microcontroller do you have in mind that can run a 1-2 megapixel screen on internal memory? I would have guessed that a separate RAM chip would be cheaper.
But mostly it’s the fundamental problem space from an A/V perspective. You don’t need iPhone-grade image processing - you just need to convert the raw signal from the CMOS chip to some flavor of YUV or RGB, and get that over to the screen via whatever interface it exposes.
Broadcast HD was designed to be compatible with pretty stateless one-way transmission over the air. And that was a follow-on to analog encodings that were laid down based on the timing of the scanning CRT gun, derived from dividing the power line frequency, in an era where 1GB of RAM would be sci-fi. We use 29.97 / 59.94 fps from shimming the color signal into 30 fps B&W back when color TV was invented in the early-mid 1900s; that's how tight this domain is.
It’s like saying your family of four is going to take a vacation, so you might need to reserve an entire Hyatt for a week, rather than a single room in a Motel 6.
Blaming trucks and SUVs for everything is a favorite pastime of internet commenters, but all vehicles benefit from backup cameras and collision detection sensors.
https://www.cdc.gov/mmwr/volumes/74/wr/mm7408a2.htm
The US was ahead of the EU in requiring backup cameras on new vehicles.
The majority of pedestrian accidents don't involve reversing, so backup cameras don't address them.
Are you just trying to turn this into a US vs EU argument?
So pedestrian deaths would start rising again.
Americans drive significantly more miles per year, and larger/more comfortable cars are in part needed because Americans spend far more time in their cars than Europeans.
Euro governments are also increasingly anti-car, which means citizens are losing their freedom to travel as they wish and are unreasonably taxed, policed, and treated like cash cows for the "privilege" of driving.
Most of my European friends brag about how they can get anywhere via train and how much more comfortable it is to travel that way. When I visit Europe I have to agree. Just haven't really seen this viewpoint, though I do think I would feel this way as an American if I moved to Europe to some extent (though I'd be extremely happy to have viable mass transit).
I have a 2016 vehicle with no console screen, and the sensors have saved me from hitting all sorts of things; they're sensitive enough to detect minor obstacles like long grass.
https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5406a2.htm
I suspect older children are more likely to be able to be aware of their surroundings and have better gross motor skills to react.
When I reverse, there can't possibly be something behind my car, because I've just driven forwards over that area. When I begin to reverse, I'm looking all around behind and I'll be able to see if an infant, or dog or whatever, runs into the path I intend to take.
A lot of people tend to drive forwards into parking spaces then reverse out. I've no idea why, because it's far easier to reverse in then drive forwards out. And I reckon much safer too. If people are sitting in their cars for extended periods then beginning to drive in reverse, I can see this being a problem. But there are also vehicles that you wouldn't be able to see an infant in front of the car either.
The perk of not having to twist your body around while steering is also pretty nice.
That's it. That's all our problems.
Was a great example of the ridiculous expectations some of us Americans have on ridiculously huge vehicles.
Backup cameras are required for new vehicles in a lot of markets: EU, Canada, Japan, and more.
So it's not just a US requirement.
It doesn't need to be a giant infotainment display.
The problem with modern cars is that everything is so heavily integrated and proprietary. If I swapped out the OEM touchscreen, apparently I would also lose the ability to set the clock on my instrument cluster. Now that this has become normalized, automakers have realized they can lock Android Auto/CarPlay behind a paywall and you’ll have no recourse but to buy one of those tablets that you stick on your dashboard and plug into the aux port. If your car still has an aux port.
I’m excited for the Slate, but unfortunately I have the feeling that the people who buy new cars aren’t the same people that want the Slate. The rest of us who keep our 20+ year old vehicles reliably plugging along don’t make any money for automakers.
Every single car I have been in in the last 5 years or so has Bluetooth. No need for aux ports in this day and age, especially when devices don't have headphone jacks anymore.
Are you stuck in the 2000's?
Bluetoothing to your car is to me the same energy as using "wireless" charging stands for your phone. You are just replacing a physical tether with a less efficient digital tether of higher complexity for no actual gains.
I've now seen that Android has an option to turn on Bluetooth every day... I turned it off.
Wish they would do that for all the trucks with 5ft high hoods with no cameras.
It's like, at least one exists in Japan, on the used market even, if you absolutely have to have one, I guess.
0: https://www.honda.co.jp/N-ONE-e/webcatalog/design/images/e_g...
1: https://driver-web.jp/articles/gallery/41396/36291
2: https://www.carsensor.net/usedcar/detail/AU6687733258/index.... | https://archive.is/gbBzc
One of the example uses given in the talk is 3D tutorials, which I could imagine being handy. Not sure I'd want to click on the car parts for it but with the correct affordances I could imagine a potentially useful interface.
Power windows are standard. 169hp. Automatic climate control, central locking and key fobs, Automatic emergency braking and other radar based features. Digital gauge cluster. Modern infotainment. Modern crash safety, which is really good compared to 20 years ago.
That's a lot of car for $10k in 1996 dollars.
That's ignoring the $3k in fees, taxes, and whatever scam the dealer runs.
The reason we don't see more of it is that selling one $23k Corolla to one value minded shopper can't make line go up as much as selling one $60k MEGATRUCK to one easily influenced shopper. The new car market is exclusively for people who buy new cars regularly, and are therefore willing to get very bad deals for cars. The market is driven by people who self select for bad ability to parse value.
There has been real price decrease in small cars!
Wait until you see how cars are made now.
Compare it to other machine-made products that have actually come down in price since 1995, like kettles, LED lights, PC components, and peripherals.
Cars should be far cheaper but they're not, and that's on purpose.
Crazy to think had the federal subsidy not been cut, that car would be possible to get for around $15k. Unheard of.
To have a decent travel experience in an EV you'd likely at least need this data ported out to your phone via an OBD adapter or CarPlay / Android Auto integration with an in-car infotainment display.
Today this is done via an OBD Bluetooth adapter or via CarPlay/Android Auto APIs that allow the phone to get data from the car.
Ol' Dirty Bastard? I jest, but I think the theory behind wanting an 'On-board Diagnostics' [1] connection would be to get data from the vehicle. You can get cheap bluetooth OBD-II adapters to transmit that info to your phone, it's not a given. I don't know much about electric cars, but if you want your phone to know the fuel level in an ICE vehicle then you'd need this kind of connection.
[1] https://en.wikipedia.org/wiki/On-board_diagnostics
Their pitch is to ship a pretty minimal platform that you can customize as you want.
"V8"
"Which kind of V8?"
https://en.wikipedia.org/wiki/V8_(JavaScript_engine)
I loved the Viper, but its spartan interior and features list were its detriment.
sounds like slate:
https://www.slate.auto/en
More expensive cars will have more electronics. They kinda want to sell them.
You can buy a tubular frame chassis for Beetle-based kit cars from a factory in the south of England, one that's been adapted to take modern coilover suspension and an MGF or MGTF engine and gearbox, because Beetles are now so rare that hardly anyone wants to put the engine back into an actual Beetle.
I reckon with a minor amount of fettling you could squeeze a Nissan Leaf transaxle and a sufficient amount of batteries in, and still drop your Manx beach buggy shell over the top. Or any other shell you like.
You'd be running around in a solar-powered beach buggy. THAT is the future.
Personally, I'd be happy with some kind of situation where:
1. You have a small in-dash touchscreen, as most small sedans have these days, as the basic level of "backup camera and radio view"
2. Everything the car does has a physical button, so you don't NEED to use the touchscreen
3. The car has a USB-C port that can power a tablet and provides a standardized interface that e.g. iOS and Android can talk to, so that users don't have to worry about their new OS not supporting the not-updated app, or the app not supporting their not-updated device
4. Sell an optional tablet mount that attaches to the dash the way a built-in screen would
5. Sell an optional 'tablet' that does nothing but interface with the USB-C port and provide what it needs, in case someone wants a larger screen without having to buy an iPad Pro
Then again I don't drive, so I'd be happy with none of this also.
Seems almost inevitable. Game engines end up supporting user interface elements and text with translations, but with an emphasis on simplicity, performance, and robustness. Many currently trending user interface stacks readily generate bursts of complexity, have poor performance even with simple usage, and are buggy and crash prone.
There is functionally no difference between the powertrain of an electric road car and a brushless drill. How much software is there in your brushless drill? More than zero, far less than an electric road car.
Game engines are probably trivially cheap to produce in 2026. You forget that Toyota sells 10M cars per year. In 3 years that's 30M cars. What does the game engine cost each buyer? 30 cents?
https://unity.com/blog/industry/automotive-hmi-template-take...
https://www.unrealengine.com/en-US/uses/automotive
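The amortization above is easy to sanity-check. The 10M cars/year over 3 years comes from the comment; the one-time license fee is a purely hypothetical figure chosen to land near the "30 cents" estimate:

```python
# Hypothetical figures: the license fee is an assumption,
# the volume is the Toyota-scale number from the comment.
license_cost = 10_000_000      # assumed one-time engine license, USD
cars = 10_000_000 * 3          # 10M cars/year over 3 years

per_car = license_cost / cars
print(f"${per_car:.2f} per car")  # $0.33 per car -- roughly "30 cents"
```

Even if the real fee were 10x that, it would still be a rounding error against the sticker price.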
It might add up to a lot of money for the manufacturer who is cranking out thousands or millions of vehicles, but to the consumer buying one car it isn't a meaningful difference.
Then you had wiring: each button's wiring, I believe, was about $1. That wasn't one wire but a few: power, ground, signal. Every button had them. This wasn't my job, so I didn't follow the price too closely, but I asked the question at the time. Going into the ECU, there is also a cost associated with it.
Anyway, you could assume that 10 years ago each button cost $2. A car has 40-70 buttons, so it's probably like $100 a car. Maybe $150 or $200 in today's money.
Also buttons and wires break, causing warranty problems.
At the time these vehicles were selling for under $20k at the bottom and $40k at the top. So about 1% of the cost was buttons.
This doesn't even include the cost of hiring ~20 engineers to handle the buttons, or ~6 people to check appearance and do testing. It doesn't include the assembly costs on the line. That 1% was just the cost of button + wire.
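Putting those rough figures together (all numbers are the estimates from the comment, not measured data):

```python
# Rough estimates from the comment above, ~10 years ago, USD.
cost_per_button = 2.0                      # button + wiring
buttons_low, buttons_high = 40, 70         # buttons per car
vehicle_low, vehicle_high = 20_000, 40_000  # sticker price range

cost_low = buttons_low * cost_per_button    # $80 per car
cost_high = buttons_high * cost_per_button  # $140 per car

# Share of the sticker price: worst case is a button-heavy cheap car,
# best case is a button-light expensive one.
share_high = cost_high / vehicle_low   # 0.7%
share_low = cost_low / vehicle_high    # 0.2%
print(f"buttons are {share_low:.1%} to {share_high:.1%} of the sticker price")
```

The midpoint lands around half a percent of a $20k car, which, once you inflation-adjust the button cost to $150-$200, is consistent with the "about 1%" figure in the comment.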
It's a good thing that doesn't happen to giant 15" integrated touchscreens. Imagine how much of a problem that would be!
That doesn't make sense. $1 uninstalled might make sense for a fancy custom-molded button, even if it's too much for a generic button. (I'd rather have some generic buttons with labels than use a touchscreen, by the way.) But there's no way a few feet of signal wire and the proportional share of power wires get anywhere near $1 uninstalled.
Also I can find entire car stereo units with 15 buttons on them for $15? That kind of integrated button is cheap, has been common in cars for a long time, and can control things indirectly like a touch screen button if that's cheaper than direct wiring.
Your aftermarket unit hasn't been tested for how it reacts with sunscreen.
But also that kind of button doesn't need dedicated wires.
But for some reason people buy nice new vehicles and don't buy crappy new vehicles...
Touchscreen controls are crappy. They're less nice than ugly buttons.
(And of course people still buy cars with flaws. An entire car is an amalgamation of so many features that it's hard to use purchases to measure people's reactions to the vast majority of specific changes. And features like controls often take longer than a test drive to evaluate, too.)
I have a late 90s Range Rover. It has about 12 buttons on the dashboard, most of which I never have to bother with (they do things that turn on and off the fog lamps, which I don't need to use, or adjust the air suspension, which I rarely need to use). I turn the lights off and on, and I switch the heating from "normal" to "BLAST EVERYTHING ON, FRONT AND REAR DEMIST ON, SEAT HEATERS ON, EVERYTHING ON, EVERYTHING ON, EVERYTHING UP FULL, WE'RE AN AIR FRYER NOW" mode.
What do you actually need an LCD for in a car?
Backup camera. They are required by law.
The real problem is that the whole thing is not designed to be user-serviceable.