Sue me, I have that right.
I still haven't found a single person willing to go to the movies and watch an AI movie. If it wasn't made by a person, there is no 'personal'-ity to it. It's just bland.
Eventually things will slow and slide back to thoughtful first, crapload second.
The last 27 Marvel movies might as well have been written by AI; plenty of people have been to see those.
I feel like a lot of the stuff my nieces listen to is AI music. It's like a hodgepodge of popular songs with little rhyme or reason. Very 'sloppy', but if they like it....
It's hard for me to confirm if they really are AI or not. But I'm willing to bet that (random Roblox game they're interested in today) == heavily AI made. Maybe there's some real human effort here or there but I have heavy suspicions.
Didn't we all start as kids listening to music so formulaic that it could just as well be AI-generated? A subset of people iteratively refines their music tastes, starts listening to everything from bebop to obscure Canadian hardcore bands, and comes to recognize quality in music.
But I am of the opinion that AI slop is displacing a lot of would-be beginner musicians and making it even harder for them to break out.
For better or worse, a lot of beginner artists were relying upon (my nieces and their classmates) clicking on their music and sharing it for Spotify $$$.
When I started in tech, at the dawn of the internet, it was an exciting field full of hope and the promise to empower and enrich the lives of people. Tech now is largely the opposite.
Enshittification is making things progressively worse. Tech companies are creating systems and tools rife with dark patterns to ensure you no longer own anything, you are under constant surveillance, and populations at large are manipulated through the magic of propaganda and illusory truth. Even the productivity gains are perversely used not to give people more time through fewer work days/hours but instead to give them more work. People are losing their connection to others and the world around them.
Everyone tends to focus on Orwell’s 1984, but I find Fahrenheit 451 to be the more prescient book. I used to be annoyed by the book people’s choice to leave society and wait for it to collapse so they could help rebuild. In my mind, they should have been mounting a resistance. Fair to say I understand the book people’s perspective so much more now.
And they were all right.
I speculate it has a lot to do with surveillance capitalism. It's the same type of tactics that have been used for things like the banning of marijuana, or the health merits of cigarettes. Fear mongering and lying so a few robber barons can profiteer.
I think AI is useful. I think it was rolled out haphazardly, similar to how people used to gargle radioactive isotopes or slather them on as aftershave so others could profit quickly. There are so many issues with the technology that the press won't even cover yet, because we all have to play stupid until trends emerge to report on; otherwise billionaire defense contractors will send their figurative, or possibly literal, hit squads after us. We have to wait for the tumors to grow and the jaws to fall off before society will remember "maybe we shouldn't be slapping radioactive stuff all over ourselves so some wealthy white dude gets wealthier."
The future of AI is in small local models people pay 0 dollars to upgrade or use. Anything else is meritless exploitation and destruction. That's why the US will lose. Reality has a liberal bias. Tough pill for AI libertarians to swallow. So they mud-fling.
after all, they think that a) they have a right to my property and b) creativity and hard work are dead
Interesting results regardless, when they compare the shift from 2025 to 2026.
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
Everyone in America is now fed and most children grow up spending a ton of time with both parents. This is because of automation bringing costs down.
It's easy to think things are terrible, but they are actually insanely good. Just 100 years ago life was horrible, now it's not.
And yet, people pine for what "once was".
AI will do the same, bring costs down. Now it's for white collar output, instead of manufacturing and agriculture.
The labor force disruption will be painful, as it always is, but things will be better on the other side because we just made a ton of work more efficient.
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kind of hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in the department to use a computer in the 1970s and wrote some programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid all the calculations done on the computer by hand to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
The main social problem with automation in general was that less intelligent people have been left behind, as only boring physical tasks are left for them to do, and people generally don't want to go back to destroying their bodies once they've seen the prospect of an office job.
At some point, frontier AI will only be worthwhile for super highly intelligent and motivated AI researchers, which is a tiny part of the population.
May I also add that this isn't just (or at all) about intelligence.
I'm lucky enough to be at a company where I have a large budget in terms of what I can spend in tokens. This gives me an enormous advantage over someone who is just as intelligent as me and who has the same experience as me minus the interaction I have with LLMs.
In this case the crucial difference is not intelligence; it's that I found myself in the right place to be able to go up, whereas a lot of people who are otherwise like me didn't get that opportunity, through no fault of their own.
People tend to attribute their successes to their own merit and their failures to happenstance, but if we're honest with ourselves the real world has a lot of randomness in it.
I guess cynicism is trendy.
It's not an anomalous sense of cynicism, hundreds of thousands of people are looking at their options and feeling hopeless. I'm glad I am not in that camp. The reason I'm not is because I was born sooner than they were. I don't blame them at all, it's looking a lot like the generation after them is cannon fodder if things trend the way they are now.
I would tell them this is the problem to fix. Taking your anger out on AI is the most shortsighted thing. When faced with a powerful new capability, disavowing the capability instead of enabling society to leverage it is absurd.
AI is fundamentally the automation of labor, and we can all see the incredible fruits we all reap from similar past leaps in capability.
Structure your society for a post-labor world. Don't halt the progress that has dramatically improved the human condition. To do so is a disservice to the species and all future humans - concretely, your own loved ones and especially your children.
UBI also won't fix things. The post-AI world that the US tech CEOs want us to imagine is not a utopia. The US manufactures almost nothing on the world scale. Our biggest contributions to the world economy were things like farm goods (which are in peril), fuel (which most countries are trying to phase out over environmental and recent geopolitical issues), and software, which will be commoditized through AI. Anything the US can manufacture, China can do better, cheaper, and faster. It hasn't been in our culture for decades, and our infrastructure is shoddy, and will be shoddier once data centers spin up and more wealth is concentrated in people who do not pay any taxes.
Gen Z and those coming after have no chance at a sustainable life if the billionaires get what they are asking for. Also, in a capitalist society, asking them to sacrifice their lives for the good of others is hilarious. Especially if there is no foreseeable good to come after.
You clearly accept this as Progress, but isn't the core debate here that it doesn't improve life for humans?
Does literally no one look at things from a historical perspective? The history of automation is right there on the Internet, for you to peruse at will.
Of course no one sees it as a collective achievement when the announcements are aimed either at scaring people about how even the team behind them is worried about releasing it, or at helping CEOs replace workers.
Artemis II, at least in the states, was an example of people genuinely feeling collective achievement. There is absolutely no reason this AI moment couldn't be that. Instead though the companies involved have explicitly chosen fear and capital as their marketing tools. We should be seeing this as an incredible time but those involved do not want us to and plan to keep the spoils for themselves so we shouldn't.
> But instead we're seeing them explicitly marketed as tools for capital centralization.
And labor automation, which is the single most valuable thing any technology can do. But if your answer is "kill the technology" instead of "structure society to live with it," of course you will experience pain.
It is a completely coherent position to like most technological progress, but at the same time be critical of some uses of ML/AI.
You are just making straw men here by suggesting that people that are critical of AI are critical of all technology.
Well, yes, but even if humans need to stay in the loop (as with most previous automations of labor), it is also moving the means of production into the hands of a small number of tech companies. In 2010 or 2020, anyone with a laptop could create a startup. It might be the case that in 2030, you could only do so if the major frontier model providers allow you to, and do not make it so expensive that it's only usable by entrenched players.
I am not fundamentally against AI, on the contrary, but I think the models should be in the hands of the wider population (i.e. open weight models), so that everyone has the means of production and can benefit from the automation. Also, it would only be fair, since the models are trained on the collective output of humanity. Of course, there are several barriers currently. There are pretty good open models, but running the near-frontier versions requires a lot of capital in the form of GPUs.
Sell NVIDIA!!!
31% seems remarkably high. Here we seem to be running up against the limitations of statistics. It is hard to interpret whether this is a scared-and-angry sort of angry or if there is something AI-related happening that is making them angry. I might have been lucky in my experiences, but generally if people get angry there is a reason other than "things are changing".
Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now cheaper quality and worse and they think I'm dumb enough not to notice" really isn't a selling point, but pretty much the universal western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the DnD movie which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become this scapegoat for a lot of things that it touches that are also larger unsolved problems with the economy, and it's even used that way, e.g. to motivate layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
That's my personal impression of the anger. It's not so much Luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
Silicon Valley’s leaders have been one-upping themselves on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who read through that bullshit then appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
Meta at least it is obvious they don't even understand the potential of AI, neither for good nor ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
In the past there was an implicit contract for white-collar employment that was based on the concept of earned experience through a period of manufacturing-type work. You enter your profession by performing uninteresting, low-paying manufacturing-type tasks (such as writing boilerplate code or performing low-level quality assurance) while you gain domain expertise and the perspective necessary to perform high-value work at a higher level.
LLMs are now exceptionally good at consuming the 20% of an employee's entry-level responsibilities.
What I see happening in the enterprise is that management is using AI to justify pulling the ladder up and closing the door behind them. When a senior engineer's or senior analyst's productivity has increased by 30% due to using LLMs, the executive's response is typically not "great, we have more time to work on bigger projects", but instead "great, we can freeze junior hiring for 2 years".
The entry-level positions in the labor force are being automated, seriously reducing access to those roles for the Gen Z workforce. On the other hand, most senior-level positions are not available to Gen Z workers, as they lack the skills and experience required to qualify for those positions.
Stagnation in the adoption of artificial intelligence (AI) technology is the direct result of having no entry- or junior-level employees working underneath senior staff members, causing a bottleneck for seniors. Employees generating raw output with AI have to check that output for accuracy themselves before integrating it into work systems and processes, as there are no entry-level employees to assist senior workers.
Gen Z workers do not dislike the tool (AI); however, they do not like how the tool is currently being implemented and used. The implementation of AI is driven by cutting labor costs rather than by training and developing Gen Z's human capital for future use.