

WIRED article forecasting the generative AI bubble will burst in 2025. This is more optimistic than my own expectations, but if WIRED are printing it, it's the direction sentiment in Silicon Valley is running in.

(Hint: there's gold in AI, but it's in *analytical* AI, aka big data, not stochastic parrot bullshit.)

wired.com/story/generative-ai-…

in reply to Charlie Stross

Gary Marcus has been banging this drum since 2016 (back when we were still calling it deep learning).

This is his grift: he just predicts the end times for whatever flavour of AI is most popular, with zero insight into the technology or the industry.

For some reason people keep falling for it. Even Wired, apparently.

in reply to Profane tmesis

@ptmesis It's so very obviously a market bubble. Generative AI makes for amusing toys, but it's fundamentally not conscious or aware in any way, so its answers veer towards "plausible bullshit" rather than "meaningful".

Which is why spicy autocomplete gets you results like this screencap ...

Meanwhile using non-generative tools for spotting cancer in breast scans seems great, but you can't milk gullible non-techie investors of billions that way.

in reply to Sarah Brown

@ptmesis @goatsarah Interestingly, someone opened a door for me in what looked like a large store room in the Kilburn Building at Manchester University this week and showed me Steve Furber's SpiNNaker machine whirring away. Now THAT'S a real attempt at constructing something that thinks, even if it is only a mouse brain. nowpublishers.com/article/Book…
in reply to Sarah Brown

@goatsarah @ptmesis
The trend of assuming, by default, that many unusual, fascinating, and beautiful things, images in particular, are AI-generated is disconcerting. The dilution of the human sense of wonder is not only sad; it risks undermining our ability to be inquisitive, to seek out new knowledge and ideas.

Couple that with these systems being very good at creating an illusion of understanding and of being able to reason, and I can't help but worry for the future.

in reply to Charlie Stross

That wasn't my point. My point is that Marcus specifically has been consistently wrong, and I recommend ignoring his predictions.

There are plenty of things wrong with AI, but there are grifters on both sides.

in reply to Charlie Stross

"Generative" "AI" is fun to mess with and can do some fun things, but it never really had any *true* use. Its failure was inevitable as more and more people realised it couldn't meet its claims; the only thing that surprised me is how long it has taken for investors to finally get tired of it. And it's really weird, because quite a lot of this seems to have consisted of speculative VC trading and the like. (I don't fully understand how that makes money, but it sure is screwing the world.)
in reply to Nazo

@nazokiyoubinbou I can see one viable use for generative AI: procedurally generated content in games. (E.g. to allow dialogue with NPCs.) In that context, "hallucinations" may be a feature, not a bug.

That is NOT sufficient to justify OpenAI's $80Bn valuation, though.

@Nazo
in reply to Charlie Stross

There is actually a mod for Skyrim that uses an LLM to generate speech for people when you talk to them (intended primarily for use in VR, so you can just walk around talking, but it can be used outside of it too). It's actually really cool, even though it gets a bit weird at times (it needs more context telling it what to actually do properly; for example, you can talk to the dog or the chicken and a male Nord voice talks back, lol).

It absolutely can work if done right!

in reply to Charlie Stross

I go with your hint. In the medical domain that I'm in, it's simply being researched, developed and used. Just like other algorithms. The rest is hype and big money.
in reply to Charlie Stross

I would be fine with this, but I must admit that I made a bet with my students last year that it would take about 4 years.
in reply to Charlie Stross

And that it's ethical, unbiased, and won't lead to massive global warming
in reply to Charlie Stross

I don't think there's gold in big data either. There's marginal gold: small optimisations of complex processes, marginally improved margins...

All worth trillions, but hard to extract trillions.

in reply to Charlie Stross

those data centers do have real applications, however, for pervasive surveillance and drone-swarm kill-nets. It's automated surveillance and military power.

It's perfect for a band of man-gods living in their right-wing libertarian utopia of a patchwork of city-states, built on the ashes of nations. They would maintain absolute control with their automated monopoly on violence.

in reply to Charlie Stross

I haven't seen any big benefits for "AI" in analysis at all, only heard lots of promises that turn out to be complete shit.

The only thing I've seen that is useful is generation of data, like videos, images etc. It will do wonders for computer games and films, but nothing for the field I'm in (Cyber Security).

People say that it is good at summarising stuff, but it still introduces things that aren't in the source, and that turns the analysis into untrustworthy shit.

edition.cnn.com/2024/12/19/med…

in reply to Ichinin ✅🎯🙄

@Ichinin The broken promises I've been hearing are about using *generative* AI to perform analysis -- which, no surprise, it's spectacularly bad at.

Meanwhile, I can go on GitHub and freely download an ML-based porn filter, or a program that can sort my photos by subject matter, or any number of other purpose-built tools. It turns out that once you stop trying to make dovetail joints with a hammer, ML can work pretty well. It just doesn't make headlines.

in reply to Charlie Stross

It's slightly depressing that the last product people used a lot, but kept asking "what is it for?", was Twitter...and the final answer was "fascism".
in reply to Charlie Stross

thereโ€™s gold in AI. For those making the tools (Nvidia). Just like the gold rush in Alaska.
in reply to Charlie Stross

"Generative AI doesn't actually work that well, and maybe it never will." ✨
in reply to Charlie Stross

- and then still only if one knows how to pose the right questions for data analysis to answer, which was the problem with Big Data the last time round (about 10 years ago, BTW); everybody thought answers would just magically manifest themselves if you collected enough data, even though they had no idea how to use it...
in reply to Charlie Stross

hey, this is paywalled for me, do you possibly have a gift link? i would love to share it with my students. thank you
in reply to Charlie Stross

I personally think that the AI bubble will pop at the same time as the stock market and big tech won't learn a thing.
in reply to Charlie Stross

GenAI is a bubble like the World Wide Web in the '90s or mobile in the late 2000s/early 2010s: extremely useful technology that has been overhyped.

It's not a bubble in the way that blockchain and crypto are bubbles: technology that is *at best* of extremely limited use, and ridiculously overhyped.

Will AI achieve true personhood or superintelligence? My answer today is the same as it would have been in 1994: I guess so? Probably? Maybe tomorrow, or in a thousand years, or never.

in reply to Charlie Stross

LLMs are good for specific tasks, especially if they can be tailored for the application and the model minified enough to run efficiently (ideally locally).

But yeah, analysis of big data has always been where the power is at. Classifiers FTW.
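The kind of small, purpose-built classifier being described can be sketched in a few lines. This toy nearest-centroid, bag-of-words text classifier is a minimal illustration (the spam/ham labels and training snippets are invented for this example, not from the thread) of why local, task-specific ML is cheap to run and easy to inspect compared with a generative model:

```python
# Minimal sketch: a tiny nearest-centroid text classifier using
# bag-of-words vectors. Labels and training data are invented for
# illustration; real deployments would use a proper library and far
# more data, but the shape of the approach is the same.
from collections import Counter
import math

def vectorise(text):
    # Bag of words: token -> count
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CentroidClassifier:
    def __init__(self):
        self.centroids = {}

    def fit(self, examples):
        # examples: list of (text, label) pairs; sum counts per label
        for text, label in examples:
            self.centroids.setdefault(label, Counter()).update(vectorise(text))

    def predict(self, text):
        # Pick the label whose centroid is most similar to the input
        v = vectorise(text)
        return max(self.centroids, key=lambda lbl: cosine(v, self.centroids[lbl]))

clf = CentroidClassifier()
clf.fit([
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on tuesday?", "ham"),
])
print(clf.predict("free prize money"))  # -> spam
```

The whole thing runs locally in microseconds, its decisions can be traced back to word counts, and it cannot hallucinate an answer outside its label set; that inspectability is the practical argument for classifiers over generative models in analysis tasks.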

in reply to Charlie Stross

I.e. machine learning, which has been developing and improving over the past 15 years. The kind of thing that categorises shadows on X-rays.
in reply to Charlie Stross

"there's gold in AI, but it's in *analytical* AI, aka big data, not stochastic parrot bullshit."

This has basically been my outlook for a while, as a layman observer.

A couple of Science magazine articles some months back suggested that there *could* be a use for AI in finding patterns worthy of further (human) investigation and analysis (e.g. chemical-structure combinations more likely to be functional in drug discovery).

in reply to Charlie Stross

@hacks4pancakes I'll be so happy when we no longer talk about AI like it's as essential as blockchain was 7 or so years ago, and instead talk about AI the way we talk about blockchain in 2024.
in reply to Charlie Stross

Will submit for consideration the alternative term for generative AI: bullshit engine
in reply to Charlie Stross

On Ars Technica, Condé Nast's other tech publication, the one for actual tech people, the commenters generally consider Wired a clueless rag written by people who don't know what they're talking about.
in reply to Charlie Stross

But I view GenAI as a toy. It's fun to make pictures with, and it can do cool stuff like making a picture from Second Life (or The Sims) look near-photographic (points to the left). And LLMs (which I run locally) can quickly make stuff like handouts for tabletop role-playing games.

But using it for purposes where quality actually matters makes no sense, given these models' well-known tendency to lie, and to double down when called out on it.

in reply to חנן כהן • Hanan Cohen

@hananc Do you remember their Long Boom issue?

wired.com/1997/07/longboom/

Completely missed the dot-com crash, the election of George W. Bush, 9/11, the global war on terror, the Iraq war, the global financial crash of 2008, the Russian invasion of Crimea, the election of Donald Trump, and is it unfair to add a global pandemic on top?
