Skip to main content


A well-funded Moscow-based global ‘news’ network has infected Western artificial intelligence tools worldwide with Russian propaganda

An audit found that the 10 leading generative AI tools advanced Moscow’s disinformation goals by repeating false claims from the pro-Kremlin Pravda network 33 percent of the time
newsguardrealitycheck.com/p/a-…

in reply to Charlie Stross

this is an example of recursive pollution

berryvilleiml.com/2024/01/29/t…

in reply to Charlie Stross

It is difficult to see how this will not always be a problem with today's "Spicy Search" version of AI. If it isn't the Russians polluting the model with false information, who is to say that a vendor like Google will not modify the model so that Soylent Green, one of their highest paying customers, gets rated as the most nutritious food you can eat. It's Google we are talking about here, you know they will do it.
in reply to Charlie Stross

if all the good people opt everything they write out of LLM training data, what we will get is LLMs trained exclusively on evil output.
in reply to Ben Curthoys

@bencurthoys Except that the vast majority of people on the internet have zero understanding of how their data is being used or even that it is. And of the ones that do, to many bought into the AI myth that LLMs “ think” instead of just regurgitate. @cstross
in reply to @pineywoozle ‘s #3WordNote

@Pineywoozle
So what?

No matter what everyone understands about how their data is used and no matter what magical thinking they apply to LLM's magical "thinking" - IF all content by good people is excluded from the training data, THEN what remains will, by definition, not be good.

I'm not suggesting that everyone embrace their new AI overlords, merely observing that the more you avoid being scraped the easier you make the job of the people deliberately trying to poison the LLMs.

in reply to Ben Curthoys

@bencurthoys Sorry I ment that there are more than enough good people innocently offering up good content. I left out that those of us who do understand it should be actively working to sabotage it not just opting out. Sometimes the thought is complete in your head & doesn’t survive the journey to the post. @cstross
in reply to Charlie Stross

The Hybrid War continues unabatted. Honestly Trump turning Russian and taking the US intelligence services with him is so incredibly damaging.

A Russian defeat might have ended it all.

in reply to Charlie Stross

Information is being weaponized. It is a practice that the Soviets and the fascists too for that matter have been practicing since the 1920s and with the current IT tools this has reached a next level where truth is not certain anymore. There should be much more public awareness around this…
in reply to Charlie Stross

See mastodon.social/@SteveThompson… ?
in reply to Charlie Stross

Is it infection when the owners desire it? Seems like most “Western” LLM scrapers are run by pro-oligarch scumbags.
in reply to Charlie Stross

I'm embarrassed to say I only just this moment realized Trump literally named his social media company after a Russian propaganda network
in reply to Charlie Stross

Wouldn't it be great, Charlie, if the moneybags behind Ai grew as concerned about its glaringly dangerous shortcomings as you and Eye are?
What a wonderful world it could be.
in reply to Charlie Stross

It's scary how many of the false narratives this network has created and promoted are regurgitated in social media posts.

The difficulty is in knowing if the accounts are genuine people, people whose accounts have been taken over or just fictional. Also difficult to know if they are deluded, trolls or AI accounts.

in reply to Charlie Stross

I said it before, and I'll keep saying it. With current AI, it's "Putin in, Putin out "
in reply to Charlie Stross

Something that strikes me is that news disinformation is a relatively minor threat here; there's enough news around for fact-checking AI-boosted propaganda.

I think a bigger threat is AI-boosted unsafe programming. Too many people rely on Copilot or GPT to write their code for them, perform no sanity-checking, and put it into workplace production. If you pollute a LLM with backdoored code, how many people will roll it out?

The phrase 'Word macros for the 21st Century' springs to mind.