

Techbros: self-driving cars are inevitable!

Also techbros: prove you are human by performing a task that computers can’t do, like identifying traffic lights.

in reply to Sarah Brown

Why do you think the tech bros want us to identify traffic lights, bicycles, etc.? It's so they can better train their "AI" models to spot them, so the self-driving cars become a reality. We are all beta testers.
in reply to RobCornelius

@RobCornelius I’m fully aware. I’m also aware that they are STILL doing both, simultaneously. Only one can be true. Which is it?

(Hint: it's "computers can't identify traffic lights". Self-driving cars are, in fact, randomised murderbots.)

reshared this

in reply to Sarah Brown

@RobCornelius Notice, BTW, it's always American street furniture, from a country well known for being a huge international outlier in not using the internationally standardised vernacular for street furniture.

reshared this

in reply to Sarah Brown

@ajlanes Man, these aren't even that hard to follow, so why isn't Canada a signatory to this either? :I Probably just playing follow-the-leader with the US on that one, bah.
in reply to Sarah Brown

It's down to certain people's egos (specifically Muskrat's) versus reality. It's not going to end well, that's for sure.

Have you ever noticed you are never asked to identify street objects in upscale neighbourhoods? OK, posh neighbourhoods are less cluttered, but...

in reply to RobCornelius

@RobCornelius Or anywhere that’s not North America, with its own “not invented here” take on how streetscapes should look.
in reply to Sarah Brown

Perhaps it is because the self-driving AIs already know which objects are unambiguously traffic lights, but still require human input to answer the philosophical question of whether the iconic depiction of a traffic light painted on a sign counts as a traffic light.
in reply to Bornach

@Bornach @RobCornelius Except that's nothing like the internationally ratified sign for a traffic light, as used in most of the world (completely the wrong shape, for a start!)

en.wikipedia.org/wiki/Vienna_C…

in reply to Sarah Brown

@robcornelius

There's also the issue of context. Those are indeed physical traffic lights, but the AI should realise why it shouldn't obey them.
futurism.com/the-byte/tesla-au…

in reply to Sarah Brown

@robcornelius

Self-driving cars are for sure not ready yet, and some companies and people are pushing their deployment too hard.

I'm convinced they will get good enough though.

On a related note, I saw in your profile you moved to Portugal. I hope you like it there, and there is indeed much to like, but you must have noticed how *people* drive over there. It shows up in the accident and death stats too.
I'm Portuguese and grew up there. I'm not at all convinced that people can drive safely.

in reply to Sarah Brown

@robcornelius
In the context of the trolley problem, they literally must be.

(As are humans, BTW. Generally speaking, when an accident happens, humans do not have time to take a well-reasoned decision; it's generally a more or less random decision, if there's a decision at all.)

Taking any stand on the trolley problem, and related ones, immediately raises questions of liability. So random() is the "safe" out for moral cowards.

in reply to Andreas K

@robcornelius
But coming back to the more general problem, taking the point of view of a budding data scientist:

It's not a question of whether algorithm-driven cars cause accidents. (The "AI" buzzword gives me a migraine.)

Humans DO cause accidents too.

So the question is: do computer-driven cars cause more or fewer accidents than humans?

It's hard to assess this at the moment, as there are literally only a tiny number of genuinely self-driving cars on the road.
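To make that "more or fewer accidents" question concrete, here is a minimal per-mile comparison sketch. Every number in it is invented purely for illustration, not taken from any real dataset, and the tiny self-driving mileage is exactly what makes the real estimate so noisy.

```python
# Illustrative only: all counts and mileages below are made up, not real statistics.
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalise crash counts by exposure so the two fleets are comparable."""
    return crashes / (miles / 1_000_000)

# Hypothetical human fleet: lots of exposure, so the rate estimate is stable.
human_rate = crashes_per_million_miles(crashes=6_000_000, miles=3_000_000_000_000)

# Hypothetical self-driving fleet: very little exposure, so the rate is noisy.
av_rate = crashes_per_million_miles(crashes=150, miles=10_000_000)

print(f"humans: {human_rate:.1f} crashes per million miles")  # 2.0
print(f"AVs:    {av_rate:.1f} crashes per million miles")     # 15.0
```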

in reply to Andreas K

@robcornelius
And the involved entities are commercial, so they tend to throw the veil of trade secrets over most of their data.
in reply to Andreas K

@yacc143 @robcornelius There are roughly 70 companies researching driving automation here in the Bay Area that provide data to the state. Safety of these vehicles peaked around three years ago, with the best-performing cars having five times as many accidents per mile as the average driver. They appear to have reached the asymptote of the improvement curve.

Pippin reshared this.

in reply to Marty Fouts

@Marty Fouts @Andreas K @RobCornelius why, it’s almost like there’s a whole pile of bullshit surrounding the whole endeavour. So unlike the tech industry, that.


in reply to Sarah Brown

@yacc143 @robcornelius @MartyFouts I'd really advise being watchful for companies trying to solve the problem by making cities more self-driving-car friendly, i.e. by making the street in front of your door even more of an insta-death zone.
in reply to Marty Fouts

@MartyFouts
That raises the question: how did they get permission to test on public roads and endanger the public, if that's their best case?
@goatsarah @robcornelius
in reply to Andreas K

@yacc143 @robcornelius The politics at the state level are fascinating, or would be if lives weren't at stake. Google started using public roads without permission, as Tesla still does. Cal DMV stepped in and designed a program requiring safety drivers in the test vehicles, but Google's Waymo spin-off got another state agency involved, and so they and Cruise have licenses to run a driverless taxi service in San Francisco. Or, as someone else pointed out: money.
in reply to Marty Fouts

@MartyFouts
Don't take it the wrong way, but the German car makers spent years in R&D and ended up offering way less (co-pilot systems for limited use cases, e.g. highways/autobahns), explicitly citing this as the safe state of the art. They could offer more if they were willing to associate their brands with unsafe cars.

@goatsarah @robcornelius

in reply to Andreas K

@Andreas K @RobCornelius @Marty Fouts My car has a similar autopilot system (lane following and distance maintenance). It works very well, but you HAVE to be aware of its limitations. You CANNOT remove the human from the system
in reply to Marty Fouts

@MartyFouts @yacc143 @robcornelius This article jibes with my outsider (out of the car, not out of the area) impression. Human drivers have gotten a lot worse since covid cleared the streets in 2020 from what I see.
arstechnica.com/cars/2023/09/a…
in reply to Eli the Bearded

@elithebearded @MartyFouts @robcornelius
COVID/2020 has made statistics and relative comparisons suspect. (Actually, that applies to subjective perception too.)

It was such an extreme outlier that 2021 might have some figure at 75% of the 2019 value and still be perceived as a huge relative rise over 2020.
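A toy calculation of that base-rate effect (the numbers are invented to mirror the example above, not real traffic statistics):

```python
# Toy numbers only: 2020 collapses to 40% of 2019, 2021 recovers to 75% of 2019.
miles_2019 = 100.0
miles_2020 = 40.0
miles_2021 = 0.75 * miles_2019

vs_2020 = (miles_2021 - miles_2020) / miles_2020 * 100
vs_2019 = (miles_2021 - miles_2019) / miles_2019 * 100

print(f"2021 vs 2020: {vs_2020:+.1f}%")  # +87.5%  looks like a huge jump
print(f"2021 vs 2019: {vs_2019:+.1f}%")  # -25.0%  still well below pre-COVID
```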

in reply to Andreas K

@yacc143 @MartyFouts @robcornelius My observations from walking around a lot: speeding, ignoring traffic signals (lights and stop signs), and terrible choices about U-turns got bad in 2020 _and have not improved_. Which makes me question comparisons between pre-COVID and now. I am just one person walking around one city, so this is very much anecdote and not conclusive data, but I don't see self-driving cars doing those dangerous things. I see them block traffic, but that's about it.
in reply to Sarah Brown

@robcornelius Computers can identify traffic lights.

digit.fyi/ai-bots-can-beat-cap…

in reply to Seb

@seb321
I listened to an excellent podcast on just that about 5 years ago. No chance of finding it again. That's the equivalent of the early Neolithic now.
@Seb
in reply to RobCornelius

@robcornelius That makes the websites I was designing in the late 90s like primordial slime. Very low bandwidth primordial slime.
in reply to Seb

@seb321
Same... though I made this 20+ years ago and it's still there and being used: jigsawstaff.com/

I think they finally got rid of my login with god-level permissions a few years ago, at least.

@Seb
in reply to RobCornelius

@robcornelius That’s impressive. I’m sure all mine disappeared into Wayback Machine eons ago.
in reply to Seb

@seb321 @robcornelius side note, just in case it has relevance: some years ago, a buddy of mine wrote a piece about the trolley problem --

medium.com/personified-systems…

in reply to Sarah Brown

"select every bicycle in this image we ripped from a half loaded GeoCities site from 1996."
in reply to Sarah Brown

They still ask about bikes too. It's been years. If they still can't tell the difference between them and, say, a lamppost, that's not good.
in reply to Sarah Brown

Just last week I heard on a podcast that captchas have become obsolete, since AI can now read them better than humans.
in reply to Martin Senk

@MartinSenk
If you successfully solve the captcha, you are considered a bot and only get the SEO version of the website, the one meant for indexing by search engines. To get the actual hidden amazing content, you have to choose wrong answers only!
in reply to Sarah Brown

Somebody once said CAPTCHA is the reverse Turing test, where humans try to convince computers that we aren't also computers.
in reply to Sarah Brown

I wonder if this means that if you spend hours intentionally providing the wrong answers, round and round in circles, it will serve to confuse the AI's learning somewhat 🤔
in reply to Sarah Brown

The great plan:
1️⃣ Give techbros all your data for free.
2️⃣ Spend your meaningless life watching boring and annoying ads all the time, even while farting at the WC, giving techbros bucks in the process.
3️⃣ Help techbros train visual AI via captchas.
4️⃣ Get smashed in a driving incident by a self-driving car.
5️⃣ PROFIT!

Sarah Brown reshared this.

in reply to Sarah Brown

In about five years:

"It seems like you're connecting from an unrecognized device. To access your account, please use the controls below to operate the taxi. Your pickup is at Christopher and Washington, and their destination is Columbia Heights and Vine. Be sure to drive on the right side of the street and try not to hit any pedestrians, or you will have to wait 24 hours before trying again."

in reply to Sarah Brown

there is no contradiction. You are only the cheap annotation tool. After we provide enough learning data, there will be another "challenge". 😀
in reply to Sarah Brown

Yes, there is some irony to it 😀

I guess the question for self-driving cars is when and how many.

Personally I don't drive so I'd like cars to drive themselves. And I'd like all cars to be electric.

But most urgently I'd like a *lot fewer cars* travelling *a lot fewer km*, regardless of the kind of car and who or what is driving.

Self-driving could even help with that in a small way, but above anything we have to want fewer cars in the world.

I do believe that self-driving capability is inevitable though. It's just a question of time.
Incidentally, when we solve captchas they give us some images that are already labelled, to check whether we are "human", but also some that are unlabelled or only labelled by a few people, so those labels aren't reliable yet. So we are contributing to the training data that advances computer vision. In fact, computers can solve most of the captchas already.

So I think self-driving is inevitable, but I hope cars aren't.
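A hedged sketch of how that mix of already-labelled and not-yet-labelled tiles could both verify the user and harvest new training labels. The function names, threshold, and data layout are invented for illustration and are not reCAPTCHA's actual implementation:

```python
# Hypothetical sketch: control tiles with known labels verify the user, while
# unknown tiles collect crowd votes that become training labels. Not Google's
# actual implementation; names and thresholds are made up.
from collections import Counter

def grade_user(user_answers: dict[str, bool], control_labels: dict[str, bool]) -> bool:
    """Pass the user if they agree with the tiles whose labels are already known."""
    return all(user_answers.get(tile) == label for tile, label in control_labels.items())

def harvest_labels(all_answers: list[dict[str, bool]], unknown_tiles: list[str],
                   min_votes: int = 5) -> dict[str, bool]:
    """Promote an unknown tile to 'labelled' once enough users agree on it."""
    labels: dict[str, bool] = {}
    for tile in unknown_tiles:
        votes = Counter(ans[tile] for ans in all_answers if tile in ans)
        if not votes:
            continue
        answer, count = votes.most_common(1)[0]
        if count >= min_votes:
            labels[tile] = answer  # consensus label, usable as training data
    return labels
```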

in reply to Sarah Brown

FYI, machine learning models are now significantly better than humans at solving captchas.
in reply to Sarah Brown

well, in theory you could have the traffic light status communicated to the cars in some other manner

but at the same time, self-driving cars would solve traffic issues no more than adding "just one more lane" every other year
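For what it's worth, standards work already covers that "some other manner" (SAE J2735 defines signal phase and timing broadcasts). The sketch below is only a made-up illustration of the idea, with invented field names and a toy decision rule, not that message format:

```python
# Hypothetical illustration of an infrastructure-to-vehicle signal broadcast.
# Field names and the decision rule are invented; SAE J2735 SPaT messages are
# the real-world analogue.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

@dataclass
class SignalBroadcast:
    intersection_id: str       # which junction is transmitting
    phase: Phase               # current phase for the relevant approach
    seconds_to_change: float   # countdown until the phase changes

def should_stop(msg: SignalBroadcast, seconds_to_reach: float) -> bool:
    """Crude car-side rule: only proceed if the green window clearly covers our arrival."""
    return not (msg.phase is Phase.GREEN and seconds_to_reach < msg.seconds_to_change)

# Green for another 2 s, but we need 5 s to reach the stop line: stop.
print(should_stop(SignalBroadcast("junction-42", Phase.GREEN, 2.0), 5.0))  # True
```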

in reply to Sarah Brown

to be fair, there was a study published recently that showed that AI bots solve CAPTCHAs 90% faster and 70% more accurately than humans 😅

I HAVE seen a Tesla's entertainment unit spawn an endless row of traffic lights on both sides of the road when driving behind a truck carrying traffic lights, though, so the original point still stands.

in reply to Sarah Brown

iirc that's the exact reason why captchas include things that can be encountered on roads, to help the AI recognize them.
in reply to adam :neocat_floof:

@Ádám :fedora: 🇭🇺 OK, two problems: 1. American roads. The US has not signed up to the Vienna Convention on traffic signs, which most of the rest of the world uses.

2. Those cars are out there today, so if they still need training to recognise things that they aren't supposed to run into, then they're basically murderbots.

in reply to Sarah Brown

I didn't want to imply that I support self-driving cars. I think it's a terrible idea. And in its current form it actually does more harm than good.
in reply to Sarah Brown

the irony is that ML models are now better at solving captchas than humans, making captchas entirely pointless.

arxiv.org/abs/2307.12108

in reply to Ariadne Conill 🐰

@Ariadne Conill 🐰 Swap out the Captchas for ones that use Vienna Convention traffic signals, and see what happens.

I bet the machine accuracy plummets.

in reply to Ariadne Conill 🐰

@ariadne Please let captchas die. Since my browser doesn't keep cookies I'm constantly solving them. I often refresh until I get the easiest one possible.
in reply to Charles Christolini

@binarypie @ariadne I don't keep cookies, and several months ago using the ArtStation website was a nightmare: 2-3 captchas every time I had to log in. For some reason that stopped and it's OK now, unless I try to log in from Tor.
in reply to Ariadne Conill 🐰

@ariadne which means it's probably reasonable to assume at this point that captchas are now bots training bots.
in reply to Ariadne Conill 🐰

@ariadne Plot twist: If you can solve a #captcha on the first try, you're now more likely to be a robot than a human.
in reply to Sarah Brown

If I was a techbro I would totally use the data collected from captchas to train AI models.
in reply to Anton Podolsky

@Anton Podolsky That's what they're doing.

As a result, they will get very good at spotting things that look like American traffic lights used in Captchas, and probably dreadful at spotting anything else.

Unknown parent

Sarah Brown

@kurtseifried (he/him) @Ariadne Conill 🐰 @Cloudflare And suffer from a terminal case of r/USDefaultism.

The rest of the world has to learn what American street furniture looks like to solve them.

in reply to Sarah Brown

That's a straw target. They're training them to be better than _all_ humans, not just as good as _a_ human.

AI automation won't come about from a machine being able to identify some specific traffic light; it'll come when a machine can identify more traffic light signals far quicker than an average human, and on average drive more safely per mile. Not perfect, but better than an average human. That's maybe already happened. But there are always more traffic lights to make them better.

in reply to 'ingie

[ to be clear/more general to the above, "traffic light" of course could be any object requiring identification or assessment in such a situation ]

I understand the repulsion, tho.
I personally suspect we're at least on the cusp of where self-driving machines will likely be better, on average, than an average human, which would therefore be safer for everyone: even if there are mistakes, there'll be fewer of them.

AGI automation, well that's a different kettle of very scary fish.

in reply to 'ingie

@'ingie Ever seen a Captcha that uses Vienna Convention traffic signs, as used in most of the world?

I'd say they're training them to be better than people who've never been to America at working out what American street furniture looks like.

And that’s the problem with this sort of AI training. It is almost certainly not optimising for what you think it is.

in reply to Sarah Brown

Not sure what you meant with the American versus Vienna Convention bits. The training is there to let them learn things that aren't easily trained on standards.

Tho I totally agree with your latter point. It's not really important for automated driving, as it isn't "AI driving": the AI model isn't in control, it's part of a feedback system feeding sensor data to a conventional autopilot system.
Like in an airplane... which can already land itself safely if needed.

in reply to 'ingie

... but I fully agree, we should never, at least not on any time scale I can currently see as solvable, put an AGI actually in charge of driving *anything*. That's indeed when the optimisation problem, the alignment problem as they call it, rears its very ugly head. (An AGI will never have the same philosophical world view as a human.)

We haven't got AGIs yet... but that's a very italicised *yet*. That's the moment we need to be really, really cautious about.

in reply to 'ingie

Related: I've found Rob Miles' videos on Computerphile very good on these subjects. He's a researcher into such things at Nottingham Uni.
I found his stuff very eye-opening about things I hadn't even considered as "threats" in that sense; his research is a good reference for concerns that really need to be made more widely known. Most of his interviews tend to be a little technical, but not off-puttingly so. I think this is a good example of such worrying concepts:
youtube.com/watch?v=3TYT1Qfdfs…
in reply to 'ingie

@'ingie I mean that they're all trained on American stuff.

5% of the world population.

So when they unleash the result on the other 95% of us, it's going to have no fucking idea what it's doing, because it doesn't look a damn thing like the training data.

in reply to Sarah Brown

Idea: a CAPTCHA reading "select all squares with idiots" and with images of Musk, Zuckerberg, Bezos etc.
in reply to Sarah Brown

Actually this is a misconception. By now, AIs are much better at passing reCAPTCHAs than we are. How, you might ask?

Because we taught them.

techradar.com/news/captcha-if-…

in reply to Sarah Brown

Fun fact: when you find the pictures that contain traffic lights, that actually contributes to the training data for these self-driving cars. That's why captchas used to be just words, since Google was working on image-to-text stuff.
Unknown parent

jlines
@weirdwriter I already find myself, as someone with a reputation for knowing about computers, being told more personal and financial details than I would like in order to help out friends who are blocked by the complexity of the process from, for example, making charitable donations or sponsoring someone for a good cause.
in reply to Sarah Brown

Very funny. Truly!

With that said, I think the point is that self-driving cars will be here someday, but they are not here yet.
Or maybe they are? Supposedly, AI is better at captchas than humans now.

in reply to Sarah Brown

If self-driving cars are inevitable, shouldn't AI be able to identify traffic lights? Seems counterproductive.
in reply to Sarah Brown

I saw in a news article that Musk had to intervene on a live video when his Twitter car was going to go through a red light 😂
in reply to Ben Todd

@monkeyben yep, he was first in line at the intersection, the green arrow came on for opposing traffic to turn left and the car hit the gas
in reply to Sarah Brown

To be fair to Google reCAPTCHA, those traffic lights contain special grainy noise patterns that throw off computer algorithms...
in reply to Sarah Brown

same people making my thumbs do a massive amount of work for this experience.
in reply to Sarah Brown

@toplesstopics
M. Night Shyamalan twist:

We are all trapped in a simulation inside a self-driving Ford Windstar 200 years in the future, and the traffic captchas help them drive.

in reply to Diabetic Heihachi

@DavBot Or the tech bros remote-control the car to drive you to the hit-man location of their choice, as happens in many not-so-sci-fi movies I could name 😩
in reply to Sarah Brown

If they were able to take a step back, they might feel the pain.
in reply to Sarah Brown

The slowly dawning realisation that somewhere in San Francisco a "self driving" car is sat patiently waiting for you to complete a CAPTCHA :tiredcat2:
Unknown parent

Eli the Bearded
@MartyFouts @yacc143 @robcornelius
What statistics? That human drivers have gotten a lot more reckless since three years ago or something else? Where can someone else see these stats?
in reply to Sarah Brown

Worst part is the computer then considering it incorrect when the human says that a sign with an image of a traffic light is not actually a traffic light.

If that AI is going to be driving cars, I'll pass.

Unknown parent

Andreas K

@MartyFouts @elithebearded @robcornelius

Funny, Swiss Re just published a study claiming self-driving cars are safer than human-driven cars, based purely on insurance data.

arxiv.org/pdf/2309.01206.pdf

Unknown parent

Sarah Brown
@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius and if they’re looking at driving assistance systems, then I will note that they routinely try to kill you. It’s just that the driver interrupts them in the act (source: have one)
Unknown parent

Marty Fouts
@yacc143 @elithebearded @robcornelius I've had a chance to read the article, which I should have done earlier. The flaw is that they are only looking at Waymo One third-party insurance data; but Waymo is self-insured and so does not report all of its incidents this way. You have to look at the data reported to the DMV for a more comprehensive analysis. Also, Waymo One represents only a fraction of Waymo's data.
Unknown parent

Sarah Brown

@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius for me the benefit of active systems is that they are immensely valuable in keeping you fresh on a long journey by reducing cognitive load.

This more than compensates for the occasional blip where they try to kill you at 120kph and you have to intervene to stop that.

But they absolutely still do it. Not often, but dead is dead, right?

in reply to Sarah Brown

At intersections, automated vehicles could continue through between crossing traffic without stopping, all vehicles being aware of the others and keeping the necessary distance. No lights needed.
in reply to Sarah Brown

I know this is a joke, but given that your responses suggest you really believe it, I should point out that the act of identifying the traffic light (or bridge, or motorcycle, or boat, or whatever) is not in itself sufficient to identify you to the reCAPTCHA as human; it's the way in which you select the items, the movement of the cursor, the time taken to read the page, etc.
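A toy illustration of that idea, scoring the session rather than the answers. The features, weights, and threshold here are entirely made up and are not how reCAPTCHA actually works:

```python
# Purely hypothetical behaviour-based score: the signals and weights are invented.
from dataclasses import dataclass

@dataclass
class Session:
    seconds_on_page: float     # bots tend to answer almost instantly
    cursor_path_jitter: float  # humans move the mouse in wobbly curves (0.0-1.0)
    clicks_per_second: float   # inhumanly fast clicking is suspicious

def human_likelihood(s: Session) -> float:
    """Combine behavioural signals into a rough 0-1 'probably human' score."""
    score = 0.0
    score += 0.4 if s.seconds_on_page > 2.0 else 0.0
    score += 0.4 * min(max(s.cursor_path_jitter, 0.0), 1.0)
    score += 0.2 if s.clicks_per_second < 5.0 else 0.0
    return score

# A leisurely, wobbly session scores high; accept as human above some cut-off.
print(f"{human_likelihood(Session(6.0, 0.8, 1.5)):.2f}")  # 0.92
```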
in reply to Sarah Brown

Nice joke!! I actually learned recently why captchas or reCAPTCHAs got so simple: it's because it's not the result that matters 😂 It's the mouse path 🐁
in reply to Sarah Brown

Why do you think "techbros" are a homogeneous, amorphous mass with a single will? Do you even have a working definition of what a "techbro" is? This sounds like classic prejudice to me: invent a group, then ascribe all opinions of one member to the group as a whole.
in reply to Sarah Brown

Trying to pull seniority on someone you don't know is not really a power move. Nor does it advance the discussion.
in reply to Stephan Schulz

@Stephan Schulz "Nor does it advance the discussion", he said, nasally.

Nice fedora. Did your mum get it for you?

in reply to Sarah Brown

Whenever I encounter a captcha like that, I imagine that somewhere, at this very moment, there's a poor little self-driving car needing my assistance.
in reply to Sarah Brown

With enough people solving captchas constantly, we'll be able to use them to help cars solve ethical problems on the fly, like: should I swerve to avoid the small child and hit the old lady?