Techbros: self driving cars are inevitable!

Also techbros: prove you are human by performing a task that computers can’t do, like identifying traffic lights.

in reply to Sarah Brown

@robcornelius

There's also the issue of context. Those are indeed physical traffic lights but the AI should realise why it shouldn't obey them
futurism.com/the-byte/tesla-au…

in reply to Sarah Brown

@robcornelius

Self-driving cars are for sure not ready yet, and some companies and people are pushing their deployment too hard.

I'm convinced they will get good enough though.

On a related note, I saw in your profile you moved to Portugal. I hope you like it there, and there is indeed much to like, but you must have noticed how *people* drive over there. It shows up in the accident and death stats too.
I'm Portuguese and grew up there. I'm not at all convinced that people can drive safely.

in reply to Sarah Brown

@robcornelius
In the context of the trolley problem, they literally must be.

(As are humans, by the way: generally speaking, when an accident happens, humans do not have time to make a well-reasoned decision. It is usually a more or less random decision, if a decision is made at all.)

Taking any stand on the trolley problem, and related ones, immediately raises questions of liability. So random() is the "safe" out for moral cowards.

in reply to Andreas K

@robcornelius
But coming back to the more general problem, and taking the point of view of a budding data scientist.

It's not a question of whether algorithm-driven cars cause accidents. (The "AI" buzzword gives me a migraine.)

Humans DO cause accidents too.

So the question is: do computer-driven cars cause more or fewer accidents than humans?

It's hard to assess this at the moment, as there are only a tiny number of truly self-driving cars on the road.
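To make the "more or fewer accidents" question concrete, here's a minimal sketch of the exposure-normalised comparison; the function name and all figures are hypothetical placeholders, since (as noted above) there is very little real self-driving mileage to draw on:

```python
# Sketch of a per-mile crash-rate comparison. All figures below are
# invented for illustration; real numbers would come from sources like
# DMV collision reports and national road-safety statistics.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    """Normalise a raw crash count by exposure (miles driven)."""
    return crashes / miles * 1_000_000

# Hypothetical figures, purely for illustration:
human_rate = crashes_per_million_miles(crashes=2, miles=1_000_000)
av_rate = crashes_per_million_miles(crashes=5, miles=500_000)

print(f"humans: {human_rate:.1f} crashes per million miles")
print(f"AVs:    {av_rate:.1f} crashes per million miles")
```

The catch is in the denominator: with so few truly self-driving cars, the AV mileage is tiny and the rate estimate rests on a handful of events, so its uncertainty is enormous compared with the human baseline.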

in reply to Andreas K

@yacc143 @robcornelius There are roughly 70 companies researching driving automation here in the Bay Area that provide data to the state. Safety of these vehicles peaked around 3 years ago with the best performing cars having 5 times as many accidents per mile as the average driver. They appear to have reached the asymptote of the improvement curve.

in reply to Andreas K

@yacc143 @robcornelius The politics at the state level are fascinating or would be if lives weren’t at stake. Google started using public roads without permission as Tesla still does. Cal DMV stepped in and designed a program requiring safety drivers in the test vehicles but Google’s Waymo spin-off got another state agency involved and so they and Cruise have licenses to run driverless taxi service in San Francisco. Or as someone else pointed out: money
in reply to Marty Fouts

@MartyFouts
Don't take it wrongly, but the German car makers literally spent years in R&D and ended up offering far less (co-pilot systems for limited use cases, e.g. highways/Autobahnen), literally citing this as the safe state of the art. They could offer more if they were willing to associate their brands with unsafe cars.

@goatsarah @robcornelius

in reply to Marty Fouts

@MartyFouts @yacc143 @robcornelius This article jibes with my outsider (out of the car, not out of the area) impression. Human drivers have gotten a lot worse since covid cleared the streets in 2020 from what I see.
arstechnica.com/cars/2023/09/a…
in reply to Andreas K

@yacc143 @MartyFouts @robcornelius My observations from walking around a lot are speeding, ignoring traffic signals (lights and stop signs), and terrible choices about u-turns etc., which got bad in 2020 _and have not improved_. Which makes me question comparisons between pre-covid and now. I am just one person walking around one city, so this is very much anecdote and not conclusive data, but I don't see self-driving cars doing those dangerous things. I see them block traffic, but that's about it.
in reply to Seb

@seb321 @robcornelius side note, just in case it has relevance: some years ago, a buddy of mine wrote a piece about the trolley problem --

medium.com/personified-systems…

in reply to Sarah Brown

The great plan:
1️⃣ Give techbros all your data for free
2️⃣ Spend your meaningless life watching boring and annoying ads all the time, even while farting at the WC, giving techbros bucks in the process.
3️⃣ Help techbros train visual AI via captchas
4️⃣ Be smashed in a driving incident by a self-driving car.
5️⃣ PROFIT!
in reply to Sarah Brown

About five years:

"It seems like you're connecting from an unrecognized device. To access your account, please use the controls below to operate the taxi. Your pickup is at Christopher and Washington, and their destination is Columbia Heights and Vine. Be sure to drive on the right side of the street and try not to hit any pedestrians, or you will have to wait 24 hours before trying again."

in reply to Sarah Brown

Yes, there is some irony to it 😀

I guess the question for self-driving cars is when and how many.

Personally I don't drive so I'd like cars to drive themselves. And I'd like all cars to be electric.

But most urgently I'd like a *lot fewer cars* traveling *a lot fewer km*, regardless of the kind of car and who or what is driving.

Self-driving could even help with that in a small way, but above all we have to want fewer cars in the world.

I do believe that self-driving capability is inevitable though. It's just a question of time.
Incidentally, when we solve captchas they show us some images that are already labeled, to check whether we are "human", but also some that are unlabeled (or labeled by only a few people) and so aren't reliable yet. So we are contributing to the training data that advances computer vision. In fact, computers can already solve most captchas.

So I think self driving is inevitable but I hope cars aren't.

in reply to adam :neocat_floof:

@Ádám :fedora: 🇭🇺 Ok, 2 problems: 1. American roads. The US has not signed up to the Vienna Convention on traffic signs which most of the rest of the world uses.

2. Those cars are out there today, so if they still need training to recognise things that they aren't supposed to run into, then they're basically murderbots.

in reply to Sarah Brown

the irony is that ML models are now better at solving captchas than humans, making captchas entirely pointless.

arxiv.org/abs/2307.12108

in reply to Sarah Brown

That's a straw target. They're training them to be better than _all_ humans, not just as good as _a_ human.

AI automation won't come about from a machine being able to identify some specific traffic light, it'll come when a machine can identify more traffic light signals far quicker than an average human, and on average drive safer per mile - not perfect, but better than an average human. That's maybe already happened. But there are always more traffic lights to make them better.

in reply to 'ingie

[ to be clear/more general to the above, "traffic light" of course could be any object requiring identification or assessment in such a situation ]

I understand the repulsion, tho.
I personally suspect we're at least on the cusp of a point where self-driving machines will likely be better, on average, than the average human, which would therefore be safer for everyone; even if there are mistakes, there'll be fewer of them.

AGI automation, well that's a different kettle of very scary fish.

in reply to Sarah Brown

Not sure what you meant with the American versus Vienna Convention bits. The training is to let them learn things that aren't easily trained from standards alone.

Tho I totally agree with your latter point. It's not really an issue for automated driving, as it's not AI driving: the AI model isn't in control, it's part of a feedback system of sensors feeding a conventional autopilot system.
Like in an airplane... which can already land itself safely if needed.

in reply to 'ingie

... but I fully agree, we should never, at least not on any timescale I can currently foresee, put an AGI actually in charge of driving *anything*. That's indeed when the optimisation problem, the alignment problem as they call it, rears its very ugly head. (An AGI will never have the same philosophical world view as a human.)

We haven't got AGIs yet... but that's a very italicised yet. That's that moment we need to really be very very cautious of.

in reply to 'ingie

related: I've found Rob Miles' videos on Computerphile very good on the subjects - he's a researcher into such things at Nottingham Uni.
I found his stuff very eye-opening about things I hadn't even considered as "threats" in that sense; his research is a good reference for concerns that really need to be made more widely known. Most of his interviews tend to be a little technical, but not forbiddingly so. I think this is a good example of such worrying concepts:
youtube.com/watch?v=3TYT1Qfdfs…

Marty Fouts

@yacc143 @elithebearded @robcornelius I’ve had a chance to read the article, which I should have done earlier. The flaw is that they are only looking at Waymo One 3rd party insurance data; but Waymo is self insured and so does not report all of their incidents this way. You have to look at data reported to the DMV for a more comprehensive analysis. Also, Waymo One only represents a fraction of Waymo’s data.

Sarah Brown

@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius for me the benefit of active systems is that they are immensely valuable in keeping you fresh on a long journey by reducing cognitive load.

This more than compensates for the occasional blip where they try to kill you at 120kph and you have to intervene to stop that.

But they absolutely still do it. Not often, but dead is dead, right?

in reply to Sarah Brown

I know this is a funny, but given that your responses suggest you really believe it, I should point out that the act of identifying the traffic lights (or bridges, or motorcycles, or boats, or whatever) is not in itself what identifies you to the reCAPTCHA as human; it's the way in which you select the items, the movement of the cursor, the time taken to read the page, etc.