Why do you think the tech bros want us to identify traffic lights, bicycles etc.? It's so they can better train their "AI" models to spot them, so that self-driving cars become a reality. We are all beta testers.
@RobCornelius Notice, BTW, that it's always American street furniture, from a country well known for being a huge international outlier in not using the internationally standardised vernacular for street furniture.
@ajlanes Man, these aren't even that hard to follow, why isn't Canada a signatory to this either :I Probably just playing follow-the-leader with the US on that one, bah
Perhaps it is because the self-driving AIs already know which objects are unambiguously traffic lights but still require human input to answer the philosophical question of whether the iconic depiction of a traffic light painted on a sign counts as a traffic light.
@Bornach @RobCornelius Except that's nothing like the widely internationally ratified sign for a traffic light, as used in most of the world (completely the wrong shape for a start!)
https://en.wikipedia.org/wiki/Vienna_Convention_on_Road_Signs_and_Signals
Self-driving cars are for sure not ready yet, and some companies and people are pushing their deployment too hard.
I'm convinced they will get good enough though.
On a related note, I saw in your profile you moved to Portugal. I hope you like it there, and there is indeed much to like, but you must have noticed how *people* drive over there. It shows up in the accident and death stats too. I'm Portuguese and grew up there. I'm not at all convinced that people can drive safely.
@robcornelius Captcha is a scam for crowd-sourcing training data, and has been for the past 16 years, ever since Google got the idea it could use people solving captchas as free labor to help OCR its scanned library of books.
It's not that computers currently can't do it, it's that they can't do it perfectly 100% of the time, and the crowdsourcing serves to refine the data they recognise for further training. Do you think a human sits there choosing pics from StreetView and marking which squares contain traffic lights? It's automatically generated from computers recognising this stuff. The pics that get chosen are those where the computer is mostly sure it recognises some of the objects, or the lack thereof (this is the actual human-detection part, to stop dumb spam answers), and some it's not sure about (these are the ones you can solve either way and it'll accept your solution; try it if you can work out which images those might be).
Then how does the captcha even work if a computer can solve it? Simple: spam bots are insanely dumb. You can filter out most of them by including a hidden form field and checking whether it stays empty, since bots love filling out all form fields, no image recognition required. To solve captchas with robots you need much more computing power than makes sense for a spam bot, plus a hand-crafted solving script so the bot even knows which parts it's supposed to do image recognition on. These are not problems encountered by computers driving cars – they have a metric crapton of computing power dedicated to image recognition of their surroundings, and they are supplied with the images and all the information about them immediately, in the form they best understand.
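The hidden-field trick described above is easy to sketch. A minimal example in Python, with a hypothetical field name ("website") and a framework-agnostic check rather than any specific library's API:

```python
# Minimal sketch of the honeypot trick described above: the form contains a
# field that is hidden from humans via CSS, so only naive bots fill it in.
# The field name "website" and the surrounding setup are hypothetical.

HONEYPOT_FIELD = "website"  # rendered with style="display:none" in the HTML form

def looks_like_spam_bot(form_data: dict[str, str]) -> bool:
    """Return True if the hidden honeypot field was filled in."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

# Example usage with two hypothetical submissions:
human = {"name": "Alice", "comment": "Nice post!", "website": ""}
bot = {"name": "Bob", "comment": "Buy pills", "website": "http://spam.example"}

assert not looks_like_spam_bot(human)
assert looks_like_spam_bot(bot)
```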
@robcornelius In the context of the trolley problem, they literally must be.
(As are humans, BTW: generally speaking, when an accident happens, humans do not have time to make a well-reasoned decision; it generally ends up being more or less random, if a decision is made at all.)
Taking any stand on the trolley problem, and related ones, immediately raises questions of liability. So random() is the "safe" out for moral cowards.
@yacc143 @robcornelius There are roughly 70 companies researching driving automation here in the Bay Area that provide data to the state. Safety of these vehicles peaked around 3 years ago with the best performing cars having 5 times as many accidents per mile as the average driver. They appear to have reached the asymptote of the improvement curve.
@Marty Fouts @Andreas K @RobCornelius why, it’s almost like there’s a whole pile of bullshit surrounding the whole endeavour. So unlike the tech industry, that.
@yacc143 @robcornelius @MartyFouts I'd really advise being watchful for companies trying to solve the problem by making cities more self-driving-car friendly. By making the street in front of your door even more of an insta-death zone.
@MartyFouts That begs the question: how come they got permission to test on public roads and endanger the public, if that's their best case? @goatsarah @robcornelius
@yacc143 @robcornelius The politics at the state level are fascinating or would be if lives weren’t at stake. Google started using public roads without permission as Tesla still does. Cal DMV stepped in and designed a program requiring safety drivers in the test vehicles but Google’s Waymo spin-off got another state agency involved and so they and Cruise have licenses to run driverless taxi service in San Francisco. Or as someone else pointed out: money
@MartyFouts Don't take it wrongly, but the German car makers literally spent years in R&D and ended up offering way less (co-pilot systems for limited use cases, e.g. highways/autobahns), literally citing this as the safe state of the art. They could offer more if they were willing to associate their brands with unsafe cars.
@Andreas K @RobCornelius @Marty Fouts My car has a similar autopilot system (lane following and distance maintenance). It works very well, but you HAVE to be aware of its limitations. You CANNOT remove the human from the system
@MartyFouts @yacc143 @robcornelius What statistics? That human drivers have gotten a lot more reckless since three years ago or something else? Where can someone else see these stats?
@elithebearded @yacc143 @robcornelius The human driver statistics are published by the Insurance Institute of America. The automation statistics are reported to the California DMV. They were published until a few years ago but now you have to ask the DMV each year for the data. Human drivers have not gotten a lot worse but automation has stopped getting better and was never as good as average drivers
@yacc143 @elithebearded @robcornelius There are not enough automated cars on the roads to have sufficient data for such a comparison unless they have a definition of “self driving” that is mainly based on driving assistance automation rather than driver replacement systems.
@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius and if they’re looking at driving assistance systems, then I will note that they routinely try to kill you. It’s just that the driver interrupts them in the act (source: have one)
@elithebearded @yacc143 @robcornelius There are active and passive assistance systems. Passive systems like backup cameras and blind spot warning are safer than no assistance. I don’t know specifics of active systems but they seem to vary widely in quality. But certainly they can all cause problems especially lane following and automatic emergency brakes. I have had a Subaru attempt to counter my steering on ice and nearly crashed as a result. Traction control makes me cringe.
@Marty Fouts @Eli the Bearded @Andreas K @RobCornelius for me the benefit of active systems is that they are immensely valuable in keeping you fresh on a long journey by reducing cognitive load.
This more than compensates for the occasional blip where they try to kill you at 120kph and you have to intervene to stop that.
But they absolutely still do it. Not often, but dead is dead, right?
@yacc143 @elithebearded @robcornelius I’ve had a chance to read the article, which I should have done earlier. The flaw is that they are only looking at Waymo One 3rd party insurance data; but Waymo is self insured and so does not report all of their incidents this way. You have to look at data reported to the DMV for a more comprehensive analysis. Also, Waymo One only represents a fraction of Waymo’s data.
@elithebearded @MartyFouts @robcornelius COVID/2020 has made statistics/relative references suspicious. (Actually that applies to subjective perception too)
It was such an extreme outlier that 2021 might have some number at 75% of the 2019 value and still be perceived as a huge relative rise over 2020.
@yacc143 @elithebearded @robcornelius That may be but it is not relevant here. Covid data, as you say, represents an outlier for various reasons. The driving automation data on the other hand is produced under controlled circumstances and reflects the best that the cars can do. It is part of why Waymo has publicly stated that it is an intractable problem.
@yacc143 @MartyFouts @robcornelius My observation from walking around a lot is that speeding, ignoring traffic signals (lights and stop signs), and terrible choices about u-turns etc. got bad in 2020 _and have not improved_. Which has me questioning comparisons between pre-covid and now. I am just one person walking around one city, so very much anecdote and not conclusive data, but I don't see self-driving cars doing those dangerous driving things. I see them block traffic but that's about it.
@seb321 I listened to an excellent podcast on just that about 5 years ago. No chance of finding it again. That's in the equivalent of early Neolithic now.
@MartinSenk If you successfully solve the captcha, you are considered a bot and only get the SEO version of the website that is meant for indexing by search engines. To get the actual hidden amazing content you have to choose wrong answers only!
I wonder if this means that if you spend hours intentionally providing the wrong answers round and round in circles, that it will serve to confuse AI learning somewhat 🤔
The great plan:
1️⃣ Give techbros all your data for free
2️⃣ Spend your meaningless life watching boring and annoying ads all the time, even while farting on the WC, giving techbros bucks in the process.
3️⃣ Help techbros train visual AI via captchas
4️⃣ Get smashed in a driving accident by a self-driving car.
5️⃣ PROFIT!
"It seems like you're connecting from an unrecognized device. To access your account, please use the controls below to operate the taxi. Your pickup is at Christopher and Washington, and their destination is Columbia Heights and Vine. Be sure to drive on the right side of the street and try not to hit any pedestrians, or you will have to wait 24 hours before trying again."
Yes, there is some irony to it 😀
I guess the question for self-driving cars is when and how many.
Personally I don't drive so I'd like cars to drive themselves. And I'd like all cars to be electric.
But most urgently I'd like a *lot fewer cars* traveling *a lot fewer Kms*, regardless of the kind of car and who or what is driving.
Self-driving could even help with that in a small way, but above anything we have to want less car in the world.
I do believe that self-driving capability is inevitable though. It's just a question of time. Incidentally, when we solve captchas they give us images that are labeled, to check if we are "human", but also some that are unlabeled, or labeled by too few people to be reliable. So we are contributing to the training data that advances computer vision. In fact, computers can solve most of the captchas already.
So I think self driving is inevitable but I hope cars aren't.
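A rough sketch of the labeled/unlabeled mix described in the comments above, purely to illustrate the idea. The tile names, vote thresholds, and aggregation rule are all invented for the example; this is not Google's actual pipeline:

```python
# Toy model of the mechanism described above: each challenge mixes tiles with
# known labels (used to verify the human) with unknown tiles whose answers are
# collected as crowd-sourced training labels.
import random
from collections import Counter, defaultdict

known = {"tile_a": True, "tile_b": False}   # labels already trusted
unknown_votes = defaultdict(Counter)        # crowd answers for unlabeled tiles

def build_challenge(unknown_pool):
    """Pick a mix of verification tiles and unlabeled tiles."""
    return list(known) + random.sample(unknown_pool, k=2)

def grade(answers):
    """Pass the user on the known tiles; harvest votes on the unknown ones."""
    passed = all(answers[t] == label for t, label in known.items())
    if passed:  # only trust answers from users who got the known tiles right
        for tile, says_traffic_light in answers.items():
            if tile not in known:
                unknown_votes[tile][says_traffic_light] += 1
    return passed

def promote_labels(min_votes=100, min_agreement=0.9):
    """Once enough humans agree, an unknown tile becomes a trusted label."""
    for tile, votes in unknown_votes.items():
        label, count = votes.most_common(1)[0]
        total = sum(votes.values())
        if total >= min_votes and count / total >= min_agreement:
            known[tile] = label

# e.g. build_challenge(["tile_x", "tile_y", "tile_z"]) returns the two known
# tiles plus two random unlabeled ones; grade() then both verifies the user
# and records their answers on the unlabeled tiles.
```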
to be fair, there was a study published recently that showed that AI bots solve CAPTCHAs 90% faster and 70% more accurately than humans 😅
I HAVE seen a Tesla's entertainment unit spawn an endless row of traffic lights on both sides of the road when driving behind a truck carrying traffic lights, though, so the original point still stands.
@Ádám :fedora: 🇭🇺 Ok, 2 problems: 1. American roads. The US has not signed up to the Vienna Convention on traffic signs which most of the rest of the world uses.
2. Those cars are out there today, so if they still need training to recognise things that they aren't supposed to run into, then they're basically murderbots.
For nearly two decades, CAPTCHAs have been widely used as a means of protection against bots. Throughout the years, as their use grew, techniques to defeat or bypass CAPTCHAs have continued to improve.
@ariadne Please let captchas die. Since my browser doesn't keep cookies I'm constantly solving them. I often refresh until I get the easiest one possible.
@binarypie @ariadne I don't keep cookies, and several months ago using the ArtStation website was a nightmare. 2-3 captchas every time I had to log in. For some reason it disappeared and is OK now, unless I try to log in from Tor.
@ariadne worse than useless, even I struggle to solve them and I have no (major) impairments. My son with autism would find most of them impossible. Captchas are inherently exclusionary and I’m guessing also violate laws like the Americans with Disabilities Act. And now they don’t work, in the sense of keeping computers out but letting people in. Even @cloudflare figured this out last year https://blog.cloudflare.com/end-cloudflare-captcha/ before the AIs took over.
@ariadne @cloudflare I’ve actually noticed I’ve been getting photos from other parts of the world and asked to identify motorcycles where there are scooters? And some very strange looking fire hydrants.
"For instance, with a 10,000 machine botnet (which would be considered relatively small these days), given broadband connections and multi-threaded attack code, even with only 10 threads per machine, a 0.01% success rate would yield 10 successes every second, which would provide the attacker with 864,000 new accounts per day if they were attacking a registration interface."
@Anton Podolsky That's what they're doing.
As a result, they will get very good at spotting things that look like the American traffic lights used in Captchas, and probably dreadful at spotting anything else.
That's a straw target. They're training them to be better than _all_ humans, not just as good as _a_ human.
AI automation won't come about from a machine being able to identify some specific traffic light, it'll come when a machine can identify more traffic light signals far quicker than an average human, and on average drive safer per mile - not perfect, but better than an average human. That's maybe already happened. But there are always more traffic lights to make them better.
[ to be clear/more general to the above, "traffic light" of course could be any object requiring identification or assessment in such a situation ]
I understand the repulsion, tho. I personally suspect we're at least on a cusp of where self-driving machines will likely be better, on average, than an average human, which would therefore be safer for everyone, even if there's mistakes, there'll be fewer mistakes.
AGI automation, well that's a different kettle of very scary fish.
Not sure what you meant with the American versus Vienna Convention bits. The training is to allow them to learn things that aren't easily trained on standards.
Tho I totally agree with your latter point. It's not really important for automated driving, as it's not AI driving: the AI model isn't in control, it's part of a feedback system of sensors feeding a conventional autopilot system. Like in airplanes... which can already land themselves safely if needed.
... but i fully agree, we should never, at least not in any time scale I can see currently solvable, put an AGI actually in charge of driving *anything*. As that's indeed when the optimisation - the alignment problem, as they call it, rears its very ugly head. (an AGI will never have the same philosophical world view as a human)
We haven't got AGIs yet... but that's a very italicised yet. That's that moment we need to really be very very cautious of.
related: I've found Rob Miles' videos on Computerphile very good on the subjects - he's a researcher into such things at Nottingham Uni. I found his stuff very eye opening to things I hadn't even considered as "threats" in that sense - his research is a good reference of concerns that really need to be made more widely known. Most of his interviews tend to be a little technical, but not excludingly so. I think this is a good example of such worrying concepts: https://www.youtube.com/watch?v=3TYT1QfdfsM
@'ingie I mean that they're all trained on American stuff.
5% of the world population.
So when they unleash the result on the other 95% of us, it's going to have no fucking idea what it's doing, because it doesn't look a damn thing like the training data.
Fun fact: when you find the pictures that contain traffic lights, that actually contributes to the training data for these self-driving cars. That's why captchas used to be just words, since Google was working on image-to-text stuff.
@weirdwriter I already find myself, as someone with a reputation for knowing about computers, being told more personal and financial details than I would like, in order to help out friends who are blocked by the complexity of the process from, for example, making charitable donations or sponsoring someone for a good cause.
Very funny. Truly!
With that said, I think the point is that self-driving cars will be here someday, but they are not here yet. Or maybe they are? Supposedly, AI is better at captchas than humans now.
@DavBot or the tech bros remote control the car to drive you to the hit man location of their choice, as happens in many not-so-sci-fi movies I could name 😩
Worst part is the computer then considering it incorrect when the human says that a sign with an image of a traffic light is not actually a traffic light.
If that AI is going to be driving cars, I'll pass.
At intersections, automated vehicles could continue through between the crossing traffic without stopping, all vehicles being aware of the others and keeping the necessary distance. No lights needed.
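Purely as a toy illustration of that idea, here is a reservation-style sketch where each car books a time slot at the crossing instead of obeying a light. Everything here is hypothetical; real vehicle-to-infrastructure coordination protocols are far more involved:

```python
# Toy sketch of lights-free intersection coordination as described above:
# each approaching car requests a time slot from an intersection manager,
# which only grants slots that don't overlap with already-booked ones.

SLOT_SECONDS = 2.0  # assumed time a car needs to clear the crossing

class IntersectionManager:
    def __init__(self):
        self.booked = []  # list of (start, end) reservations

    def request_slot(self, arrival_time: float) -> float:
        """Return the earliest start time >= arrival_time that doesn't clash."""
        start = arrival_time
        for s, e in sorted(self.booked):
            if start + SLOT_SECONDS <= s:
                break                    # fits before this reservation
            start = max(start, e)        # otherwise wait until it ends
        self.booked.append((start, start + SLOT_SECONDS))
        return start

manager = IntersectionManager()
for car, eta in [("north", 10.0), ("east", 10.5), ("south", 14.0)]:
    slot = manager.request_slot(eta)
    print(f"{car}-bound car: arrives at {eta}s, crosses at {slot}s")
# north crosses at 10.0, east has to wait until 12.0, south crosses at 14.0
```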
I know this is a funny, but given that your responses suggest you really believe it, I should point out that the act of identifying the traffic lights (or bridges, or motorcycles, or boats, or whatever) is not in itself sufficient to identify you to the recaptcha as human; it's the way in which you select the items, the movement of the cursor, the time taken to read the page, etc.
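As a purely hypothetical illustration of that point (scoring how you interact rather than what you click), with thresholds and weights invented for the example and in no way reCAPTCHA's actual signals:

```python
# Purely hypothetical illustration of behaviour-based bot scoring, as the
# comment above describes (cursor movement, timing, click speed). These
# thresholds and weights are invented; they are not reCAPTCHA's.

def bot_suspicion_score(cursor_path_points: int,
                        seconds_on_page: float,
                        clicks_per_second: float) -> float:
    """Crude 0..1 score: higher means more bot-like interaction."""
    score = 0.0
    if cursor_path_points < 5:        # cursor teleported straight to the target
        score += 0.4
    if seconds_on_page < 1.0:         # "read" and answered the page instantly
        score += 0.4
    if clicks_per_second > 5:         # inhumanly fast clicking
        score += 0.2
    return min(score, 1.0)

print(bot_suspicion_score(cursor_path_points=120, seconds_on_page=8.0,
                          clicks_per_second=0.5))   # 0.0 (human-like)
print(bot_suspicion_score(cursor_path_points=2, seconds_on_page=0.2,
                          clicks_per_second=10))    # 1.0 (bot-like)
```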
Why do you think "techbros" are a homogenous amorphous mass with a single will? Do you even have a working definition of what a "techbro" is? This sounds like classical prejudice to me - invent a group, then ascribe all opinions by one member to the group as a whole.
With enough people solving captchas constantly, we'll be able to use them to help cars solve ethical problems on the fly, like "should I swerve to avoid the small child and hit the old lady?"
@rooftopaxx Yes to both.
The real world is not simulated or controlled.
That puts me in mind of a Poe #poem
@RobCornelius I’m fully aware. I’m also aware that they are STILL doing both, simultaneously. Only one can be true. Which is it?
(Hint, it’s “computers can’t identify traffic lights”. Self driving cars are, in fact, randomised murderbots)
It's down to certain people's egos (specifically Muskrats) versus reality. It's not going to end well, that's for sure.
Have you ever noticed you are never asked to identify street objects in upscale neighbourhoods? OK posh neighbourhoods are less cluttered but....
@bornach
🎶
I like traffic lights
No matter where they've been
I like traffic lights
But only when they're green
🎶
@goatsarah @robcornelius
I'm so worried about whether you ought to have stopped...
@bornach @Pionir @robcornelius
@robcornelius There's also the issue of context. Those are indeed physical traffic lights, but the AI should realise why it shouldn't obey them:
https://futurism.com/the-byte/tesla-autopilot-bamboozled-truck-traffic-lights
@robcornelius But coming back to the more general problem, and taking the point of view of a budding data scientist:
It's not a question of whether algorithm-driven cars cause accidents (the AI buzzword gives me a migraine).
Humans DO cause accidents too.
So the question is: do computer-driven cars cause more or fewer accidents than humans?
It's hard to assess this, at the moment, as there are literally only a tiny number of really self-driving cars.
And the involved entities are commercial, so they tend to throw the veil of trade secrets over most of their data.
@yacc143 @MartyFouts Money.
Next question.
https://arstechnica.com/cars/2023/09/are-self-driving-cars-already-safer-than-human-drivers/
@MartyFouts @elithebearded @robcornelius Funny, Swiss Re just published a "self-driving cars are safer than human-driven cars" study, purely based on insurance data:
https://arxiv.org/pdf/2309.01206.pdf
@robcornelius Relevant xkcd
https://xkcd.com/1897/
(transcript available at: https://explainxkcd.com/1897/ )
@robcornelius Computers can identify traffic lights.
https://www.digit.fyi/ai-bots-can-beat-captcha-tests-better-than-humans-now/
@seb321 Same... though I made this 20+ years ago and it's still there and being used: http://www.jigsawstaff.com/
I think they finally got rid of my login that had god-level permissions a few years ago, at least.
@seb321 @robcornelius side note, just in case it has relevance: some years ago, a buddy of mine wrote a piece about the trolley problem --
https://medium.com/personified-systems/killing-the-runaway-trolley-problem-bf679b59baef
While true, the way you worded this comment seems condescending.
Well, in theory you could have the traffic light status communicated to the cars in some other manner.
But at the same time, self-driving cars would solve traffic issues no more than adding "just one more lane" every other year does.
The irony is that ML models are now better at solving captchas than humans, making captchas entirely pointless:
https://arxiv.org/abs/2307.12108
@Ariadne Conill 🐰 Swap out the Captchas for ones that use Vienna Convention traffic signals, and see what happens.
I bet the machine accuracy plummets.
@kurtseifried (he/him) @Ariadne Conill 🐰 @Cloudflare And suffer from a terminal case of r/USDefaultism.
The rest of the world has to learn what American street furniture looks like to solve them.
@'ingie Ever seen a Captcha that uses Vienna Convention traffic signs, as used in most of the world?
I'd say they're training them to be better than people who've never been to America at working out what American street furniture looks like.
And that’s the problem with this sort of AI training. It is almost certainly not optimising for what you think it is.
Actually this is a misconception. By now AIs are much better at passing reCAPTCHAs than we are. How, you might ask?
Because we taught them.
https://www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it
@toplesstopics M. Night Shyamalan twist:
We are all trapped in a simulation inside a self driving Ford Windstar 200 years in the future and the traffic captchas help them drive.
@Stephan Schulz "Nor does it advance the discussion", he said, nasally.
Nice fedora. Did your mum get it for you?
Sure.