Why Humans Can Never Be Replaced As Drivers.....
Sorry, pressed enter before I meant to...
https://www.bbc.co.uk/news/technology-45991093
In the situation described in the picture a page down – e.g. run into the barrier or kill some pedestrians – I would save both; easy for a person, not for software.
The issue with accidents is that they are usually exactly that, TTT. You can think about how to avoid a situation, but until you're in it those options may not be available in the time you have before potential impact. One example is oil on the road.
Also, no matter how good a driver you are, you'll never be as good as a computer. A computer knows the precise movements, angles and directions to turn the wheels to avoid the worst-case scenario. Unless you've had multiple skid-pan experiences you've got no chance.
//The awkward stage is half auto half not. //
the awkward stage will persist long after that - unless pedestrians, cyclists and animals are also prohibited from the road space of driverless vehicles. this is why the use of autonomous vehicles is likely to happen first on motorways, where legislation already exists to prohibit classes of user, which will in time be extended to include all non-autonomous vehicles.
It’s a good moral question.
Similar to the one where test subjects are offered the scenario of a runaway train. If it continues, it will kill their child. They have the choice of pulling a lever to move a set of points and diverting the train into the path of four track workers, who would certainly be killed by the train.
It’s much more of an ethical and moral question than one about autonomous cars.
Having said that, the auto industry is having deep doubts about so-called level-3 autonomous control.
There are six stages on the path to autonomy. Most people talk about five steps and call them levels 1 to 5.
See BMW’s explanation for example.
https://www.bmw.com/en/automotive-life/autonomous-driving.html
Broadly, Level 1 offers systems such as lane warning advisories, but the driver always chooses whether to act on those advisories.
Level 2 is the highest we have available today – this is the Tesla Model S, for example. The car can steer for itself and avoid other vehicles, but will relatively often require intervention from the driver.
Level 3 is the most problematic. The driver can hand over control to the vehicle systems for extended periods of time, but the vehicle is likely to request intervention in situations that it cannot understand.
Level 4 will still have driver-operated controls, and may occasionally require the driver to take control.
Level 5 will not even have driver-operated controls. It is fully automatic.
The sixth level is fully manual – such as cars from the 1950s, and even some from the 2000s. You might call it level 0.
For reference, there are no cars above level 2 in production today.
There are a number of questions relating to level 2 and level 3 driving, many of them issues where technology is in advance of legislation. Some interesting ones:
1. Who is responsible when the car causes a collision when in automatic mode: the manufacturer, or the driver?
2. How long does driver training take? Is the training programme longer or shorter than for a fully manual vehicle?
3. In a world of GDPR, is the vehicle permitted to send data about driving conditions to the vehicle maker, or national road authorities in order to enhance safety?
4. When designing a level-3 autonomous vehicle, do we prefer to create false positives (a bit like crying wolf, with the risk that drivers ignore a warning to take control), or false negatives, which will probably lead to a collision with no warning? (There is a rough sketch of this trade-off below.)
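As a rough illustration of that last question (the confidence scores and thresholds below are invented for the example, not taken from any real system), here is a small Python sketch of how moving a single alert threshold trades one kind of error against the other:

# Hypothetical illustration of the false-positive / false-negative trade-off
# for a level-3 "please take control" alert. Scores are invented; a real
# system would derive them from its sensors.

# Each pair: (system's confidence the situation is beyond it, was it really an emergency?)
scenarios = [
    (0.95, True), (0.80, True), (0.55, True),    # genuine emergencies
    (0.60, False), (0.40, False), (0.20, False), # situations the car could in fact handle
]

def alert_outcomes(threshold):
    """Count crying-wolf alerts (false positives) and missed hand-overs (false negatives)."""
    false_positives = sum(1 for score, emergency in scenarios
                          if score >= threshold and not emergency)
    false_negatives = sum(1 for score, emergency in scenarios
                          if score < threshold and emergency)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7):
    fp, fn = alert_outcomes(threshold)
    print(f"threshold {threshold}: {fp} false alarms, {fn} missed warnings")

Lower the threshold and you get more false alarms but fewer silent failures; raise it and the reverse happens. That is the design choice in question 4.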
The reality is that when drivers become habituated to autonomous driving for extended periods (such as on a quiet motorway), they pay much less attention, and take much longer to respond when the vehicle insists they take control under dangerous circumstances.
Competent drivers, who can respond to an emergency in less than a second when alert and driving, take much longer – many seconds – to respond in the same way, after an extended period of allowing the vehicle to have control.
That increased delay leads to more collisions, as the vehicle systems tend to identify emergency situations later than an alert driver, giving less time to respond.
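To put that delay into distance terms (the 70 mph motorway speed and the round-number delays below are my assumptions, purely for illustration):

# Rough illustration of how reaction delay turns into distance travelled
# before braking even begins. Speed and delays are assumed, not measured.

speed_mph = 70.0
speed_ms = speed_mph * 0.44704           # mph to metres per second (~31.3 m/s)

for delay_s in (1.0, 3.0, 5.0):          # alert driver vs. driver re-engaging after automation
    distance_m = speed_ms * delay_s      # ground covered before any braking starts
    print(f"{delay_s:.0f} s delay at {speed_mph:.0f} mph -> about {distance_m:.0f} m travelled")

Roughly 31 m for every second of delay at that speed, so a few seconds of re-engagement time means the best part of a football pitch passes before the brakes are even touched.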
That’s not to say computers will never take over the task of driving. That happens, for example, on the Docklands Light Railway, and the POD system at Heathrow terminal 5.
In those systems, the routes are highly controlled and the extraneous traffic is very severely limited.
As a result, the OECD has suggested that level 3 vehicles should not be developed, never mind be allowed on the roads with other vehicles.
One of the challenges is that many of the systems are being developed by engineers – and there are very significant engineering challenges. However, as TTT loves to remind us, there is a much greater psychological challenge in building confidence in the capabilities and safety of the technologies.
IJKLM: "It’s a good moral question." - yes, but in the situation in the link a human would avoid killing anyone, e.g. turn into the side barrier, take most of the energy out a bit at a time, and by the time you hit the block it's a minor collision. How many other situations would be like that? It's never a simple choice. There are other options that software will never be advanced enough to carry out.
Here was a situation I was once in. I was delivering a car for a mate and I was in the outside lane of a dual carriageway waiting at a roundabout. I'd fortunately (as it turned out) left about 8 feet between myself and the car in front. I looked in the mirror and saw this car coming up too fast to stop. I thought he would hit me up the jaxy, so I quickly checked my left door mirror and zipped into the left lane. He screeched to a halt just before the car that was originally in front of me. Collision avoided. I'll believe in driverless cars when they can do that.
TTT
"there are other options that software will never be advanced enough to carry out."
When it comes to computers, 'never say never' is a good maxim - Moore's law says computing power increases exponentially, and software gets smarter, as well as more complex. 'Never' is a bit strong in that context.
You gave an example of someone hitting the brakes hard and suggested that this would not be possible for automated systems, saying, "I'll believe in driverless cars when they can do that."
One option being discussed for autonomous vehicles is a kind of anchor that can be deployed into the road surface to arrest the car quickly in case of an emergency.
The anchor destroys the road surface, but when you look at the value of a life (estimated at around GBP 10m in some government calculations), the cost of repairing the road is significantly less.
Source: https://www.bristol.ac.uk/media-library/sites/cabot/news/2017/What%20is%20the%20Value%20of%20Life.pdf
Standard road engineering puts a normal driver with good tyres, good brakes and a mid- to high-friction road surface at a maximum deceleration of around 1g. With emergency brake assist, that rises to 1.3g.
Most humans can easily withstand 3g for short periods. Only the very frail suffer from that kind of deceleration. Standard braking would give a stopping distance of around 14m at 1g, or 11m at 1.3g.
That kind of emergency brake could reduce stopping distances (from 60kph to zero) to less than 5m at 3g deceleration.
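For anyone who wants to check those figures, they follow from the standard braking-distance formula d = v²/(2a); a quick sketch, using the 60 km/h starting speed from the example above:

# Braking distance d = v^2 / (2 * a), from 60 km/h, ignoring reaction time.

G = 9.81                     # gravitational acceleration, m/s^2
v = 60 / 3.6                 # 60 km/h in m/s (about 16.7 m/s)

for g_level in (1.0, 1.3, 3.0):
    a = g_level * G          # deceleration in m/s^2
    d = v ** 2 / (2 * a)     # braking distance in metres
    print(f"{g_level} g -> {d:.1f} m")

That reproduces the roughly 14 m, 11 m and under-5 m figures quoted.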
The primary thinking among vehicle makers is to deliver very powerful braking in urban and rural environments, where the environment is less controlled than a mass-transit system, or motorway.
It's still a way off, but engineers and designers are aware of these issues and working toward solutions that can give drivers, passengers and other road users confidence.
"there are other options that software will never be advanced enough to carry out."
When it comes to computers, 'never say never' is a good maxim - Moore's law says computing power increases exponentially, and software gets smarter, as well as more complex. 'Never' is a bit strong in that context.
You gave an example of someone hitting the brakes hard an suggested that this would not be possible for automated systems, saying, "I'll believe in driverless cars when they can do that. "
One option being discussed for autonomous vehicles is a kind of anchor that can be deployed into the road surface to arrest the car quickly in case of an emergency.
The anchor destroys the road surface, but when you look at the cost of a life (estimated at around GBP10mn in some government calculations) it is significantly less than the cost of repairing the road.
Source: https:/
Standard road engineering puts a normal driver with good tyres, good brakes and a mid- to high-friction road surface at a maximum decleration of around 1g. With emergency brake assist, that rises to 1.3g.
Most humans can easily withstand 3g for short periods. Only the very frail suffer from that kind of deceleration. Standard braking would give a stopping distance of around 14m at 1g, or 11m at 1.3g.
That kind of emergency brake could reduce stopping distances (from 60kph to zero) to less than 5m at 3g deceleration.
The primary thinking among vehicle makers is to deliver very powerful braking in urban and rural environments, where the environment is less controlled than a mass-transit system, or motorway.
It's still a way off, but engineers and designers are aware of these issues and working toward solutions that can give drivers, passengers and other road users confidence.
IJKLM: "When it comes to computers, 'never say never' is a good maxim - Moore's law says computing power increases exponentially, and software gets smarter, as well as more complex. 'Never' is a bit strong in that context." - now you are in my territory: Moore's law talks about hardware primarily. Software has barely changed; it's still linear code, but coupled with the power of modern hardware it can achieve more than ever before. It's still a matter of programming for sensor input.
"You gave an example of someone hitting the brakes hard an suggested that this would not be possible for automated systems, saying, "I'll believe in driverless cars when they can do that. ""
no I gave the example of rapidly pulling out of the way of a car about to hit me up the back, read it again.
"One option being discussed for autonomous vehicles is a kind of anchor that can be deployed into the road surface to arrest the car quickly in case of an emergency. " - only useful at slow speeds anyway but could help.
"You gave an example of someone hitting the brakes hard an suggested that this would not be possible for automated systems, saying, "I'll believe in driverless cars when they can do that. ""
no I gave the example of rapidly pulling out of the way of a car about to hit me up the back, read it again.
"One option being discussed for autonomous vehicles is a kind of anchor that can be deployed into the road surface to arrest the car quickly in case of an emergency. " - only useful at slow speeds anyway but could help.
I'm not trying to get into an argument with you, but you say software is still linear.
I guess you know of neural nets, fuzzy logic and even quantum computing - none of these is linear, and the auto industry is increasingly using neural nets, combined with 'big data' to generate good decision-support processes and decision-making systems.
Acquiring large amounts of data about how people really use their vehicles is an important step. Combine that data with a multi-stage neural net and you get a powerful machine-learning environment.
Use the results from that to support decisions in the automotive driving system, and there is a route toward good decisions by a computer, without using the linear systems and therefore without having to anticipate every possible eventuality.
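As a minimal sketch of what a 'multi-stage' neural net looks like in this context (the features, candidate actions and weights below are placeholders I have made up; a production system would learn its weights from large volumes of real driving data rather than use random numbers):

import numpy as np

# Toy multi-stage (two hidden layer) network mapping a few sensor-derived
# features to scores for candidate actions. Everything here is illustrative.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Made-up input features: closing speed, gap to obstacle, lateral clearance, road friction
features = np.array([12.0, 8.0, 1.5, 0.7])

# Two hidden stages, then an output stage scoring three candidate actions
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(8, 16)), np.zeros(8)
W3, b3 = rng.normal(size=(3, 8)), np.zeros(3)

h1 = relu(W1 @ features + b1)      # first stage
h2 = relu(W2 @ h1 + b2)            # second stage
scores = softmax(W3 @ h2 + b3)     # relative preference over actions

for action, score in zip(["brake hard", "steer into next lane", "maintain course"], scores):
    print(f"{action}: {score:.2f}")

The point is the structure: several learned stages between the sensor inputs and a ranked set of candidate actions, rather than a hand-written rule for every eventuality.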
I repeat (and will continue to do so) this situation is evolving. There are massive resources going into this, from the technical point of view as well as psychological, legal, ethical and others, to develop systems that will work in the real world.
I guess you know of neural nets, fuzzy logic and even quantum computing - none of these is linear, and the auto industry is increasingly using neural nets, combined with 'big data' to generate good decision-support processes and decision-making systems.
Acquiring large amounts of data about how people really use their vehicles is an important step. Combine that data with a multi-stage neural net and you get a powerful machine-learning environment.
Use the results from that to support decisions in the automotive driving system, and there is a route toward good decisions by a computer, without using the linear systems and therefore without having to anticipate every possible eventuality.
I repeat (and will continue to do so) this situation is evolving. There are massive resources going into this, from the technical point of view as well as psychological, legal, ethical and others, to develop systems that will work in the real world.
IJKLM, you state the cost of repairing a road that's been ripped up by a ground anchor as being substantially less than the estimated value of a life. That's essentially true, but it would not be the whole story.
from your source document:- "the UK Department for Transport (DfT) values the prevention of a fatality on Britain’s roads at £1.8 million (2016 £s)."
Against that must be balanced not just the actual repair cost, but also the economic cost of the road closure while the repair is carried out. This is mentioned on many of these "fly-on-the-wall" documentaries about road workers, and although I have no quantified figures, it could well approach the cost-per-life-saved quoted, if the closure extends to several hours.
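A back-of-envelope version of that comparison (the only figure taken from the linked paper is the DfT's £1.8 million value of a prevented fatality; the repair cost and the hourly closure cost are pure assumptions for illustration):

# Compare an assumed repair-plus-closure bill against the DfT value of a
# prevented fatality. Only the 1.8m figure comes from the linked paper.

value_of_prevented_fatality = 1_800_000   # GBP, DfT figure quoted above (2016 prices)
repair_cost = 50_000                      # assumed cost of resurfacing the damaged section
closure_cost_per_hour = 150_000           # assumed economic cost of the closure, per hour

for closure_hours in (2, 6, 12):
    total = repair_cost + closure_hours * closure_cost_per_hour
    comparison = "below" if total < value_of_prevented_fatality else "at or above"
    print(f"{closure_hours} h closure: GBP {total:,} ({comparison} the fatality figure)")

On those assumed numbers a short closure still comes in well under the fatality figure, but a closure stretching into a working day starts to approach or exceed it.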
Yes, mushroom. That's a good point.
I happened to be at a seminar with one of the people responsible for the UK Highways Agency programme, at which we were kicking around ideas in this area, and his top-of-the-head response was that it might be economically justifiable.
It's a developing situation. None of it certain yet.
I wanted to show that some people have creative ideas that can give solutions to situations that might appear to be insoluble at present.
//No real acknowledgement of the progress made so far, nor, for that matter, of the importance of continuing to experiment. //
In the matter of experimentation or projection, there's rarely much indication, at the outset, of where the development might take us. A few years ago, advances were made in computer networking, data transfer speed and live video streaming which were expected to lead to an increase in home working and remote video conferencing. Instead we're seeing an exponential increase in journeys made, by car or by public transport, to the point of saturation on some routes.
IJKLM: "I guess you know of neural nets, fuzzy logic and even quantum computing - none of these is linear, and the auto industry is increasingly using neural nets, combined with 'big data' to generate good decision-support processes and decision-making systems." - I have programmed neural nets. Essentially they are facilitated by linear code, and then the algorithm allows for path strengthening, i.e. learning. Some positive results have been obtained, but we are a long way from giving computers intelligence; we'd need a quantum leap in technique, or a new discovery, to do that. Quantum computing will make brute-force programming easier, but the actual instructions are still in effect serial. Fuzzy logic is just another definition of logic beyond true/false, and that is implemented every day with serial programming. In the end all of these things are a CPU executing one instruction after another. Parallel CPUs can split tasks up to do many things at once where the order may not be important, but in the end it's serial instructions, one after the other, with zero intelligence.
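To make that concrete (made-up weights and inputs, and a deliberately crude 'path strengthening' rule, just to show the shape of the computation):

# Strip away the libraries and one layer of a neural net is just serial
# multiply-and-add, followed by a weight adjustment. All numbers invented.

inputs  = [0.5, 0.2, 0.9]
weights = [[0.1, 0.4, -0.3],    # one row of weights per output neuron
           [0.7, -0.2, 0.5]]
learning_rate = 0.1

# Forward pass: nothing but multiply-accumulate, one step after another
outputs = []
for row in weights:
    total = 0.0
    for w, x in zip(row, inputs):
        total += w * x          # one multiply, one add
    outputs.append(total)

# "Path strengthening": nudge the winning neuron's weights towards the inputs
# that fired it (a crude Hebbian-style update, purely for illustration)
winner = outputs.index(max(outputs))
for i, x in enumerate(inputs):
    weights[winner][i] += learning_rate * x

print("outputs:", outputs)
print("strengthened weights:", weights[winner])

However clever the result looks, what executes underneath is exactly that: multiply, add, compare, adjust, one instruction after another.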