Oy vay! This must be the most stoopid idea yet!
I got a blank! What’s a disaster?
I don’t see why. Self-driving cars are the future. If you mean the “living/social space,” I don’t know. It’s a totally untried product, but I can certainly see the market potential. It’s not like BMW is saying, “We’re going to replace our entire line with these.” It’ll be a pilot program, and if it works, they’ll expand it.
I don’t see why this is a disaster. I wouldn’t buy the car, but I’m not their target market.
There is a reason that aircraft are not completely autopiloted. The same principle applies to road vehicles. There are always unexpected events that programmers cannot foresee.
That’s exactly my point. Computers, no matter how sophisticated, cannot plan for every single incident. I just think it is dangerous and will put people at risk. I’m not against this technology; I just think it needs to be scrupulously tested first.
BTW, I think the car looks really cool. I wonder how expensive it is…
It already has been. Google’s self-driving car has clocked over three million miles without causing an accident. The average driver causes 5 accidents in the same number of miles.
The only thing Google’s cars don’t presently handle well is crazy mistakes by surrounding humans.
The BMW car is for dedicated highways that only allow self-driving cars. When every car is self-driving, accidents will become vanishingly rare. And that’s with today’s technology. I imagine things will be a lot better in 2030.
When did Google begin selling/developing cars???
I can’t get your link to open, CT.
I have no problem with self-driving cars so long as they can be switched over to manual control at the push of a button, like turning off cruise control. As DN pointed out, there is a reason vehicles are not fully automated with no human at the controls. No code is without bugs. Not to mention these vehicles rely on other technology to function (satellite signals and up-to-date map information).
I automate a lot of my workload, which gives me time to pursue other work-related things, but I can stop it at any time with the push of a button. Society should be wary of building on a foundation made of nothing.
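The manual-override idea above (automation that a human can cancel instantly, like cruise control) can be sketched roughly as follows. This is a toy illustration, not any real vehicle API; the class and method names are made up for the example:

```python
# Minimal sketch of a cruise-control-style override: automation acts only
# while engaged, and any driver input or a single call hands control back.
class CruiseControl:
    def __init__(self, target_speed):
        self.target_speed = target_speed  # speed to hold while engaged
        self.engaged = False

    def engage(self):
        self.engaged = True

    def disengage(self):
        # The "push of a button": instantly returns control to the driver.
        self.engaged = False

    def command(self, current_speed, driver_pedal):
        # Any driver pedal input also disengages, just like real cruise
        # control cancelling when the brake is pressed.
        if driver_pedal != 0.0:
            self.disengage()
        if not self.engaged:
            return driver_pedal  # human is in control
        # Simple proportional correction toward the target speed.
        return 0.1 * (self.target_speed - current_speed)


cc = CruiseControl(target_speed=100.0)
cc.engage()
print(cc.command(90.0, 0.0))   # automation accelerates toward 100
print(cc.command(95.0, -1.0))  # driver brakes: automation disengages
print(cc.engaged)              # False: control is back with the human
```

The key design point is that the human input path never depends on the automation logic working correctly: pressing a pedal always wins.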
Richard Dawkins should approve this . . .
OK. I give up. Who’s “Richard Dawkins?”
The guy who says that “nothing became something.”
As a former programmer, let me assure you that no programmer can possibly anticipate every possible condition. I worked on a large payroll system that had been up and running for at least three years before I started on it. There was a bug in it that showed up periodically and required one employee to be paid with an off-line check (because we had to delete his records before the program would run to completion). Because it was sporadic, the cause was difficult to find. I finally found it and fixed it.
But several years later a “new” bug popped up, which caused payroll to actually be delayed. Fortunately, the problem was found soon enough that the delay wasn’t too terribly long. There were a number of programmers working on it. Most of the people got their direct deposits in time; I think the checks were a little late. I got mine in good time, since my bank was the clearing house for the direct deposits. I hadn’t been working in payroll for some time by then. But it was more than ten years after the program was implemented that this problem showed up. It had not been caused by some other programming change (although that can happen sometimes). It was just so rare an occurrence that it had never happened before during the life of that program.
I remember reading an Ann Landers column where someone disputed the claim that computers can’t make mistakes. They had gotten a whole new computer system, and a cyclical miscalculation was showing up. They even called in a programmer from the company that created the program (the computer and the whole system were a package), and he couldn’t find the error in a whole week. Therefore, the reasoning went, it had to be the computer.
The very cyclical nature of it indicated a programming error. Depending on the complexity of the program, a week can be a very short time in which to locate some errors.
And then, there are programmers, and there are programmers. I took over a problem once when another programmer was laid off. He had been working on it for two years. The general location of the problem seemed to have been isolated by the help desk before he took it over. After I had been working on it for two weeks, one of the higher-ups at the help desk told me that I had accomplished more in two weeks than the other programmer had in two years, although he thought he was a great programmer. It wasn’t long before I had it fixed: not ideally, but it worked correctly, and that was all that mattered. There was another issue that he had also been working on for two years, and it had to be restarted from scratch. It was an “upgrade” (absolutely necessary due to legal issues), and it involved a lot of work in a lot of programs. I think the programmer who took that over did it in about two months.
I remember that years ago they wanted to put a robot on Venus, which means the robot has to overcome unforeseen obstacles. I have been looking at the robots being developed, and they are testing to see whether a robot can handle obstacles such as being kicked or crossing uneven terrain.
I have no problem with self-driving cars if they’re kept on a closed course with a high wall around it…
I mean, the thing has a steering wheel. If you want to stay constantly alert while it drives, or even drive yourself, you can. I don’t see it as a big deal.
Personal cars probably won’t even be the main use for self-driving cars. Warehouses, construction, and mining will all benefit from self-driving automobiles. And they’re already being used.
- So does a regular car with a drunk driver; that doesn’t make it safe. Things can (and often do) go wrong in a split second.
- Low-speed off-road applications like that I might be okay with, depending on the circumstances.
To amend this: I have no problem with self-driving cars if they’re kept on a closed course with a high wall around not only the course, but also the manufacturers and politicians who want to make them street-legal…
A car with a drunk driver arguably does not have an alert driver at the wheel.
The computers they’ve been using in Google’s self-driving cars at least have a better accident rate than nearly all human drivers, with about a million miles driven and only one accident that was the fault of the car (it sideswiped a bus just this week, apparently, while trying to merge to avoid some sandbags). People seem to be worried about the car not adapting to all situations, but human drivers fail at that at a much higher rate than the computer does at the moment, especially since the computer isn’t distracted like this Uber driver was. I can see how it can be uncomfortable to release control of the car to a computer, though.