It is no longer a question of whether we will have automated vehicles, but of when. As we celebrate the benefits of vehicle automation, such as fewer accidents, injuries, and deaths, have we even begun to address the philosophical and moral questions raised by the software that runs the cars? Who will decide how the cars "think"? Will it be software designers, their managers, government bureaucrats, or even legislative committees?
Take a scenario where an automated vehicle is traveling down a road and two children run out in front of it. They are too close for braking to avoid them, but a swerve would. Now add a third child standing at the side of the road who would be hit by the swerve. Does the car choose to save the two children and kill the bystander, or hit the two children and spare the bystander? The knee-jerk response is to save the two children, but is that the correct moral or philosophical choice? Can we have society (or any other entity, for that matter) decide who is to be sacrificed? More important still, this decision must be programmed years before it is ever used.