Unvetted Philosophy of Autonomous Vehicles

It is no longer a question of whether we will have automated vehicles, but a question of when. As we celebrate the positives of vehicle automation, like fewer accidents, injuries, and deaths, have we even begun to address the philosophical and moral questions raised by the software that runs the cars? Who will decide how the cars "think"? Will it be software designers, their managers, government bureaucrats, or even legislative committees?
Take a scenario where an automated vehicle is traveling down a road and two children run out in front of it. They are too close for braking to avoid them, but a swerve would work. Now add a third child on the side of the road who would be hit by the swerve. Does the car choose to save the two kids and kill the bystander, or hit the two kids and spare the bystander? The knee-jerk response is to save the two kids, but is that the correct moral or philosophical choice? Can we have society (or any other entity, for that matter) decide who to sacrifice? And, more importantly, this decision must be made years before it is ever used.
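
To make "deciding years in advance" concrete, consider what such a pre-committed rule might look like in software. The sketch below is purely hypothetical: the Maneuver class, the choose_maneuver function, and the casualty estimates are invented for illustration, not drawn from any real vehicle system. It hard-codes a utilitarian "fewest expected casualties" policy, and whoever writes and approves that single comparison has effectively answered the moral question for every crash the car will ever face.

```python
# Hypothetical sketch: a naive "minimize expected casualties" rule that a
# design team would have to commit to long before any real crash occurs.
# All names and numbers are illustrative only.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # engineers' estimate, fixed at design time


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the fewest expected casualties.

    This one comparison *is* the moral policy: a purely utilitarian
    body count, decided by whoever wrote and signed off on this code.
    """
    return min(options, key=lambda m: m.expected_casualties)


if __name__ == "__main__":
    # The scenario from the text: brake and hit the two children,
    # or swerve and hit the single bystander.
    options = [
        Maneuver("brake", expected_casualties=2.0),
        Maneuver("swerve", expected_casualties=1.0),
    ]
    print(choose_maneuver(options).name)  # -> "swerve"
```

Notice how much is buried in that tiny function: it counts lives as interchangeable, ignores culpability and age, and leaves no room for the judgment a human driver exercises in the moment. Choosing a different policy would mean choosing a different line of code, written and reviewed years before the children ever step into the road.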
