MIT Wants You To Help Teach Morality To Self-Driving Cars - Geek.com

Isaac Asimov gave us the Three Laws of Robotics ages ago, but not every decision every robot makes will be black-and-white. That’s why MIT wants your help teaching machines all about morality.

Head over to the Moral Machine website and you'll be presented with various scenarios a self-driving car might encounter. You'll be given two options and asked to pick the outcome that's most acceptable to you.

The setup is simple enough: a driverless car with failed brakes is careening toward a crosswalk with its passengers aboard. There's a concrete barricade in one lane, and you've got to decide whether the car should swerve out of its lane or stay the course.

Some things you might base your decision on: the number of passengers versus the number of pedestrians, and who (or what) those passengers and pedestrians are.


Yes, what. In one of the scenarios, the car is driving a cat and its two dog friends somewhere… possibly to a pet spa? Or maybe they just really, really like going on car rides, and their owners were too busy to take them.

Either way, the choice is yours! Does the car crash itself and risk killing the pets, or does it roll through the crosswalk and hit the three criminals currently trying to cross the street? Three elderly passengers or three children?
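If you squint, each scenario boils down to crude arithmetic: count and weigh who's in the car against who's in the crosswalk. Here's a tiny, purely hypothetical Python sketch of that "count and compare" idea; the `Scenario`, `score`, and `choose` names are made up for illustration and have nothing to do with how MIT's Moral Machine actually works under the hood.

```python
# A toy sketch of the kind of trade-off the Moral Machine asks you to judge.
# This is NOT MIT's system -- Scenario, score, and the decision rule below
# are invented purely to illustrate the "count and compare" framing.
from dataclasses import dataclass

@dataclass
class Scenario:
    passengers: list[str]   # e.g. ["adult", "child"]
    pedestrians: list[str]  # e.g. ["elderly", "cat"]

def score(group: list[str]) -> int:
    """Crude head count; a real judgment weighs far more than numbers."""
    return len(group)

def choose(scenario: Scenario) -> str:
    # Staying the course hits the pedestrians in the crosswalk;
    # swerving into the barricade risks the passengers instead.
    if score(scenario.passengers) >= score(scenario.pedestrians):
        return "stay in lane (protect the larger passenger group)"
    return "swerve into the barricade (protect the larger pedestrian group)"

print(choose(Scenario(passengers=["adult"],
                      pedestrians=["child", "child", "child"])))
# -> swerve into the barricade (protect the larger pedestrian group)
```

Of course, the whole point of the Moral Machine is that a bare head count isn't enough, which is why it asks humans to make the calls instead of hard-coding a rule like this one.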

It’s easy enough to see why self-driving cars need to understand these situations, and it’s a good thing that MIT and other folks working on autonomous vehicles are trying to teach machines how to decide.

We’ve got to do whatever it takes to make sure that all those driverless Ubers or Waymo vehicles don’t go Maximum Overdrive on us.

