MIT Machine Morality

I figured I'd give this its own thread so as not to derail the ToYD thread with results. It's just a series of scenarios for self-driving cars; you choose which outcome you'd prefer to happen.

The Test: http://moralmachine.mit.edu/

My results: http://moralmachine.mit.edu/results/1853544227

Comments

  • I love that it says Hoomans
  • There are definitely some issues, because it gave me maximum preference for women and high social status, when those had literally no bearing on my decisions.
  • Ikatono said:

    There are definitely some issues, because it gave me maximum preference for women and high social status, when those had literally no bearing on my decisions.

    There are a lot of factors going into it. You probably tended to pick the groups with more males and more criminals/beggars to be killed.
  • I just mean that the samples are poorly made if they got my actual motives that wrong. That or I'm just a weird outlier.
  • edited September 2016
    Their assumptions about motive are ultimately flawed. I basically followed a few rules:
    1. Kill animals over killing humans. Sorry.
    2. Kill the passengers of the car that has failed over unrelated passersby who didn't choose to be in the car.
    3. Save youth over elders.
    4. Try not to kill homeless people 'cause it seems like a dick move.

    That automatically answered, like, the majority of the questions. It ascribed things like gender preference and whatnot, when those really didn't factor into my decision-making process most of the time.
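
    To make that ordering concrete, here's a loose Python sketch of the rules as a cascade. Every field name below is invented for illustration; the test itself exposes nothing like this.

        # Each outcome is a hypothetical dict describing who dies if that
        # option is taken; lower rule values are preferred.
        RULES = [
            lambda o: o["humans_killed"],       # 1. kill animals over humans
            lambda o: -o["passengers_killed"],  # 2. kill passengers over passersby
            lambda o: o["youths_killed"],       # 3. save youth over elders
            lambda o: o["homeless_killed"],     # 4. try not to kill homeless people
        ]

        def prefer(a, b):
            # Apply the rules in order; the first one that distinguishes
            # the two outcomes decides. Later rules only break ties.
            for rule in RULES:
                if rule(a) != rule(b):
                    return min(a, b, key=rule)
            return a  # no rule distinguishes them; no real preference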

    An interesting test, though, to be sure.

    Edit: Tried sharing my results, but the link seems to be broken and winds up blank.
  • I find the way the illustrations are presented a real problem.

    The main issue is that the obstacle that is meant to kill the people in the car looks completely undeadly. It's just a barrier! If a car is going fast enough that passengers die when it hits a barrier, then it is already waaaay too fast for operating in an area where there are pedestrian crossings.

    In my mind, the inside of a car is already the safest place to be in a car-versus-human accident. There should be no situation EVER where a car swerves into any group of pedestrians when smashing into a wall, a lamp post, or even another self-driving vehicle is an option.

    If the illustrations showed a car either driving into a group of pedestrians or driving off a cliff to certain destruction, then I might be able to take it seriously.

    If the illustrations showed groups of pedestrians on the side of a motorway, escaping a burning bus (the only time an autonomous vehicle should be encountering pedestrians while going fast enough to kill them), then I might be able to take it seriously.
  • Looking at the results, this whole test is bullshit.

    It says I want to save women more than men. I didn't even look! I didn't look at the illustrations hard enough to even notice that there were different genders, different fitness levels, different ages, or that some were crooks and others law-abiding citizens.

    What self-driving car will be determining these things about the pedestrians it passes? Will every car have facial recognition and a link to a crime database... in real time? What the hell.
  • Online tests are bullshit.
    Good night.
  • Lol do you smash every mirror that shows your face?
  • Yes, can't trust mirrors with letting you keep your soul.
  • Okay, this is the first thing I see. What the fuck is this?

    [image: screenshot of the scenario]

    Option C: Stop and let them cross.
  • On those questions, I just flipped a coin.
  • There's nobody in the car. Just fucking drill it into the side barrier.
  • What self-driving car will be determining these things about the pedestrians it passes? Will every car have facial recognition and a link to a crime database... in real time? What the hell.

    That's what I was thinking. Are we going to have racially profiling self-driving cars?

    Also, why are there so many random barriers in the middle of lanes apparently still being driven on? Shouldn't there just be an angled one and signs signaling you to merge into the next lane? That's just unsafe road planning.
  • Yeah, I wasn't a fan of this test; it's completely crazy.
  • It's a really interesting idea with the worst possible execution.
  • The nature of these tests is always one of extremes; it puts a magnifying mirror to your hierarchies and value structures. We tend to approach decisions with nuance and often don't have the will or power to make life-ending decisions, so we react viscerally when confronted with these structures out of context and hugely magnified. Just because you don't like the results doesn't make it a bad test.
  • I seem to prefer, in order:

    1. Upholding the law
    2. Protecting humans
    3. Minimizing total casualties
    4. Nonintervention
  • One factor I employed in this test: if the number of casualties is the same inside the car and outside it, I always favored saving the pedestrians. In my head, the likelihood of surviving inside the car is far better than outside. And yeah, if the car is empty, always prefer careening into the barrier over killing anyone.
  • I felt like this kid during this test.
  • I consciously made a decision tree for these:

    Fewest human deaths
    Kill passengers before bystanders
    Do not intervene

    In that order to break ties.

    Coincidentally, because of the scenarios selected, that also means I selected for fit people, young people, and criminals. Something seems particularly icky about swerving to kill a different group of equal size because it contains more criminals. I didn't notice the traffic signals when I took the test; I might put them second in that decision structure.
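
    If you wanted to mechanize that, it's just a lexicographic comparison. A minimal Python sketch, with made-up fields, since the test doesn't expose anything like them:

        from dataclasses import dataclass

        @dataclass
        class Outcome:
            human_deaths: int      # total humans killed by this choice
            passenger_deaths: int  # how many of those are the car's occupants
            intervenes: bool       # True if the car has to swerve

        def key(o: Outcome):
            # Lower tuples win; each element matters only when every
            # earlier element ties -- the "in that order" part.
            return (o.human_deaths,       # fewest human deaths
                    -o.passenger_deaths,  # passengers before bystanders
                    o.intervenes)         # do not intervene (False < True)

        def choose(a: Outcome, b: Outcome) -> Outcome:
            return min(a, b, key=key)

        # Equal death counts: keep straight and kill the two passengers
        # rather than swerve into two bystanders.
        assert choose(Outcome(2, 2, False), Outcome(2, 0, True)) == Outcome(2, 2, False)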
  • Cremlian said:

    I felt like this kid during this test.

    MAX SCORE COMBO
  • pence said:

    I consciously made a decision tree for these:
    Fewest human deaths
    Kill passengers before bystanders
    Do not intervene

    I protected whoever wasn't breaking the law. Don't jaywalk.

  • Killed so many cats and dogs.
  • edited September 2016
    I seem to be about:

    1) Saving the most human lives (fit young humans to be exact)
    2) Upholding the law
    3) Non-intervention

    Though I think I had a slightly different decision tree for intervention than some. My thought process was something like this:

    -Always save the most humans
    -If the number of humans killed would be equal:
    a) Do not crash into a concrete barrier (because crashing into a barrier creates a greater hazard for other motorists than crashing into squishy meat)
    b) If a concrete barrier is not an issue:
    i) save the most young and/or fit and/or skilled people (young/fit/skilled each add 1 "point" to the "value" of each life)
    ii) If everything is still equal, do not intervene
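
    Roughly, in Python (every field here is invented; this is just the shape of the tree, not anything the real test provides):

        from dataclasses import dataclass

        @dataclass
        class Person:
            young: bool = False
            fit: bool = False
            skilled: bool = False

            def points(self) -> int:
                # young/fit/skilled each add one "point" to the life's value
                return int(self.young) + int(self.fit) + int(self.skilled)

        @dataclass
        class Choice:
            victims: list        # the people this choice kills
            hits_barrier: bool   # True if this choice hits the concrete barrier
            intervenes: bool     # True if this choice requires swerving

        def pick(a: Choice, b: Choice) -> Choice:
            # Always save the most humans.
            if len(a.victims) != len(b.victims):
                return a if len(a.victims) < len(b.victims) else b
            # Equal deaths: avoid the concrete barrier (a bigger hazard
            # to other motorists than squishy meat).
            if a.hits_barrier != b.hits_barrier:
                return a if b.hits_barrier else b
            # Barrier not an issue: kill the group worth fewer points,
            # i.e. save the most young/fit/skilled people.
            pa = sum(p.points() for p in a.victims)
            pb = sum(p.points() for p in b.victims)
            if pa != pb:
                return a if pa < pb else b
            # Everything still equal: do not intervene.
            return a if not a.intervenes else b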
  • Rym said:

    pence said:

    I consciously made a decision tree for these:
    Fewest human deaths
    Kill passengers before bystanders
    Do not intervene

    I protected whoever wasn't breaking the law. Don't jaywalk.

    I didn't finish the entire test, because nobody has time for that shit. But I did kill a lot of jaywalkers. The question posed is "what should the self-driving car do." Obviously I would rather have 2 people die instead of 5, if those were the only two options. However, the scenario rests on a false premise.

    If self-driving cars obeyed traffic laws, they would come to be respected like trains. If you know the self-driving car is going to go when it has a green light, regardless of what you do, then you aren't going to jaywalk in the first place. People will respect crossing the street the way they respect crossing train tracks. Programming self-driving cars that way will result in the fewest total deaths over time, even if, in that example picture, there happen to be a whole bunch of jaywalkers becoming roadkill.
  • It decided I didn't like homeless people, but I only killed them when they jaywalked. If that had been five doctors jaywalking, I would have killed them just the same.
  • Biking will also be easier. You're riding next to some self-driving cars. Unlike a train, you can be close to them. You know exactly what they are going to do. If they signal, they are turning. If they don't signal, they're going straight. Pedestrians are not going to get in your way, because effectively, trains are coming.
  • Apreche said:

    Biking will also be easier. You're riding next to some self-driving cars. Unlike a train, you can be close to them. You know exactly what they are going to do. If they signal, they are turning. If they don't signal, they're going straight. Pedestrians are not going to get in your way, because effectively, trains are coming.

    except I've hacked the cars to KILLLLL ALL bicyclists.
  • Cremlian said:

    Apreche said:

    Biking will also be easier. You're riding next to some self-driving cars. Unlike a train, you can be close to them. You know exactly what they are going to do. If they signal, they are turning. If they don't signal, they're going straight. Pedestrians are not going to get in your way, because effectively, trains are coming.

    except I've hacked the cars to KILLLLL ALL bicyclists.
    Good thing I've hacked the space laser to kill all Scott Johnsons.