Milgram's Experiment and Ethical, Empathetic Technologists

Every week, my engineering team has lunch and learns. Sometimes they’re specific to the things we’re working on as a team, but sometimes they’re just something someone has taken an interest in and wants to talk about with the rest of the team - topics like ergonomics, interesting math algorithms, or anechoic chambers.

Recently, one of my teammates led one on ethics. It was largely a primer on the topic, meant to get our feet wet as a team and to see where people’s interests are and where to take further conversations. One of the things he mentioned was the Milgram experiment, and it got me thinking. The experiment itself was unethical, and may not have been very valid anyway.

I’m not here to draw scientific conclusions from it. But I did start thinking about what the broader lessons imply for software specifically, so I’d like to use it as a launch pad for some things I’ve been thinking about: how we can be more ethical technologists, and why we might be falling short.

I’m actually primarily interested in some of the later variations, but let’s start with a quick overview of the original, for context.

There were three people involved in each experiment:

  • A learner - actually a researcher, though the subject thought they were also a volunteer.
  • A teacher - someone who had volunteered for the study, and the only person not in on it.
  • The supervisor - the volunteer knew this person was part of the study and not a volunteer.

The teacher and supervisor were in one room, while the learner was in another. There was an audio connection between the rooms, but they could not see each other. The teacher asked the learner questions, and each time the learner got one wrong, the teacher was told to give them an electric shock. The size of the shock went up with each incorrect answer.

Now, the learner, being in on the experiment, intentionally got a lot of questions wrong, but was not actually receiving physical shocks. They did, however, make sounds of protest indicating that they were, and those protests grew increasingly desperate as the size of the supposed shocks grew.

The teacher thought the purpose of the study was related to pain and memory, when in fact, they were studying how far people would be willing to go toward causing another person physical pain when ordered to do so.

The results? People were very willing to administer very large shocks.

When the same experiment was re-run with two additional teachers in the room - teachers who were actually part of the experiment, and who refused to administer the shocks - the research subjects were dramatically less likely to administer the shocks themselves.

Humans are social creatures to whom acceptance is an important survival tactic. Derek Sivers’ talk about the first follower supports this - the person initiating an action and the first person to follow suit both risk ridicule, but it becomes progressively easier for others to join after that.

There are plenty of opportunities to take action against unethical practices. Some are dramatic - people are leaving their jobs at companies that have contracts with ICE. Employees at Google have been trying to unionize, and their coworkers are pushing back on their behalf when there have been repercussions. But there are also more routine ones - advocating for more inclusion on your team, advocating for building in engineering time to make sure your product is accessible, considering privacy from the angle of people who might be experiencing domestic violence. The list goes on.

Taking this kind of action requires privilege. Not everyone has the privilege of financial stability or a support system that lets them risk losing their job. If you have that privilege and see an opportunity for action? Try to be the initiator or first follower - take on that risk for your less privileged teammates who might also want to take action, to pave the way for them to join you.

When participants in the experiment could tell someone else to push the button to administer the shock, they were more likely to continue shocking the learner to higher voltages.

When participants in the experiment had to physically place the hand of the learner on a plate that administered the shock, they were less likely to continue shocking.

So if people’s ability to empathize with others is dependent on proximity, what does that mean for us as software developers?

Probably not great things. Most of us talk to customers very rarely. Product requirements are handed down to us, often without much opportunity for us to weigh in. And, as a teammate mentioned when we were talking about this, depending on the size of the team or company, an individual developer might not even know the whole scope of what they’re building. If a project is broken down into small pieces and doled out across multiple teams, the whole picture, and any troubling aspects of it, might not be visible at all.

So what can we do? We can ask questions. We can talk to the people who do talk to customers. We can talk to customers ourselves, or if we’re not able to, hopefully we can at least listen to conversations they’re having with others. We can ask what the common reasons are for people choosing not to become customers, or to stop being customers. (If you’re not given access to some of this information, I’d have some questions about the company’s transparency…). We can consider how the things we’re building might impact all kinds of people, including those who aren’t like us; the ability to do this well is one of the strong arguments for diverse teams.

I wrote a little bit about customer awareness as a developer here. But this also might not be enough: there are likely many people impacted by any given technology who are not direct customers. When discussing Facebook at our team lunch, one of my teammates brought up the point that Facebook’s customers (the people from whom they make money) are technically the advertisers on the platform, while most of the people directly harmed by Facebook’s many shady ethical practices are users of the platform.

It’s also worth noting that humans can build empathy. People often talk about empathy as something we’re born with, something that comes more naturally to some than to others. I don’t know whether that’s true, but I do know that we can practice it and get better at it.

These are some books on ethical issues specifically pertaining to technology. I’ve read some of these, and others are on my to-read list, having come recommended by people I respect. I encourage everyone to dig in and learn more.

  • Algorithms of Oppression: How Search Engines Reinforce Racism; Safiya Noble
  • Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; Virginia Eubanks
  • Behind the Screen: Content Moderation in the Shadows of Social Media; Sarah T. Roberts
  • Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Cathy O’Neil
  • Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech; Sara Wachter-Boettcher