A pediatrician misdiagnoses a patient, leading to the patient’s death and a malpractice suit.
A civil engineer approves an unsafe bridge schematic; once built, the bridge collapses, killing several people and injuring many more.
In both cases, a professional made a vital error resulting in harm to their fellow humans.
A software engineer aggregates users’ data to serve more relevant ads but fails to implement a secure system to keep hackers from stealing that personal information. The immediate consequences may be unknown, but the implications are haunting. In the wrong hands, users could be subject to doxing, identity theft, and forfeiture of privacy, to name a few. All of that data would be right there, packaged neatly for some hacker to come and take.
Is this not also harmful activity? To leave others’ privacy and identities at risk? As software engineers, as professionals, do we not have a moral obligation to ensure the safety, privacy, and wellbeing of those who benefit from our services? Let’s talk about that.
When I began my path to becoming a computer scientist, I had some sense of the damage that computer systems and code could inflict, such as national banks being hacked or UAV drone strikes. Looking back now, that is really just the surface. “Damage” and “harm” encompass much more than the financial and the physical. As stated previously, security negligence can lead to a great deal of harm. Case in point: Facebook failed to monitor its platform and allowed Russian hackers to intrude and influence American opinion. The wellbeing of its users was put at risk, and action against these attacks was taken far too late.
Reflecting on events like these, here are my personal ethics of software engineering.
Bill Sourour wrote a post on freeCodeCamp about the code he is still ashamed of. He describes building a website for a pharmaceutical client that quizzed users on their symptoms. No matter how users answered, the quiz recommended the client’s product, unless they outright said they were allergic to it or were already taking it. This behavior was one of the client’s requirements, so Mr. Sourour continued development. As he put it, “But the truth is, I didn’t think much of it at the time. I had a job to do, and I did it.”
Shortly after the site launched, a news report covered a young woman’s suicide. She had been taking the client’s product, two of whose side effects were depression and suicidal ideation. Mr. Sourour was wracked with guilt and told his sister, who was taking the medication at the time, to stop immediately.
Mr. Sourour did not intend to hurt this young woman, yet his site may have steered her toward the medication that contributed to her death. Is he to blame? He never interacted with her directly, but he built a vehicle for advertising a dangerous substance to the public. He was a young, up-and-coming programmer who needed to do his job, but does that excuse writing a morally gray site designed to sway customers toward a product that could cause them harm?
Personally, I feel there is more than one party to blame in this situation. The client is at fault for commissioning a website that marketed to young women a product they knew was dangerous. The account manager is at fault for not researching the product before deciding to take on the client. Mr. Sourour is at fault for continuing development despite the shady results of the rigged quiz. Who is most to blame? While the company should not have released its product without further testing, I believe the largest fault lies with the account manager for failing to research the product. At the same time, Mr. Sourour shares equally in that blame: he and the account manager were the two parties who could have discontinued development, and both chose to ignore the site’s potential effects.
Mr. Sourour ends his post by saying, “Since that day, I always try to think twice about the effects of my code before I write it. I hope that you will too.”
The Association for Computing Machinery (ACM) has an in-depth Code of Ethics that addresses general, professional, and leadership responsibilities.
For a hypothetical (and terrifying) scenario of technology developed with “the best intentions,” check out the Future of Life Institute’s “Slaughterbots.”