Ethics and Artificial Intelligence

Ian Goodrich

Ethics in the Profession

10 May 2022

Artificial intelligence can be extremely useful in many fields, but as the technology has advanced, the ethics of using it have come under increasing scrutiny. In the medical field it can help save lives. Police have used it to track down criminals and to predict crimes before they happen. Researchers have created algorithms that mimic human writing, and large companies commonly use it to boost profits. While these uses seem beneficial on the surface, a large number of problems can arise from the use of artificial intelligence, calling its ethics into question.

The first problems come from the medical field. In a paper by researchers from the University of Adelaide and Stanford University, many medical imaging algorithms are shown to fail on patient subgroups that are rare or not separately labeled in their training data. In other words, a disease variant the algorithm seldom or never saw during training may go unrecognized entirely. Relying on artificial intelligence despite these errors could result in the misdiagnosis of deadly diseases, and such failures have already been observed: the researchers found that performance on poorly labeled or rarely seen markers was around 20% worse than on common ones (Oakden-Rayner et al.).
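
This failure mode can be made concrete with a stratified evaluation. The sketch below is hypothetical (invented numbers, not the paper's data or code): aggregate accuracy looks acceptable at 80%, while breaking the results down by subgroup reveals that the rare variant fails most of the time.

```python
# A minimal sketch of how hidden stratification can be surfaced:
# compute accuracy per subgroup instead of only in aggregate.
from collections import defaultdict

def accuracy_by_subgroup(y_true, y_pred, subgroups):
    """Return per-subgroup accuracy for a set of predictions."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical results: the model does well on the common presentation
# of a disease but badly on a rare variant it seldom saw in training.
y_true    = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
y_pred    = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
subgroups = ["common"] * 7 + ["rare"] * 3

print(accuracy_by_subgroup(y_true, y_pred, subgroups))
# {'common': 1.0, 'rare': 0.33...} -- an aggregate accuracy of 80%
# hides a much worse failure rate on the rare subgroup.
```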

Such a discrepancy has also been shown by research published in the journal Science in 2019. The research group examined an algorithm widely used in the healthcare industry and found that, at a given risk score, Black patients were already much sicker than white patients. The cause lay in the algorithm’s design: it predicted future healthcare costs as a proxy for health needs, and because less money has historically been spent on Black patients, it systematically understated how sick they were (Obermeyer et al.). This continues the pattern seen previously, in which algorithms inherit the biases of the data used to build them.
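
The mechanism can be shown with a toy calculation (invented numbers, not the study's data): when the "risk score" is really predicted cost, two equally sick patients receive different scores whenever spending differs by group.

```python
# Hypothetical illustration of proxy-label bias: a model trained to
# predict healthcare COST scores equally ill patients differently
# when historical spending differs between their groups.
patients = [
    # (group, true_illness_burden, past_healthcare_cost)
    ("white", 8, 8000),
    ("Black", 8, 4800),  # equally ill, but historically lower spending
]

CARE_CUTOFF = 6.0  # scores above this qualify for extra-care programs

for group, illness, cost in patients:
    risk_score = cost / 1000.0  # the "risk score" is really predicted cost
    referred = risk_score > CARE_CUTOFF
    print(f"{group}: illness={illness}, score={risk_score}, referred={referred}")
# white: illness=8, score=8.0, referred=True
# Black: illness=8, score=4.8, referred=False
# At the same true illness level, only the white patient crosses the
# cutoff for extra care -- the bias is baked into the cost label itself.
```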

Moving to law enforcement, police have attempted to use artificial intelligence to predict crimes before they occur. An analysis by the Electronic Frontier Foundation examined this use of artificial intelligence and once again uncovered many problems. Data bias significantly altered the outcome because of where the data was collected: most of it came from areas that were already heavily policed, so the software predicted more crime in precisely the areas already under police scrutiny (Guariglia).
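
Why this becomes self-reinforcing can be seen in a minimal simulation (hypothetical numbers, not the EFF's analysis): two districts have identical true crime rates, but crimes only enter the dataset where patrols are present to record them, and the next round of patrols follows the recorded numbers.

```python
# Hypothetical predictive-policing feedback loop: district A starts
# with more patrol coverage, so it generates more crime records, so
# it keeps receiving more patrols -- forever.
true_crime_rate = 100            # identical in both districts
patrols = {"A": 0.8, "B": 0.2}   # initial share of patrol coverage

for step in range(5):
    # Crimes are recorded only in proportion to police presence.
    recorded = {d: true_crime_rate * share for d, share in patrols.items()}
    total = sum(recorded.values())
    # "Prediction": allocate next round's patrols by recorded crime.
    patrols = {d: r / total for d, r in recorded.items()}
    print(step, {d: round(share, 2) for d, share in patrols.items()})
# Every round prints {'A': 0.8, 'B': 0.2}: the initial imbalance is
# self-sustaining, because the data can never reveal that district B
# is just as crime-prone as district A.
```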

Another common use of artificial intelligence in law enforcement is facial recognition, which has been found to be biased as well. In a paper published by the IEEE, or Institute of Electrical and Electronics Engineers, researchers found that many algorithms performed significantly worse on women, African Americans, and people between the ages of 18 and 30 (Klare et al.).

When facial recognition software of the kind used by police was tested on African American women, it was found to perform significantly worse than on white women. In a 2019 study by the National Institute of Standards and Technology, Black women were misidentified at a rate ten times higher than white women (Simonite).

Continuing with facial recognition, a company named Clearview AI built facial recognition software using images scraped from social media networks such as Facebook and YouTube. The software was then provided to law enforcement agencies, allowing them to identify anyone with photos on social media from a single image (Hill).

The fear of humans being replaced by artificial intelligence is another concern that has grown as the technology advances. Many people believe that creative jobs are safe, but as natural language processing becomes more sophisticated, even these jobs have come into question.

One case is the program GPT-3. Created by OpenAI, it has been shown to produce writing capable of convincing a human reader that a person wrote it. The company is aware of the ethical concerns and requires vetting before anyone can use the software. The older version, however, is openly available and still capable of passing its output off as human writing (Heaven). The main fear around such software is its use in fraud, such as generating false news stories that can be presented as real.

Along with creating believable text, artificial intelligence has enabled electronic assistants such as Alexa. These have been questioned as well: requests are processed by artificial intelligence on Amazon’s servers, which lets the company monitor what users say. The training of these algorithms, moreover, is known to involve human reviewers, who can end up hearing private conversations held around an activated Alexa (Day et al.).

Along with privacy fears come physical dangers as well. Self-driving cars have recently been in the news for their use of artificial intelligence, and for its failures. In January 2022, a driver became the first person charged with a felony over a crash involving Tesla’s Autopilot feature. The feature, which is meant to handle common tasks such as braking and steering, has failed numerous times before. In that case, the car, while on Autopilot, ran a red light at high speed and struck another car, killing its two occupants (Associated Press).

While Tesla’s Autopilot is sold only as a driver-assistance program, some cars are being tested with full self-driving capabilities. One company working on such vehicles is the well-known ride-sharing platform Uber. In 2018, a self-driving car operated by the company struck and killed a woman who was walking her bicycle across the road. Due to a lack of legal precedent, the company itself was never charged. The backup operator in the car, who was tasked with taking manual control in case of software failure, was reportedly streaming a television episode instead of watching the road, and in 2020 was charged with negligent homicide (Cellan-Jones).

Despite all of these problems, artificial intelligence is not wholly negative, and in fact has many positive uses. In healthcare, artificial intelligence programs have allowed doctors to spot potential cancers more accurately and have sped up the diagnostic process. Although these systems can fail on certain rare cancers, the ability to diagnose common forms more accurately is an important medical advance.

One such algorithm, known as Deep Learning based Automatic Detection (DLAD), was developed by researchers in Seoul and identified potential cancers better than 17 of the 18 doctors it was tested against. A program created by Google correctly distinguished cancerous from non-cancerous cells 99% of the time and halved the time it took doctors to review a slide (Greenfield).

Returning to the topic of privacy, even the software that invades privacy the most has its uses. While the ethics of facial recognition are questioned with regard to police use, millions of people use it to unlock their iPhones every day. Many fingerprint scanners also use machine learning, allowing them to grow more accurate as they are used. Advances in artificial intelligence would allow users to further secure their devices and information.

Despite questions about the privacy implications of voice assistants such as Alexa and Google Assistant, they are undeniably useful pieces of technology. The ability to access the internet using only one’s voice is valuable in many situations, whether because a person does not have a free hand or simply because looking the information up manually is inconvenient.

These situations overlook one important group of voice assistant users, however: people with mobility impairments. The ability for a disabled person to control appliances without precise movement is an undeniably good use of artificial intelligence technology, as it allows them greater personal freedom.

Artificial intelligence also has many uses in keeping people safe. Despite the problems described above, weather prediction is one important application. An article by Nvidia, a company well known for its artificial intelligence technology, highlighted one such system for forecasting extreme weather. The system is shown to predict extreme weather up to six weeks in advance, giving those who would be affected far more time to prepare (Horton).

Another potential use of artificial intelligence comes in the form of search and rescue missions. Traditionally, helicopters with thermal imaging cameras are used to find anyone who may be trapped or lost. Using artificial intelligence, one group was able to build drones capable of finding people in areas normally too densely forested for thermal imaging to work (Schedl et al.). Their system achieved a precision of 96%, far better than the roughly 25% achieved with thermal cameras alone (Papadopoulos).
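
The core trick behind the group's approach, airborne optical sectioning, can be sketched in a few lines. The following toy example (synthetic data and invented dimensions, not the authors' code) shows why combining many registered frames defeats occlusion.

```python
# Toy illustration of airborne optical sectioning: thermal frames taken
# from different drone positions are registered to the ground plane, so
# a person on the ground occupies the same pixel in every frame, while
# occluding foliage shifts with parallax. Averaging the registered
# frames suppresses the foliage but keeps the person visible.
import numpy as np

rng = np.random.default_rng(0)
frames = []
for _ in range(30):                 # 30 registered thermal frames
    frame = np.zeros((64, 64))
    frame[32, 32] = 1.0             # person: fixed position on the ground
    # Foliage: bright occluders whose positions shift from frame to frame.
    ys = rng.integers(0, 64, size=40)
    xs = rng.integers(0, 64, size=40)
    frame[ys, xs] = 1.0
    frames.append(frame)

integral = np.mean(frames, axis=0)  # the combined "synthetic aperture" image
print("person pixel:", integral[32, 32])    # 1.0: present in every frame
print("average clutter:", integral.mean())  # ~0.01: foliage averages out
```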

Along with enhancing the search itself, artificial intelligence could allow robots to perform rescue operations as well. This would keep humans from risking their lives while trying to rescue others, resulting in fewer casualties.

Other sectors, such as education and transportation, use artificial intelligence as well.

In education, artificial intelligence is already being used to enhance student learning. Companies like McGraw-Hill have introduced products such as Connect that use artificial intelligence to identify where students are having problems and to focus practice on the areas each student needs to work on most.

In a future where self-driving cars are more common and the technology more advanced, riding in an autonomous vehicle would likely be safer than riding in a human-driven one. If vehicles could communicate with one another, people unable to drive could still traverse a world dominated by cars. The same technology would enable autonomous shipping and public transportation, reducing delivery and travel times.

The use of artificial intelligence is, and will likely continue to be, heavily debated in the coming years. These debates will only become more important as the technology continues to advance. While there are many problems with artificial intelligence as it stands today, there are many positive uses as well. In many cases, the positives must be taken together with the negatives.

Artificial intelligence used to spot cancer may miss rare forms of the disease, but such failures should not mean it cannot be used to help detect the common forms.

On the other hand, facial recognition may well have more negatives than positives, as false positives can lead to an innocent person being punished because a computer misidentified a face it was not trained to recognize.

Despite what some argue, artificial intelligence is ultimately a tool that humans have created, and like all tools, it must be used correctly. With proper training and oversight, artificial intelligence can advance humanity in ways we likely cannot begin to imagine; without these failsafes, it could easily cause disaster.

Works Cited

  • Associated Press. “A Tesla Driver Is Charged in a Crash Involving Autopilot That Killed 2 People.” NPR, 18 Jan. 2022, http://www.npr.org/2022/01/18/1073857310/tesla-autopilot-crash-charges.
  • Cellan-Jones, Rory. “Uber’s Self-Driving Operator Charged over Fatal Crash.” BBC News, 16 Sept. 2020, http://www.bbc.com/news/technology-54175359.
  • Day, Matt, et al. “Thousands of Amazon Workers Listen to Alexa Users’ Conversations.” Time, 11 Apr. 2019, time.com/5568815/amazon-workers-listen-to-alexa/.
  • Greenfield, Daniel. “Artificial Intelligence in Medicine: Applications, Implications, and Limitations.” Science in the News, Harvard University, 19 June 2019, sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/.
  • Guariglia, Matthew. “Police Use of Artificial Intelligence: 2021 in Review.” Electronic Frontier Foundation, 1 Jan. 2022, http://www.eff.org/deeplinks/2021/12/police-use-artificial-intelligence-2021-review.
  • Heaven, Will Douglas. “OpenAI’s New Language Generator GPT-3 Is Shockingly Good—and Completely Mindless.” MIT Technology Review, 20 July 2020, http://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/.
  • Hill, Kashmir. “The Secretive Company That Might End Privacy as We Know It.” The New York Times, 18 Jan. 2020, http://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.
  • Horton, Michelle. “Global AI Weather Forecaster Makes Predictions in Seconds.” NVIDIA Technical Blog, 13 Jan. 2022, developer.nvidia.com/blog/global-ai-weather-forecaster-makes-predictions-in-seconds/. Accessed 8 May 2022.
  • Igoe, Katherine. “Algorithmic Bias in Health Care Exacerbates Social Inequities — How to Prevent It.” Harvard T.H. Chan School of Public Health, 12 Mar. 2021, http://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/.
  • Klare, Brendan F., et al. “Face Recognition Performance: Role of Demographic Information.” IEEE Transactions on Information Forensics and Security, vol. 7, no. 6, Dec. 2012, pp. 1789–1801, openbiometrics.org/publications/klare2012demographics.pdf, 10.1109/tifs.2012.2214212. Accessed 25 Apr. 2019.
  • Oakden-Rayner, Luke, et al. “Hidden Stratification Causes Clinically Meaningful Failures in Machine Learning for Medical Imaging.” Proceedings of the ACM Conference on Health, Inference, and Learning, 2 Apr. 2020, 10.1145/3368555.3384468. Accessed 12 Apr. 2021.
  • Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science, vol. 366, no. 6464, 25 Oct. 2019, pp. 447–453, science.sciencemag.org/content/366/6464/447.full, 10.1126/science.aax2342.
  • Papadopoulos, Loukia. “Search and Rescue Drones Use AI to Spot People Lost in Woods.” Interesting Engineering, 28 Nov. 2020, interestingengineering.com/search-and-rescue-drones-use-ai-to-find-people-lost-in-woods.
  • Schedl, David C., et al. “Search and Rescue with Airborne Optical Sectioning.” Nature Machine Intelligence, vol. 2, no. 12, 23 Nov. 2020, pp. 783–790, 10.1038/s42256-020-00261-3. Accessed 19 Feb. 2021.
  • Simonite, Tom. “The Best Algorithms Still Struggle to Recognize Black Faces.” Wired, 22 July 2019, http://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.