
What is the problem with Artificial Intelligence?

Since the digital age's inception, artificial intelligence (AI) has been a crucial piece of technology, especially for me as a future computer science major. Although AI first appeared in 1951 as a checkers-playing algorithm, the revolutionary development of Large Language Models (LLMs), models that can perform language tasks such as generating or summarizing text, has shone the spotlight on AI and spearheaded its lightning-fast development (Russell and Norvig). However, the pressure for rapid growth of this technology, which has only recently come under government scrutiny, has left it without the checks and balances that usually accompany other technologies.

Explanation of machine learning concepts (Fireship)

Amid this quick development, the developers behind AI technologies must still ensure that their work is developed ethically and must consider human privacy and the workforce. To hold developers to a standard where their development does more good than harm, governments should sign into law and enforce legislation that protects citizens in matters of ethics, privacy, and the workforce.

How do ethical concerns arise with AI?

Ethical concerns have sparked much of the conversation about artificial intelligence during its rapid development. When developers face no penalties for wrongdoing, some lose the motivation to do the right thing and choose wealth over people.


AI Ethics (McCombs School of Business)

"We defined AI ethics in terms of impact analysis: this can be read as a response to the increasing number of high-profile cases of harm that has resulted either because of the misuse of the technology…or as a result of the technology having design flaws." (Kazim and Koshiyama)

When expediting any project, project managers take shortcuts to meet deadlines. This is especially true in technology: a fast-paced environment that richly rewards being first forces developers to push stability and ethics aside just to keep up. Duffourc and Gerke state, "It dismissed the breach of contract claim for failure to state a legally cognizable claim—MD could not demonstrate all the elements of a claim recognized by the applicable law." When no regulation exists to force fast-paced developers to build software ethically and with users in mind, end users find themselves in a world where they cannot fight back against violations committed against them.

What are the privacy concerns with AI?

The government's approach of leaving companies to self-regulate allows corporations to invade their clients' privacy, in some cases without the clients even knowing. To train a machine learning model, pre-existing training data, such as images or text, must be fed into an algorithm so the model can learn to identify and generate content based on that data. Unfortunately, an organization hoping to make use of client data may train AI models on user data without users' knowledge.
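The flow of user data into a model can be sketched in a few lines. This is a minimal, hypothetical illustration, not any company's actual pipeline; the "user_messages" and labels below are invented stand-ins for the kind of client text data an organization might collect.

```python
from collections import Counter

# Hypothetical client data: support messages a company has collected,
# grouped by topic. In a real pipeline this data would come from users,
# often without their explicit, informed consent.
user_messages = {
    "billing": ["invoice overdue payment", "refund payment card"],
    "support": ["password reset help", "app crash help screen"],
}

def train(labeled_texts):
    """Count word frequencies per label -- the 'parameters' learned from the data."""
    model = {}
    for label, texts in labeled_texts.items():
        model[label] = Counter(word for t in texts for word in t.split())
    return model

def classify(model, text):
    """Pick the label whose training vocabulary best overlaps the input."""
    words = text.split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

model = train(user_messages)
print(classify(model, "help with password"))  # -> support
```

The point of the sketch is that the model is nothing more than a distillation of the data it was trained on: whoever controls the training data shapes the model, which is why consent over that data matters.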

Duffourc and Gerke state, “UC shared with Google ‘de-identified’ EHR data from adult patients encountered between January 2010 and June 2016. These data still contained ‘dates of service’ and ‘de-identified, free-text medical notes.’” Although the University of Chicago removed identifying information from the data given to Google, patients who had trusted the university with their medical records were left feeling betrayed after it handed their information to Google to train its AI without giving them a say. Additionally, although the university instructed Google not to use the data to identify patients, a data breach at Google, a heavily internet-connected company, could leave patient data in the hands of unauthorized individuals, who could re-identify patients using the dates of service and medical notes still attached to the records.
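Why "de-identified" data can still leak identity is easy to show. The sketch below is hypothetical (the record and its fields are invented, not from the actual UC dataset): it strips direct identifiers but, as in the shared EHR data, keeps dates of service and free-text notes.

```python
# A hypothetical patient record -- all values are invented for illustration.
record = {
    "patient_id": "P-1042",
    "name": "Jane Doe",
    "date_of_service": "2014-03-07",
    "note": "Follow-up after marathon injury; patient is a local teacher.",
}

def naive_deidentify(rec):
    """Strip direct identifiers but keep dates and free text,
    mirroring what the shared EHR data retained."""
    return {k: v for k, v in rec.items() if k not in ("patient_id", "name")}

shared = naive_deidentify(record)
print(shared)
```

Even without a name, the date of service combined with details in the free-text note can narrow the record down to one person: anyone who knows a local teacher treated for a marathon injury on that date can re-identify the "de-identified" record.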


Scale judging AI vs privacy (G)

Kazim and Koshiyama note, “Another issue is consent, where respect for human agency would entail meaningful and informed consent, including the right to withdraw consent and presents with consent mechanisms that are explicable in the context of an average user.” A recurring theme in machine learning ethics is that all users should be knowledgeable and informed when organizations use their data to train AI. Unfortunately, companies tend to bury this consent in the fine print of their terms of service, leaving users who have technically consented to training an AI largely unaware that they have done so.

In what way does AI threaten the workforce?

In addition to ethics and privacy, a central talking point about artificial intelligence is its ability to automate many human functions within the current workforce. Without legislation, people who depend on jobs that could be automated risk having their livelihoods stripped from them for the sole sake of technological advancement.

"In March, it is estimated by Goldman Sachs that AI may 'expose' around 300 million jobs to automation, in 2013 Oxford University studied and found that 47% of US jobs would be replaced by AI in the next 20 years." (Manu et al.)

Self-driving cars replacing taxi and ride-share jobs (Waymo)

With demand for human labor decreasing solely because of automation, people worry that the paycheck they just received could be their last. As a computer science (CS) major, I repeatedly hear similar horror stories about the CS job market. The stories of those with hundreds of job applications and not a single offer letter show the damage COVID-19 has done, and with AI threatening massive layoffs, employees who would otherwise be working find themselves suffering and panicking. Kazim and Koshiyama write, "crucial to accountability is ensuring that there are robust human oversight mechanisms, based on the principle, and current legal standing, that humans are ultimately accountable and thereby responsible for harms that may result from AI systems." With little legislation holding machines or their developers accountable, companies that lay off workers face no accountability even as the machines working alongside humans increasingly shape their overall operations.

Conclusion

With ethical, privacy, and workforce concerns arising directly from the development of artificial intelligence, imposing legislation forces developers to consider those around them before creating or enhancing an AI. In the process of creating an AI, developers should look past the end goal of their project and ask themselves whether the project is ethically viable. In addition, developers who collect data from any source should confirm that it was gathered and prepared with consent. Lastly, any AI created should not harm the workforce but help it, by improving efficiency or creating new employment opportunities.


Human reflecting on AI (Goel)

Reflection

Since middle school, I have had an unwavering passion to study Computer Science. My need to interact with and examine every piece of technology in my possession became obsessive, so my parents would keep me away from any new technology that came into our household. Then arrived artificial intelligence. Before it caught the public eye, I was amazed and curious. There was something surreal in knowing a computer, an inanimate object, could think. Learning everything behind training a neural network, and how math, especially statistics, played into turning an input into an output, was more than my brain could comprehend at the time, and my appetite to learn more about this technology has never ended. However, then came Large Language Models and the public eye's undivided attention. I remember ChatGPT being all anyone could talk about in high school, how people bragged about using it to cheat on homework, essays, and coding assignments, how teachers scrambled to find tools to recognize AI-driven work, and how some teachers had entire classes artificially generating work with LLMs. Beyond schooling, companies competing in AI spearhead its development no matter the cost, causing or threatening layoffs with each newer and better model. Unfortunately, this public obsession with AI was my eye-opener to how harmful tools like AI can be. Although my memory of AI is still not fond, this project has encouraged me to give AI the same chance I gave it when I first found it. With enough legislation and tools to curb unneeded uses of AI, it will never return to what it once was for me, but it will at least be there as a tool, not an assistant.