Although the world first encountered the term “artificial intelligence” at a conference held at Dartmouth College in the 1950s, the ethical questions surrounding its development took decades to crystallize. Those concerns have grown in step with the spread of AI applications, from the earliest rudimentary attempts to the present moment, when humans interact with ChatGPT and its successive versions.

For those working in the technology sector, in both industry and research, the question of ethics has two sides. One camp calls for subjecting expected developments to ethical scrutiny before they are deployed, making ethical considerations a governing standard, and even urges a pause in deployment until governance can ensure these systems do not harm humans. The other camp counters that such restrictions could “disrupt progress or push developers to retreat.”

Experts and academics agree on the “importance and vital nature of AI applications, as well as the well-being and benefits they’ve brought to many lives.” Still, they maintain that ethical guidelines are necessary “to rein in automated programs without letting these criteria become a sword hanging over AI’s neck.”

In March of last year, technology developers and AI specialists signed a petition calling for a six-month halt on advanced AI development, to allow time for stronger governance of the field and to ensure the technology does not harm humans.

The petition, organized by the nonprofit Future of Life Institute, garnered nearly 3,000 signatures from scientists and entrepreneurs. It warned that “the race undertaken by AI labs to develop and deploy ever more powerful digital minds might spiral out of control, making it impossible for anyone, including their creators, to understand or reliably manage them.”

UN Recommendations

Despite the advantages AI offers humanity, such as diagnosing diseases and predicting climate change, its ethical challenges, including algorithms biased by gender and race and potential threats to privacy, prompted the United Nations Educational, Scientific and Cultural Organization (UNESCO) to adopt “the first global agreement on recommendations regarding AI ethics” in November 2021. The agreement comprises four recommendations:

1. Data protection: measures to safeguard individuals’ data and their right to control it.
2. A ban on the use of AI systems for mass surveillance: ultimate responsibility for any task must remain with humans, and AI technologies must never be treated as legal persons.
3. Efficient use of data, energy, and resources in AI, so that the technology can help address environmental problems.
4. A mechanism for assessing the ethical implications of AI.

Continuous Calls

Concerns surrounding AI “are as old as the concept itself, but the rise of chatbots like ChatGPT has brought them closer to reality,” says José Delgado, a specialist in experimental psychology at the University of Granada, Spain.

Writers, thinkers, philosophers, and scientists have long recognized the need for AI ethics. American novelist Vernor Vinge remarked in 1983 that the “future existence of humanity may depend on implementing solid ethical standards in AI systems, since these systems may at some point match or replace human capabilities.”

In 2018, Swedish philosopher Nick Bostrom warned of the dangers of technological superiority should intelligent machines turn against their human creators, emphasizing the need to build “friendly AI.”

Scientists joined thinkers in voicing these ethical concerns. Notably, prominent American computer scientist Rosalind Picard wrote in 1997 that “the more freedom a machine has, the more it requires ethical standards.”

Returning to Delgado: he has no doubt that ChatGPT will overcome its current mistakes, believing that “machine learning could lead it to become smarter than humans, posing a danger when AI is deployed alongside human workers.”

Delgado therefore stresses the necessity for “humans to maintain full control and responsibility over the behavior and outcomes of AI systems.” He attributes several incidents analyzed in recent years, including the Alvia train crash in Spain in 2013, the crash of Air France Flight 447 in 2009, and that of Asiana Airlines Flight 214 in 2013, to automated systems overstepping their role: researchers found that the fundamental cause in each case was that the automated control strategies differed from those employed by the human operators.

The central challenge in human-AI interaction, as Delgado puts it, is that “to foster a moral and fair relationship between humans and AI systems, interactions must be based on the fundamental principle of respecting human cognitive capabilities.”

The Exceptional Human

This challenge does not conflict with another identified by Özlem Garibay, an assistant professor in the Department of Industrial Engineering and Management Systems at the University of Central Florida, who considers “responsible AI” to be AI that supports human well-being in a way that aligns with human values.

“Within this challenge,” Garibay said, “smart robots can provide medical solutions for certain disabilities, but this should not develop into a ‘deifying of technology’ that seeks to build the ‘exceptional or superior human’ by enhancing characteristics or improving memory, for instance, using electronic chips.”

Marc-Antoine Dilhac, an assistant professor of ethics and political philosophy at the University of Montreal, identified another dimension in an article published in the UNESCO Courier in March 2018, pointing to “software already applied in many countries to determine the ‘terrorist behavior’ or ‘criminal personality’ of individuals using facial recognition technology.” He noted that researchers at Stanford University were alarmed by “this resurgence of physiognomy,” the theory that individuals can be analyzed on the basis of their facial features and expressions.

Physiognomic bias is not confined to preventive security measures, however; hiring software reflects some of the same prejudice. A report by MIT Technology Review in February 2022 revealed that platforms such as LinkedIn were attempting “to eliminate certain interview software that discriminated against people with disabilities and women candidates.”

Weaponization Concerns

Yoshua Bengio, a leading Canadian computer scientist and recipient of the 2018 Turing Award (often called the Nobel Prize of computing), has called for efforts to “prevent the design of AI systems that entail extremely high risks, such as systems capable of using weapons.”

Bengio emphasized that “AI systems can yield immense benefits for humanity, particularly in healthcare, but on the flip side, systems designed to wield weapons could also be developed, and these must be prohibited.”

Among the other challenges demanding attention is the need to secure privacy and ensure that AI systems do not violate it. Privacy concerns were among the reasons Italy decided to ban ChatGPT, with regulators in Spain and France opening similar inquiries soon after.

The European Data Protection Board announced the creation of a task force to enhance the exchange of information on possible actions concerning ChatGPT. It voiced support for “innovative AI technologies” but underscored that they “must always align with people’s rights and freedoms.”

ChatGPT collects and processes personal data to train its algorithms, which Domenico Talia, a professor of computer engineering at the University of Calabria, describes as “a clear violation of privacy.”

Talia adds, “I appreciate this application and the benefits it provides for human life, but at the same time, I do not accept that my personal data is collected when I interact with it.”

International Treaties

Bengio contends that the challenges posed by AI should be addressed through enforceable laws and legislation rather than self-regulation, comparing the situation to driving: “whether on the left or the right side of the road, everyone must drive in the same manner, or else we will be in trouble.”

He notes that a proposed AI law is being prepared in the European Union and that Canada will soon enact one of its own, but this does not diminish the need for international treaties like those established for nuclear risks and human cloning.

Although UNESCO issued its recommendations on the ethical challenges of AI less than two years ago, more than 40 countries are already working with the organization to develop “checks and balances for AI at the national level.”

The organization has urged all countries to join its initiative to build “ethical AI,” stating in a press release on March 30 that a progress report will be presented at the “UNESCO Global Forum on the Ethics of AI” in Slovenia next December.

The Leverhulme Centre: Stephen Hawking’s Call for Humanizing AI

“AI is likely to be either the best or the worst thing ever to happen to humanity, so there is huge value in getting it right.” This is how the renowned British physicist Stephen Hawking (1942-2018) summarized his call for “humanizing AI” shortly before his passing.

Hawking delivered these remarks at the 2016 inauguration of the “Leverhulme Centre for the Future of Intelligence” at the University of Cambridge, and they reflect part of the centre’s mission, which focuses on the future of AI while emphasizing the human dimension of its interests and research output.

Research published by the “Leverhulme Centre” brings together academics from fields as diverse as machine learning, philosophy, history, and engineering, with the aim of “exploring AI’s short-term and long-term potential and ensuring its use benefits humanity,” a goal Hawking highlighted at the founding of the centre, which was established with a £10 million grant.

At the opening ceremony, Hawking said, “We need AI to help us make better decisions, not to replace us,” adding that efforts must be made to “ensure that intelligent artificial systems have goals that align with human values, and to ensure that computers do not evolve autonomously in unwelcome directions.”

In recent research, experts at the “Leverhulme Centre” examined how the cultural stereotype of male dominance in the AI field reinforces gender bias, warning that the stereotype could lead to “fewer women in the field, further embedding biases into the algorithms that select new hires for any organization.”

In a study published on February 13, researchers at the “Leverhulme Centre” reviewed 142 films from the past century (1920-2020) that addressed AI and identified 116 characters portrayed as AI professionals, of whom 92 percent were male, whereas men actually make up roughly 78 percent of the sector. The researchers worry that science fiction shapes reality, and that cinema could entrench this imbalance and marginalize women in AI work.

Another initiative run by the centre between 2018 and 2022, the “Global AI Narratives” project, set out to review fictional narratives about AI, identify the values and interests driving them, and analyze their impact on public imagination and acceptance, as well as their influence on policymakers and governments.
