Humanizing artificial intelligence

2023 was the year of artificial intelligence. We saw breakthrough developments – both positive and negative, and some truly eye-catching, like the AI-generated photo of Pope Francis in a Balenciaga puffer coat. AI is already helping physicians detect cancer and researchers combat climate change. It is teaching pupils whose access to education would otherwise be severely restricted, and, thanks to tools like ChatGPT, it relieves all of us of chores such as writing tedious e-mails. Yet in the same year we saw how AI amplifies fake news and undermines serious journalism, steers armed drones and can be used to send innocent people to jail.

New generative AI models such as Meta’s LLaMA 2, Google’s Bard chatbot and, above all, OpenAI’s GPT-4 have both amazed and scared us. This rapid development has sparked a heated debate: optimists versus “doomers”.

Optimists see artificial intelligence as the path to a utopian future without suffering, disease or hunger and call for an unbridled acceleration of investment and research in the AI sector. “Doomers”, by contrast, demand a halt. In March 2023, numerous well-known scientists and entrepreneurs signed an open letter calling for an immediate six-month pause on the training of AI systems more powerful than GPT-4. Signatories included Elon Musk, Steve Wozniak, Yuval Noah Harari, Professor Stuart Russell of the University of California, Berkeley, and Yoshua Bengio, founder and scientific director of Mila – Quebec Artificial Intelligence Institute.

Yet this binary debate is a pseudo-debate that does not correspond to reality. The question is not whether to brake or to accelerate, but rather: what do we want to accelerate?

We need a code of ethics for AI

Instead of fearing that HAL 9000, the computer with (neurotic) human traits from the movie 2001: A Space Odyssey, could become reality, we must first demystify artificial intelligence. AI is not a sinister force lurking to enslave or even destroy humanity. And if we design AI correctly, it will never become one.

AI is a tool – one still in its infancy despite the rapid progress that has been made, and one we humans can shape with our decisions. We must therefore ask ourselves: what do we want to invest in? In lying chatbots, deepfake propaganda and "art" generators that steal from human artists? Or in AI that makes life easier for people – AI that gives us a healthier, more modern and more dignified existence? AI that doesn't eliminate jobs but supports the well-trained knowledge workers of tomorrow, because it relieves us of unpleasant tasks, helps solve global crises and steps in where human abilities fall short?

In other words, we should not only ask what benefits this new, reality-changing technology holds for us humans – we should rather ask what artificial intelligence has in store for a better humanity and how we can use it responsibly.

That’s why we need to change our perspective – and drive progress ourselves, guided by humanistic principles. This new approach does not require brakes, but rather an ethical framework that helps us make responsible decisions – because even with the best of intentions, we can create unintended negative effects. We therefore need guidelines that orient us in gray areas, that enable us to develop binding rules for specific applications, and that are lived in every corner of every research unit, company or government that works with AI.

What could a code of digital ethics look like?

For example, at Merck Group, we have an external advisory panel set up especially for this purpose as well as a code of ethics for handling data and AI. This Code of Digital Ethics is based on five ethical guidelines:

  • Justice: We stand up for justice in our digital offerings.

  • Autonomy: We respect the autonomy of every single human in our digital offerings.

  • Beneficence: We promote the needs and well-being of individuals and society through our digital offerings.

  • Non-Maleficence: We avoid doing harm through our digital offerings.

  • Transparency: We strive for the greatest possible transparency for our digital offerings.

These five principles are each defined in great detail and help us create specific guidelines for particular applications and develop products responsibly. In practice, the principle of “Autonomy” means, among other things, that we explain our algorithmic systems – because we are convinced that algorithmic systems should be explainable. Everyone who uses our digital services should know whether they are directly or indirectly affected by an automated decision.
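To make this concrete, here is a minimal sketch in Python of what such a disclosure could look like inside a product. It is an illustration only, not Merck's actual implementation: the names in it (`AutomatedDecision`, `eligibility-rules-v3`, the eligibility scenario) are hypothetical. The idea is simply that every algorithmic outcome carries a record of whether it was automated, plus a plain-language explanation that can be shown to the user.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    """Record of an algorithmic decision, kept so it can be disclosed to the user.

    Hypothetical example type – not an actual Merck system.
    """
    outcome: str         # what the system decided
    automated: bool      # True if no human reviewed the decision
    explanation: str     # plain-language reason, shown to the user on request
    model_version: str   # which system or rule set produced the decision
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def disclosure(self) -> str:
        """Plain-language notice presented alongside the outcome."""
        mode = "an automated system" if self.automated else "a human reviewer"
        return (
            f"This result was produced by {mode} ({self.model_version}). "
            f"Reason: {self.explanation}"
        )

# Hypothetical usage: an eligibility check in a digital health offering
decision = AutomatedDecision(
    outcome="eligible",
    automated=True,
    explanation="All required lab values were within the reference range.",
    model_version="eligibility-rules-v3",
)
print(decision.disclosure())
```

The design point is that the disclosure travels with the outcome rather than being bolted on afterwards – which is what makes a transparency commitment like this auditable in practice.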

The European Union is currently working on a regulation for providers and users of AI systems. Our Code of Digital Ethics will apply in the AI sphere as well and offer guardrails for our work. After all, we want to create a positive AI innovation ecosystem and foster public trust, not just manage risks.

“Moral” questions are still difficult

Digital ethics is a young field, and many questions remain open. Merck intends to take on a pioneering role: digital technologies require reliable ethical standards, and these standards should guide our day-to-day business in the digital age.

We have to be realistic here. Looking at the current debate surrounding ethics and AI, you might get the impression that AI ought to have better morals than humans themselves. But that demand is absurd.

For one thing, there is no “one true morality.” Today’s large language models such as ChatGPT may frequently give answers to moral questions that a human test subject would consider “morally correct”. However, according to a working paper published in September 2023 by researchers at Harvard University, this is largely an artifact of bias: both the texts on which the models were trained and most of the test subjects came from a cultural group the scientists affectionately abbreviated as “Western, Educated, Industrialized, Rich, and Democratic” – or WEIRD. And even WEIRDos like us can’t agree on one morality.

And even if there were “one true morality”, who wants to use AI that acts more morally than they do? For example, who would buy an autonomous car that refuses to drive because bikes are less damaging to the environment?

As previously mentioned, this debate is too young to be settled in a blog post. (For those who want to go deeper, Paul Bloom, Professor in the Department of Psychology at the University of Toronto, gives entertaining and in-depth insights in this article in The New Yorker.) These examples merely hint at how complex the issue is.

For our own practical purposes at Merck, however, we have realized that we can’t resolve all of these questions – at least not alone. But we can agree on minimum moral standards that help us make decisions and invest in AI that helps people. Our Code of Digital Ethics enables us to seize the opportunities offered by digital progress while minimizing the risks, and to promote AI innovation in a targeted manner.

Apart from human intelligence and artificial intelligence, a third parameter is also needed: material intelligence. In my next blog post, I will explain in more detail what that involves.

