ChatGPT has quickly become an integral part of many people’s daily lives, facilitating a wide range of tasks such as writing work emails, understanding technical and scientific concepts, and generating creative content. However, few users have truly mastered the tool, developed by OpenAI, and many fail to grasp its full potential. How many of them really know how the artificial intelligence behind the prompt box works, or even what “ChatGPT” stands for?
GPT, which stands for Generative Pre-trained Transformer, is one of the best-known large language models (LLMs). LLMs are natural language processing (NLP) algorithms that use neural networks, often built using transformer architecture, to generate human language. They are trained on huge text corpora and learn to predict the next word in a sequence, enabling them to generate coherent, contextually relevant text.
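To make the “predict the next word” mechanism concrete, here is a minimal sketch using the Hugging Face transformers library and GPT-2, an early, openly available model in the GPT family (ChatGPT itself is far larger, but the principle is the same): the model assigns a probability to every possible next token, and the most likely candidates are displayed.

```python
# Minimal sketch of next-word prediction, the core mechanism behind LLMs.
# Assumes the Hugging Face "transformers" library and the small GPT-2 model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The Eiffel Tower is located in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocabulary_size)
    logits = model(**inputs).logits

# Probability distribution over the token that would come next after the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```

Generating a full sentence simply repeats this step: the chosen token is appended to the text and the model predicts again, which is how coherent, contextually relevant passages emerge.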
LLMs are part of a new form of AI that is called “generative AI” because it can generate content, including text, images (with systems like DALL-E), music and even code. “Generative AI is a step further in the development of AI that offers different uses, bringing new opportunities but also new risks with major ethical challenges,” says Christine Balagué, Professor of Management Sciences at Institut Mines-Télécom Business School and an expert in AI and management. Together with her PhD student, Ahmad Haidar, she is studying the nature of the risks of generative artificial intelligence, and their work will be the subject of a scientific publication.
ChatGPT, the AI Act… major changes in the world of AI
Christine Balagué already had extensive expertise in the fields of digital technology and social media when she developed an interest in generative AI a year and a half ago. She has contributed to a number of multidisciplinary projects on the subject, and her recent publications include two reports, released in late 2023 and early 2024, which paved the way for her current research: ChatGPT: research evidence-based controversies, regulations and solutions and Un an après l’arrivée de ChatGPT : Réflexions de l’Obvia sur les enjeux et pistes d’action possibles face à l’IA générative (One year after the arrival of ChatGPT: Obvia’s reflections on the challenges and possible courses of action for generative AI) (see insert below). “Generative AI, particularly ChatGPT, inspires the imagination but can also instill fear. That’s why we carried out a literature review in different disciplines, to identify the challenges, risks and opportunities of this technology on the basis of the available data,” she explains.
As the researcher was developing the next phase of her research, a draft European law on artificial intelligence, known as the EU AI Act, emerged. The new legislation was formally adopted on May 21, 2024, by the Council of the European Union and aims to establish a legal framework to regulate the use of AI technologies, notably ensuring that the AI used in the EU is safe, transparent, ethical, and respectful of fundamental rights.
The AI Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal. This context, together with funding from the Good In Tech network for Ahmad Haidar’s thesis on responsible AI, provides fertile ground for exploring the specific risks of generative AI, and of ChatGPT in particular.
A risk classification framework based on real-life incidents
As with Christine Balagué’s previous studies, this research goes beyond conjecture: “we’re no longer vaguely positing that generative AI has consequences for disinformation,” she argues. “We’re trying to prove that it does, using real risk-analysis data.” To achieve this, Ahmad Haidar drew on a database of AI-related incidents provided by the Organisation for Economic Co-operation and Development (OECD), from which he selected incidents relating specifically to generative AI, identified in the year following the launch of ChatGPT.
Based on the 858 events identified, the PhD student and his supervisor have defined three main risk categories: those relating to data, governance and privacy, those relating to content, and those relating to usage. The impact and consequences of these risks are assessed at different levels: individual, organizational, societal, and environmental. Grounded in real-life data, this classification framework allows the two researchers to put their finger on previously unidentified risks.
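As an illustration, this two-axis framework (risk category crossed with impact level) can be represented as a simple data structure. The sketch below is hypothetical: the category and level names follow the article, but the field names and the sample incident are illustrative assumptions, not the researchers’ actual coding scheme.

```python
# Hypothetical sketch of the two-axis classification framework described above:
# each incident is tagged with a risk category and the level(s) at which its
# impact is felt. Field names and the sample incident are illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    DATA_GOVERNANCE_PRIVACY = "data, governance and privacy"
    CONTENT = "content"
    USAGE = "usage"

class ImpactLevel(Enum):
    INDIVIDUAL = "individual"
    ORGANIZATIONAL = "organizational"
    SOCIETAL = "societal"
    ENVIRONMENTAL = "environmental"

@dataclass
class Incident:
    description: str
    category: RiskCategory
    levels: list[ImpactLevel]

# Example: AI-generated fake news harms both society and a company's reputation
incident = Incident(
    description="Fabricated news article generated and spread at scale",
    category=RiskCategory.CONTENT,
    levels=[ImpactLevel.SOCIETAL, ImpactLevel.ORGANIZATIONAL],
)
print(incident.category.value, "->", [level.value for level in incident.levels])
```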
“Of course, AI-generated fake news poses a significant risk to social cohesion, national security, and political and economic stability, but this new research shows that there are many other dangers: in terms of psychology and well-being at the individual level, and in terms of reputation at the organizational level, something that had not previously been demonstrated,” says Ahmad Haidar. Furthermore, the two scientists stress that the environmental risk of generative AI is very poorly cataloged, “because it’s poorly perceived by economic players, even though it’s very real”. It is admittedly not easy for users to appreciate the computational resources required to run these systems.
A tool for the masses, delivered without instructions!
Christine Balagué has studied social media at length and believes that generative AI is following a similar adoption path, as described by the technology acceptance model (TAM). “Although the uses are totally different, both products are free, simple and accessible. The free version of ChatGPT has limited features and you quickly have to pay for a more powerful version, but strong competition is developing, such as Gemini or Llama,” says the researcher.
Based on two factors, perceived ease of use and perceived usefulness, the TAM predicts massive adoption of generative AI by businesses and the general public. “But very few people are made aware of the potential risks!” warns Christine Balagué. The researcher draws a parallel with driving a car: “Without taking lessons, you are very likely to have an accident, or even kill someone, but by learning to drive and respecting the rules, it’s a great tool that offers a huge range of possibilities. The problem is that, for the moment, generative AI doesn’t have its own highway code yet…”
Learning to “prompt”
While the AI Act imposes fairly strict regulations on AI, LLMs still benefit from a favorable context, supported, according to the two researchers, by a “techno-solutionist” perspective. The solution, pending more targeted measures on generative AI, is to make users aware of the risks and educate them, in particular, in the subtle art of the prompt.
The prompt, the starting point for any exchange with ChatGPT, is far from neutral and steers the answer provided. “Depending on the prompt, you can obtain very different results for the same request. There really are techniques for prompting and avoiding hallucinations [editor’s note: hallucinations occur when the model produces answers that seem credible but are in fact false or unfounded, such as erroneous facts or invented quotations],” argues Christine Balagué. For the AI to answer a query accurately, one possibility is to “teach” it, for example by feeding it verified content.
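To make that idea concrete, here is a minimal sketch of such a “grounded” prompt, assuming the OpenAI Python SDK; the model name, excerpt and question are placeholders, and instructing the model to answer only from supplied, verified text is one common technique for limiting hallucinations, not the researchers’ own protocol.

```python
# Minimal sketch of grounding a prompt in verified content to limit hallucinations.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name and excerpt are placeholders.
from openai import OpenAI

client = OpenAI()

verified_excerpt = (
    "The EU AI Act was formally adopted by the Council of the European Union "
    "on May 21, 2024."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the verified excerpt provided. "
                "If the excerpt does not contain the answer, say you do not know."
            ),
        },
        {
            "role": "user",
            "content": f"Verified excerpt:\n{verified_excerpt}\n\n"
                       "Question: When was the EU AI Act adopted?",
        },
    ],
)
print(response.choices[0].message.content)
```

Constrained to the supplied excerpt, the model is far less likely to invent facts, and an answer of “I don’t know” becomes a useful signal that the verified content does not cover the question.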
The researcher supports the idea of introducing courses on prompting in graduate schools, but deplores the fact that nothing similar is in the pipeline for the general public. “We don’t want to be pessimistic about generative AI,” she says. “ChatGPT helps improve productivity in a multitude of areas. The opportunities are numerous, but we also need to be aware of the risks, to assess them, and to be prepared to manage them at every level of society, from individuals to companies.”