AI competes for our attention because our attention has been commodified. As our entire lives revolve more and more around the attention economy, what can we do to restore our autonomy, reclaim our privacy, and reconnect with the real world?

Computer scientist Fabien Gandon and research engineer Franck Michel are experts in AI, the Web, and knowledge systems. Fabien is a senior researcher at Inria (Institut national de recherche en sciences et technologies du numérique), specializing in the Semantic Web, while Franck focuses on integrating and sharing data through Linked Open Data technologies.

Together, they’ve written Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance. Their research unpacks how digital platforms monetize our attention at an unprecedented scale, fueling misinformation and division, threatening democracy, and harming our emotions and well-being.

THE CREATIVE PROCESS

So, on the topic of how to regulate the attention market, there is the EU AI Act and the UK Online Safety Act. The US has the National Artificial Intelligence Initiative Act, the proposed Algorithmic Accountability Act, and the Blueprint for an AI Bill of Rights. China has the New Generation Artificial Intelligence Development Plan (AIDP) and regulations on algorithmic recommendation services, among other approaches to governance. But how are they going to be implemented? And how do we strengthen those human guarantees?

FABIEN GANDON

The fact that technologies are being used and combined to capture our attention is concerning. This is currently being done with no limitations and no regulations. That's the main problem. Attention is a very private resource. No one should be allowed to extract it from us by exploiting what we know about the human mind and how it functions, including its weaknesses. We wrote this paper as a call to regulate the attention market and prevent algorithmic emotional governance.

I think one of the key aspects right now, when we talk about AI regulation, is the notion of not only observability but also auditability. We need to check these technologies against their original, documented objectives. For each of these techniques, we need actors to take at least two steps. The first is to publicly document the objective of the technique: what do they want to achieve with it? Then, we need them to make the technique observable, so that we can be sure it is actually pursuing the documented objective. In other words, we need each of these players to state, "I want to use this technique to recommend better content to you," and then show us the data so that we can verify that the technique is actually achieving that and not something else.

These two steps are extremely important in terms of transparency. If you do not know what the company's intended objective is, and if you cannot check whether that is the objective actually pursued, you have a black box, and control can only be attempted after the fact. It is also extremely important to recognize that these different types of techniques do have a combined effect.
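To make the two-step idea concrete, here is a toy auditing sketch: a publicly documented objective, plus observable per-recommendation data that lets a third party test whether the score the system optimizes actually tracks the stated goal. Every name, field, and number below is hypothetical; this illustrates the principle, not any regulator's or platform's actual tooling.

```python
# A toy auditing sketch. All names, fields, and numbers are hypothetical.
# Step 1: the actor publicly documents the objective of the technique.
documented = {
    "technique": "feed_ranking_v3",
    "stated_goal": "recommend the most relevant content to the user",
}

# Step 2: the technique is made observable. Per-recommendation records
# expose the score the system optimized alongside independent
# measurements an auditor can correlate with it.
logs = [
    {"optimized_score": 0.91, "measured_relevance": 0.20, "session_minutes": 38},
    {"optimized_score": 0.35, "measured_relevance": 0.85, "session_minutes": 6},
    {"optimized_score": 0.77, "measured_relevance": 0.30, "session_minutes": 25},
]

def covariance(records, x, y):
    """The sign tells us whether y tends to rise as x rises."""
    n = len(records)
    mean_x = sum(r[x] for r in records) / n
    mean_y = sum(r[y] for r in records) / n
    return sum((r[x] - mean_x) * (r[y] - mean_y) for r in records) / n

# If the optimized score tracks session time but not measured relevance,
# the pursued objective contradicts the documented one.
if (covariance(logs, "optimized_score", "measured_relevance") <= 0
        < covariance(logs, "optimized_score", "session_minutes")):
    print(f"Audit flag: '{documented['technique']}' claims to "
          f"'{documented['stated_goal']}' but its scores track engagement.")
```

The point of the sketch is only that once both pieces exist, a mismatch between the stated and the pursued objective becomes a checkable claim rather than a matter of trust.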

So, to put it bluntly, our brains are being hacked, and we can identify at least two ways this is happening. The first category includes techniques explicitly designed to exploit our cognitive biases. For instance, when applications send notifications or tell you how many people liked what you published, they are tapping into the dopaminergic pathways of your reward system and into your need for social approval. Some of these techniques are known as dark patterns, and they are designed to nudge us into actions we would not have taken otherwise.

The second category is based on machine learning trained on massive behavioral data from web platforms; these systems learn how to recommend content to us. However, if their objective is not actually to recommend the best content but to increase or prolong our engagement on the platform, then they can end up selecting content that is not necessarily interesting, but that triggers negative emotions, which are well known to keep us engaged. That is how you end up with a system polluting your days with negative emotional content just to keep you connected. In both cases, it is important to document what the objective is and to be able to observe what is actually being done, so that we can hold people accountable and say, "Look, you told us you wanted to do A, but you are doing B, and we do not agree with that."
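The difference between the two objectives can be shown in a few lines. The sketch below is deliberately simplified and entirely invented (the item fields, scores, and weighting are illustrative, not any platform's real model): the same feed is ranked twice, once by predicted relevance and once by predicted engagement, and the emotionally charged item wins only under the second objective.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float          # predicted match to the user's interests (0..1)
    predicted_minutes: float  # model's engagement estimate
    arousal: float            # emotional-charge signal (0..1), invented here

def rank_by_relevance(items):
    """Documented objective: 'recommend the best content'."""
    return sorted(items, key=lambda i: i.relevance, reverse=True)

def rank_by_engagement(items):
    """Pursued objective: keep the user on the platform. Emotionally
    charged items predict longer sessions, so they float to the top
    even when their relevance is low."""
    return sorted(items, key=lambda i: i.predicted_minutes * (1 + i.arousal),
                  reverse=True)

feed = [
    Item("Calm tutorial you actually asked for", 0.9, 4.0, 0.1),
    Item("Divisive rage-bait thread", 0.2, 9.0, 0.9),
]

print([i.title for i in rank_by_relevance(feed)])   # tutorial first
print([i.title for i in rank_by_engagement(feed)])  # rage-bait first
```

The two functions share everything except the key they sort by; the harm described here comes entirely from which objective the operator chooses to optimize, which is why documenting and observing that objective matters.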

FRANCK MICHEL

What are the goals of the platforms? If you look at what the main social media platforms tell us they will do for us, they promise to find what interests us on the web. However, in doing so, they train algorithms to recommend information that conforms to our tastes and beliefs, which is very convenient and comfortable. Too comfortable, in fact.

This tends to lock us into a filter bubble, a comfortable space where we are no longer confronted with contradictions, or with facts or ideas that challenge our opinions. It has a very concerning consequence: agenda setting. The platforms decide not what we have to think, but what we have to think about. By creating a vision of the world that is specific to us and almost unique for each individual, these techniques lock us into a kind of biased reality. The point is that if we all observe the same reality and the same facts, we can engage in debate and argument about the causes of the various issues that occur in the world and how to solve them.

But what happens if we each see a different reality? If we do not observe the same facts, then debate becomes nearly impossible. There is no longer a possibility of free, contradictory debate within the public space, which is essential to the functioning of democracies. By locking us into a comfortable informational space, these platforms undermine the principles of democracy, which rely on our ability to discuss differing opinions about the same facts.

THE CREATIVE PROCESS

And it's quite ominous to think about systems designed to tell us what to think about. Because we are feeling, emotional creatures, our mental and emotional landscapes are shaped by where we increasingly spend our time and by what entertains us.

We believe we have a choice, but often there is no real choice. When we are flooded with misinformation, the journalistic landscape becomes overwhelming; too much contradictory information leads people to lose faith in the democratic process. This contributes to the rise of populism and causes individuals to disengage. It disenfranchises them; they lose the desire to vote and to engage politically. This is the kind of playbook that autocrats can use to take power easily. We have seen this in Europe, along with a rise in populism elsewhere in the world. As history teaches us, this is a dangerous moment to be in.

GANDON

I completely agree. What is actually frightening from a historical perspective is that techniques whose initial objective was simply to make advertising effective, through their widespread capture of attention and exploitation of cognitive biases, combined with the network effects of these applications, now have a worrying, deleterious effect on our societies. We could mention the polarization of opinion and the dissemination of false information, or fake news. We could also view this as a mechanism for fragmenting our societies. It is splitting people apart.

It's important to stress that there is danger in how our attention is captured and focused on things we do not choose. That's one part of the danger we mentioned. But there is another danger, which I think is very relevant to us today, because this podcast is called "The Creative Process."

AI’s & Social Media’s Impact on Creativity and Imagination

This is the perfect place to mention another worrying impact of attention-capturing techniques: their impact on the creative process itself. We know that concentration, boredom, intellectual wandering, and daydreaming are essential to creative thinking, and many of us have experienced the flash of a sudden brilliant idea in the middle of a moment of relaxation. By stealing these moments, attention-capturing systems hinder creativity. In our opinion, this should be a major concern for all activities and professions requiring concentration, creativity, and imagination. One might ask what attention-capturing systems are doing to areas such as politics, health, education, and artistic creation. In other words, I think The Creative Process Podcast is the right place to address the capture of attention.

I really agree with your analysis. The philosopher and sociologist Hartmut Rosa has written about how acceleration can lead to a form of alienation. Acceleration is a way of alienating people, and AI tends to accelerate things, especially through the techniques we mentioned.

They aim to provoke our reflexes and immediate reactions, which does not allow time for contemplation. They are replacing reflection with reflexes.

The Case for Selective Acceleration

There is a case to be made here, however: not everything should be accelerated. We need to choose which tasks we want to automate and accelerate, and we should select wisely; for example, we might want to accelerate doing the laundry, but not necessarily other tasks. In some domains, it is already recognized that a deliberative process should take place. In France, when you buy a flat, there is a legal cooling-off period during which the decision cannot be finalized. It exists to give you time to think twice about your action, because it is a significant decision and investment. That time should not be shortened or accelerated, and there is a good reason for that. I firmly believe we need to reclaim our time.

Human-Centered AI: Augmented Intelligence

The second point that comes to my mind is that in the 1950s, when the term "artificial intelligence" was coined, another idea, "augmented intelligence," also emerged, associated with Douglas Engelbart. The point made at that time was that humans should be at the center of everything: we should examine what we want to augment in human activity and what is beneficial. Humans should always be at the center of the picture. That is the significant difference between AI, artificial intelligence, and IA, intelligence augmentation: with the latter, the human is at the center of the picture from the beginning.

We are generating fast content, but the quality is not there, and the value linked to how the content was made is lost. It is completely different, at least in my view, to contemplate a fake ocean video versus watching a video that I know was recorded by a human who had to travel, dive, and wait to capture that rare species. The value in the two cases is completely different. Yet there is this tendency to pursue acceleration again: just as we have fast food, we now have fast content.

For the full conversation, listen to the episode.

This interview was conducted by Mia Funk with the participation of collaborating universities and students. Associate Interviews Producers on this episode were Sophie Garnier and Devyn Daniele. The Creative Process & One Planet Podcast is produced by Mia Funk.

Mia Funk is an artist, interviewer and founder of The Creative Process & One Planet Podcast (Conversations about Climate Change & Environmental Solutions).
Listen on Apple, Spotify, or wherever you get your podcasts.