Artificial intelligence (AI) has become a key factor in the advancement of many fields, but it is also a new frontier in the development of neurotechnologies. Beyond its growing popularity in fields such as automation, content generation or data analysis, its use in the study of the human brain and related interventions raises significant moral and ethical questions about the potential of AI tools and their relationship with fundamental rights and freedoms.
Now, a recent study by a researcher at the Universitat Oberta de Catalunya (UOC) has examined this intersection between innovation and ethics in the context of the European AI regulation and its impact on the development of neurotechnologies. "Right now, all these technologies are moving forward very quickly, threatening the very essence of what makes us human, which is the ability to think and make our own decisions," said Miguel Ángel Elizalde, coordinator of the Research Group on International Relations and International Law (GERD) and member of the Faculty of Law and Political Science.
The study, which has been published in open access, examines emerging ethical concerns and warns of potential human rights violations in the fields of mental privacy, freedom of thought and individual autonomy. The researcher also examined whether the current legal framework is adequate or whether new specific rights will have to be established to address a type of technology that could eventually influence people's mental processes.
Neurotechnology innovation
Neurotechnologies are tools and applications used to measure and record various types of brain signals, such as electrical, magnetic, optical, acoustic or mechanical signals. They also include technologies capable of influencing neuronal processes.
For decades, these technologies have been developed primarily in clinical and experimental settings, with a particular emphasis on diagnosing neurological diseases and on scientific research. However, the use of AI has made it possible to push past previous limits and fully transform this field.
Until now, no tools were capable of managing the huge amount of information produced by the human brain in the form of complex signals. These processes were impossible to interpret without advanced processing systems. However, this has now changed completely. Thanks to the development of AI and its application to neurotechnologies, we can identify patterns, correlations and meanings in the data, turning seemingly chaotic signals into information that is useful and actionable.
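The kind of pattern-finding described here can be illustrated with a toy sketch. The example below is entirely invented for illustration and bears no relation to the study's methods or to any real neurotechnology pipeline: it generates synthetic oscillatory signals standing in for brain recordings, extracts simple spectral band-power features with an FFT, and separates two hypothetical "mental states" with a nearest-centroid rule.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 256          # sampling rate in Hz (typical order of magnitude for EEG)
DURATION = 2.0    # seconds per synthetic recording
T = np.arange(0, DURATION, 1 / FS)

def synth_signal(dominant_hz):
    """A noisy sinusoid standing in for one channel of brain activity."""
    return np.sin(2 * np.pi * dominant_hz * T) + rng.normal(0, 1.0, T.size)

def band_power(signal, lo, hi):
    """Total spectral power between lo and hi Hz, computed via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= lo) & (freqs < hi)].sum()

def features(signal):
    # Two classic EEG-style bands: alpha (8-13 Hz) and beta (13-30 Hz).
    return np.array([band_power(signal, 8, 13), band_power(signal, 13, 30)])

# Two hypothetical "mental states", distinguished by their dominant rhythm.
train = {"rest": [features(synth_signal(10)) for _ in range(20)],
         "task": [features(synth_signal(20)) for _ in range(20)]}
centroids = {label: np.mean(rows, axis=0) for label, rows in train.items()}

def classify(signal):
    """Assign the label whose feature centroid is nearest."""
    f = features(signal)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

print(classify(synth_signal(10)))  # a 10 Hz-dominated recording -> "rest"
```

The point of the sketch is the pipeline shape, not the particulars: raw, noisy signals are reduced to a small feature vector in which structure that looked like chaos (here, the dominant rhythm) becomes easy to separate.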
This capability has turned neurotechnologies from mere observation tools into systems capable of interacting with the mind, especially when combined with other technological advances that support them. Neuroimaging technologies are designed to study brain function and structure. Neurointervention technologies, for their part, are designed to act directly on neuronal processes by stimulating or modulating brain activity. Examples include certain brain-computer interface neurostimulators.
These brain-computer interface devices can operate as both open- and closed-loop systems: they translate the brain signals they receive and can also send signals back, enabling actions to be modified or adjusted. "For example, an aeroplane controlled by a brain-computer interface device sends information such as wind conditions or pressure directly to the brain, allowing it to change direction," said Elizalde, who is also a researcher at the Digital Transformation and Governance Research Centre (UOC-DIGIT).
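The closed-loop behaviour described above can be sketched as a repeated sense-decode-act cycle. The toy code below, echoing the aeroplane example, is purely illustrative: all names are invented, the "decoder" is just a bounded proportional controller, and no real BCI hardware or API is involved. What it shows is the loop structure itself, in which the effector's state is fed back so the next command can be corrected.

```python
def decode_intent(feedback, target_heading):
    """Stand-in for a decoder: turn towards the target, proportionally."""
    error = target_heading - feedback["heading"]
    return max(-5.0, min(5.0, 0.5 * error))   # bounded turn command, degrees

def apply_command(state, turn):
    """Stand-in for the effector (the aircraft) executing the command."""
    state["heading"] += turn
    return state

state = {"heading": 0.0}          # effector state, returned as feedback
for step in range(20):            # repeated sense -> decode -> act cycles
    turn = decode_intent(state, target_heading=30.0)
    state = apply_command(state, turn)

print(round(state["heading"], 1))  # converges on the 30-degree target
```

An open-loop system would stop after translating the brain signal into a single command; what makes the loop "closed" is that each new command is computed from the outcome of the previous one.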
Associated risks
Given their potential in the field of neurotechnology, AI applications present many risks linked to the possibility of using them to interfere with people's thoughts and mental states, such as emotions, intentions or preferences, which is cause for major ethical and moral concerns. On the one hand, there is the idea that, without appropriate regulations, these tools could be used to extract sensitive information from a person's brain or affect their behaviour without their full knowledge.
Elizalde warned that "when used in combination with AI, neurotechnologies could potentially make inferences about the subjective content of a person's mind and affect their mental processes, in turn influencing their behaviour." As a result, they could potentially identify emotional states, beliefs or private preferences, with huge implications and consequences in all social and professional spheres.
On the other hand, neurotechnologies can bring extraordinary benefits in fields such as medicine, mental health or neurorehabilitation. From treatments for degenerative diseases to spinal cord injuries, their potential applications could be life-changing for millions of people.
Regulatory framework and the AI Act
Against this background, the European AI Act addresses some of these scenarios by banning systems designed to manipulate behaviour through subliminal techniques or deception. However, this regulation could stand in the way of research and development in Europe, shifting innovation towards regions with more permissive legal frameworks.
So far, technologies in the research and development stage are not covered by the regulation but, as its implementation progresses, European and foreign manufacturers wishing to market their products in Europe will be subject to stricter conditions, which may affect their competitive position in international markets.
"There's a risk that it will make innovation in Europe more difficult than in other parts of the world; it's a legitimate concern. However, we mustn't lose sight of the risks of not regulating," said Elizalde, emphasizing that using AI and neurotechnologies to "rewrite a person's mind" would be banned under the European regulation.
"This prohibition is based on the aim of protecting people's right to freely decide against any AI system with behaviour-altering capabilities that circumvent rational control, which is considered an unacceptable risk of AI," said Elizalde.
Designing new rights
Due to the possible risks and conflicts associated with their use, experts have raised many questions about the rise of these technologies and the need for new rights, known as neurorights, to protect mental privacy, personal identity and free will against potential technological abuses.
"The regulation clearly applies to AI and is not specifically designed for neurotechnologies. However, neurotechnologies are increasingly using AI and working very closely with this technology," said Elizalde.
However, in the UOC researcher's opinion, current human rights and the European regulatory framework could be sufficient if their interpretation is tailored to technological contexts, as they can be interpreted in a way that protects people from invasions of privacy or attacks against their freedom of choice. "Above all, they protect the rights to freedom of thought, privacy and integrity in its various forms. This means that the establishment of new neurorights would not be essential," said Elizalde.
According to this view, new legal categories would not be required. Instead, existing ones would have to be adapted to address the risks associated with the application of AI to the brain. This interpretation eliminates the risk of fragmenting the human rights system and supports the idea that fundamental guarantees must evolve at the same pace as society.
This project, which is part of the UOC's "Ethical and human-centred technology" research mission, supports UN Sustainable Development Goals SDG 9 (Industry, Innovation and Infrastructure) and SDG 16 (Peace, Justice and Strong Institutions).
Transformative, impactful research
At the UOC, we see research as a strategic tool to advance towards a future society that is more critical, responsible and nonconformist. With this vision, we conduct applied research that's interdisciplinary and linked to the most important social, technological and educational challenges.
The UOC’s over 500 researchers and more than 50 research groups are working in five research centres focusing on five missions: lifelong learning; ethical and human-centred technology; digital transition and sustainability; culture for a critical society; and digital health and planetary well-being.
The university's Hubbik platform fosters knowledge transfer and entrepreneurship in the UOC community.
More information: www.uoc.edu/en/research