“Artificial consciousness is like when a great actor plays Hamlet. He doesn’t become Hamlet; he only plays the role. In the same way, there is no reason to think that what we talk about as artificial consciousness actually is consciousness. And that is also where much of the value lies in humans – and in animals.”
This is what Professor Einar Duenger Bøhn at the University of Agder (UiA) says. The prolific professor has taken time between various book projects and podcasts to write the book Teknologiens filosofi (The Philosophy of Technology). The book is now published by Cappelen Damm.
We need a cat
The reference to animals is not accidental. The book begins with a description of Selina, Duenger Bøhn’s ‘absolutely fantastic’ black cat. In the preface he writes:
Why do I value Selina’s naturalness and low-tech nature so much? Would it not be more practical to have an artificial digital robot cat that looked and behaved the same way? It would need no food or drink, and no litter box in the hallway. And we could simply turn it off when going away for the weekend.
“But no”, Duenger Bøhn quickly concludes. “We need a life partner, someone who feels the way we feel and is the way we are. That is not something an imitation can replace; it must be something real”, he says.
Ethics is a challenge
With Teknologiens filosofi, Duenger Bøhn wanted to write a book he felt was missing from bookshelves.
“I address the basic problems surrounding new technology. From what information is, to whether it is possible to create artificial people, and whether we should use technology to create a new and better human species”, he says.
In the book, he leaves it to the reader to decide what is right and wrong in these questions, largely because he thinks so many books have already been written about ethics and technology.
Duenger Bøhn is, however, far from uninterested in the topic. As a member of the Artificial Intelligence Research Centre (CAIR) at UiA, he works to create an artificial intelligence that acts morally.
Among other things, it is about creating an artificial intelligence that can take different interests into account rather than just pursuing a single goal.
“What makes it difficult is that we do not quite know what morality and ethics are. It is difficult to find the data and the algorithms that will give the computer ethical training. It is perhaps more of a philosophical challenge than a technical one”, he says.
“Think for yourself”
The philosopher hopes that the book will provide a holistic view of new technology, and that the reader will be challenged to think about how we use technology.
“‘Dare to think for yourself’ is the slogan of the Enlightenment. There is something very robotic about just doing what others say and think. It is convenient, but also the root of all evil. I hope both students and people with backgrounds other than philosophy will read the book, gain new insights, and form their own opinions.”