"Artificial intelligence should help and augment what people do, not replace them. Technology should be a support that gives us new opportunities," says Stefan Schmager.
He recently defended his doctoral thesis at the University of Agder (UiA).
Schmager collaborated closely with NAV, the Norwegian Labour and Welfare Administration, during his doctorate. He investigated how citizens and employees reacted to a system based on artificial intelligence (AI) used to follow up people on sick leave.
What surprised him most was how trusting Norwegians were towards the state and governmental organizations.
"When I presented the results at conferences in the US, people were astonished. They couldn't believe that people trusted the authorities. In Norway, people understand that the public sector serves an important role," says Schmager.
Trust facilitates innovation
Norwegians' trust in public authorities makes our society function well, according to Schmager, who himself is from Germany.
"Trust facilitates innovation. Well-intentioned initiatives aren't as easily shelved due to a lack of understanding or obstinacy. Still, I think it's healthy not to be naive and to question the decisions being made," he says.
Participants in the study were positive about NAV handling their data using artificial intelligence. They appreciated transparency about how the data would be used and understood that AI processing could save time and resources in a way that benefits all.
To give an example: If you are on sick leave, artificial intelligence can help your caseworker decide whether a follow-up meeting is necessary or not.
"People understood they were contributing to something bigger. By letting NAV use their data, they free up resources that can be used for others in greater need of help," explains Schmager.
The doctorate is part of the AI4Users project, funded by the Research Council of Norway (NFR), within the Human-Centered AI research group. It is also part of an extensive collaboration between NAV and UiA.
"In short, human-centered AI is about using AI as a tool that augments our abilities to perform our tasks, rather than technology taking those tasks from us," says Schmager.
Employees positive
Schmager also interviewed 19 NAV caseworkers. Most of them were positive about AI but had clear preferences.
"They saw great potential in AI handling routine tasks, freeing up more time for the people they are meant to help," he says.
As one NAV employee told Schmager: "We could spend our time on the most important cases, those who truly need it. People with few resources who cannot take care of themselves."
Caseworkers wanted AI that could:
- find important information faster
- help prioritize cases
- take care of time-consuming administration
- give them more time for the most challenging cases
12 rules for safe AI use
"In the private sector, we want AI to adapt to us. But in public services, everyone must be treated equally. The system shouldn't learn and copy one caseworker's habits," Schmager explains.
For example, if a caseworker frequently rejects applications from young men, the AI system should not start suggesting the same. Every case should be assessed equally based on the regulations.
Schmager developed twelve such principles for using AI in the public sector – six for citizens' needs and six for employees' needs. You can read them all in the fact box on this page.
Few guidelines for the public sector
"When the researchers from UiA approached them, NAV was very open to and interested in the opportunity. They said, 'We plan to use AI, but we know there are risks, and we would appreciate it if you could help us do it right,'" says Schmager.
This research fills a gap. While many major tech companies have developed their own AI rules, there are almost no guidelines for the public sector.
"Private companies have to make money. The public sector serves the people. That is why different rules are needed," says Schmager.
"Must be done responsibly"
"NAV is rapidly advancing in digital development, and the collaboration with UiA and Stefan has provided our organization with valuable knowledge and insights," says Arve Haug, senior adviser at NAV.
He explains that NAV relies heavily on trust in its digital services, making collaboration with UiA crucial for ensuring the services are perceived as safe and fair.
"There are obviously many opportunities to use AI in our services, but this must be done responsibly. That is why our collaboration with UiA is so important," says Haug.
A warning
Schmager's study shows that Norway and the Nordic countries are leading in responsible AI use, but he warns against rushing into it.
"Don't use AI just because everyone else is doing it. First understand the problem you wish to solve, then determine if AI is the right tool," he advises.
The principles he developed can be applied by any public entity looking to implement AI. They can also be adapted for other countries, even where levels of trust in public institutions differ from Norway's.
12 principles for public use of artificial intelligence:
Citizen-focused Design Principles:
- Balancing AI Benefits for the Public Good and Individual Freedom
(AI systems should clearly explain how they benefit society while respecting individual rights and freedoms)
- Comprehensive, Inclusive, and Necessary Data
(Only collect and use the personal data that's actually needed, and explain what data is being used and why)
- Reconciliation with Governmental Mandates and Process Integration for AI Systems
(Make it clear how AI fits into government processes and legal requirements, using simple language)
- Gradual Information Provision
(Give people an overview first, then let them access more detailed information if they want it)
- Voluntary Feedback Mechanisms
(Provide easy ways for citizens to give feedback and explain how that feedback will be used to improve services)
- Appropriate Consent Practices
(Get proper permission from citizens when using their personal data, especially when data ownership isn't clear)
Employee-focused Design Principles:
- Sensible Resource Allocation (Help public service workers focus their time on people who need the most support)
- Automating Repetitive Administrative Tasks
(Let AI handle repetitive work so employees can focus on more important tasks)
- Consistency over Personalisation
(Ensure AI systems treat everyone fairly rather than adapting to individual employee preferences)
- Providing Legal Reassurance
(Give employees clear information about data sources and legal compliance to help them do their jobs confidently)
- Voluntary Feedback Mechanisms
(Provide easy ways for employees to give feedback about AI systems and explain how improvements will be made)
- Enabling Human Decision-Making Power
(Keep humans in control of important decisions, with AI serving as a helpful tool rather than making choices independently)
Source:
Human-Centered Artificial Intelligence: Design Principles for Public Services