Photo: Jakub Porzycki
The use of artificial intelligence (AI) in political campaigns needs tighter regulation to prevent voter manipulation, an Auckland legal academic has warned.
The National Party this week admitted to using AI to create images for its election campaign.
On Thursday, Privacy Commissioner Michael Webster released a set of expectations for the private and public sectors to apply when using generative AI.
But University of Auckland senior law lecturer and AI law expert Nikki Chamberlain told RNZ’s Morning Report that more was needed.
“I think we need transparency, and we need to know where the images and videos produced as part of political campaign marketing come from, because it’s about making sure the voter is not being manipulated,” she said. “But I also think we should have laws that actually require it.”
Chamberlain, who is also a co-editor of the latest (third) edition of the Privacy Law in New Zealand textbook, said the Privacy Act 2020 has provisions setting out how people’s personal information can be used, shared, and analysed, but it has some flaws.
“It’s not a high watermark like the General Data Protection Regulation (GDPR) in the EU. We just saw Facebook fined €1.2 billion last week for GDPR violations, and that’s a real fine. Under the Privacy Act, our maximum fine is NZ$10,000, so the Privacy Act needs to be enforced more strongly.”
University of Auckland Law Lecturer Nikki Chamberlain
Photo: Supplied
She said New Zealand also needed more specific rules on the use of artificial intelligence “to make sure that the voter is not manipulated and that there is transparency”.
According to her, there are two main types of AI used in political campaigns: machine-learning algorithms, like those seen in the Cambridge Analytica scandal, and generative AI, like ChatGPT and Midjourney.
The former was a concern because it could be used to target users with political ads to try to get them to vote for a particular party or candidate.
“There was a Facebook app, a quiz, and people participated … their information and their friends’ information was pulled into that and then it was used by Cambridge Analytica to essentially determine voter preferences,” Chamberlain said.
“If you find that someone often clicks on links related to crime, for example, or is particularly interested in crime-related issues, then maybe they are fearful, and you can target them with fear-based advertising around an increased police presence and being tough on crime.”
Generative AI, on the other hand, takes a collection of data, such as strings of words or images, and uses it to create new data that closely resembles the material it was fed, she said. This means a person can request an image or document of a certain type, and the AI will generate what someone might expect to see, based on the previous examples the software has access to.
This is the technology behind ChatGPT, which quickly produces text documents, and programs like Midjourney, which create images; the latter is exactly what National used for its advertising campaign.
On the face of it, Chamberlain said, the only harm is putting stock-photo actors out of work, but there is potential for more serious abuse, such as AI-generated video known as deepfakes.
“The worst part about it is that if it’s not kept under control, you can essentially have images of people saying things they never said, and the voters don’t know [that it’s not real].
“It’s easy to get caught up in a narrative that may not be accurate.”
She said any campaign material made using artificial intelligence should say so clearly, for transparency, and the law needed to be updated to prevent more nefarious uses of the emerging technology.
“I think the biggest thing I would say is to recommend that any political party using artificial intelligence should state in its ads that it has done so, so people know where the image came from and that it’s not real.”
She also had a message for voters to “check your sources and go to multiple sources to make sure the information is accurate.”