ARTIFICIAL Intelligence (AI) can bring about favourable shifts in society, such as higher levels of access to education, better healthcare, and increased productivity.
But even as AI offers these advantages, it raises significant ethical and societal concerns that need to be taken into account, such as privacy, security, job displacement, and information disorder.
In 2023, a picture of Pope Francis wearing a puffer jacket circulated online, astonishing many who believed it to be real. What most who saw the photo did not know was that it was AI-generated.
When individuals are unable to distinguish AI-generated content from reality, problems such as misinformation arise. When AI content is deliberately created and used to deceive people, that is disinformation.
Experts have expressed concerns about the prevalence of AI-generated content and the new ways it could play out in 2024. Some of these include:
1. U.S. Elections
Just as disinformation and misinformation played out in Nigeria’s 2023 general elections, disinformation and AI experts have projected that the use of AI for misinformation and disinformation will pose a threat to the 2024 U.S. elections and to democracy. Malicious actors are expected to increasingly use generative AI to spread disinformation, affecting not only the United States but also other countries heading into their own elections.
2. Voice-cloning scams and impersonation
Apart from using AI-generated content to amplify misinformation, scammers are also devising new methods of using it to defraud people of their money. The old trick of calling and pretending to be a relative or friend has gone a step further, as scammers now use AI to clone the voices of individuals in order to deceive their victims.
This also plays out on social media, where scammers use AI-generated images of women and AI-generated text to steal money from people on dating and social media apps.
Apart from being used to scam people, AI-generated content is also being used to impersonate them.
Egemba Chinonso Fidelis, a popular Nigerian doctor and social media influencer, was impersonated with AI in September 2023, when a Facebook page (archived here) posted deepfake videos of him advertising a cream that purportedly cures joint pain and arthritis.
Dr Fidelis disclosed this weeks after the video had gone viral, stating that the manipulated footage was taken from a video of him recorded shortly after he underwent surgery earlier in the year.
This also happened during the recent off-cycle governorship elections in Nigeria, when a deepfake video of Samuel Anyanwu, the People’s Democratic Party (PDP) gubernatorial candidate in Imo state, claiming he had stepped down from the race and declared his support for the All Progressives Congress (APC) candidate, Hope Uzodinma, went viral until the PDP candidate debunked the claim.
3. AI cyber-bullying and cyber harassment
Cyber-bullying and cyber harassment often manifest as a form of fake news, employing tactics such as rumours, post-truths, lies and sometimes mal-information. Politicians, government officials and public figures are often victims of cyber-bullying, as seen in the previous Nigerian elections.
Cyber-bullying might take a different turn in 2024, as AI-powered tools are already being used by social media trolls to magnify their abusive messages and more readily target vulnerable people. AI intensifies cyber-bullying by creating realistic fake content, posing a heightened threat to young individuals and their families.
While malicious actors use AI for cyber-bullying, AI tools can also help prevent it by detecting and flagging abusive content, mitigating its harmful effects on online forums, social media apps and websites.
Way forward…
In October, media reports stated that the National Broadcasting Commission (NBC) sent a bill to the National Assembly seeking to repeal and re-enact the NBC Act, CAP L11 laws of the Federation of Nigeria (2004), which, when passed, would enable the NBC to regulate social media.
The Nigerian government earmarked N24.5 million for the ministry of Information and National Orientation to tackle fake news out of the N27.5 trillion budget for the 2024 fiscal year.
The funding is expected to support the agencies in facilitating “special enlightenment campaign on government policies and programmes; testimonial series to gauge impact of government policies on the citizenry, advocacy against fake news, hate speech, farmers-herder clashes, banditry, rape etc.”
The FactCheckHub reports that experts said the budget allocation for combating misinformation was insufficient and showed a lack of political will to stem its rise.
The report added that the Federal Government was not ready to complement the efforts of fact-checking organisations working hard to minimise the impact of misinformation and fake news in the country. This, the report said, could be deduced from the ministry’s goal outline, which frames its task as the fight against the increasing prevalence of misinformation and disinformation, including AI-generated content and deepfakes.
Kunle Adebajo, a fact-checker and Investigations Editor at Humangle, noted that it is the responsibility of lawmakers, regulators, and tech corporations to ensure that the distinction between reality and fiction is not entirely blurred.
“The thing with artificial intelligence technology is that it is getting better at an exponential rate. And it is almost impossible to distinguish between content that is generated by AI and that which is made by humans,” Adebajo stated.
He added: “Some of the loopholes fact-checkers look out for in generative AI material are getting ironed out as updates are made to the applications. What this means is that the burden of ensuring the line between fact and fiction is not muddled completely now lies with the regulators, policymakers, and tech companies.”
He said that although the technology could be used in many positive ways, it could also be used for nefarious activities if not regulated properly.
“If more and more people have easy access to advanced deepfake technology, yes it could be used in a lot of positive ways. But it will also be used for fraud, cybercrime/warfare, unethical propaganda, and to spread hate speech and misinformation. We’ve already seen some examples of this and it will only get worse unless drastic measures are taken,” he said.
This report is republished from The FactCheckHub.
Fatimah Quadri is a Journalist and a Fact-checker at The ICIR. She has written news articles, fact-checks, explainers, and media literacy pieces in an effort to combat information disorder.
She can be reached at sunmibola_q on X or fquadri@icirnigeria.org.