Five AI and Disinformation Stories You Need To Read This Week
We’re Valent Projects, and we protect everyone from banks to elections from online manipulation. Every week, we curate the five stories that most piqued our interest.
Disinformation poses an unprecedented threat in 2024 — and the U.S. is less ready than ever
I’ll read any article that quotes Claire Wardle of Brown’s Information Futures Lab. This piece from NBC contains an extensive rundown of the disinformation threats developing as we head into 2024’s global election deluge. Some, like the risk of generative AI producing content at scale and platforms’ enforcement problems, are already well known. But others, such as Congressional investigations being used to harass disinformation researchers (Wardle among them), are less talked about - Amil Khan
AI heralds the next generation of financial scams
As the technical barriers to using generative AI fall, the number of threat actors able to impersonate anyone they choose unfortunately grows, as the Financial Times covered this week. Banks and financial institutions need to invest in and deploy software to counter the rapidly advancing efforts of threat actors; playing catch-up will not be easy, and should they fall behind the curve, their depositors will suffer. With new rules from the UK’s payment systems regulator coming into force in October 2024, requiring banks to reimburse scam victims even when the victims approved the payments, protecting depositors from scams will no longer be only a moral obligation but a financial incentive too. - Fergus McKenzie-Wilson
'We're Playing With Israelis' Minds': Inside Telegram Group Helping Thousands Spread Disinformation
Although the use of bot farms to game content distribution on social media is common, it is detectable and platforms tend to close it down quickly. So it’s interesting to see actors resorting to alternative methods to achieve the same results. This Haaretz article suggests the Muslim Brotherhood is relying on supporters to organise a coordinated campaign to influence the Israeli audience. What’s intriguing is that the operation leans on human labour rather than advanced AI tools, and this blend of human coordination with technical methods makes influence operations like it harder to detect. It also raises the question of whether technical manipulation will remain as central to “inauthentic activity” as it has been in the past. - Zouhir Al-Shimale
A new kind of climate denial has taken over on YouTube
Researchers at the Centre for Countering Digital Hate (CCDH) found that climate change deniers are pivoting away from claiming climate change isn’t happening and are instead attacking the policies meant to address it as ineffective, as The Verge covered this week. Their findings underscore how important it is to be able to track narratives accurately - Amil Khan
Who did the posting will soon matter more than what was posted - AI-generated content is raising the value of trust
The Economist - itself a trusted information brand - suggests that the flood of AI-generated content will push audiences back toward trusted information brands. It’s a theme we hear repeated quite a bit by the likes of Reuters, the BBC, CNN and the NYT. Is it true or wishful thinking? Only time will tell - Amil Khan