Could ChatGPT Manipulate the 2024 Election?
The increasing use of ChatGPT has raised questions about its potential impact on politics, as well as concerns about bias. Some fear the tool may exhibit biases against certain political ideologies, movements, or identities, prompting debate over the ethical considerations and responsible use of such technology.
The rise of generative AI tools like ChatGPT has increased the potential for a wide range of attackers to target elections around the world in 2024. Generative AI tools can be used to create highly convincing fake news articles, social media posts, and other forms of content. This could be leveraged to spread false information about candidates, political parties, or election processes, influencing public opinion and voter behavior.
Both state-linked hackers and allied so-called “hacktivists” are increasingly experimenting with ChatGPT and other AI tools, enabling a wider range of actors to carry out cyberattacks and scams, according to one cybersecurity company’s annual global threat report. This includes hackers linked to Russia, China, North Korea, and Iran, who have been testing new ways to use these technologies against the U.S., Israel, and European countries.
With half the world’s population set to vote this year, the use of generative AI to target elections could be a “huge factor.” Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries are highly likely to use them to conduct information operations against elections in 2024.
If state-linked actors continue to improve their use of AI, it is “really going to democratize the ability to do high-quality disinformation campaigns” and speed up the tempo at which they can carry out cyberattacks.
Some of the tech companies developing AI tools have been sounding the alarm themselves.
Beyond disinformation, AI-generated emails and messages could be used in phishing attacks targeting election officials, candidates, or political parties. Such attacks could aim to steal sensitive information, disrupt campaign activities, or compromise election infrastructure.
Generative AI is also being used to create deepfake videos: realistic but fabricated footage showing individuals saying or doing things they never actually did. These videos could be used to discredit candidates or manipulate public perception.