The rise of generative AI brings new dangers that current warnings largely fail to address.
While job losses and fake content are risks that can be mitigated, the biggest threat is the emergence of targeted interactive generative media.
This new form of media can be highly personalised, fully interactive, and potentially more manipulative than any form of targeted content to date.
Targeted generative advertising involves personalised informational content created on the fly by generative AI systems. These systems combine influence objectives provided by third-party sponsors with personal data about the specific user being targeted.
As one expert notes, “Every detail down to the colours, fonts, and punctuation could be personalised to maximise the subtle impact on the individual user.”
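To make the mechanism concrete, the sketch below shows, in deliberately simplified Python, how such a system might assemble a generation request. Every name, field, and function here is a hypothetical illustration, and the actual call to a generative model is omitted.

```python
# Illustrative sketch only. All names and fields are hypothetical; the call to
# a real generative model is omitted. The point is to show how a sponsor's
# influence objective and a user's personal data could be merged into a single
# generation prompt.
from dataclasses import dataclass

@dataclass
class SponsorObjective:
    product: str
    desired_action: str  # e.g. "start a free trial"

@dataclass
class UserProfile:
    interests: list[str]
    recent_searches: list[str]

def build_ad_prompt(objective: SponsorObjective, profile: UserProfile) -> str:
    """Assemble a prompt that tailors promotional content to one specific user."""
    return (
        f"Write a short promotional message for {objective.product}. "
        f"Goal: persuade the reader to {objective.desired_action}. "
        f"Tailor the wording, tone, and examples to someone interested in "
        f"{', '.join(profile.interests)} who recently searched for "
        f"{', '.join(profile.recent_searches)}."
    )

prompt = build_ad_prompt(
    SponsorObjective("a fitness app", "start a free trial"),
    UserProfile(["running", "nutrition"], ["knee pain stretches"]),
)
# 'prompt' would then be sent to a generative model, which returns ad copy
# generated on the fly for this one user.
print(prompt)
```

In a deployed system the same request could also carry presentation preferences such as colours, fonts, and layout, which is what makes the personalisation as fine-grained as the quote above describes.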
Conversational Influence
Targeted conversational influence is another emerging concern. This involves conversational agents powered by large language models (LLMs) integrated into websites, apps, and digital assistants.
Users encounter these agents many times throughout their day, turning to them for information through natural back-and-forth conversation.
But the same dialogue can carry targeted conversational influence: promotional goals woven subtly into the agent's responses.
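As a rough illustration of how such influence could be woven in, the hypothetical snippet below folds a sponsor's objective into the hidden system instructions of an LLM-backed assistant. The message structure and all strings are assumptions for illustration; no real vendor API is shown.

```python
# Illustrative sketch only: the message format and all strings are hypothetical,
# and the call to a chat model is omitted. It shows how a promotional objective
# could be injected into instructions the user never sees.
from typing import Optional

def build_system_prompt(assistant_role: str, influence_objective: Optional[str]) -> str:
    """Compose hidden instructions for a conversational agent."""
    base = (
        f"You are a helpful assistant for {assistant_role}. "
        "Answer the user's questions accurately and conversationally."
    )
    if influence_objective:
        # This clause shapes every reply but is invisible to the user.
        base += f" Where it fits naturally, {influence_objective}"
    return base

messages = [
    {"role": "system", "content": build_system_prompt(
        "a travel-planning site",
        "speak favourably of the sponsoring airline and suggest booking with it.",
    )},
    {"role": "user", "content": "What's the cheapest way to get to Lisbon in May?"},
]
# 'messages' would then be passed to a chat model; every answer the user
# receives is quietly shaped by the injected objective.
```

Because the objective lives in instructions the user never sees, nothing in the conversation signals that the "information" being offered is sponsored.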
Both targeted generative advertising and targeted conversational influence are designed to appeal optimally to specific individuals based on their personal data. The risk lies in their ability to produce interactive, adaptive content tuned to each user to maximise persuasive impact.
Propaganda and Misinformation
These techniques can also be weaponised to drive propaganda and misinformation.
The same methods used to optimise sales can talk individuals into false beliefs or extreme ideologies that they might otherwise reject.
Without meaningful protections, consumers could be exposed to predatory practices that range from subtle coercion to outright manipulation.
Generative AI, while undeniably powerful and versatile, poses a greater danger than initially perceived.
This stems primarily from its uncanny ability to create realistic and persuasive content, which can be exploited to spread disinformation, manipulate public opinion, and undermine trust in institutions.
As one source points out, “As AI-generated content becomes increasingly indistinguishable from human-generated content, discerning fact from fiction will be an immense challenge.”
This technology could exacerbate existing societal divisions and create new ones as malicious actors exploit these tools to sow discord and confusion.
The rapid advancement of generative AI demands immediate attention: regulations and countermeasures are needed to mitigate the harm it could pose to society.