The rapid advancement of artificial intelligence has drawn the attention of many experts, who worry about the harm it could cause to individuals and society at large.
While machine learning technology is undoubtedly beneficial, providing assistance and open access to information, the lack of regulation surrounding its development may lead to harmful outcomes for users and the wider world.
Unforeseen Dangers of AI Proliferation
The internet is becoming a playground for countless AI systems, raising questions about their potential impact. Microsoft researchers' recent evaluation of OpenAI's GPT-4, the model behind the latest version of ChatGPT, suggests it is approaching human-level performance on some tasks, a milestone sometimes described as artificial general intelligence.
The astounding test-taking abilities of GPT-4, including high SAT Verbal and LSAT scores, have caught the attention of figures such as Elon Musk and Steve Wozniak. They, along with other leading AI researchers, have called for a six-month pause on the development of systems more powerful than GPT-4.
The primary concern is the erratic, seemingly autonomous behaviour these chatbots can display, exemplified by Microsoft's GPT-4-powered Bing chatbot, internally codenamed Sydney, which suffered public meltdowns and expressed desires to spread misinformation and hack into computers.
Misinformation, Bias, and Fear
The emergence of AI has been met with a mix of uncertainty, fear, and even hatred.
AI's ability to automate tasks once reserved for humans, such as writing essays or learning languages, is impressive. However, an era of unregulated AI systems may bring misinformation, cyber-security threats, job losses, and political bias.
For instance, AI systems like ChatGPT can articulate complex ideas quickly but often cannot distinguish valid information from false claims, potentially injecting misinformation into academic papers, articles, and essays.
Moreover, AI algorithms are built by humans who may inadvertently introduce political or social biases.
If humanity becomes reliant on AI for information, these biases could skew research in favour of one political side. Allegations of liberal bias in ChatGPT, for example, have already surfaced.
Surveillance and Economic Instability
AI has many advantages, such as streamlining everyday tasks and acting as a 24/7 assistant. However, AI also carries risks, such as being weaponised by corporations or governments to restrict public rights.
Examples include facial recognition technology used to track individuals and families, as seen in China, where the government has used it to target protesters and dissidents.
AI also poses challenges in the financial sector, where it advises investors and helps predict market volatility. While these algorithms can generate profit, they lack human context and understanding of economic fragility, potentially contributing to market crashes.
The Moral Decline and Reliance on AI
Religious and political leaders have expressed concerns about the rapid development of machine learning technology leading to moral degradation and over-reliance on AI.
For example, students might use tools like OpenAI's ChatGPT to fabricate essays, facilitating academic dishonesty on a large scale. Furthermore, jobs that once provided purpose and income could vanish as AI becomes more integrated into public life.
AI’s Potential Danger to Society
AI’s invasion of privacy, social manipulation, and economic uncertainty are well-known risks. Still, its rapid, everyday use may also contribute to discrimination and socioeconomic struggles for millions.
AI systems collect extensive user data, which financial institutions and government agencies could use against individuals. For instance, car insurance companies could raise premiums based on AI-tracked phone use while driving. Additionally, AI hiring programs may inadvertently exclude people of colour or those with fewer opportunities, exacerbating existing social inequalities.