
Does A.I. development need more doomerism?

In the blink of an eye, artificial intelligence (A.I.) has been incorporated into nearly every aspect of our lives. From education to grocery shopping to music––there is no escaping it. Following the rollout of OpenAI’s ChatGPT, the quantity of publicly available A.I. technologies exploded, leaving a chasm of unregulated opportunity for the continued development of artificial intelligence. However, with accounts of the terrifying uses of existing A.I. already able to fill a library, the question must be asked: Does A.I. development need more doomerism?

A.I. phone scams are just one consequence of the rapid evolution of artificial intelligence. With many people having recordings of their voice on the internet, whether on social media or in a video presentation posted online for school, publicly available A.I. voice cloning technologies can use machine learning to simulate any person’s voice. This disturbing practice has resulted in cases of scammers cloning the voices of children, then calling the children’s parents to demand a ransom for their “kidnapped” child.

Non-consensual deepfake porn is another exploitative dimension of our new reality. Not only is this content readily accessible through Google, but the practice is so lucrative that popular deepfake creators are advertising paid positions to help create it.

If this is not convincing enough, the increasing role of A.I. in the military should send you into a spiral, contemplating how long the world is going to last. The use of A.I. in geopolitical conflict goes beyond the incorporation of A.I. into weaponry and decision-making. Misinformation campaigns pose a dystopian reality in which A.I.-generated audio, video, and text can be manipulated to impersonate political officials and military leaders, falsifying orders within military ranks or creating panic amongst civilians. Not only are militaries already looking to use deepfakes in special operations, but the continued development of and investment in such technologies poses the threat of an Oppenheimer-like catastrophe.

To give brief attention to the benefits of this rapid A.I. development: the use of A.I. in the healthcare field to aid in cancer imaging, or even detection, reminds us that our future with artificial intelligence does not need to resemble a dystopian horror movie. But any application of A.I. to the healthcare system must be handled carefully, given the increased potential for data breaches and the likelihood of underlying bias––a known issue for many machine learning systems.

We can no longer depend on government regulation to rein in Big Tech. Addressing threats such as election interference and disinformation campaigns that directly affect democracy has already proven too tall a task for government regulation. Guided by Cold War-era fears, the A.I. arms race makes it nearly impossible for government regulators to put barriers in front of A.I. development.

If governments are unwilling to provide adequate guardrails around artificial intelligence, who will? Anthropic––a safety-focused A.I. start-up established by a group of employees who left OpenAI out of concern that the company had gotten too commercial––employs the doomers of A.I. development. Although Anthropic built its chatbot Claude––a Constitutional A.I. model––months before ChatGPT was released, the company withheld it from the public out of fear of how it might be misused. Anthropic claims to have created an “A.I. safety lab” where anxiety about the potential catastrophe their creation may inflict on the world influences every decision they make.

The world of A.I. is scary and will likely only get scarier. A.I. tools are already in the hands of bad actors, and the consequences could be cataclysmic. Despite concerns that Anthropic is just a capitalistic ploy attempting to appear responsible in a generally irresponsible field, it presents a case for how to promote critical A.I. development. Given the impossibility of pressing pause on the development of machine learning, the only ethical way to continue down this path is anxiety-informed A.I. development.
