With the rapid development of platforms like OpenAI's ChatGPT, much of the conversation has centered on how to regulate artificial intelligence. The AI safety researcher Eliezer Yudkowsky made a highly publicized claim that if we don't pause the development of artificial intelligence, then "literally everyone on Earth will die." Elon Musk, along with a group of AI experts and industry executives, recently signed an open letter calling for a six-month pause on developing systems more powerful than GPT-4, citing risks to society. New articles seem to pop up daily, warning us that AI could completely change everything as we know it.
GPT-4 is the latest of OpenAI's Generative Pre-trained Transformer (GPT) programs. How does GPT work? It uses a neural network that has been trained on massive amounts of text; because it has been pre-trained on so much of it, the model can understand and produce natural-sounding language by repeatedly predicting what word should come next. GPT-4 can also process images, and it has passed the bar exam, the SAT and the medical licensing exam, among many others. Compared to its predecessors, GPT-3 and GPT-3.5, it gives more human-sounding responses and can interpret and output blocks of text of over 25,000 words. Its strength and efficiency mean that the possible benefits of this AI are huge. GPT-4 was able to diagnose a 1-in-10,000 medical condition in seconds. Bill Gates estimates that AI will be teaching children to read within 18 months, and AI-generated music has already made waves in the industry. If prompted, GPT-4 will write essays, scientific papers, novellas or poems about whatever you ask, or make an optimized, to-the-minute schedule based on the commitments of your day. Companies like Duolingo and Stripe have already begun integrating it into their products, and Microsoft's chatbot, Bing Chat, currently runs on GPT-4.
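The "predict the next word" idea behind pre-training can be illustrated with a toy model. The sketch below is a deliberately tiny stand-in, not GPT-4's actual architecture: instead of a neural network trained on billions of words, it just counts which word follows which in a small sample text and predicts the most frequent follower. The corpus and function names here are invented for illustration.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A toy "training set" -- real models see billions of words.
corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Scale the same idea up to a neural network with billions of parameters, trained on a large slice of the internet, and you get the fluent text generation that GPT models are known for.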
Concerns arise, however, about regulation. While AI is vastly helpful, it can sometimes answer queries confidently but incorrectly. AI has also developed so rapidly that there has been little time to implement regulations on a federal level about what can and cannot be done with it. Currently, anybody can use AI to generate an entirely fake persona, and people have. Influencers have demonstrated how AI can generate faces and then turn those faces into moving, speaking people in videos; you wouldn't know the people featured were AI-generated unless you were told. Fake, AI-generated images of Donald Trump being arrested have been created and circulated. Given enough samples of somebody's voice, AI can make them seem to say anything. This is especially dangerous for celebrities and political figures, whose voices and faces are spread all over the internet. Videos circulate of people like Trump, Joe Biden and Musk seeming to say things that they never really did, and in some cases, it's almost impossible to tell.
It's for these reasons, among others, that 1,188 people signed the letter asking for a six-month pause on AI development. However, a pause is very unlikely to happen. Google and Microsoft have been pouring billions of dollars into AI development and integration, with no signs of being open to one. Musk has been accused of using the letter to slow down OpenAI and give his own AI team a chance to catch up. It's also an increasingly widespread opinion that a six-month break would halt the development of extremely helpful AI resources in education, the health care system and security, and there's a worry that companies would simply find workarounds during the pause. While a pause might be beneficial in terms of creating shared safety protocols, those protocols could also be implemented without a break, while simultaneously continuing to develop helpful resources that will not bring about the end of the world as we know it, but rather, hopefully, become useful tools that help bring about a better future.