Artificial Intelligence in Education: Harmful or Helpful?

When Bing introduced its first AI, the world could not help but imagine a war against robots. While that is a fun idea, I don't think it is where our concerns should lie. I believe AI poses real harm to several areas, one of them being education. The concept of modern artificial intelligence has been brewing for about a decade, and in the last few years it has leaped into action. Bellevue College and other educational institutions have taken several measures to mitigate the damage done by artificial intelligence, especially around exams: updating policies, requiring students to write on paper, using AI content detection and plagiarism software, administering oral exams, or personalizing assignments for the classroom. All of these precautions are meant to minimize malicious uses of AI. But what makes AI use malicious exactly, and where is the line drawn between cheating and simply using a helpful resource?

Will there ever be a point where it is professors, not students, who are scrutinized for over-relying on AI? I can imagine it being a useful tool for grading en masse. Right now, most teachers I have encountered are in favor of using ChatGPT for research, brainstorming, and the like. But there is a lot of gray area in how much you can really do. Having AI write an essay for you is clearly cheating; asking it for ideas is not. But is asking it to correct your essay cheating? Or, worse, is asking it to write an essay and then drawing inspiration from the result cheating?

I think, without a doubt, AI is an extraordinarily helpful tool for learning when placed in the right hands; I believe it genuinely improves the quality of learning. Since many courses are online nowadays and the professor isn't accessible at all times, an alternative is simply to ask AI. Textbooks aren't always easy to follow. For example, suppose you attempted an algebra problem, following exactly what the textbook taught, but still didn't arrive at the right answer. Given the right mindset, a student could then ask the AI and troubleshoot where they went wrong. It's an easy and cost-efficient form of tutoring, which can be especially valuable in countries or regions where finding someone able to assist is difficult. But in the hands of someone with little regard for their own learning, it becomes a harmful tool. It makes me wonder whether educational institutions will ever deploy AI chatbots restricted to positive academic interactions only; I believe that would be of great use in the classroom.

I have tried the latest version of ChatGPT, GPT-4, on a friend's device, and the upgrade is instantly noticeable. Although it undoubtedly raises serious concerns about unethical AI use in the future, it also provides more accurate and reliable information. Until recently, AI has not been great at solving problems, especially ones that require more intuitive thinking. But seeing the jump in accuracy from GPT-3 to GPT-4 on more complicated problems makes me wonder how limitless the later versions will be.