Artificial Intelligence (AI) is reshaping modern learning in schools throughout the country, and despite the backlash it has received from older generations, there has been very little action against it. As AI technologies increasingly shape educational tools, the debate about their role in classrooms intensifies. While many argue that AI may dehumanize education, create dependency, or widen the digital divide, many educators and students embrace these tools for their potential to improve learning outcomes. This growing divide raises important questions about the future of education. By understanding both the benefits and the concerns, society can decide whether to approach this technological shift with caution or embrace it without hesitation.
What AI should be used for is often not what AI is actually used for, especially among high school students. AI was created in the early 1950s but did not become widely used until the early 2000s. Initially, the goal of AI research was to make machines use language, form concepts, and solve problems once reserved for humans. As it has become widely known, AI is now commonly used by high school students and is interfering with their learning. This has created concern in many schools that academic dishonesty is becoming more prevalent. Many teens do "admit to using AI to cheat on assignments, homework, or tests," but fewer than five percent claimed to be daily users in a study conducted by students at Harvard University. Technology-based systems are not always accurate and can spread misinformation; even so, they still have a strong hold on society today. When asked about her worries about the development of AI and its influence, freshman Julia Lupse said, "It is not a big concern for me. AI has become such a big part of our lives, and I do not know how much more it will grow." Younger generations are becoming more and more comfortable with evolving technology and see it as part of their everyday lives.

It may not seem scary to students, but to the older generation it is a big concern. Adults today grew up without the technologies we have now, and that lack of experience and knowledge makes them even more worried. ChatGPT, a major AI platform used by students, was only launched in 2022. Sophomore Lucas Shifflet said, "Adults assume it to be worse than it really is." Most students believe that adults have a negative bias toward AI; however, even older high school students have worries about technological advancements.
Sadie Huryn, a senior, believes, “The spread of misinformation is a growing problem in the younger generation.” There is concern that the younger generations are more vulnerable to the spread of misinformation, mainly due to their heavy use of digital media. Interaction with AI and its platforms only creates a bigger issue for those who are too young to fully assess and understand the information they are given.
One of AI's biggest drawbacks is the risk of misleading information. With so many students not admitting to using AI, or knowing little about it, researchers are left with almost no information about what AI is capable of and what it may become in the future. When asked about her worries about the power of AI, Huryn stated, "I think it will have the power to form a lot of opinions on a lot of things that aren't completely true. Stereotypes can be even more polarized in society today which can be very detrimental." Humans take information and form their own beliefs and values from it. Because a major share of students' research and information now comes from AI platforms, opinions can be built on false information, eventually leading to preconceived judgments and stereotypes. Beyond the information it produces, AI has other downsides. "AI still falls short in a number of crucial areas, failing to ensure human rights protections especially for the most marginalized" (Amnesty International). Technology does not have the mind or feelings of real people; it simply finds information and presents it, no matter the circumstances. Arguments have been made for stronger government regulation and limitations on artificial intelligence, but little progress has been made. Character AI is a newly popular creation that allows people to talk to a seemingly real person through AI systems. The University of Illinois News Bureau defines Character AI as a chatbot web application that uses artificial intelligence to generate human-like text responses. Lupse believes Character AI has both problems and benefits. "The downside is it would limit their social interactions with humans and that people will start to rely on their AI more than people," Lupse began. The worry is that a computerized system is storing information and becoming something students may fall back on.
Lupse continued, "Some people who are scared to get help from real people or don't feel like they can open up to actual people, will have a place to open and talk and get the support they need." The use of Character AI for emotional support has become common only in the past couple of months and is still developing, with little yet known about it. Harvard University raised the question: is AI becoming a "modern approach to learning?" Researchers found that students bring their questions to AI, for better or worse. AI chatbots are embedded in social media platforms such as Snapchat and Instagram, and teens incorporate them into group chats, use them to learn social skills, and sometimes treat them as romantic partners (University of Illinois News Bureau). AI is becoming more easily accessible to teens, opening the door to both the benefits and the downfalls of artificial intelligence.
Many people wonder why AI does not face stronger limits within schools, or stronger regulation from the government. Some industry leaders have voiced support for government oversight of AI. Sam Altman, the CEO of OpenAI, stated, "There is a need for a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards" (Brookings). As leaders have begun to address the possible issues, new views on AI usage have emerged. There is a loud call for action within the companies developing and handling AI; however, several issues make AI difficult to regulate. The first obstacle to action against developers is licensing proposals. Licenses tend to reinforce the dominance of large companies in AI. These platforms can develop very quickly once protected by licenses, which also shield their information and code from competitors. AI platforms make thousands of dollars daily, making their code extremely valuable. A license can protect creators from losing their work to other companies, but in turn it also protects them from being taken down or limited. A second reason for the lack of action is risk-based agility. AI regulation would need to be based on varying risk levels; however, "Because the effects of digital technology are not uniform, oversight of those effects is not a 'one size fits all' solution" (Brookings). AI is a technologically advanced software with many different levels of code and instructions. The way AI is regulated on a social media platform like Snapchat cannot be the same as on a web-based platform such as ChatGPT. Its many uses and its accessibility to society make it hard to set expectations for what AI can and cannot do. The last challenge behind the lack of limitations is that AI regulation involves balancing public safety with innovation.
"Thus far in the digital age in the United States, it is the innovators who have made the rules. This is in large part because the American government has failed to do so" (Brookings, "The Three Challenges of AI Regulation"). People want to make money, and AI platforms generate huge incomes for their developers. Innovators will continue to push for AI to have no limits as long as it benefits them and the issues do not become too big to control. Overall, it is hard to say who will regulate AI and what exactly they will be regulating. Fortunately, some schools have taken small steps to address AI usage within classrooms. When Huryn was asked how adults in her life have addressed the issue, she said, "Both my mom and my teachers all think that it can be very detrimental, which I agree with because it can really hurt one's learning and their understanding of topics." Both Lupse and Shifflet also said that teachers have imposed punishments for the use of AI in the classroom. Regulation within schools is the first step toward slowing AI usage among high school students and beyond.
While AI has the potential to revolutionize education by enhancing learning outcomes, its widespread use among students has raised concerns about academic dishonesty and the spread of misinformation. The rapid growth of AI technologies challenges educators, innovators, and society to find a balance between embracing innovation and addressing its negative impacts. As AI continues to evolve, it may become crucial to establish effective regulations and promote responsible usage to ensure that its benefits outweigh the risks for future generations.