Kids who use ChatGPT as a study assistant do worse on tests

The Hechinger Report compares math progress of almost 1,000 high school students to determine the effects of ChatGPT use on learning.


Young person in classroom looking at ChatGPT app on smartphone, seen from behind.

Robalito // Shutterstock

Does AI actually help students learn? The Hechinger Report examines the results of a recent high school experiment that offers a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a subsequent math test than students who didn't have access to ChatGPT. Those with ChatGPT solved 48% more of the practice problems correctly, but they ultimately scored 17% worse on a test of the topic they had been practicing.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127% more of them correctly than students who did their practice work without any high-tech aids. But on a test afterward, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

The researchers titled their paper, "Generative AI Can Harm Learning," to make clear to parents and educators that the current crop of freely available AI chatbots can "substantially inhibit learning." Even a fine-tuned version of ChatGPT designed to mimic a tutor doesn't necessarily help.

The researchers believe the problem is that students are using the chatbot as a "crutch." When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves. 

ChatGPT's errors may also have been a contributing factor. The chatbot answered the math problems correctly only half of the time. Its arithmetic computations were wrong 8% of the time, but the bigger problem was that its step-by-step approach for solving a problem was wrong 42% of the time. The tutoring version of ChatGPT was fed the correct solutions directly, so these errors were minimized.

A draft paper about the experiment was posted on the website of SSRN, formerly known as the Social Science Research Network, in July 2024. The paper has not yet been published in a peer-reviewed journal and could still be revised. 

This is just one experiment in another country, and more studies will be needed to confirm its findings. But this experiment was a large one, involving nearly a thousand students in grades nine through 11 during the fall of 2023. Teachers first reviewed a previously taught lesson with the whole classroom, and then their classrooms were randomly assigned to practice the math in one of three ways: with access to ChatGPT, with access to an AI tutor powered by ChatGPT, or with no high-tech aids at all. Students in each grade were assigned the same practice problems with or without AI. Afterwards, they took a test to see how well they learned the concept. Researchers conducted four cycles of this, giving students four 90-minute sessions of practice time in four different math topics to understand whether AI tends to help, harm, or do nothing.

ChatGPT also seems to produce overconfidence. In surveys that accompanied the experiment, students said they did not think that ChatGPT had caused them to learn less, even though it had. Students with the AI tutor thought they had done significantly better on the test, even though they had not. (It's also a good reminder to all of us that our perceptions of how much we've learned are often wrong.)

The authors likened the problem of learning with ChatGPT to autopilot. They recounted how an overreliance on autopilot led the Federal Aviation Administration to recommend that pilots minimize their use of this technology. Regulators wanted to make sure that pilots still know how to fly when autopilot fails to function correctly. 

ChatGPT is not the first technology to present a tradeoff in education. Typewriters and computers reduced the need for handwriting. Calculators reduced the need for arithmetic. When students have access to ChatGPT, they might answer more problems correctly but learn less. Getting the right answer to one problem won't help them with the next one.

This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education, and reviewed and distributed by Stacker Media.
