There is no shortage of opinions on how artificial intelligence (AI) affects our daily lives. In particular, the impact of AI on education is hotly debated. Relevant sources of concern include academic integrity, intellectual-property protection, security and privacy, and even the long-term prospects of the human race.
One often-raised impact of AI is especially relevant to the educational enterprise: Will AI negatively impact human intellectual abilities? If so, what will become of academia? Are there compelling reasons to worry about the adverse consequences of AI on human intellectual skills?
A recent scientific study by researchers at the MIT Media Lab suggests that AI users underperform at neural, linguistic, and behavioral levels. The study recorded brain-scan data from human test subjects to assess their cognitive engagement and cognitive load while they wrote an essay. The test subjects were divided into three groups corresponding to three levels of access to assistive tools. The first group used no tools to write the essay. The second had access to a search engine. The third could use a large language model (LLM) like ChatGPT.
The study showed measurable differences among the three groups. For example, the first group easily recalled what they had written, while 83.3 percent of the third group could not quote even a single sentence from essays they had completed only minutes earlier. Also, the first group had a measurably "stronger occipital-to-frontal information flow" than the third (numerically, 79 "connections" for the first group compared to only 42 for the third). The authors of the study point to "the pressing matter of a likely decrease in learning skills" associated with LLM use.
A definitive explanation for these results is admittedly hard to provide. The authors use a framework called "cognitive load theory" to interpret how LLMs affect learning outcomes. Their results show that LLM users experienced a 32 percent lower cognitive load than those who used only traditional software tools. In particular, LLM users exhibited a notably low "germane cognitive load," the component of mental effort devoted to genuine learning. The implication is that letting AI do the thinking for us causes us to think less, which then leads to underdeveloped intellectual abilities.
“Letting AI do the thinking for us causes us to think less, which then leads to underdeveloped intellectual abilities”
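To make the framework concrete, cognitive load theory models total mental effort as the sum of three components. The decomposition below is the standard textbook formulation of the theory, not an equation from the MIT study, and the symbols are purely illustrative:

% Standard cognitive-load-theory decomposition (illustrative notation,
% not a formula from the MIT Media Lab study)
\[
  L_{\text{total}} = L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}}
\]

Here, intrinsic load reflects the inherent difficulty of the task, extraneous load the overhead imposed by how the task is presented, and germane load the effort spent building durable knowledge. An assistant that lowers total load mainly by suppressing the germane term removes precisely the component responsible for learning, which is consistent with the recall deficits reported above: the task feels easier, yet less is retained.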
Are there any counterpoints to help soothe our anxiety about the harms of AI? Fundamentally, our anxiety might well stem from aversion to uncertainty and change, likely compromising objective analysis of the risks. So, we should be very cautious when developing firm views about AI and its impacts.
The first thing to realize is that while AI has its risks, it also has tremendous benefits, including global social transformation and the empowerment of humankind. We should weigh the risks against the benefits. Notwithstanding the difficulty of this task, I will offer two thoughts. The first is what we can learn from the past. While it is difficult to deny that the AI revolution is a unique milestone in human history, there have been past revolutions of similar import. The birth of the printing press comes to mind. Prominent risks raised then included information overload, misinformation and propaganda, social isolation, and a decline in creative and intellectual abilities. Happily, these risks have not proved as detrimental as feared. A second, related revolution is associated with the Internet, where similar risks were raised, including a decline in intellectual abilities. Again, evidence of such a decline remains elusive.
A second counterpoint relates to the explanation offered for the negative cognitive impacts described above: AI use is associated with low germane cognitive load, the mental activity devoted to information processing and learning. This risk seems straightforward to mitigate, especially in educational settings. To be sure, AI is a highly disruptive technology in education, so we must adapt. But as we adapt, we can account for germane cognitive load; more specifically, we can be mindful to maintain its level as we integrate AI into education. Some knowledge and skills that have been taught for decades will need to be retired. At the same time, new topics with high germane cognitive load will enter the classroom. For example, engineering curricula might no longer involve routine mathematical calculations; instead, they could focus on engineering-design processes that engender human creativity and innovation. We have always found ways to engage the human intellect.
Finally, those of us in the business of education might worry whether human educators will be needed at all once AI takes over. It seems inevitable that some tasks, e.g., customizing educational materials to individual student needs, will no longer require humans. However, humans are so intrinsically social that it is unlikely we can dispense entirely with human involvement in education. So, unless we evolve to thrive without social interaction, we can expect the educational experience to require human interaction in some essential way. Human educators will still be needed, at least for the foreseeable future.