I’ve been thinking about a theory for using AI in history education. By theory, I mean what I expect to be true based on general ideas and principles. Expectations are fickle, so this is a tricky endeavor. Seems reasonable to talk to ChatGPT about this to help untangle my emerging ideas, so here we go.
(I’m speaking here to ChatGPT o1 pro mode – edited)
I would like your assistance in developing some ideas (and principles) that may be used in a theory for using AI in history education.
What I’m thinking about is a novice-to-expert continuum, and where to situate the point on that continuum at which the novice becomes more “expert.” I picture this as an upward curve, perhaps steep at first, that softens and plateaus. I’m interested in the point on that curve, likely just before the plateau, where a novice learning something new (e.g., sourcing a document) ceases to benefit from repetitive practice with the skill. They’ve become expert enough that they can now benefit from applying those expert skills in ways that might make them a better person.
Graphic from Sora, using the previous paragraph as the prompt
My question is not really about where the “point of diminishing returns” occurs on this continuum (I’m sure teachers will be able to assess that), but about the instructional implications of that identified point. For our consideration here, I want to explore how we might use AI along the curve, with the point of diminishing returns (let’s call it a threshold) as a fulcrum at which AI ceases to be a “tool” that supports students and becomes an extension of their expertise. My operating hypothesis is that at this transition, teachers ought to shift their use of AI from supporting students’ practice with skills toward supporting the application of those skills.
AI tools like ChatGPT and Gemini are able to automate the skills that we teach our students. I hear educators saying that they don’t want their students to use AI to automate skill work because they’re novices and they need to do the hard work of learning how to use those skills. I agree. But at what point does the application of the skill become just grunt work – students laboring in a historical inquiry factory, using hand tools to craft understanding when machines can automate the work with no loss in skill-acquisition value to students?
Illustration from Sora, using the previous paragraph as the prompt
Engaging AI to support and scaffold students’ skill acquisition makes sense, but replacing the skill work of students is another matter. At what point do we say yes to the machine and offload the grunt work? At what point have students mastered the skill, internalized it, and demonstrated their capacity to a level where we are OK with AI tools like ChatGPT being used to offload the analysis students are doing?
(ChatGPT’s response not edited, but excerpted)
The Novice-to-Expert Continuum
- Skill Acquisition vs. Application
You’re describing a progression from basic skill-building (where repetitive practice is valuable) to a more advanced stage where those skills have been internalized enough that the learner can pivot toward more meaning-driven or higher-order activities. This aligns with common learning theories (think of something like the Dreyfus model of skill acquisition or Bloom’s Taxonomy).
- Identifying the “Threshold”
You mentioned you’re pondering if there’s a specific point in the slope where teachers should shift from repetitive skill practice to deeper, more contextual application. In many fields, we often see a “plateau” or diminishing returns on repetitive practice, at which point a learner can more fruitfully engage in tasks that integrate those skills in real-world or complex scenarios.
- Relevance to History Education
In history, “skills” might include analyzing primary sources, identifying historical context, or synthesizing narratives. Once a student can do these fairly reliably, the next step is using them to explore broader questions—like ethical issues, thematic connections across time and space, or the development of historical empathy.
AI’s Role and the Timing Issue
- Novice Stage: Potential Risks of Offloading
Teachers’ hesitancy to let AI do the analysis or writing for students is understandable: novices need “time on task” to cultivate the muscle memory and cognitive frameworks for historical thinking. If AI handles too much, too early, it could rob them of that essential practice.
- Expert Stage: Potential Benefits of Offloading
Once students have enough skill mastery, leveraging AI can free them from the more mechanical or time-consuming tasks (basic fact-checking, cross-referencing data, formatting citations, etc.). At that stage, students are better prepared to use AI outputs critically—recognizing biases, identifying omissions, and adding their unique interpretive lens.
- Threshold Sensibility
The question becomes: When is a learner “expert enough” to offload tasks to AI without stunting further skill development? In practice, this likely varies by individual. Some might be ready to benefit from partial automation after fewer practice sessions than others.
Final Reflection
You’re wrestling with two big ideas:
- When do we move from repetitively building a skill to actually using that skill in deeper, more meaningful ways?
- How can AI be responsibly integrated so it doesn’t undermine skill development, but instead expands what students can do once they’ve mastered the basics?
It might help to imagine an ideal scenario: students get enough practice to confidently do the skills on their own. Then, once they “own” those skills, AI enters to elevate the conversation. This could lead to the very sense of fulfillment and deeper meaning Tolle alludes to—not through chasing external accolades but through the intrinsic reward of a richer intellectual and personal engagement with history.
Back to me
Good stuff
Let’s move on
I’m interested in the five-stage Dreyfus model of skill acquisition. A seminal 1980 paper by brothers Stuart and Hubert Dreyfus put forward an approach to developing complex skills, using the training of Air Force pilots as their example. The model has since been applied widely. I’m particularly interested in the distinction they made between concrete and abstract skill development. The Dreyfus model prefers real-world contexts for building skills over abstracted approaches that are boiled down to steps and applied in clinical settings. At the same time, the Dreyfus model is precise, and this is where we find an interesting tension. While skill development must (presumably) be precise – clear in the sense that it can be scaffolded and thus abstracted – it must (again presumably) be applied in a real-world context. So skill development must be both abstract and concrete; perhaps we can think of this as a concrete abstraction.
Concrete Abstraction
Toshio SHIBATA
https://www.akionagasawa.com
Real-world application and authenticity are foundational in our field. The last decade-plus has seen an embrace of authenticity with the rise of inquiry and situated teaching. More than before, teachers are enabling their students to see themselves in their inquiry and to be seen in their intellectual work. The implications are vast. In such a condition, no longer can the “central office” dictate a curriculum with confidence and resonance. No longer can teachers take for granted the compliance of their students and the blessings of their community. Now our students can insist that what they learn is what they see in the world, and not simply what we reveal.
Similarly, skill development is foundational in history education. Many educators may view their work as primarily an effort to develop thinking skills. We don’t have to look far for evidence of the primacy of skills in our work through both standards projects and in history curricula.
(see our own C3 Framework for evidence)
The result of a three year state-led collaborative effort, the College, Career, and Civic Life (C3) Framework for Social Studies State Standards was developed to serve two audiences: for states to upgrade their state social studies standards and for practitioners — local school districts, schools, teachers and curriculum writers — to strengthen their social studies programs. Its objectives are to: a) enhance the rigor of the social studies disciplines; b) build critical thinking, problem solving, and participatory skills to become engaged citizens; and c) align academic programs to the Common Core State Standards for English Language Arts and Literacy in History/Social Studies.
(and see the OER Project as an example of skills-based curriculum).
Adaptable and skills-focused, all OER Project courses include built-in scaffolding to meet the needs of a range of learners. Students are grounded in historical thinking as they engage with content that interests them and improves their reading, writing, and critical thinking skills. Real-world, concrete applications ground the development of abstracted skills.
My question is how and when AI interacts with skill-building, and how that changes the game. With the prevalence of AI and its utility as an extension of human thinking, shouldn’t we be thinking about how to locate student learning in the context of AI? If nothing else, it’s a real-world, concrete application. But perhaps more importantly, AI is changing the way we think (more below) and thus would seemingly change the way we develop and utilize skills.
– plus venire
1. AI as an Extension and Alteration of Human Cognition
Argument: AI systems (seem to) act as cognitive extensions, performing tasks that traditionally required human thought. As AI increasingly handles thinking tasks like language translation, creative idea generation, data analysis, and decision-making, humans adapt their thinking patterns to rely on and integrate these tools. This shifts the focus from “how do I do this?” to “how can I leverage AI to do this?”, altering our mental models and skill sets.
- Reference: Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19.
Why it’s relevant: Andy Clark and David Chalmers introduced the idea of the “extended mind,” arguing that technological resources effectively become part of our cognitive process. Although AI was not the central focus of this paper, the notion that tools outside our biological brain can transform our thinking is directly applicable to modern AI.