The Fully Automated Loop Goes to School

A recent piece in The Atlantic by Lila Shroff describes the accelerating automation of student work — from essay writing to quiz-taking to discussion-board participation. One bot profiled in the piece completed an eight-module statistics course, including seven quizzes, in under an hour and earned a perfect score. The student who enrolled in the course barely looked at the course website.

We have moved well past the "students are using ChatGPT to write essays" phase. That was so 2024. Agentic tools can log into learning management systems, complete entire courses, and simulate human typing to avoid detection. The share of middle school-age and older students who reported using AI for homework help rose 14 points in just seven months last year.

Many educators will instinctively treat this as an academic integrity problem. That response misses a deeper issue. What is emerging is what the Modern Language Association has called a "fully automated loop" — AI-generated assignments completed and graded by AI agents, with teachers and students both stepping back from the intellectual exchange that education is supposed to foster. For schools that define their value through close faculty-student relationships and the development of critical thinking, this presents a mission-level crisis.

But the same article profiles students who use AI effectively: building personalized practice-problem generators, or enlisting it as a real-time study partner during large lectures. The distinction that matters is between delegation and augmentation — handing your work to a machine versus using a machine to do better work yourself. The tools are so new that we are still figuring out where that line falls.

School leaders who want to be on the right side of that distinction need to invest in curriculum that resists automation (assessments embedded in dialogue and demonstration, not just deliverables), faculty development that goes beyond policy (helping teachers understand and design around the tools), and institutional honesty about value (if a bot can ace the course, the course was measuring compliance, not learning).

The question for boards and heads is whether their institutions will treat this as a behavioral or technology issue to be managed, or as a strategic inflection point. The evidence increasingly supports the latter.
