
Timed vs. Untimed Essays: What the Research Actually Says

A comprehensive research synthesis on timed versus untimed writing assignments — covering cognitive science, assessment validity, equity, AI cheating, and expert recommendations.

The Checkmark Plagiarism Team

The evidence on timed versus untimed writing assignments is clear: they measure fundamentally different constructs, and the current retreat to timed in-class writing as an AI countermeasure risks sacrificing the cognitive benefits that made long-form essays valuable in the first place. The research doesn't support an "either-or" approach — it supports a "both-and" strategy, where timed writing verifies authorship and develops performance fluency, while untimed writing builds the deep thinking skills that define genuine learning. This tension sits at the center of a post-ChatGPT reckoning in education, and the data offers a clear path through it.

What follows is a comprehensive research synthesis covering cognitive science, assessment validity, equity and accessibility, academic integrity in the AI era, standardized testing, and expert recommendations — with specific studies, statistics, and expert quotes throughout.

1. What timed essays actually develop and measure

Timed essays activate a specific and narrow set of cognitive skills: rapid recall, organizational efficiency under constraint, improvisational synthesis, and what composition researchers call "knowledge-telling" — the ability to externalize existing knowledge quickly without deep processing.

The cognitive process theory of writing, established by Flower and Hayes (1981), describes writing as a recursive loop of planning, translating, and reviewing. Under time pressure, writers are forced to truncate this loop. The reviewing and revision components — where the deepest learning occurs — get compressed or eliminated entirely. Inexperienced writers under time constraints "become trapped in tiny loops in the process, repetitively correcting sentence-level mistakes before moving forward" (Perl, 1979; Sommers, 1980).

Les Perelman, former Director of Undergraduate Writing at MIT, conducted perhaps the most damning analysis of timed essay assessment. Studying SAT essay samples, he found a greater than 90% correlation between essay length and score — he could predict scores "from across a room" based solely on length. His direct assessment: "What they are actually testing is the ability to bullshit on demand. There is no other writing situation in the world where people have to write on a topic that they've never thought about, on demand, in 25 minutes." Perelman's advocacy contributed to the College Board's decision to abolish the mandatory SAT Writing section.

Research from Schirner et al. (2023), cited in a University of Notre Dame review, found that students with higher intelligence scores solve easy problems faster but are consistently slower when solving difficult problems. Slower response times correlated with higher accuracy. As Andrew Browning summarized: "Instead of assessing student knowledge, timed tests evaluate how well a student can reason under stress and guess answers quickly."

That said, timed writing does develop real skills. North Avenue Education (2025) identifies four genuine benefits: quick thinking and rapid argument structuring, clearer writing through forced efficiency, stronger grammar and mechanics through pressure-driven clean composition, and mental stamina for sustained focused effort. These are not trivial — they map directly to professional and academic contexts. The key insight from the research is captured well: "Untimed writing supports reflection; timed writing supports performance."

2. Untimed essays build the thinking that matters most

Extended, untimed writing develops a fundamentally different — and according to most composition scholars, more valuable — set of cognitive skills. These include deep research integration, iterative revision, critical analysis, argument development across multiple sessions, and what Bereiter and Scardamalia (1987) call "knowledge-transforming" — where the act of writing itself generates new understanding, rather than simply recording existing knowledge.

This distinction between "knowledge-telling" (common in timed settings) and "knowledge-transforming" (requiring extended time) is central. As Richard Menary argued in Language Sciences (2007), writing functions as a form of thinking itself, not merely a record of thought that happened elsewhere. The act of organizing, restructuring, and refining ideas on a page is where cognitive development happens.

A peer-reviewed study by Quitadamo and Kurtz (2007) in CBE Life Sciences Education compared students completing writing-intensive lab treatments with those taking traditional quizzes. The writing group significantly improved their critical thinking skills, specifically in analysis and inference, while the non-writing group did not.

Nancy Sommers's landmark research (1980) demonstrated that "revision, as it is carried out by skilled writers, is not an end-of-the-line repair process, but is a constant process of 're-vision' or re-seeing that goes on while they are composing." Her work showed dramatic differences between student revisers (who focus on word-level changes) and experienced writers (who make global, meaning-level changes). Untimed writing provides the essential space for this higher-order cognitive work.

A meta-analysis by Graham and Hebert (2011) of 55 studies found that writing activities improved students' reading comprehension with a mean effect size of d = 0.50 — a substantial educational impact.
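To make that effect size concrete: under the standard normal-distribution reading of Cohen's d (a textbook interpretation, not a computation from the study's raw data), d = 0.50 means the average student in the writing group outscored roughly 69% of the comparison group:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Mean effect size reported by Graham and Hebert (2011)
d = 0.50

# Assuming normally distributed scores with equal variance, the
# treatment-group mean falls at this percentile of the control
# group's distribution:
percentile = normal_cdf(d)
print(f"{percentile:.1%}")  # → 69.1%
```

In other words, moving the typical student from the 50th to roughly the 69th percentile — a large shift by the standards of educational interventions.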

MSU Denver's Writing Center research summary reinforces this, noting that writing facilitates deeper engagement because it "requires focusing of attention, planning and forethought, organization of one's thinking, and reflective thought." As Olivia Henderson argued for UCSB's Interdisciplinary Humanities Center (2022), timed essays "reward the ability to write and think quickly rather than write and think well" — and "taking the time to think about the nuances of a text and craft a meaningful argument is a more difficult, but rewarding and critical, skill than writing quickly."

3. The research on measurement validity is genuinely split

The academic literature on whether timed or untimed essays better measure "real" writing ability is more nuanced than advocates on either side typically acknowledge. The findings break into two camps, with a meaningful methodological insight underneath.

Studies showing higher scores with more time:

Powers and Fowles (1996), studying ~300 prospective GRE test-takers, found performance was significantly better when given 60 minutes versus 40 minutes. Crucially, there was no interaction between test-taking style (fast vs. slow) and time limits — slow writers did not disproportionately benefit. Their conclusion: "When test takers are given more time, they write more and their scores are higher also." Additional studies by Biola (1982), Crone, Wright & Baron (1993), Khuder & Harwood (2015), and Younkin (1986) all found similar patterns.

Studies finding no significant difference:

Caudery (1990), in a widely cited study of L2 adolescent students, found "surprisingly, no significant difference in the scores" between timed and untimed essays. Crawford, Helwig & Tindal (2004), Ewing (1992), Hale (1992), Knoch & Elder (2010), and Kroll (1990) reached similar conclusions. Wu and Erlam (2016), studying 23 L2 learners, found that "the timed condition had a significant effect on the length and content quality, but did not impact on accuracy nor complexity of writing" — suggesting time limits primarily reduce elaboration and depth, not basic mechanics.

A 2025 study by Yu, Rosca, and Andrade, published in Assessing Writing, provided striking evidence for untimed assessment validity. Using the DAACS diagnostic system with 2,719 non-traditional students at Western Governors University, they found that untimed writing scores predict students' future performance in college writing courses — and "correlate more strongly with GPA than SAT Writing among traditional college students." Their conclusion: "A writing assessment does not have to be expensive, high-stakes, proctored, or timed to be useful."

The field's most respected voices have landed in different places. Edward M. White championed timed impromptu essays in his influential "An Apologia for the Timed Impromptu Essay Test" (CCC, 1995), arguing they offer standardized conditions, cost-effectiveness, and reasonable reliability. Peter Elbow and Pat Belanoff countered that portfolio assessment is "even more valid than timed essay tests because it focuses on multiple samples of student writing that have been composed in the authentic context of the classroom."

4. Equity and accessibility concerns are substantial and well-documented

The equity implications of timed writing are among the strongest arguments against relying on it as the primary assessment format. The research documents significant disparate impacts across multiple student populations.

Students with disabilities represent the most studied group. Extended time is the most frequently provided test accommodation — accounting for 55% of all accommodations requested and granted on standardized tests (GAO data). On the ACT specifically, 94% of students receiving accommodations get extra time, most commonly time-and-a-half. Learning disabilities account for ~45% of accommodations, with ADHD at nearly 25%. The "Differential-Boost Hypothesis" holds that while all students benefit from extra time, students with disabilities benefit disproportionately more. One key study found that under timed conditions, there was a significant difference between scores of LD versus non-LD students, but no significant differences under untimed conditions — suggesting time limits create artificial performance gaps.

However, a 2017 study found that while students with LD demonstrated increased writing fluency with extended time, "the organizational and thematic quality of their essays continued to be lower" — extra time helps but doesn't fully close the gap.

ESL/ELL students face compounding challenges. A 2005 IEEE study found timed writing assessment resulted in an "unproportionally high failure rate among ESL students." ETS guidelines explicitly recommend extended testing time as an accommodation for English Language Learners, noting the need to separate language processing speed from content knowledge assessment. Wu and Erlam's (2016) L2 research confirms that untimed conditions allow "planning, organizing, correcting, and editing that could lead to a greater number of words and better writing qualities."

Test anxiety affects an estimated 15–20% of students at high levels (Hill & Wigfield, 1984), with broader anxiety prevalence among college students pooled at 31% across a meta-analysis of 13 studies (Chang et al., 2021). A landmark study by Ramirez and Beilock (2011), published in Science, found that highly anxious students who wrote about their fears before tests received an average grade of B+, compared to B– for those who didn't — nearly a full grade point improvement. Onwuegbuzie and Seaman (1995) found unlimited time improved performance specifically for students with the highest levels of test anxiety.

Gender effects are notable: Montolio and Taberner (2021) found male students outperform females on high-stakes timed exams by 0.132 standard deviations, but as stakes decrease, the gap reverses in favor of females by 0.08 SD. Both genders scored better on lower-pressure exams, but females' scores improved significantly more (0.153 SD vs. 0.018 SD).

Racial bias in writing assessment compounds these issues, though it operates through grading rather than timing per se. Quinn (2020), in a randomized experiment with 1,549 teachers, found that on a vague evaluation scale, teachers rated writing lower when randomly signaled to have a Black author — but clear rubrics eliminated this bias entirely.

5. AI cheating has fundamentally changed the timed-vs-untimed calculus

The rise of generative AI has created a seismic shift in how educators approach writing assignments, with data showing that the take-home essay is becoming an endangered species.

The statistics on student AI use are staggering. The Digital Education Council's 2024 global survey found 86% of students use AI in their studies, with 54% using it weekly and 24% daily. In the UK, the HEPI/Kortext 2025 survey found 92% of undergraduates use AI tools in academic work (up from 66% in 2024), with student use of AI for assessments surging from 53% to 88% in a single year. Use of AI to generate text doubled from 30% to 64%. The College Board (2025) reports 69% of high school students used ChatGPT for schoolwork. In the UK, nearly 7,000 university students were formally caught cheating with AI in 2023–24 — triple the previous year — while a Taylor & Francis study using list experiment methodology estimates 22% of UK students cheated with GenAI that same year.

This has triggered a visible retreat to in-class writing. Casey Cuny, a 23-year English teacher and 2024 California Teacher of the Year, told Fortune/AP (September 2025): "The cheating is off the charts. It's the worst I've seen in my entire career. Anything you send home, you have to assume is being AI'ed." His students now do most writing in class with lockdown software monitoring screens. Kelly Gibson in Oregon: "I used to give a writing prompt and say, 'In two weeks, I want a five-paragraph essay.' These days, I can't do that. That's almost begging teenagers to cheat." Carnegie Mellon told faculty that a blanket AI ban "is not a viable policy" unless instructors change assessment methods, with many returning to in-class writing and lockdown browsers. Only 34% of teachers say their school has consistent AI-related cheating policies (RAND, 2025).

Meanwhile, AI detection tools have proven unreliable. Weber-Wulff et al. (2023) found even the best AI detection software is only correct about 80% of the time, with simple paraphrasing dropping detection rates dramatically. Stanford researchers (Liang et al., 2023) found these tools disproportionately flag non-native English speakers' writing as AI-generated. OpenAI shut down its own detection tool in 2023 due to low accuracy. A University of Reading study found 94% of AI-written exam submissions went completely undetected by human markers.

The emerging alternative is process-based assessment — monitoring how writing is produced, not just evaluating the final product. Crossley et al. (2024), presenting at the International Conference on Educational Data Mining, found that authentic writing shows more pauses before sentences, more insertions, and more deletions and revisions, while transcribed or AI-pasted writing is more linear with fewer revisions. Their key finding: "AI detection models that focus on the product alone will not be successful at identifying or combatting plagiarism, especially as AI evolves." Allen et al. (2016) found keystroke indices accounted for 76% of variance in essay quality. Jiang et al. (2024), in the Journal of Educational Measurement, and Deane (2021), in Assessing Writing, both confirmed that keystroke dynamics are effective in identifying nonauthentic texts.
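The intuition behind these process signals can be sketched in a few lines. This is a toy illustration of the kinds of features the keystroke studies describe — the event format, function names, and 2-second pause threshold are all assumptions for the sketch, not Crossley et al.'s actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    t: float        # timestamp in seconds
    action: str     # "insert" or "delete"
    char: str

def process_features(log: list[KeyEvent], pause_threshold: float = 2.0) -> dict:
    """Summarize process signals of the sort the keystroke literature
    highlights: long pauses (often before sentences), insertions, and
    deletions/revisions. Authentic composition tends to show many of
    all three; pasted or transcribed text arrives nearly linearly,
    with few pauses and almost no deletions."""
    long_pauses = sum(
        1 for a, b in zip(log, log[1:]) if b.t - a.t >= pause_threshold
    )
    insertions = sum(1 for e in log if e.action == "insert")
    deletions = sum(1 for e in log if e.action == "delete")
    return {
        "long_pauses": long_pauses,
        "insertions": insertions,
        "deletions": deletions,
        "revision_ratio": deletions / max(insertions, 1),
    }
```

A pasted essay would produce a log with near-zero long pauses and deletions — the "linear" signature the research identifies — whereas live composition leaves a visibly messier trail.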

Tricia Bertram Gallant, a 20+ year academic integrity specialist associated with ICAI, warns in Liberal Education against "returning to an overreliance on secure assessments" while recommending schools "proctor at least one assessment" and combine it with meaningful coursework that reduces cheating temptation.

6. Expert organizations recommend multiple measures, not a single format

Every major professional organization in writing education recommends a balanced, multi-measure approach — none endorses exclusive reliance on timed writing.

The CCCC (Conference on College Composition and Communication) Writing Assessment Position Statement (revised April 2022) establishes six foundational principles, emphasizing that "best assessment practice uses multiple measures to ensure successful formative and summative assessment," and that "admissions, placement, and proficiency-based assessment practices are high-stakes processes with a history of exclusion and academic gatekeeping." The statement explicitly calls for assessment methods that "provide multiple paths to success, accounting for a range of diverse language users."

The WPA Outcomes Statement for First-Year Composition (Version 3.0, 2014) defines five outcome areas — rhetorical knowledge, critical thinking, processes, conventions, and composing in electronic environments — and intentionally frames writing as process-based: drafting, revising, editing, not single-sitting production.

The Universal Design for Learning (UDL) framework, developed by CAST, recommends offering "different methods for students to demonstrate what they are learning" and specifically advises that if a learning goal is not specifically about writing speed, "then a timed written exam can be a barrier for many learners." UDL Guidelines 3.0 addresses "critical barriers rooted in biases and systems of exclusion."

In the AI era specifically, Dawson, Bearman, Dollinger, and Boud (2024) argued in Assessment & Evaluation in Higher Education that "assessment quality, not cheating prevention, should drive AI-era reforms." The CCCC/CWPA Joint COVID-19 Statement established a principle that remains relevant: "assessments and pedagogical choices should prioritize learning and students' successful demonstration of stated course objectives and learning outcomes, not time spent in an LMS or behavioral measures."

The emerging consensus among AI-era assessment researchers points toward a hybrid model: process-based assessment (watching the writing unfold) combined with some timed, proctored writing for verification, supplemented by formative assessments throughout the writing process. As the AWAC 2025 Statement recommends: "Use AI detection tools, if at all, only as part of a multi-pronged approach, supplemented by process-based assessment."

7. Standardized testing makes timed writing practice unavoidable

Regardless of the pedagogical arguments, the practical reality is that students face high-stakes timed writing on multiple standardized assessments, making some timed writing practice a pragmatic necessity.

AP English Language and Composition requires 3 essays in 2 hours 15 minutes (~40 minutes each), worth 55% of the total exam score. AP English Literature follows the same format. AP History exams (US, World, European) include a Document-Based Question at 60 minutes and a Long Essay at 40 minutes, totaling 40% of the exam score. As of 2025, AP essays are typed into the Bluebook digital application.

The GRE Analytical Writing section (post-September 2023 "Shorter GRE") requires one "Analyze an Issue" essay in 30 minutes, scored by both a human grader and an AI grader, with the human grader spending approximately 30 seconds per essay. The SAT Essay was discontinued in June 2021 after being made optional in 2016, though some states continued requiring it through SAT School Day programs.

At the K–12 level, Florida's B.E.S.T. Writing Assessment tests all students in grades 4–10 on computer-based timed writing. South Carolina's SC READY introduced Text-Dependent Writing items in 2024–25 across grades 3–8. Multiple states maintain timed writing components through Smarter Balanced and PARCC assessments aligned to Common Core standards.

The connection between practice and performance is well-established by test prep organizations: students who practice under timed conditions build planning, outlining, and time management skills transferable to these high-stakes contexts. The counter-argument, articulated by multiple researchers, is that over-emphasis on timed practice rewards formulaic structures (the 5-paragraph essay) over complex thinking — and as Perelman demonstrated, scoring systems often reward length over quality.


Conclusion: the path forward is process, not either-or

The research converges on several actionable insights that go beyond the false binary of timed versus untimed. First, timed and untimed essays measure genuinely different cognitive constructs — one measures performance under constraint, the other measures the depth of thinking that education exists to develop. Neither is a substitute for the other.

Second, the equity data is unambiguous: time pressure creates artificial performance gaps for students with disabilities, ESL students, and those with test anxiety — gaps that narrow or disappear when time constraints are removed. Any assessment system built primarily on timed writing will systematically disadvantage these populations.

Third, the AI cheating crisis has created legitimate pressure to verify student authorship, but the retreat to timed-only, handwritten-only, or proctored-only assessment sacrifices the extended cognitive work that makes writing pedagogically valuable. The most promising research points toward process-based assessment — monitoring the writing process through keystroke dynamics and behavioral analysis — as a way to preserve long-form writing's cognitive benefits while maintaining integrity verification.

Fourth, standardized testing realities (AP exams, state assessments) make timed writing practice a practical necessity — students cannot avoid it, so they need structured preparation for it.

The emerging expert consensus recommends a portfolio approach: timed writing for baseline verification and test preparation, untimed writing for developing deep thinking, and process-based monitoring as the bridge that makes both possible without sacrificing either. The question isn't whether to use timed or untimed essays — it's how to design a system where both serve their proper pedagogical purpose while keeping the writing authentic.

Written by The Checkmark Plagiarism Team.

Works Cited

  1. Allen, Laura K., et al. "Investigating Bored Readers: Using Keystroke and Self-Report Data to Predict Essay Quality." Proceedings of the International Conference on Educational Data Mining, 2016.
  2. Bereiter, Carl, and Marlene Scardamalia. The Psychology of Written Composition. Lawrence Erlbaum Associates, 1987.
  3. Bertram Gallant, Tricia. "How Do We Maintain Academic Integrity in the ChatGPT Era?" Liberal Education, Association of American Colleges and Universities, 2023. https://www.aacu.org/liberaleducation/articles/how-do-we-maintain-academic-integrity-in-the-chatgpt-era
  4. Biola, H. "Time Limits and the Performance of Adults in Timed Writing." 1982.
  5. Browning, Andrew. "Timed Tests and Their Effect on Student Performance." Fresh Writing, University of Notre Dame. https://freshwriting.nd.edu/essays/timed-tests-and-their-effect-on-student-performance/
  6. Caudery, Tim. "The Validity of Timed Essay Tests in the Assessment of Writing Skills." ELT Journal, vol. 44, no. 2, Oxford University Press, 1990, pp. 122–131.
  7. CCCC (Conference on College Composition and Communication). "Writing Assessment: A Position Statement." Revised April 2022. https://cccc.ncte.org/cccc/resources/positions/writingassessment
  8. Chang, Jiajin, et al. "Anxiety Disorders and English Academic Writing Performance Among College Students." Psychology Research and Behavior Management, 2021.
  9. College Board. "New Research: Majority of High School Students Use Generative AI for Schoolwork." 2025. https://newsroom.collegeboard.org/new-research-majority-high-school-students-use-generative-ai-schoolwork
  10. Crossley, Scott, et al. "Plagiarism Detection Using Keystroke Logs." Proceedings of the International Conference on Educational Data Mining, 2024. https://educationaldatamining.org/edm2024/proceedings/2024.EDM-short-papers.47/index.html
  11. CWPA and CCCC. "Joint Statement in Response to the COVID-19 Pandemic." Council of Writing Program Administrators. https://wpacouncil.org/aws/CWPA/pt/sd/news_article/309074/_PARENT/layout_details/false
  12. Dawson, Phillip, et al. "Assessment Quality, Not Cheating Prevention, Should Drive AI-Era Reforms." Assessment & Evaluation in Higher Education, 2024.
  13. Deane, Paul. "Remodeling Writers' Composing Processes: Implications for Writing Assessment." Assessing Writing, vol. 49, 2021.
  14. Digital Education Council. "2024 Global AI Student Survey." 2024.
  15. Elbow, Peter, and Pat Belanoff. "Portfolios as a Substitute for Proficiency Examinations." College Composition and Communication, vol. 37, no. 3, 1986, pp. 336–339.
  16. Flower, Linda, and John R. Hayes. "A Cognitive Process Theory of Writing." College Composition and Communication, vol. 32, no. 4, 1981, pp. 365–387.
  17. Florida Department of Education. "2024–25 B.E.S.T. Writing Fact Sheet." https://www.fldoe.org/core/fileparse.php/20102/urlt/2425BESTWritingFactSheet.pdf
  18. Graham, Steve, and Michael Hebert. "Writing to Read: A Meta-Analysis of the Impact of Writing and Writing Instruction on Reading." Harvard Educational Review, vol. 81, no. 4, 2011, pp. 710–744.
  19. HEPI/Kortext. "2025 Student Academic Experience Survey." Higher Education Policy Institute, 2025.
  20. Hill, Kennedy T., and Allan Wigfield. "Test Anxiety: A Major Educational Problem and What Can Be Done About It." The Elementary School Journal, vol. 85, no. 1, 1984, pp. 105–126.
  21. Jiang, Yue, et al. "Using Keystroke Behavior Patterns to Detect Nonauthentic Texts in Writing Assessments." Journal of Educational Measurement, vol. 61, no. 2, 2024.
  22. Khuder, Bashar, and Nigel Harwood. "L2 Writing in Test and Non-Test Situations." Journal of Writing Research, 2015.
  23. Knoch, Ute, and Catherine Elder. "Validity and Fairness of Timed Writing Assessment." Language Assessment Quarterly, 2010.
  24. Liang, Weixin, et al. "GPT Detectors Are Biased Against Non-Native English Writers." Stanford University, 2023.
  25. Menary, Richard. "Writing as Thinking." Language Sciences, vol. 29, no. 5, 2007, pp. 621–632.
  26. Montolio, Daniel, and Pere A. Taberner. "Gender Differences Under Test Pressure and Their Impact on Academic Performance." Journal of Economic Behavior & Organization, 2021.
  27. Onwuegbuzie, Anthony J., and Mark A. Seaman. "The Effect of Time Constraints and Statistics Test Anxiety on Test Performance in a Statistics Course." The Journal of Experimental Education, vol. 63, no. 2, 1995, pp. 115–124.
  28. Perelman, Les. As cited in "Testing, Testing." Salon, May 17, 2005. https://www.salon.com/2005/05/17/sat_5/
  29. Perl, Sondra. "The Composing Processes of Unskilled College Writers." Research in the Teaching of English, vol. 13, no. 4, 1979, pp. 317–336.
  30. Powers, Donald E., and Mary E. Fowles. "Effects of Applying Different Time Limits to a Proposed GRE Writing Test." ETS Research Report, 1996. https://www.ets.org/research/policy_research_reports/publications/report/1996/hxuq.html
  31. Quinn, David M. "Experimental Evidence on Teachers' Racial Bias in Student Evaluation: The Role of Grading Scales." Educational Evaluation and Policy Analysis, vol. 42, no. 3, 2020, pp. 375–392.
  32. Quitadamo, Ian J., and Martha J. Kurtz. "Learning to Improve: Using Writing to Increase Critical Thinking Performance in General Education Biology." CBE Life Sciences Education, vol. 6, no. 2, 2007, pp. 140–154.
  33. Ramirez, Gerardo, and Sian L. Beilock. "Writing About Testing Worries Boosts Exam Performance in the Classroom." Science, vol. 331, no. 6014, 2011, pp. 211–213.
  34. RAND Corporation. "AI Use in Schools Is Quickly Increasing but Guidance Lags Behind." 2025. https://www.rand.org/pubs/research_reports/RRA4180-1.html
  35. Schirner, Sarah, et al. "Intelligence and Response Speed in Timed Assessments." 2023. As cited in Browning, "Timed Tests and Their Effect on Student Performance."
  36. Sommers, Nancy. "Revision Strategies of Student Writers and Experienced Adult Writers." College Composition and Communication, vol. 31, no. 4, 1980, pp. 378–388.
  37. South Carolina Department of Education. "Writing Component: SC READY." https://ed.sc.gov/tests/middle/sc-ready/writing-component/
  38. Taylor & Francis. "How Vulnerable Are UK Universities to Cheating with New GenAI Tools?" Assessment & Evaluation in Higher Education, 2025.
  39. University of Reading. "94% of AI-Written Exam Submissions Went Undetected." 2023.
  40. CAST. "Universal Design for Learning Guidelines, Version 3.0." 2024. https://udlguidelines.cast.org/
  41. Weber-Wulff, Debora, et al. "Testing of Detection Tools for AI-Generated Text." International Journal for Educational Integrity, 2023.
  42. White, Edward M. "An Apologia for the Timed Impromptu Essay Test." College Composition and Communication, vol. 46, no. 1, 1995, pp. 30–45.
  43. WPA (Council of Writing Program Administrators). "WPA Outcomes Statement for First-Year Composition, Version 3.0." Approved July 17, 2014. https://wpacouncil.org/aws/CWPA/pt/sd/news_article/243055/_PARENT/layout_details/false
  44. Wu, Xueyan, and Rosemary Erlam. "The Effect of Timing on the Quantity and Quality of Test-Takers' Writing." Papers in Language Testing and Assessment, 2016.
  45. Yu, Jing, Iuliana Rosca, and Heidi L. Andrade. "Predictive Validity Evidence for a No-Stakes, Untimed, Machine-Scored Diagnostic Writing Assessment." Assessing Writing, vol. 63, 2025.