"Is the text clear?" is one of the hardest questions to answer honestly. When we write a message, we already know what we're trying to say, and when we reread it, we understand it perfectly. But the real test is whether someone who doesn't know what we're trying to say can understand it. The cloze test is the most reliable method for measuring that, and in 2026 it's becoming a standard tool in every serious UX Writer's kit.
This article covers what the cloze test is, how to run one step by step, how to interpret the results, and how to integrate it into your UX Writing workflow whenever you need to validate important copy.
What you'll learn:
- What the cloze test is and its scientific origins
- How to run one in 5 steps
- How to interpret the results (the 60% threshold)
- The UX Writing use cases where it shines
- Tools for running it online
- The limits of the method
What the cloze test is
The cloze test (from the word closure) is a text comprehensibility test where you remove every Nth word (typically every 5th or 7th) and ask the reader to guess it. The percentage of words guessed correctly is a measure of how comprehensible the text is for that reader.
It was developed in 1953 by Wilson L. Taylor, an American researcher, as a method for evaluating the readability of school texts. It rests on the psychological insight that a clearly comprehensible text lets the reader "predict" missing words from context, while an ambiguous or poorly structured text leaves the reader with nothing to hold on to.
Over the years, the cloze test has been used in:
- Educational research: to evaluate the difficulty of school texts
- Applied linguistics: to measure second-language comprehension
- System design: to validate labels, messages, and guided content
In modern UX Writing it's made a comeback because it's cheap, fast, and measurable: the three traits product teams love.
How to run a cloze test
Step 1: prepare the text
Take the copy you want to test. Replace every 5th (or 7th) word with a blank space or an underscore. Keep the first and last sentences intact to give minimum context.
Example of original text (onboarding copy):
"Hi! Welcome to ProductivityApp. Here you can organize your projects into lists, set deadlines, and invite collaborators. Start by creating your first list in the menu on the top left."
Same text with every 5th word removed:
"Hi! Welcome to ProductivityApp. Here you _____ organize your projects into _____, set deadlines, and invite _____. Start by creating your _____ list in the menu _____ the top left."
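The mechanical part of step 1 is easy to automate. Here is a minimal sketch; the function name `make_cloze` is an illustrative assumption, and unlike the recommendation above, this simple version does not spare the first and last sentences, so you'd trim the output by hand:

```python
def make_cloze(text, n=5, blank="_____"):
    """Replace every n-th word with a blank, keeping trailing punctuation."""
    words = text.split()
    out = []
    for i, word in enumerate(words, start=1):
        if i % n == 0:
            # Preserve trailing punctuation so the sentence still reads naturally.
            stripped = word.rstrip(".,;:!?")
            trailing = word[len(stripped):]
            out.append(blank + trailing)
        else:
            out.append(word)
    return " ".join(out)

copy = ("Hi! Welcome to ProductivityApp. Here you can organize your projects "
        "into lists, set deadlines, and invite collaborators.")
print(make_cloze(copy, n=5))
```

Run it on your real copy, then restore the opening and closing sentences before sending the test out.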
Step 2: pick participants
You need 25–50 participants for meaningful results. Participants should belong to the product's target segment; they should not be designers, copywriters, or people already familiar with the context.
Critical rule: participants must not have seen the original text before. If they've already read the copy, the test is contaminated.
Step 3: administer the test
The participant sees the text with blanks and has to fill each one with the word they think is most likely, using context. There's no strict time limit, but 5–10 minutes is typically enough for a 150–200 word passage.
The participant doesn't know the original text, so they genuinely have to "guess" the missing word based on what they can infer from the surrounding words.
Step 4: score the responses
For each missing word, a response is considered correct if:
- Strict criterion: the word entered is exactly the one in the original
- Semantic criterion: the word entered is an acceptable synonym or preserves the meaning
The strict criterion is more scientific but less flexible; the semantic criterion is more practical for UX Writing work. Most modern tests use the semantic criterion.
Step 5: calculate the score
The cloze score is the percentage of correct responses over the total number of blanks:
Cloze Score = (correct responses / total blanks) × 100
For example: 12 blanks and 9 correct responses give a Cloze Score of 75%.
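In code, the formula is a one-liner. A sketch (the function name is an assumption) that just guards against an empty test:

```python
def cloze_score(correct, total):
    """Percentage of blanks filled correctly (0-100)."""
    if total <= 0:
        raise ValueError("the test needs at least one blank")
    return correct / total * 100

print(cloze_score(9, 12))  # 75.0
```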
How to interpret the results
Classic literature from Taylor and later research has established three interpretation thresholds:
- Cloze Score above 60%: the text is independently comprehensible. Readers can read and understand it without outside support.
- Cloze Score between 40% and 60%: the text is comprehensible with effort. Readers can get through it but it takes work; for interface copy it's likely many users will bail out or misunderstand.
- Cloze Score below 40%: the text is incomprehensible for the tested target. Rewrite it.
For modern UX Writing, the recommended operating threshold is 60%. Below that, the text isn't fit to live in an interface aimed at a general audience.
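The three bands above translate directly into a lookup. One caveat in this sketch: how to classify a score of exactly 40 or 60 is an assumption on my part, since the literature describes the bands rather than the edge cases:

```python
def interpret_cloze(score):
    """Map a cloze score (0-100) to Taylor's three comprehension bands.

    Treating exactly 60 as the top band and exactly 40 as the middle
    band is an assumption, not something the thresholds specify.
    """
    if score >= 60:
        return "independently comprehensible"
    if score >= 40:
        return "comprehensible with effort"
    return "incomprehensible for the tested audience"

print(interpret_cloze(75))  # independently comprehensible
```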
Qualitative interpretation
The numeric score isn't everything. It's equally important to analyze which words are missed most often. If 90% of participants get the same word wrong, that word is the problem, and it probably needs to be reworded.
Example: in a test on banking copy, the word "transfer" is guessed by 95% of participants, but "ACH" by only 20%. Conclusion: the sentence shouldn't assume the user knows the term "ACH."
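Finding the problem words means scoring each blank separately rather than just totalling the test. A sketch under the strict (exact-match) criterion, with hypothetical data echoing the banking example; the semantic criterion would need human judgment or a synonym list, which this does not attempt:

```python
def per_blank_accuracy(responses, originals):
    """responses: one list of answers per participant, aligned with
    `originals`, the words that were removed. Returns the fraction of
    participants who matched each blank exactly (case-insensitive)."""
    hits = [0] * len(originals)
    for answer_sheet in responses:
        for i, (given, expected) in enumerate(zip(answer_sheet, originals)):
            if given.strip().lower() == expected.lower():
                hits[i] += 1
    return [h / len(responses) for h in hits]

# Hypothetical answer sheets: blank 1 was "transfer", blank 2 was "ACH".
sheets = [["transfer", "wire"],
          ["transfer", "ACH"],
          ["transfer", "routing"],
          ["transfers", "ACH"]]
print(per_blank_accuracy(sheets, ["transfer", "ACH"]))  # [0.75, 0.5]
```

A blank with a much lower accuracy than its neighbors is exactly the kind of signal the "ACH" example describes.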
When to use the cloze test in UX Writing
The cloze test isn't right for every situation. It works well for:
1. Validating onboarding copy
Onboarding is critical because it's the product's first impression. A cloze test on onboarding copy reveals whether new users actually understand what you're trying to explain.
2. Testing error messages
Error messages are often drafted by engineers in "technical speak" and then patched by writers. A cloze test verifies whether the final version is actually comprehensible to an average user.
3. Evaluating help text and tooltips
Guided content where clarity is paramount. A cloze test on critical tooltips can prevent heavy support load after launch.
4. Verifying localizations
When you translate a product into another language, a cloze test on the translated version verifies whether the localization is actually clear โ not just grammatically correct, but comprehensible in the target cultural context.
5. Comparing variants
A comparative cloze test between two versions of a piece of copy gives you an objective measure of which one communicates better, which is more reliable than simply asking participants which version they prefer.
Tools to run cloze tests online
In 2026 there's no single dominant tool specifically for cloze tests, but you can build them with general-purpose tools.
Lyssna / UserTesting
User research platforms that support a range of tests, including text comprehension tests. They don't have a dedicated cloze test template, but you can adapt them. UserTesting has the biggest panel in the US market.
Typeform or Google Forms
For simple tests, a form with short-answer questions, one per blank, works fine. You can recruit participants via general panels (Prolific, Respondent.io, UserInterviews) or through your own network.
PlaybookUX and Userbrain
Unmoderated testing platforms that let you include written tasks where the user types an answer. A cloze test can be embedded as part of a broader test.
Cloze test generators (free tools)
There are free generators that automatically turn a text into a cloze version. Searching "cloze test generator" turns up working options, even if they're often unpolished.
The limits of the cloze test
The cloze test is powerful but has clear limits. Three things it doesn't measure:
1. Understanding of context
The cloze test measures the ability to infer missing words in a text, not understanding of the global meaning of the message. A user can score high on the cloze and still not know what to do after reading.
2. The emotional tone of the text
The cloze test is agnostic about tone. It won't tell you whether the text comes across as reassuring, cold, or ridiculous; it only tells you whether it's comprehensible. Assessing tone requires other methods (attribution surveys, preference tests).
3. Layout usability
Comprehensible copy presented poorly can still get ignored. The cloze test measures text in isolation, not the integrated experience with visual design, spacing, and hierarchy.
For these reasons the cloze test is a tool in the kit, not the tool. Combine it with usability tests and preference tests for a full picture.
Frequently asked questions
How many words can I remove in a cloze test?
The standard is every 5th word (about 20% of the text). If you remove more (every 3rd or 4th), the test becomes too hard even for clear texts. If you remove fewer (every 7th or 10th), the test loses sensitivity.
How many participants do I need?
25–50 for indicative results; 100+ for important decisions or statistically robust comparisons between variants.
Does the cloze test work across languages?
Yes, perfectly. It was originally developed in English but works in any language. Languages with rich morphology sometimes benefit from extra contextual cues.
Can I run a cloze test on a headline or a short phrase?
Not ideal: the cloze test works best on texts of at least 100โ150 words, where surrounding words give enough context to predict the missing ones. For short phrases, a preference test is a better fit.
Does the cloze test replace usability testing?
No, it's complementary. The cloze test measures whether a text is comprehensible in isolation; usability testing checks whether the combination of text plus design works in a real task. The two go together.
How much does a cloze test cost?
With an external panel: $250–$600 for 30 participants in the US/UK market, including incentives and recruiting. Without a panel (using your own network): just prep and analysis time, essentially free. It's one of the cheapest tests around.
Next steps
The cloze test is a specific but highly effective tool for validating important copy. To go deeper:
- Read the principles of UX Writing that the cloze test helps you validate
- Dig into how to test interface content more broadly
- Study the other user research methods to build a complete kit
In CorsoUX's UX Writing course we teach the cloze test as part of the daily copy-validation workflow, with hands-on exercises on real projects.