Lorem nope-sum: The benefits of using realistic content in UX research
It’s common for UX researchers to use placeholder content when wireframing or prototyping to save time. Placeholder content has many valuable uses, but it should not go in front of users. In our experience conducting UX research studies, placeholder content can trip up users and give you poorer results. When testing new designs with users, realistic content is key to conveying the purpose and use of your designs.
What is placeholder content?
Placeholder content is any kind of unfinalized or fake content that you use instead of actual content when designing an application, interface, or website.
You’ve probably seen this around. For many years, Microsoft Word templates came pre-populated with a slew of Latin jargon beginning with “lorem ipsum….” There are plenty of other garden varieties of placeholder content, and a quick Google search will give you almost endless options, from classic Latin to hipster jargon.
Placeholder content is basically meaningless, and that’s the problem with using it in UX research. We want our users on task, not distracted by our content (more on this in our case study). If our users are distracted by trying to understand blocks of filler Latin text (which has happened in usability testing), we’ve surely lost the point of our testing.
Why do people use placeholder content?
Adding real, polished, impactful content is hard. It takes time and collaboration with other departments to make realistic content, and sometimes that feels impossible before your next round of user research.
Prototyping and wireframing require input not only from design, but also from systems architecture, engineering, project management, stakeholders, subject matter experts, and a host of other teams. In the space of a sprint, these teams are scrambling to get stuff done. When it comes down to it, work is usually considered complete if the prototype works, looks good, and meets functional requirements.
A prototype with filler copy hits all of those requirements. Besides, complete and realistic copy means additional long hours of work between departments, and it can be hard to justify changing a few words when the prototype is right there, working, and so shiny. Because of this, most teams gloss over the copy and rush to testing. It is, after all, a minimum viable product, right?
Well, yes and no.
Case study from Libby Kaufer: Redesigning a legacy interface
I worked on a project that involved redesigning an old, legacy system. The information architecture did not match any version of a user’s workflow and was generally disorganized, making it difficult to figure out where to complete key tasks. The labels didn’t make sense and used confusing language, creating uncertainty when users tried to confirm important status information or find legal requirements. The Help Desk team was often flooded with support tickets during data submission periods from users who couldn’t understand submission instructions or figure out what their filing status was. In short, the application seemed designed for robots, not people.
My team of UX researchers and designers set out to fix this system. Based on discovery research, we developed wireframes for a totally revamped system, with new designs and information architecture to test during usability testing.
In our first rounds of testing, we were totally focused on the design. If we had time to come up with content, we did, but we didn’t consider it nearly as important as the new designs. That’s what lorem ipsum is for, right? We figured that if something confused a participant, we’d just explain it was placeholder content and move on with the testing.
Well, we learned pretty quickly that trying to save time on the content wound up costing us a lot of time in the usability sessions. Participants noticed strange or incorrect content immediately. “This isn’t the right date for this submission program, so I’m not sure what this timeline means.” They couldn’t complete the tasks because the labels or descriptions didn’t make sense to them, recreating the very problem we’d been trying to solve.
During that first round, we would tweak specific pieces of copy that were wildly incorrect (for example, submission deadlines or requirements) if we had time in between user sessions, but the placeholder content still hurt our participants’ success and gave them a less-than-positive perception of our redesigns. We realized we couldn’t save writing content for later. In fact, we were missing out on a perfect opportunity to test our content with users during these usability sessions.
Leading up to our second round of usability testing, we scheduled time with subject matter experts to review all of our prototype copy for correctness. We would still need to rework some of the content later based on finalized functionality and legal requirements, but by testing it beforehand, we gained invaluable insights into what kind of copy worked well and what didn’t. By the time we launched the final redesigns, the website copy was clear, direct, and correct. Users responded enthusiastically to the new website, and the Help Desk no longer needed to spend time helping users figure out how it worked.
Crafting realistic content before usability testing is worth the effort
Content is how users make sense of a new system, especially if it’s a redesign of something they already use. It allows people to orient themselves and map the new designs to their current understanding of the system.
The damage that poorly chosen placeholder copy can do is clear. At the very least, it distracts your users, who end up trying to parse Latin on the spot or puzzling over filler text when they should be focusing on the design. At worst, it can lead your users down the wrong path, affecting the results of your testing and even the functionality of your prototype. Taking the extra time to craft realistic content will give you more robust feedback and a chance to road test new content.