This blog post summarizes the research paper “Understanding Socio-technical Factors Configuring AI Non-Use in UX Work Practices” by Inha Cha and Richmond Wong, published at the 2025 ACM CHI Conference on Human Factors in Computing Systems. The paper can be freely downloaded at: https://doi.org/10.1145/3706598.3713140
When Everyone’s Using AI… Why Are Some Choosing Not To?
AI tools are everywhere in UX: generating personas, writing copy, analyzing research notes. With every new plugin and model, the message is clear — adopt AI, or risk falling behind.
But when we interviewed 15 experienced UX practitioners across industries like finance, healthcare, design consulting, and media, a different story emerged. These professionals weren’t AI skeptics or Luddites. They had tried AI. Some even liked it. And yet, they often chose not to use it.
This blog post explores what we learned from them — and why AI non-use is not about rejecting technology, but about negotiating its limits, values, and contexts.
The Core Insight: Non-Use Is Not Just Absence, It’s a Practice
We often assume that if a tool exists, people will (and should) use it. But that’s not always true — especially in professional settings. Non-use isn’t just about lack of access or awareness. It’s a deliberate, context-specific, and often strategic choice.
So, why do UX professionals hold back?
We found three major sets of factors shaping AI non-use:
1. The Nature of UX Work: Human-Centered, Contextual, Collaborative
At first glance, AI seems like a perfect match for UX. It promises efficiency, faster ideation, even automated user research. But many UX practitioners we spoke with felt that the nature of their work fundamentally resists automation. UX isn’t just about outputs — it’s about how those outputs are created.

One common theme was the structured and system-constrained nature of UX workflows. Teams follow design systems, accessibility guidelines, and brand specifications that AI tools often can’t adapt to. As one designer told us, “We can’t just plug AI-generated components into our banking app — everything has to align with strict standards.”
Beyond systems, UX work is deeply collaborative. Designers work closely with researchers, engineers, PMs, and stakeholders — often negotiating conflicting needs and goals. Several participants noted that AI tools aren’t built to navigate these social dynamics. You can’t prompt Midjourney into consensus. This is especially true in research. While AI can summarize interview transcripts, it can’t replicate the sensemaking that happens when a designer sits with ambiguity, watches for contradictions, and reflects with a team. The process — not just the artifact — holds value.
And then there’s the challenge of interpreting human behavior. Users are irrational, emotional, and inconsistent — and making sense of that complexity is central to UX. Many participants said that AI-generated insights felt shallow or “flat.” As one researcher put it, “It clustered the obvious patterns, but missed the weird, messy stuff — the stuff that actually matters.”
2. Professional Values: Ethics, Privacy, and Control
Even when AI tools are technically capable, designers often make value-driven decisions not to use them. One major reason? Concern for the people they design for. Multiple participants described how AI-generated outputs — especially images — lacked diversity or reproduced harmful stereotypes. For example, when a designer asked a model to generate images of Black women for a healthcare persona, the tool returned unnatural, unrealistic hair textures. “It just doesn’t get it,” she said.
Privacy was another big concern. Several participants flat-out refused to input sensitive data into AI systems, especially those powered by black-box models or hosted in the cloud. Even if an AI tool could streamline research or speed up writing, the risk of leaking user data — or internal design decisions — was too great. “[Participants] signed consent forms,” one designer said. “We can’t just upload that interview to ChatGPT.”
These decisions were often framed as part of their professional responsibility. Designers saw themselves as advocates for users — especially marginalized ones — and gatekeepers for ethical practice. At the same time, many talked about the importance of maintaining agency over their work. They didn’t want AI to get in the way of their judgment. In fact, some felt that working manually gave them more control, even if it took more time. “It’s just easier — and better — when I write it myself,” one said.
3. Organizational and Policy Barriers: More Than Just Tools
Even when designers were personally open to using AI, organizational and policy factors often got in the way. In many cases, their companies had strict rules about data security, cloud tools, or regulatory compliance. If you work in healthcare or finance, chances are you’ve heard of HIPAA or GDPR — and chances are those regulations limit what AI tools you’re allowed to use. Several participants couldn’t use ChatGPT at all because their companies prohibited uploading any internal information to cloud-based tools.
Sometimes, the friction came from client contracts. In agency settings, clients would explicitly ask teams not to use AI — especially for sensitive work like customer interviews or brand-facing assets. In other cases, participants were caught in bureaucratic delays. “We wanted to use AI to help with design documentation,” one said, “but it required manager approval, legal sign-off, and a data compliance check. We gave up and just did it ourselves.”
Interestingly, leadership and middle management often pushed for AI adoption more than individual designers. Executives were excited about cost-cutting, faster turnarounds, and industry trends. But on the ground, designers were skeptical. They’d seen hype cycles before. “Every few years, a new tool comes in promising to change everything,” one said. “And then we end up going back to the basics.”
In fast-paced environments, AI tools often require more setup, learning, and troubleshooting than they’re worth. When deadlines are tight and expectations are high, the path of least resistance isn’t always automation — it’s the tools and processes designers already know and trust.
Final Thoughts: Valuing the Decision Not to Use AI
Not using AI doesn’t mean being anti-tech. Sometimes, it means protecting users. Sometimes, it means doing better work. Sometimes, it means pushing back against hype to do what’s right for your context.
Our research suggests we need to rethink what responsible AI adoption looks like in professional settings. That means:
- Designing tools that support both use and non-use. Let people turn off AI features or choose when they want to work manually (see the sketch after this list).
- Acknowledging that AI doesn’t fit every task, process, or policy environment.
- Considering policy, legal, and ethical frameworks as core parts of UX — not afterthoughts.
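To make the first of these recommendations a bit more concrete, here is a minimal, hypothetical sketch (not from the paper) of what a settings model that treats non-use as a first-class state could look like: each AI feature is opt-in, and an organization-level policy can disable AI entirely. All type and function names here are illustrative assumptions, not any real tool’s API.

```typescript
// Hypothetical sketch: per-feature AI settings that default to "off" and
// respect an organization-level policy, so non-use is a first-class state.

type AIFeature = "summarizeResearch" | "generateCopy" | "suggestComponents";

interface AIPreferences {
  // An organization policy can disable AI outright (e.g., for compliance reasons).
  orgPolicyAllowsAI: boolean;
  // Each feature is opt-in; nothing is enabled by default.
  enabledFeatures: Partial<Record<AIFeature, boolean>>;
}

function isAIEnabled(prefs: AIPreferences, feature: AIFeature): boolean {
  if (!prefs.orgPolicyAllowsAI) return false;      // policy overrides personal choice
  return prefs.enabledFeatures[feature] === true;  // otherwise, explicit opt-in only
}

// Example: a designer who has opted in to research summaries only.
const prefs: AIPreferences = {
  orgPolicyAllowsAI: true,
  enabledFeatures: { summarizeResearch: true },
};

console.log(isAIEnabled(prefs, "summarizeResearch")); // true
console.log(isAIEnabled(prefs, "generateCopy"));      // false: manual work stays the default
```

The point of this design choice is that manual work remains the default, so choosing not to use AI costs the practitioner no extra effort or justification.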
As AI tools continue to evolve, we need to design with the full picture in mind — not just for adoption, but for thoughtful, responsible, and context-aware use or non-use.