This post is based on a talk given about the paper Towards Creating Infrastructures for Values and Ethics Work in the Production of Software Technologies by Richmond Y. Wong, published in Proceedings of the sixth decennial Aarhus conference: Computing X Crisis. [Read the full paper].
Computing appears to be in crisis when it comes to the social values and ethical effects of computing systems. Every day there seems to be a new headline highlighting how computing systems create new harms or violate rights. Computing research, particularly the field of human-computer interaction (HCI), has created many design tools to take social values and ethics into account during design. Yet these harms keep happening.
In this post I argue that this tool-oriented approach does not fit the actual problems in software production, which are not just technical, but social, organizational, and political. I suggest the lens of infrastructures as one way to bridge this gap: to (1) find more places to act, and (2) identify more modes through which to act.
In particular, I focus on how we can support values and ethics work, a term I use to refer to the everyday practices carried out in the name of attending to social values and ethics, often by technology designers and engineers. (I also tend to focus on values and ethics work in large tech companies in the U.S., though this work occurs globally across different types of organizations and institutions.)
This work is in part inspired by an experience I had while interviewing user experience (UX) professionals who were attempting to address ethical issues in their work.
In late 2018, I was sitting in a coffee shop in downtown San Francisco. I was meeting Francine, who works on UX at a company that creates enterprise software. Francine described an incident in which she and other co-workers learned that a client organization of their company was involved in perpetuating harms against migrant families. That client reached out to ask for help improving their installation of the software made by Francine’s company. Francine and a co-worker had strong feelings against helping this client. They drafted a letter that they planned to share, noting how this violated their personal values. While Francine felt her immediate manager was supportive, the issue was raised to upper management. Francine recounted the response from a chief officer of the company as basically “do your job,” and that not working for this client would “open a can of worms.” Management ultimately hired an outside contractor to help this particular client. While Francine was glad she didn’t personally have to help this client, she was frustrated that this contracting-out solution did not address the underlying harms and concerns.
I think one of the striking things here is that Francine did the supposed “right things” that are implicitly expected in HCI’s dominant approaches to addressing values and ethics. She knew about values-centered design approaches in HCI from her college program. She identified an ethical issue during technology production, and she brought it up—and nothing really changed. And this was in 2018, when US tech companies were generally more receptive to employee concerns than they are now. This points to the idea that tools that help individuals identify ethical issues during design aren’t enough to actually create change.
Part of the challenge is that HCI’s values and ethics tools tend to assume that a designer or engineer can use a tool that helps them: identify the social values at play with a particular technology and context; identify the potential ethical harms; and then that designer is imagined to have the power to speak up and make an alternate, “better” product design decision. (And I’ve published tool papers that follow these assumptions too, so this is also rooted in self-reflection on how my own theory of change has evolved over time.)

Icons by Corner Pixel from the Noun Project; Photo by Kier on Unsplash
Decision-making in reality is messier, more complex, and more entangled. HCI experts, or individual designers and engineers, are usually not decision-makers. Ethics design tools often miss this broader reality, and do not consider the organizational, social, economic, and governance structures that affect how design is done in the first place.
So the challenge here is: what can computing and HCI research do to better support ethics work, if this dominant tool-based approach isn’t working? Borrowing language from Katie Shilton, what levers and leverage points can we use in these organizational contexts, to create spaces within this tangled landscape for ethics work?
I draw on concepts of “infrastructure” as developed by Star and Bowker, and by other researchers across science & technology studies, media studies, and adjacent fields. Infrastructures are inherently sociotechnical: they contain specific technologies and are enabled by social institutions through ongoing activities such as standards-setting, maintenance, and repair. With this lens, the social, organizational, political, and economic processes that are often in the background when designing values and ethics tools are foregrounded as sites of investigation and potential intervention.

(Icons by Garoks, Romaldon, Dwi ridwanto, Faridah, zahrotul fuadah, HideMaru, Reion from the Noun Project).
In the paper, I suggest 6 potential tactics that researchers, designers, and ethics advocates can use, drawn from insights from infrastructure studies and from existing research on ethics work in organizational practice. But in this post, I’m going to dive into just 2 of these in more detail.
Tactic 1: Acknowledging the Positionalities of Values and Ethics Advocates
One insight from infrastructure studies is that infrastructures are social and are experienced from multiple subject positions. Lee et al. identify the importance of “human infrastructure,” calling attention to social groups and collaborative practices as a key part of creating and maintaining infrastructure. Burrell discusses how infrastructures are experienced in unequal ways.
A tactic that can make use of this insight is to acknowledge the positionalities of those workers advocating for values and ethics. There could be several ways to enact this—each acting as a potential intervention point that can create openings in this messy, tangled space to conduct ethics work. Many ethics tools don’t consider questions like: Is the intended user in a position, and do they have the power, to advocate for the types of changes that the tool suggests? Is it safe for them to do so? We’ve seen lots of instances in the US where tech workers who are seen as speaking out “too much” face disciplinary action or termination. But most ethics tools don’t talk about the fear, stress, or retaliation one might face when trying to navigate speaking up about values and ethics. What would it mean to create tools that take these things into consideration?
This might mean finding ways to limit the visibility of ethics work in cases when it might be too dangerous to overtly and explicitly engage in these practices. It might also mean tactically engaging with corporate discursive practices: rather than using the language of ethics, values, and justice, a tool might frame these more critical projects through the language of KPIs, quarterly reports, and other accepted business metrics.
Research on ethics work by Rattay et al., Su et al., and Popova et al. highlights the intense emotional labor and felt experience of advocating for values and ethics. Tools might also support forms of mental and physical care, self-care, and community-based care that better address the emotional dimensions of doing this work. Some of these moves could be done through tool building, while others could be done through other modes of engagement like community-building. I don’t want to throw out tool building! But if we do want to build tools for values and ethics, maybe we need to re-think what practices the tools are supporting—and focusing on the collaborative and social practices involved in ethics work might require us to design ethics tools in a different way.
Tactic 2: Create Standards, Policies, and Law to Enable Values and Ethics Work
Another insight from infrastructure studies is that an infrastructural lens calls attention to how standards, law, and policy shape work practices. So this tactic is to “create standards, policies, and law to enable values and ethics work.” Changes here can help shift the dynamics “on the ground” in organizational contexts where ethics work takes place.
One way to enact this tactic is to consider how the language of standards and law can codify a particular expertise when addressing an issue. For instance, does a particular social value become viewed as a legal compliance issue, an engineering issue, or a user-centered design issue? I love the GDPR, but I would argue that it has largely moved responsibility for privacy out of the hands of designers and HCI experts and into the hands of legal compliance teams. Empowering workers with social and cultural expertise to do ethics work requires having advocates during the policy creation process to shape these rules in ways that support these workers’ expertise and give them agency and voice to take part in decision-making.
For example, the U.S. National Institute of Standards and Technology released their voluntary AI Risk Management Framework in 2023. Notably, this framework specifically calls for including human factors experts, who can contribute human-centered design practices and understandings of “context-specific norms” and “values in system design,” during the production process.
Human Factors tasks and activities are found throughout the dimensions of the AI life-cycle. They include human-centered design practices and methodologies, promoting the active involvement of end users and other interested parties and relevant AI actors, incorporating context-specific norms and values in system design, evaluating and adapting end user experiences, and broad integration of humans and human dynamics in all phases of the AI lifecycle. (NIST AI Risk Management Framework, Page 36)
This explicit call-out to human factors expertise is really rare! Most standards and frameworks highlight legal or computer engineering expertise; human factors, user experience, and social expertise are rarely mentioned. For organizations adopting this AI Risk Management Framework (and there are many!), this clause provides a potential opening for user experience and HCI experts to assert their expertise and legitimacy when raising concerns about AI systems. Paudel et al. describe how infrastructures affect the distribution of resources—we might think about these policy language choices as creating and legitimizing resources for particular forms of ethics work.
At other times, laws may indirectly support values work by creating new processes or requirements to “piggyback” on. Deng et al. observe that privacy impact assessments, which are required by data protection laws, created new opportunities for tech workers to also advocate for fairness. How could standards and laws be intentionally designed with this kind of ethics piggybacking explicitly in mind? Standards and laws might be built with metaphorical “handholds” that can support tech workers attempting to do ethics work on the ground.
Looking at policy can also open new points in this landscape for intervention that go beyond product design. If we think about AI and software as product supply chains (as Nafus and Widder do), we might examine how human rights laws apply. Or, as I’ve explored in the past, we might look to financial investment and securities regulations to force companies to disclose practices or to raise their costs, which might open up new spaces for ethics work on the ground.
Reflections

That was a brief highlight of the paper, and I encourage you to look at the full paper for more detail on the other 4 tactics. But for now I want to end with a few reflections on how the lens of “infrastructures” can help open up new possibilities for doing ethics work in tech.
First, an infrastructural perspective provides a lens for analysis that helps us open up new points of intervention beyond the product design process—helping identify potential leverage points in this more complex, tangled landscape. The infrastructures of software production become a landscape of potential leverage points: organizational processes, software supply chains, financial investment processes, tech workers’ social networks, standards, and laws.
Second, learning to operate in this broader landscape may require different “modes of action.” An infrastructural lens can help us think and act in the world at multiple moments and scales. This means that HCI can do more than design; it can operate through other modes of action. Sometimes that might mean policy advocacy, or creating governance mechanisms, or building social networks, or supporting workers’ mental and physical health. And it means joining with people already doing this work.
A final thought is that these leverage points and modes of action are interconnected, and they can better support each other if we consider multiple modes of action simultaneously. Action that leverages change in one part of the landscape can open up or foreclose, untie or tie up, opportunities in another part. How do we think about the law, the organizational processes that implement that law, and the embodied experience of doing ethics work at the same time—and how those processes intersect? To me, an infrastructural approach to ethics work envisions researchers and advocates who can dynamically move across this landscape and nimbly shift between modes of action, to more fully work towards achieving the ethical and moral goals that we seek in the design of software technologies.
Paper citation: Richmond Y. Wong. 2025. Towards Creating Infrastructures for Values and Ethics Work in the Production of Software Technologies. In Proceedings of the sixth decennial Aarhus conference: Computing X Crisis (AAR ’25). Association for Computing Machinery, New York, NY, USA, 13–26. https://doi.org/10.1145/3744169.3744171