An introductory note from Richmond Wong: I maintain a lab guidebook for graduate students who work with me, outlining the lab’s scholarly orientation to research, community expectations, and how I work with students. This is a modified and abridged version for folks who are interested in knowing a little more about the behind-the-scenes of what we do, and for potential graduate students who are interested in working with us!
About the Lab
Often we want our computing technologies to promote, protect, or adhere to a range of social values—such as privacy, fairness, equity, dignity, or security. Technology design is a social and cultural practice (as well as a technical one). Thus, helping technologists design ethical systems requires more than creating technical ethical design tools; it also requires creating social and cultural infrastructures that can help support technologists to make ethical decisions. (You might also check out Data & Society’s blog post “How to think like a sociotechnical researcher,” which takes a very similar perspective on technology and society to ours.)
The research that I do and supervise seeks to investigate the work practices involved in addressing ethics and social values, and the social contexts in which those take place. This allows us to study current and possible infrastructures — social, political, and technical arrangements that can be built and maintained — that can support the work of addressing ethics and social values.
What are some of our current research areas?
Our current projects tend to fall into one of three areas:
1. Studying the Work Required to Address Ethical Issues Within Organizational Contexts
These projects study how technology professionals (such as user experience (UX) professionals, product managers, or artificial intelligence (AI) practitioners) attend to values and ethical issues as part of their professional work. Often this work is attuned to issues of social power: who does what type of work to address ethical issues; whose work and knowledge is seen as legitimate or appropriate; what types of structures help or hinder these efforts?
Projects in this area include:
- Interviewing UX practitioners to study the forms of work they do to address values in the workplace, including work to re-design how their organizations address ethics. At the same time, this values work can be in tension with organizational goals and may serve as a form of “soft” resistance.
- Conducting a discourse analysis of AI ethics tools to understand how they implicitly frame what it means to do the work of AI ethics. We found that although many tools claim that values like fairness are sociotechnical, most focus on providing technical advice, leaving several gaps in fully addressing these issues.
- Innovating on qualitative and design-based research approaches for studying technology companies and workplaces when direct ethnographic observation is not possible, such as Ethics Pathways, which helps participants reflect on and describe past ethical decision-making processes.
2. Creating Alternate Ethics Infrastructures and Levers
Many projects make use of a deep understanding of the organizational and political context of ethics to imagine collective, organizational, or political ways to address ethics and values in technology design at levels beyond the individual. While many ethics design tools imagine an ideal empowered individual who can use those tools and then make a better decision, in practice, individual-led change is very difficult. Individuals may not have much social power within an organization, may not be decision makers, or may be concerned about retribution or about being seen as a non-team player if they bring up social and ethical issues too often. Providing infrastructures that can help shift responsibility for addressing ethical issues may better protect the individuals who surface social and ethical issues (e.g., a social value gets addressed not because of an employee’s individual beliefs, but because a law, standard, or organizational policy requires it). These infrastructures can serve as (what Katie Shilton terms) “values levers,” or mechanisms that help make social values visible and open to action.
Projects in this area include:
- How can investment practices shape how social values and ethics are addressed within tech companies? For instance, we conducted a discourse analysis of tech companies’ annual filings with the US Securities and Exchange Commission to see how companies communicated the concept of “privacy” to potential investors.
- How can law and public policy be a lever to help promote particular social values during technology design? This includes investigating how policy becomes enacted “on the ground” in designers’ practices, and examining how policy “works” compared to HCI’s approach to design.
- How can we imagine new forms of organizational governance that might promote responsible and ethical technology development? This includes doing interviews and design activities with people who work in large technology organizations, as well as using speculative design to imagine how organizations and their governance practices could work differently.
- While many projects investigate levers beyond tools, we also design new ethics and values design tools that take organizational context into account (for instance, what if the person using a tool or design activity is not the person who is responsible for making decisions?). One activity, Timelines, makes use of fictional news headlines and social media posts to explore potential social effects of technology, in part to help practitioners frame their arguments about values in terms of public relations, which may better convince certain decision makers.
3. Studying User Resistance, Non-Use, and Alternative Use Practices
Technologies are not always used in the ways that their designers intend, particularly when the social values associated with the technology do not match the social values that users and communities want to promote. These projects explore how and why people choose to use technologies in alternate ways (particularly through refusal, non-use, or alternative forms of use), and the conditions that allow or inhibit them from doing so.
Projects include:
- Studying users or communities that choose not to use technologies in certain ways or under certain conditions, for instance people who choose not to use AI tools, or choose not to use social media for particular purposes.
- Showing speculative designs to people as prompts to understand their concerns with emerging technologies, and how their preferred social values may not align with the values embedded in technologies during design. Often we focus on issues of privacy and surveillance, but we also study other types of social values and harms.
- Creating speculative designs that highlight systems of social power in existing technologies, or that suggest alternative ways to design and use them that distribute social power in different ways.
How do we do this type of research?
We primarily use two types of research approaches: First are interpretivist qualitative methods, including interviews, focus groups, ethnographic observation, and discourse analysis. We start from the assumption that there is not a singular objective truth out in the world, but rather that we need to understand the world from different people’s perspectives and experiences.
Second, we also utilize qualitative design-based methods in our research. Rather than using design to create immediate solutions to problems, we use design as a way to help ask “what if?” questions, which prompt reflection on the relationships between people and sociotechnical systems. These techniques help elicit ethical concerns about technologies and help us imagine alternate ways to build systems.
What kinds of perspectives and disciplines do we draw on?
Our research is interdisciplinary, often drawing on perspectives from design, science & technology studies (STS), critically oriented human-computer interaction (HCI), as well as other perspectives from the social sciences and humanities.
This is a small selection of some articles that inform my work, which I highly suggest that students who work with me read:
- Susan Leigh Star’s “Ethnography of Infrastructure” (1999), which describes infrastructure as a concept and a way of viewing the world, and offers tips for how to study infrastructures.
- Robert Soden, Austin Toombs, and Michaelanne Thomas’ “Evaluating Interpretive Research in HCI” (2024), which provides a nice overview of the type of qualitative research approach we often use.
- Chapter 1 in Anthony Dunne and Fiona Raby’s book Speculative Everything (2013), which gave me new ways to think about design as a way to ask questions and explore possibilities, rather than solve immediate problems.
- Katie Shilton, Jes Koepfler, and Ken Fleischmann’s “How to see values in social computing” (2014), which provides a useful framework for conceptualizing social values, and how different research methods can be used to study them.
- Katie Shilton’s “Values Levers: Building Ethics into Design” (2013), which describes how social values become seen as important in actual technical practice.
- Deirdre Mulligan, Colin Koopman, and Nick Doty’s “Privacy is an essentially contested concept” (2016), which argues that social values are inherently contested and multi-definitional, but that there are still ways to conceptually map these concepts that can be generative for conversations and for design (using privacy as an example).
How do I work with PhD students?
For PhD students, I usually do not just assign or “hand” students a ready-to-go project. Depending on how far along you are in the program, I may bring the start of an idea for you to build on, or help develop an idea that you have come up with. Part of your learning and training experience is learning how to do the different steps of a research process. We’ll work together on planning these, and at some stages we’ll do these together. Sometimes I may supervise you at the start of a task and then let you do the rest (e.g., sitting in on your first interview).
But especially with writing tasks, I will usually ask you to create a draft first for me to then edit and go back and forth with you on. Don’t worry about writing something perfect the first time – writing is an iterative process! But that also means that we will need to write well ahead of deadlines to have enough time to do those iterations.
Most projects that we work on together will include co-authored paper submissions, as long as everyone has made a significant contribution to the project. While most projects will be collaborations between you and me, sometimes projects may grow into broader collaborations that also involve other students, faculty, or researchers.
What do I do if I want to work in your lab as a PhD student?
I’m looking for the following qualities in prospective PhD students:
- Experience with or interest in HCI research
- Experience with or interest in Science & Technology Studies research (or a related social science or humanities discipline such as sociology, anthropology, media studies, or cultural studies)
- Commitment to using interpretivist qualitative research methods, including design-based methods (at least in part – while I am open to collaborations utilizing quantitative and computational methods, I am unable to deeply advise a dissertation primarily focused on these approaches).
- Commitment to a collegial, collaborative, and supportive research and learning environment.
I can currently work with PhD students in two ways:
1. I can advise PhD students who are accepted in Georgia Tech’s Digital Media PhD Program.
The Digital Media PhD is an interdisciplinary, computing + humanities program in the School of Literature, Media, and Communication. We often have students who want to build and make things, but also read theory and want to critically analyze things too. Note that this program follows a humanities funding model, where the most common form of funding is through teaching assistantships. In these, the PhD student (who usually has already obtained their master’s degree) is the lead instructor of an introductory undergraduate course every semester, usually an LMC 2000-level course, which pays for tuition and a stipend. Research assistantships can sometimes be obtained depending on faculty funding, but these are not guaranteed.
For Fall 2024 admissions for the 2025-2026 school year, an incoming PhD student working with me would be funded via teaching assistantships, likely as the lead instructor of an undergraduate course (unless you have an external source of funding, such as the NSF Graduate Research Fellowship Program, that will pay for your tuition and stipend).
2. I can co-advise PhD students who are accepted into Georgia Tech’s Human Centered Computing PhD Program.
This program is in the School of Interactive Computing, within the College of Computing. As I am an adjunct faculty member there, you would need to identify a full-time professor (assistant professor, associate professor, professor, or Regents’ professor) in the School of Interactive Computing to be your primary advisor.
For Fall 2024 admissions for the 2025-2026 school year, I likely can only co-advise an incoming Human Centered Computing PhD student who already has an external source of funding, such as the NSF Graduate Research Fellowship Program, that will pay for their tuition and stipend.
Contact me if you’re interested in applying!
I’m unlikely to respond to generic form emails due to the volume of email I receive. If you would like to reach out to me with inquiries about working with me as a PhD student, I would highly suggest that you send an email to me (at rwong34 {at} gatech.edu) that includes: (a) a short description of your research interests, (b) a brief description of why you want to do a PhD, and (c) 2-3 pieces of academic writing (such as research articles or books) that inspire you or that you find very useful in your thinking, and why (I’d like to learn from your interests too!).
Acknowledgements
My approach to a lab guide is inspired by documentation and guides created by Eric Gilbert and Mor Naaman, Max Liboiron’s CLEAR lab, and Aimi Hamraie’s Critical Design Lab. Thanks to Michael Madaio for comments on an earlier version of this blog post.