
Privacy violations in digital products rarely start with malicious intent. They start with a well-meaning feature: a "friend activity" feed designed for music discovery, a neighborhood livestream meant to keep communities safe, a location-sharing tool built to help friends connect. The problem is what happens next: a stalker tracks an ex-partner through a listening feed, a teenager's private playlist becomes ammunition for bullies, a domestic violence survivor's location gets exposed through a synced contact list.
These aren't hypotheticals. They're documented incidents from widely used platforms. And they share a common thread: the designers behind these features never imagined how their work could affect people in dangerous or vulnerable situations. Not because they didn't care, but because nothing in their process prompted them to think about it.
It's the kind of problem Zeya Chen has built her career around, first as a practicing designer, then as a researcher investigating why these failures keep happening.
A Designer Who Does the Research
Chen is not a typical UX practitioner, and she is not a typical academic. She has worked as a UX and product designer for companies including Steelcase, Verizon, and Fidelity, and as a design research scientist for organizations including ideas42, the Harvard T.H. Chan School of Public Health, and Rush Medical Center. She holds an M.Des from the Institute of Design at Illinois Institute of Technology, one of the oldest and most established graduate-only design schools in the United States, and a B.A. in Industrial Design from Wuhan University in China. Her design work has earned over 30 international awards, including multiple iF Design Awards, a Red Dot Award, and recognition from Core77, Fast Company World-Changing Innovations, and the Muse Design Awards, with work exhibited at venues from the Carrousel du Louvre in Paris to Milano Design Week.
What sets Chen apart is that she does not stop at making things work. She conducts the rigorous, peer-reviewed research that investigates why design decisions succeed or fail at a systemic level, then turns those findings into tools and frameworks that other designers can use. Her doctoral work at the Institute of Design, conducted in collaboration with Northeastern University's PEACH (Privacy-Enabling AI and Computer-Human Interaction) Lab, sits at the intersection of behavioral design, AI ethics, and human-centered privacy. She also serves on program committees and as a reviewer for multiple leading conferences in the field, including ACM CHI, DIS, TEI, and DRS.
Her latest work, PrivacyMotiv, is a direct product of that approach: a research-backed AI system built to solve a real design problem.
Making Privacy Tangible for Designers
Chen led the design and development of PrivacyMotiv as a visiting researcher at Northeastern University's PEACH Lab, working with collaborators from Johns Hopkins University and the University of Notre Dame. The project is funded by the National Science Foundation. Her paper on the system, "PrivacyMotiv: Vulnerability-Centered Persona Journeys for Empathic Privacy Reviews in UX Design," was accepted to ACM DIS 2026, one of the top international conferences in human-computer interaction and design. For 2026, the conference accepted just 21% of its 1,154 submissions.

The core finding is straightforward but easy to overlook: UX designers routinely skip privacy in their design reviews, not because they don't care, but because they lack the context to anticipate how a feature might affect someone whose life looks very different from their own. Privacy gets handed off to legal teams or compliance processes, and designers are left without tools that engage with it the way they actually think: through user stories, flows, and interfaces.
PrivacyMotiv closes that gap with a three-part system designed to fit into existing UX workflows; a simplified code sketch of the pipeline follows the three steps below.
First, it generates vulnerability-centered personas: fictional but research-grounded user profiles defined by specific risk factors. Instead of the generic "Sarah, 32, marketing manager" that populates most design briefs, PrivacyMotiv creates profiles like a gender non-conforming college student whose family doesn't know they're transitioning, or a woman who recently left an abusive partner.
Second, it uses large language models to simulate how each persona might move through a specific sequence of screens: tapping buttons, accepting defaults, skipping fine print. It then traces the path from a seemingly harmless design choice (a default-on sharing setting, a silent session timer) to a real consequence (exposure, surveillance, emotional distress).
Third, it maps each identified risk back to specific elements in the designer's own wireframes. It doesn't just flag that a problem exists. It shows exactly where it lives and what produced it.
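To make that pipeline concrete, here is a minimal sketch of how such a three-stage system could be wired together. The article does not describe PrivacyMotiv's actual implementation, so everything below is an assumption made for illustration: the names (Persona, Screen, call_llm), the prompt wording, and the naive keyword matching in the third stage are placeholders, not the published system.

```python
"""Illustrative sketch of a PrivacyMotiv-style pipeline.

Hypothetical code: all names and logic here are assumptions made for
illustration, not the published PrivacyMotiv implementation.
"""
from dataclasses import dataclass


@dataclass
class Persona:
    """Stage 1: a vulnerability-centered persona, defined by specific
    risk factors rather than generic demographics."""
    name: str
    context: str              # the life situation that raises the stakes
    risk_factors: list[str]


@dataclass
class Screen:
    """One step in a UX flow, with the design choices active on it."""
    screen_id: str
    elements: list[str]       # e.g. "default-on sharing toggle"


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client; the canned reply keeps the
    sketch runnable end to end."""
    return ("home_feed: the default-on sharing toggle broadcasts listening "
            "activity, letting an abusive ex infer location and routine.")


def simulate_journey(persona: Persona, flow: list[Screen]) -> str:
    """Stage 2: walk the persona through the screens (tapping buttons,
    accepting defaults, skipping fine print) and trace each harmless-
    looking choice to a possible real-world consequence."""
    steps = "\n".join(
        f"- {s.screen_id}: elements = {', '.join(s.elements)}" for s in flow
    )
    prompt = (
        f"Persona: {persona.name}. Context: {persona.context}. "
        f"Risk factors: {', '.join(persona.risk_factors)}.\n"
        "Walk this persona through the screens below, assuming they accept "
        "defaults and skip fine print. For each screen, name the element "
        "that creates a privacy risk and the concrete harm that could "
        f"follow.\n{steps}"
    )
    return call_llm(prompt)


def map_risks_to_elements(report: str, flow: list[Screen]) -> dict[str, list[str]]:
    """Stage 3: tie each identified risk back to the wireframe element
    that produced it. A naive substring match stands in for whatever
    richer grounding the real system uses."""
    risks: dict[str, list[str]] = {}
    for screen in flow:
        for element in screen.elements:
            if element.lower() in report.lower():
                risks.setdefault(screen.screen_id, []).append(element)
    return risks


if __name__ == "__main__":
    persona = Persona(
        name="Recent survivor of an abusive relationship",
        context="Left an abusive partner and must not reveal location or routine",
        risk_factors=["stalking", "location inference", "synced contacts"],
    )
    flow = [Screen("home_feed", ["default-on sharing toggle",
                                 "friend activity feed"])]
    print(map_risks_to_elements(simulate_journey(persona, flow), flow))
    # -> {'home_feed': ['default-on sharing toggle']}
```

The journey simulation and risk grounding are, of course, where the research contribution lives; the point of the sketch is simply how the three stages hand off to one another, from persona to simulated journey to wireframe-level finding.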
In a controlled study with professional UX practitioners, designers using PrivacyMotiv identified 59% more privacy issues and proposed 70% more redesign ideas compared to their usual process. Their findings were also more specific and actionable. After using the tool, participants described a shift in how they thought about privacy. Before, many treated it as a compliance checkbox: does the app ask for the right permissions? Afterward, they started asking different questions: who is affected, under what conditions, and what could go wrong?
A Research Trajectory with Broader Impact
PrivacyMotiv didn't emerge out of nowhere. It builds on a line of research Chen has developed over several years, each piece addressing a different facet of the same underlying question: how do people make decisions when the systems around them are opaque, automated, or designed without their needs in mind?
Chen's research on "Positive Friction" in human-AI interaction proposes that deliberately introducing friction into automated systems can improve decision quality and user autonomy. The idea has gained traction in both academic and industry computational communities, and Chen has been invited to present the work at multiple venues, including the 2024 HCI International Conference. Her "Choice Triad" framework for behavioral design in public health policy has expanded beyond the design field into the behavioral science community, featured in Bescy Behavioral Science Magazine and discussed on the Thinking About Behavior and Behavioral Grooves podcasts.
A collaboration with researchers from the University of Washington and the University of Michigan identified 583 instances of "Engagement-Prolonging Designs" across 17 Very Large Online Platforms regulated under the EU's Digital Services Act, revealing how apps pressure, entice, trap, and lull teens into staying online longer. This work has been accepted to ACM CHI 2026, the field's most competitive conference, and has been referenced in reports by the Knight-Georgetown Institute (KGI) and the Panoptykon Foundation as part of ongoing policy efforts around children's digital safety and algorithmic feed regulation. Her most recent work, on "Data Donation Design," introduced an approach that increased donation rates from 37.5% to 87.5% and has been accepted into the digital library of the Design Research Society (DRS), the largest design research community in the world.

The bigger picture her work points to is this: the gap between what technology does and what people actually need is rarely a problem of bad intentions. Whether it's a designer who doesn't anticipate how a feature affects someone in a vulnerable situation, a data donation system that triggers anxiety instead of trust, or an app that quietly lulls a teenager into another hour of scrolling, the pattern is the same. The people building these systems aren't trying to cause harm. They just lack the tools, the framing, or the behavioral insight to see it coming. Chen's research suggests the motivation to do better is already there. It just needs the right scaffolding.
Chen will present PrivacyMotiv at ACM DIS 2026 at the National University of Singapore this June, when the paper will also be available through the ACM Digital Library.