My work sits at the intersection of Usable Privacy & Security (UPS), Human–Computer Interaction (HCI), and cross-cultural socio-technical design. I study how people perceive and navigate security risks, such as smishing, scams, and social-media harms, and prototype human-centered, AI-assisted safeguards that travel across cultural and platform boundaries. A core question guides my research: How can we design privacy and security systems that remain trustworthy as people and technologies move across cultures, platforms, and borders? I combine empirical studies, digital ethnography, participatory design, and socio-technical analysis, translating findings into practical guidance for product teams and policymakers.
Domains: Usable Privacy & Security • HCI & Design • Responsible AI • Cross-Cultural UX • Policy Translation
Methods: interviews, surveys, participatory co-design, lab tasks; digital ethnography and contextual inquiry across cultures/languages; prototype evaluations with trust-forward UX patterns and human-in-the-loop oversight.
Evaluation: metrics that balance security (risk reduction, decision-point precision/recall) and human impact (false-positive cost, friction, comprehension, confidence).
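As a concrete illustration of how the security and human-impact sides can be reported together, the sketch below computes decision-point precision/recall alongside a cost-weighted tally of false positives and false negatives. The labels, cost weights, and function name are hypothetical, chosen for illustration rather than drawn from any specific study.

```python
# Minimal sketch: pairing security metrics (precision/recall at a user
# decision point) with a human-impact metric (cost of false positives).
# All labels and cost weights are illustrative assumptions.

def decision_point_metrics(y_true, y_pred, fp_cost=1.0, fn_cost=5.0):
    """y_true/y_pred: 1 = malicious (e.g., smishing), 0 = legitimate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Human-impact side: each false positive adds friction for a
    # legitimate message; each false negative is a missed risk.
    human_cost = fp * fp_cost + fn * fn_cost
    return {"precision": precision, "recall": recall, "cost": human_cost}

# Example: 10 messages judged at a warning-banner decision point.
truth =   [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
flagged = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(decision_point_metrics(truth, flagged))
```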
Cross-cultural privacy & security in transition: How migrants and newcomers adapt to unfamiliar platforms, policies, and norms; culturally responsive onboarding and safeguards.
Mobile messaging security & smishing: Empirical studies of how users judge message legitimacy (e.g., urgency, personalization, link formatting, sender ambiguity) and evaluations of trust-forward UI indicators, with measures of decision-point accuracy and human impact (friction, confidence).
Human-in-the-loop defenses: Oversight placements (triage, appeals, audits) with clear thresholds and error budgets to balance safety and workload (see the triage sketch after this list).
Policy-aware artifacts: Minimal abuse-report schemas, checklists, and reference UI kits that institutions can adopt with low engineering lift (a schema sketch follows below).
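To make the thresholds-and-error-budgets idea concrete, here is a minimal routing sketch: low scores pass, high scores are blocked, and the ambiguous band goes to human triage until a daily review budget is exhausted. The threshold values, budget, and fallback behavior are illustrative assumptions, not a deployed policy.

```python
# Minimal sketch of human-in-the-loop triage routing (illustrative
# thresholds and budget; not from a deployed system).

PASS_BELOW = 0.30   # scores below this: deliver without interruption
BLOCK_ABOVE = 0.90  # scores at or above this: block automatically
DAILY_REVIEW_BUDGET = 100  # max ambiguous cases routed to humans per day

def route(score, reviews_used_today):
    """Return one of 'pass', 'block', 'human_review', 'fail_safe'."""
    if score < PASS_BELOW:
        return "pass"
    if score >= BLOCK_ABOVE:
        return "block"
    # Ambiguous band: prefer a human look, but respect the error
    # budget so reviewer workload stays bounded.
    if reviews_used_today < DAILY_REVIEW_BUDGET:
        return "human_review"
    return "fail_safe"  # budget exhausted: e.g., warn but deliver

print(route(0.12, 0))    # pass
print(route(0.95, 0))    # block
print(route(0.55, 40))   # human_review
print(route(0.55, 100))  # fail_safe
```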
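And as one hypothetical shape a minimal abuse-report schema might take, the snippet below sketches a handful of fields as a Python dict; every field name is illustrative and would need to be mapped onto an institution's actual intake pipeline.

```python
# Hypothetical minimal abuse-report record: just enough structure for
# triage and cross-institution sharing, with privacy-by-design defaults.
# Field names are illustrative, not a published standard.

example_report = {
    "report_id": "r-2025-000123",        # opaque, non-identifying ID
    "channel": "sms",                    # sms | email | social | other
    "reported_at": "2025-06-01T14:32:00Z",
    "category": "smishing",              # reporter-selected harm type
    "message_excerpt": "Your package is held...",  # truncated, PII-scrubbed
    "indicators": ["urgency", "shortened_link"],   # observable cues only
    "reporter_consent": {"share_with_researchers": True},
    "language": "en",
}
```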
Design with, not for: Engage users, especially at-risk and under-represented populations, as collaborators.
Policy ↔ Design loop: Let user evidence inform policy; let policy set standards for inclusive, ethical technology.
Open, portable methods: Share protocols, stimuli, measures, and evaluation “recipes” when feasible so others can reuse and extend the work.
Safety-first ethics and equity: IRB-approved, accessible protocols and privacy-by-design practices, with subgroup and cross-language checks to ensure fair, consistent impact (a subgroup-check sketch follows below).
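As a small illustration of what a subgroup and cross-language check can look like in practice, the sketch below compares recall at a warning decision point across two language groups and flags gaps beyond a tolerance. The groups, data, and tolerance are invented for illustration.

```python
# Minimal sketch of a subgroup consistency check: compare a metric
# (here, recall at a warning decision point) across language groups
# and flag gaps beyond a tolerance. Groups, data, and the tolerance
# are illustrative assumptions.

def recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

groups = {
    "en": ([1, 1, 0, 1, 0], [1, 1, 0, 1, 0]),
    "es": ([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]),
}

TOLERANCE = 0.10  # flag subgroup gaps larger than 10 points
recalls = {g: recall(t, p) for g, (t, p) in groups.items()}
gap = max(recalls.values()) - min(recalls.values())
print(recalls, "gap:", round(gap, 2), "flag:", gap > TOLERANCE)
```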
"Privacy on the Move: Understanding Educational Migrants’ Social Media Practices through the Lens of Communication Privacy Management Theory" (ACM SIGCAS/SIGCHI COMPASS 2025) — social-media practices of educational migrants and culturally responsive safeguards.
"Improving Mobile Security with Visual Trust Indicators for Smishing Detection" (IEEE AIIoT 2025) — evaluating trust-forward UI cues to reduce risky actions.
"What drives {SMiShing} susceptibility? a {US}. interview study of how and why mobile phone users judge text messages to be real or fake" (SOUPS 2024) — trust cues in mobile phishing and design opportunities.
Earlier work on secure email usability and adoption (Future Internet 2023; HICSS 2024).
I am committed to working with interdisciplinary teams across academia, industry, and the public sector. Recent efforts include invited talks (e.g., the Carolinas Migration Conference, M3AAWG 65) and participatory design workshops with newcomers and practitioners to validate checklists and pilots. I aim to translate research into adoptable artifacts and actionable standards that strengthen user safety at scale.