My research critically examines histories of subjectivity, the senses, automation, and computerization in the twentieth and twenty-first centuries. It specifically attends to questions of epistemology and power in artificial intelligence and machine learning (AI/ML) and explores how conceptions of difference become embedded and materialized through AI systems.

Within the tech industry, my work has focused on distilling the opportunities and challenges of conducting multidisciplinary impact assessments and participatory engagements. At the intersection of the history of computing and science and technology studies, my academic research delves into two areas: machine listening and synthetic data. You can find more details below!

Also, here is my most recent CV. If you are interested in working together, the best way to reach me is via email.

I am a historian of science & technology and a researcher in the Responsible Tech Group at IBM Research.

Research Projects

Machine Listening

  • With Edward B. Kang, I am bringing together an interdisciplinary group of scholars to critically examine the intersections between AI and sound (with partial support from an NEH DOT program grant).

  • As part of my Ph.D. dissertation, I am examining the historical emergence of automated voice biometric systems in the twentieth century.

Synthetic Data & Media

  • With Felicia Jing and Ranjodh Singh Dhaliwal, I am looking into the coloniality of synthetic data practices and algorithmic emplotment.

  • With Felicia Jing and Ryan Healey, I am tracing the historical continuities and transformations implied in the proliferation of synthetic data practices.

  • With Edward B. Kang and Dylan Mulvin, I am starting a collaborative project to explore how synthetic data and synthetic media are transforming knowledge production, media infrastructures, and trust.

Tech Assessments & Public Oversight

  • With Felicia Jing, Sara Berger, and other members of the Responsible Tech Group at IBM Research, I have investigated the opportunities and challenges of enabling public participation and humanistic inquiry within AI assessments, evaluations, and audits in the tech industry.

Select Publications

Coming Soon

  • Becerra Sandoval and Jing. “Rethinking AI Safety: Provocations from the History of Community-based Safety Practices.” FAccT ‘25 (accepted)

  • Becerra Sandoval and Jing. “Historical Methods for AI Assessments, Evaluations, and Audits.” FAccT ‘25 (accepted)

  • Kang and Becerra Sandoval. “Dissecting Speech: The Acoustic-Linguistic Divide in AI.” Routledge Anthropology and AI (accepted)

Published