One of the most promising anthropology departments in Türkiye is at Özyeğin University, and this year it organized Anthropology Days, where I was invited to give a talk. The talk itself was livelier and more anecdotal; with the help of Claude.ai, I converted it into an essay with proper citations. Have a look.
The Anthropology of Black Boxes: Examining Artificial Intelligence Systems Using Ethnographic Methods
A talk delivered at the Department of Anthropology, Özyeğin University, February 23, 2026

Introduction: Opening the Black Box
Artificial intelligence is frequently portrayed as an inscrutable “black box,” its inner workings concealed behind layers of corporate secrecy and mathematical complexity (Lewis et al., 2019). Traditional perspectives tend to treat algorithms as purely technical or mathematical objects, operating beyond the reach of social analysis. Anthropology rejects this premise (Trammell & Cullen, 2021). The discipline insists that what appears opaque to a purely technical gaze becomes legible through ethnographic attention to the social relations, meanings, and human practices that produce, sustain, and are transformed by these systems.
The entry point for such an analysis is what scholars have called the “sociotechnical turn”: the recognition that AI must be understood not as a standalone tool but as a sociotechnical assemblage—a complex web of actors, meanings, and materialities in continuous interaction (Seaver, 2017). From this vantage point, AI is defined not merely by its computational architecture but by the combination of computational capacity, data analytics, machine learning, and, crucially, human interaction (Flew, 2023). These elements are inseparable, and it is precisely their entanglement that makes AI a compelling object of anthropological inquiry.

Theoretical Framework: Algorithms as Culture
From Algorithms in Culture to Algorithms as Culture
The starting point for an anthropological theory of algorithms is Nick Seaver’s (2017) influential distinction between studying algorithms in culture and studying them as culture. The former treats algorithms as pre-formed objects that enter cultural contexts from the outside; the latter insists that algorithms are themselves culturally enacted, constituted by the practices people use to engage with them. In this sense, algorithms are “unstable objects”—their meanings, effects, and even their technical parameters shift depending on who is interacting with them and under what institutional conditions.
A productive metaphor for visualising this instability is that of algorithmic systems as massive, networked boxes with “hundreds of hands” reaching in to tweak, tune, and swap out components (Klinger & Svensson, 2018). These hands belong to engineers, product managers, legal teams, regulators, and users, among many others. Understanding any algorithmic system therefore requires examining not just the code itself but the logic—commercial, political, moral—that guides those hands.
Algorithmic Governmentality and the “Dividual”
Beyond the question of who shapes algorithms, there is the deeper question of what algorithms do to the social fabric of governance. Rouvroy (2012) has theorised this transformation through the concept of “algorithmic governmentality,” arguing that AI shifts governance away from rule-based laws toward a regime of “data behaviorism.” In this regime, knowledge about individuals and populations is not produced through human narratives or legal deliberation but through statistical correlations among data points. Future behaviour is predicted, managed, and pre-empted before it becomes a matter of conscious social decision.
Within this governance regime, the very category of the individual undergoes a fundamental transformation. Cheney-Lippold (2017) describes the emergence of the “dividual”—the decomposition of persons into data clouds that are subject to automated processes of integration and disintegration. What was once a legally and socially constituted subject becomes, in the algorithmic gaze, a bundle of probabilistic inferences. This is not merely a philosophical abstraction; it has concrete consequences for how people are sorted, scored, and acted upon by institutions that deploy AI systems.
Methodological Toolkit: Ethnography of the Machine
Participant Observation and “Being There”
The bedrock of anthropological method is participant observation: the commitment to learning cultural assumptions by inhabiting shared social worlds, attending to what is not said as much as to what is (Boellstorff, 2015). Even in virtual or digitally mediated environments, this principle holds. Ethnographic research is predicated on “being there”—on the slow accumulation of contextual understanding that no survey or audit can substitute for. The challenge for the anthropology of AI is to adapt this commitment to environments that are partially invisible, distributed across infrastructure, and deliberately obscured.
Trace Ethnography
One response to this challenge is trace ethnography, which draws on Marcus’s (1995) multi-sited approach to “follow the thing”—or, in digital contexts, the data, the conflict, the person, or even the metaphor—across multiple sites and scales. Digital traces are fragments of past interactions left behind on platforms. They are not raw data; they are socioculturally embedded products, shaped by the conditions of their production and the architectures of their storage (Grenz & Kirschner, 2018). Reading these traces requires interpretative skill of precisely the kind anthropology has always cultivated.
Synthetic Ethnography
A newer and more provocative development is what De Seta et al. (2024) call “synthetic ethnography”: the use of AI models themselves as “field devices” or ethnographic probes. Rather than treating AI as merely an object of study, synthetic ethnography deploys AI as a methodological instrument, including practices such as “deepfaking the ethnographer” to gain access to and understanding of community practices that might otherwise remain opaque. This approach raises significant ethical questions—about deception, consent, and epistemic integrity—but it also opens methodological possibilities that conventional ethnography cannot reach.
Infrastructural Inversion and “Care for the Data”
Vertesi and Ribes (2019) offer another essential method: infrastructural inversion. This involves deliberately looking beneath the surface of familiar digital tools to reveal the “frozen discourses”—the values, assumptions, and power relations—embedded in their protocols and architectures. Infrastructure tends toward invisibility; it works precisely by disappearing from view. Inverting that invisibility makes platform sovereignty and the political economy of data legible.
Closely related is the study of maintenance and repair. Rather than focusing exclusively on innovation—new models, new capabilities, new products—infrastructural ethnography attends to the “messy reality of workarounds” and the “care for the data” performed by largely invisible technicians who keep systems running (Hegarty, 2022). This labour is rarely celebrated and rarely studied, yet it is constitutive of the AI systems we theorise.
Algorithmic Auditing and External Probing
While ethnography works from the inside out, science and technology studies (STS) offer the algorithm audit as a complementary method for inspecting systems from the outside in. Researchers systematically query algorithms—such as search engines or content recommendation systems—with a wide range of inputs and statistically compare the outputs to reveal hidden patterns and biases (Metaxa et al., 2021). Rather than using the algorithm as a research instrument, the method treats it as the object of inspection, and its proponents argue for the routine ability to probe platforms by impersonating various types of users.
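To make the mechanics of such an audit concrete, the sketch below simulates a minimal probe of this kind in Python: the same queries are issued under two fabricated user profiles and the overlap between the returned result lists is compared. This is an illustration only, not a reconstruction of Metaxa et al.'s (2021) procedures; the platform function, personas, queries, and the single overlap metric are all invented stand-ins.

```python
import statistics
from typing import Callable, Dict, List


def jaccard(a: List[str], b: List[str]) -> float:
    """Overlap between two result lists, ignoring rank."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0


def audit(query_fn: Callable[[str, Dict], List[str]],
          queries: List[str], persona_a: Dict, persona_b: Dict) -> float:
    """Issue the same queries under two simulated user profiles and report
    the mean overlap of the results each profile receives."""
    overlaps = [jaccard(query_fn(q, persona_a), query_fn(q, persona_b))
                for q in queries]
    return statistics.mean(overlaps)


# Toy stand-in for a real platform: results shift with the declared location.
def fake_platform(query: str, persona: Dict) -> List[str]:
    results = [f"{query}-result-{i}" for i in range(10)]
    if persona.get("location") == "urban":
        results[0] = f"{query}-promoted-listing"
    return results


if __name__ == "__main__":
    overlap = audit(fake_platform,
                    queries=["housing assistance", "loan rates"],
                    persona_a={"location": "urban"},
                    persona_b={"location": "rural"})
    print(f"Mean result overlap across personas: {overlap:.2f}")
```

In an actual audit the toy stub would be replaced by calls to a real platform, and the comparison would rest on many more queries, repeated runs, and statistical testing rather than a single overlap score.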
Morris (2015) has argued for regulation oriented toward auditability—institutional frameworks that would make such probing a right rather than a technical workaround. More recently, Geiger et al. (2024) have reframed the epistemological foundations of auditing, proposing a move beyond “matters of fact” (did the system work as intended?) toward “matters of concern” that embrace competing views on how a system should be investigated and who it impacts. This reframing aligns auditing with a fundamentally democratic rather than merely technical project.
The Walkthrough Method and Interface Analysis
The interface is not merely a surface; it is a site where designers “script” user behaviour. The walkthrough method involves the systematic visual examination of every stage of an interaction with a platform or application, documenting how design choices influence user practices and source-critical habits (Grut, 2024). By identifying affordances—the features that enable or constrain particular actions—this method makes visible the “design intentions” that are often concealed behind apparently neutral defaults (Grut, 2024).
A related technique is inscription analysis, which traces how programmers embed specific values into technical artefacts. Gehl et al. (2017) have demonstrated how assumptions about gender and sexuality are inscribed into computer vision systems through training data and classification schemas, producing what amount to “gender scripts” that define how future users are expected to act and how certain bodies are recognised or misrecognised by automated systems.
Controversy Mapping and the Symmetry Principle
Controversies are methodologically privileged moments for STS analysis because they render visible the relations between actors that are normally taken for granted. Marres and Moats (2015) argue that researchers should treat controversies as empirical occasions rather than as problems to be adjudicated: instead of deciding what is “true” or “false,” analysts examine how the event makes legible the heterogeneous network of scientists, industry actors, publics, and technologies involved.
The “symmetry principle” requires that both “successful” and “failed” controversies be analysed with the same analytical lens, on the assumption that both are shaped by media-technological dynamics as much as by substantive content (Marres & Moats, 2015). In the domain of AI, Shaffer Shane (2023) proposes the study of “AI incidents”—moments of public engagement where individual and algorithmic interactions generate collective awareness of technological failure—as a particularly productive application of this approach, calling for a dedicated research agenda around what he terms “networked trouble.”
Interactional Expertise
A persistent challenge for anthropologists of AI is disciplinary asymmetry: the researcher is not a computer scientist. Fuller and Collier (2003) address this through the concept of interactional expertise—the capacity to possess enough technical knowledge to interact meaningfully with specialists, carry out sophisticated analysis, and follow the contours of technical debate, without needing to contribute to the actual science or write production code. This expertise allows researchers to navigate between what Fuller and Collier call “Deep Science” (the nonverbal, craft-based knowledge of coding and engineering) and “Shallow Science” (the verbal negotiation of the boundary between science and society). Interactional expertise is not a compromise; it is a methodological resource that enables genuine interdisciplinary engagement.
Algorithmic Imaginaries and Folk Theories
Finally, alongside the study of systems themselves, anthropologists must attend to the interpretive frameworks ordinary users construct to make sense of and intervene in algorithmic worlds. These “folk theories”—the intuitive causal models people develop about why a system behaves as it does—and the broader “algorithmic imaginaries” they inhabit constitute a social reality in their own right (Madrigal, 2014). How people understand, resist, accommodate, and game algorithms is as much a part of the sociotechnical assemblage as the code itself.
Case Studies: Power, Bias, and the “Human in the Loop”
The Reproduction of Inequality
A recurring finding across algorithmic ethnographies is that systems presented as objective and neutral are in fact deeply shaped by the social inequalities of the contexts in which they are built. Christin (2020) has demonstrated how algorithms reproduce and sometimes amplify existing hierarchies, functioning as what O’Neil famously called “weapons of math destruction.” The gender scripts identified by Gehl et al. (2017) and the racialized logics embedded in many classification systems illustrate how algorithmic “neutrality” is a political claim rather than a technical achievement.
The Hidden Workforce
The “magic” of AI is sustained by a vast and largely invisible workforce, predominantly located in the Global South, who perform the labour of sorting, tagging, and classifying the data used to train machine learning systems (Shaban, n.d.). This labour is poorly paid, psychologically demanding, and deliberately obscured in the public presentation of AI as autonomous and self-sufficient. An anthropological analysis insists on making this workforce visible—not as an incidental detail but as constitutive of the systems we use.
Predictive Policing and the Echo Chamber of Bias
Perhaps the most consequential domain of algorithmic harm is predictive policing. Systems trained on historical crime data inherit the biases embedded in policing practices—data that disproportionately records arrests in communities of colour, not because crime is more prevalent there but because policing is. The result is an echo chamber: the algorithm recommends increased policing in historically over-policed communities, generating more data that confirms and deepens the original bias. Cheney-Lippold (2017) situates this dynamic within the broader transformation of governance by “data behaviorism,” where statistical correlations substitute for deliberative human judgment.
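A toy simulation can make this feedback loop visible. The figures and the hot-spot allocation rule below are entirely illustrative assumptions of mine, drawn from none of the cited studies; the point is only that when patrols follow past records, and offences can only be recorded where patrols are sent, an initial disparity persists and grows even though the underlying rates are identical.

```python
import random

random.seed(0)

# Two districts with the same underlying offence rate. District A starts with
# more recorded incidents only because it was patrolled more heavily in the past.
TRUE_RATE = {"A": 0.05, "B": 0.05}
recorded = {"A": 60, "B": 40}
PATROLS_PER_YEAR = 100

for year in range(10):
    # "Predictive" allocation: the district with more recorded incidents is
    # flagged as the hot spot and receives the bulk of the patrols.
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    allocation = {hot: int(PATROLS_PER_YEAR * 0.8),
                  cold: int(PATROLS_PER_YEAR * 0.2)}
    for district, patrols in allocation.items():
        # Offences enter the dataset only where officers are present to record them.
        hits = sum(random.random() < TRUE_RATE[district] for _ in range(patrols))
        recorded[district] += hits

share_a = recorded["A"] / sum(recorded.values())
print(f"District A's share of recorded incidents after 10 years: {share_a:.0%}")
```

Because district A begins with the larger record, it keeps attracting the patrols that generate new records, so its share of the data grows even though residents of the two districts behave identically.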
Indigenous Data Sovereignty
Against this backdrop of extractive data colonialism, indigenous communities offer models of resistance and alternative governance. The Māori broadcasting initiative Te Hiku Media, for example, has developed frameworks for maintaining sovereignty over linguistic and cultural data that are produced within their communities, insisting that data is not a neutral resource to be freely extracted but a form of cultural patrimony to be governed according to community values (Fenster, 2015). Such cases challenge anthropologists to think not only critically—documenting harm—but also constructively, taking seriously the institutional innovations through which communities are asserting control over their digital futures.
Conclusion: The Future of Anthropological Reason
The central argument of this paper has been that artificial intelligence systems are not merely technical objects but sociotechnical assemblages embedded in relations of power, labour, and meaning that ethnographic methods are uniquely equipped to illuminate. What is at stake in this claim is not simply the methodological repertoire of a single discipline but the broader question of who gets to define what AI is, what it does, and whose interests it serves.
Broussard (2018) has coined the term “technochauvinism” to describe the belief that technology is always and inevitably the solution—a belief that tends to crowd out political, social, and ethical alternatives. Anthropology serves as a necessary and institutionally distinct check on this tendency, not by opposing technology but by insisting on its embeddedness in human social worlds.
The distinctive contribution of anthropology to AI analysis is, in Hannerz’s (2010) terms, the cultivation of human diversity as an epistemic resource. By centering “Majority World perspectives”—including, as Jaramillo-Dent and Arora (2025) demonstrate through the Antropofagia framework, the creative and epistemological traditions of Latin America and other non-Western contexts—we challenge the Western-dominant models of AI that currently set the terms of the field.
Ultimately, what is needed is a “deflationary understanding” of AI: a refusal to treat these systems as independent truth-seeking agents and an insistence on seeing them as computational instruments embedded in what Leslie (2025) calls “warm-blooded” research—research conducted by people, shaped by interests, and amenable to democratic accountability. Opening the black box, in this sense, is not a technical operation. It is a political and ethical commitment.
References
Boellstorff, T. (2015). Coming of age in Second Life: An anthropologist explores the virtually human. Princeton University Press.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York University Press.
Christin, A. (2020). The ethnographer and the algorithm: Beyond the black box. Theory and Society, 49(5), 897–918.
De Seta, G., Pohjonen, M., & Knuutila, A. (2024). Synthetic ethnography: Field devices for the qualitative study of generative models. Big Data & Society, 11(4), Article 20539517241303126.
Fenster, M. (2015). Transparency in search of a theory. European Journal of Social Theory, 18(2), 150–167.
Flew, T. (2023). Mediated trust and artificial intelligence. Emerging Media, 1(1), 22–29.
Fuller, S., & Collier, J. H. (2003). Philosophy, rhetoric, and the end of knowledge: A new beginning for science and technology studies. Routledge.
Gehl, R. W., Moyer-Horner, L., & Yeo, S. K. (2017). Training computers to see internet pornography: Gender and sexual discrimination in computer vision science. Television & New Media, 18(6), 529–547.
Geiger, R. S., Tandon, U., Gakhokidze, A., Song, L., & Irani, L. (2024). Making algorithms public: Reimagining auditing from matters of fact to matters of concern. International Journal of Communication, 18, 634–655.
Grenz, T., & Kirschner, H. (2018). Digital traces in context: Unraveling the app store: Toward an interpretative perspective on tracing. International Journal of Communication, 12, 17.
Grut, S. (2024). Source-critical affordances in social media apps. International Journal of Communication, 18, 22.
Hannerz, U. (2010). Anthropology’s world: Life in a twenty-first-century discipline. Pluto Books.
Hegarty, K. (2022). The invention of the archived web: Tracing the influence of library frameworks on web archiving infrastructure. Internet Histories, 6(4), 432–451.
Jaramillo-Dent, D., & Arora, P. (2025). An Antropofagia approach to AI and creativity: Lessons from Latin America to rethink collectivity, process and meaning in creative value. International Journal of Cultural Studies, 28(6), 1210–1230.
Klinger, U., & Svensson, J. (2018). The end of media logics? On algorithms and agency. New Media & Society, 20(12), 4653–4670.
Leslie, D. (2025). Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI. AI and Ethics, 5(4), 3439–3444.
Lewis, S. C., Sanders, A. K., & Carmody, C. (2019). Libel by algorithm? Automated journalism and the threat of legal liability. Journalism & Mass Communication Quarterly, 96(1), 60–81.
Madrigal, A. C. (2014, January 2). The age of algorithmic anxiety. The New Yorker. https://www.newyorker.com/culture/infinite-scroll/the-age-of-algorithmic-anxiety
Marcus, G. E. (1995). Ethnography in/of the world system: The emergence of multi-sited ethnography. Annual Review of Anthropology, 24(1), 95–117.
Marres, N., & Moats, D. (2015). Mapping controversies with social media: The case for symmetry. Social Media + Society, 1(2), Article 2056305115604176.
Metaxa, D., Park, J. S., Robertson, R. E., Karahalios, K., Wilson, C., Hancock, J., & Sandvig, C. (2021). Auditing algorithms: Understanding algorithmic systems from the outside in. Foundations and Trends® in Human–Computer Interaction, 14(4), 272–344.
Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4–5), 446–463.
Rouvroy, A. (2012). The end(s) of critique: Data-behaviourism versus due process. In M. Hildebrandt & K. de Vries (Eds.), Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology (pp. 143–167). Routledge.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), Article 2053951717738104.
Shaffer Shane, T. (2023). AI incidents and ‘networked trouble’: The case for a research agenda. Big Data & Society, 10(2), Article 20539517231215360.
Shaban, H. (n.d.). The humans hiding behind the chatbots. New York Magazine/Intelligencer. https://nymag.com/intelligencer/article/ai-artificial-intelligence-humans-technology-business-factory.html
Trammell, A., & Cullen, A. L. (2021). A cultural approach to algorithmic bias in games. New Media & Society, 23(1), 159–174.
Vertesi, J., & Ribes, D. (Eds.). (2019). DigitalSTS: A field guide for science & technology studies. Princeton University Press.
