My new keynote speech: “There Is No Neutral Machine: The Necessary Contribution of Social Sciences and Humanities to AI”

Keynote Speech — Erkan Saka

Critical AI Studies Conference, Istanbul Bilgi University, 9 May 2026

[Formatted using Claude.ai]

To understand AI, we must stop treating it as a purely technical or mathematical achievement. And to explain why this matters, we must start not with technology but with democracy.


Democratic self-governance depends on collective epistemic capacity: the ability of citizens, communities, and institutions to understand, contest, and shape the forces that structure their lives. AI is now one of those forces. When algorithms allocate resources, rank candidates, set prices, moderate speech, and shape what information reaches which citizens, the question of who designs these systems — and whose values they encode — becomes a fundamentally political question. The social sciences and humanities do not merely enrich the study of AI. They are necessary to any democratic account of it.


The Myth of the Neutral Machine: The technology industry often frames computer programming as neutral by default, but algorithms always embed the values and assumptions of their creators. Neutrality is not a technical achievement; it is a political claim, and like all political claims, it demands scrutiny.

Algorithms as Sociotechnical Assemblages: Social sciences, particularly Science and Technology Studies (STS), view algorithms not as standalone mathematical objects, but as “sociotechnical assemblages.” They are complex webs involving hardware, software, data, institutions, people, and legal procedures.


The “Empty Shell” Concept: Algorithms are essentially “empty shells” that only acquire functions, power, and political valence when they are enacted as part of complex social arrangements. Therefore, their ultimate consequences are entirely contingent on the cultural and economic contexts in which they are deployed.


Bypassing the “Black Box”

A major challenge in AI research is algorithmic opacity — the fact that algorithms operate out of sight, hidden behind corporate walls and the complexity of machine-learned models. The social sciences offer a toolkit for bypassing this opacity.

Ethnography of the Algorithm: While we cannot always study algorithmic operations directly because they are inaccessible or proprietary, we can study them ethnographically. Anthropologists examine the workplace cultures that produce these algorithms and the daily social practices that mediate how their outputs are received.


Virginia Eubanks demonstrated this with devastating clarity in her ethnographic study of automated welfare systems in the United States. Through sustained fieldwork in welfare offices, hospitals, and homeless shelters, she showed that algorithmic tools for allocating social services did not eliminate human bias — they automated and accelerated it, targeting the poor with a precision no caseworker could match. No technical audit of the algorithm would have surfaced this. It required being in the room with the people affected.


Reading Algorithms as Culture: Algorithms can be read as both texts and weapons. Rather than trying to reverse-engineer the code, social scientists look at the relationships and interactions people have with these systems, revealing the necessary inscription of technology into the social world.


Safiya Umoja Noble demonstrated this when she read Google search results not as relevance rankings but as cultural texts. By applying the interpretive tools of media studies and critical race theory, she showed that searches for Black girls returned pornographic content while searches for White girls returned wholesome results. The algorithm was not broken. It was faithfully reflecting and amplifying existing cultural hierarchies — a finding invisible to any purely technical evaluation.


Algorithmic Silence: But opacity is not only a matter of corporate walls or technical complexity. There is a further, more insidious dimension: what AI systems actively render invisible. Not content that is hidden from view, but knowledge that is epistemically erased: marginalized traditions, non-Western epistemologies, minority languages, subaltern histories that never make it into training data in the first place, and therefore simply do not exist from the system’s perspective. Understanding this silence requires the interpretive resources of the humanities, not the mathematics of computer science.

From Human-Centered to Social-Centered AI

Computer science often relies on “human-centered” design, which typically focuses on individual user feedback and interface usability. Social sciences push for a necessary shift toward “social-centered AI.”


The Limits of Individualism: Traditional AI evaluations adopt an individualistic perspective, measuring metrics like helpfulness or toxicity for a single user. However, this assumes that fixing individual-level harms will automatically translate into societal well-being.

Impact on Institutions and Groups: A social-centered approach evaluates how AI technologies cause disruptions at the level of communities, social norms, and institutions. It helps us understand how the benefits of AI are unevenly distributed and how the social costs are often borne by less-resourced groups.

The Gender Shades study — conducted by Joy Buolamwini and Timnit Gebru in 2018 — is perhaps the most cited demonstration of why this shift matters. Standard technical benchmarks showed commercial facial recognition systems performing at over 90% accuracy. But by applying an intersectional sociological framework — examining performance broken down by gender and skin tone together — they found error rates for darker-skinned women as high as 34%. The bias was entirely invisible to purely technical evaluation, because the benchmark itself had been designed without a social lens. The problem was not in the algorithm. It was in who asked the question.

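For the written version of this talk, here is a minimal sketch of what disaggregated evaluation looks like in code. The data is synthetic and the subgroup error rates are invented to echo the pattern Gender Shades exposed (the study itself used the Fitzpatrick skin-type scale; the field names here are simplifications):

```python
"""Aggregate vs. disaggregated evaluation: a toy sketch with synthetic data."""
import random
from collections import defaultdict

random.seed(0)

# Invented per-subgroup error rates, echoing the disparity Gender Shades found.
ERROR_RATE = {
    ("male", "lighter"): 0.01,
    ("female", "lighter"): 0.07,
    ("male", "darker"): 0.12,
    ("female", "darker"): 0.34,
}

# A benchmark population skewed toward the best-served subgroup,
# as the standard face datasets of the period were.
population = ([("male", "lighter")] * 500 + [("female", "lighter")] * 300 +
              [("male", "darker")] * 120 + [("female", "darker")] * 80)

results = [(g, s, random.random() >= ERROR_RATE[(g, s)]) for g, s in population]

# The aggregate number looks reassuring...
overall = sum(ok for _, _, ok in results) / len(results)
print(f"overall accuracy: {overall:.1%}")

# ...until you ask the intersectional question the benchmark never asked.
by_group = defaultdict(list)
for g, s, ok in results:
    by_group[(g, s)].append(ok)
for (g, s), oks in sorted(by_group.items()):
    print(f"{g}/{s}: error rate {1 - sum(oks) / len(oks):.1%}")
```

The aggregate metric is dominated by the largest, best-served subgroup; the disparity only becomes visible once the evaluation itself encodes a social question.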

The Tacit Knowledge Problem: There is also a deeper epistemological limit that social-centered analysis reveals. Drawing on Michael Polanyi’s concept of tacit knowledge — the embodied, context-dependent, socially embedded knowledge that we know but cannot fully articulate — we can identify a structural boundary that generative AI cannot cross, not as a current limitation but by design. A surgeon’s judgment, a diplomat’s instinct, a teacher’s reading of a classroom: these competencies are constituted through social practice, not reducible to data. The humanities and social sciences do not merely complement technical AI; they name what AI cannot do, and insist that we not pretend otherwise.


Unpacking Bias, Power, and Cognitive Loss

When computer scientists address algorithmic bias, they often treat it as a “bug” to be fixed mathematically. Social scientists reveal that bias is a structural, political issue — and that the cognitive consequences of AI reach far beyond bias.


Machine Habitus: Drawing on Pierre Bourdieu’s sociological theories, researchers demonstrate that AI models develop a “machine habitus” that reflects human social class distinctions, gender norms, and cultural stratifications. These are not technical flaws, but deep societal inequalities mirrored through statistical data patterns.


Consider Amazon’s internal hiring algorithm, abandoned in 2018 after engineers discovered it had been systematically downgrading applications from women. The model had been trained on a decade of historical CVs — and it learned faithfully from that data, which reflected a male-dominated hiring culture. Engineers tried to patch the gender bias out; the fix failed. The problem was not in the code. It was in the social structure the code had been trained to replicate — something only visible through a sociological diagnosis of how historical inequality becomes encoded as statistical pattern.

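A toy sketch makes the mechanism visible. Everything below is synthetic and the proxy feature is invented (press reports on the Amazon case cited phrases like “women’s chess club”); the point is structural: when a feature correlates with gender, a model trained on biased historical labels reconstructs gender from the proxy even after the gender column is deleted:

```python
"""Why deleting the protected column does not delete the bias: synthetic sketch."""
import random

random.seed(1)

rows = []
for _ in range(10_000):
    gender = random.choice(["f", "m"])
    # Invented proxy feature that correlates with gender but is not gender.
    proxy = int(random.random() < (0.7 if gender == "f" else 0.1))
    # The historical label reflects a biased hiring culture, not merit.
    hired = random.random() < (0.10 if gender == "f" else 0.30)
    rows.append((gender, proxy, hired))

def hire_rate(flag):
    subset = [h for _, p, h in rows if p == flag]
    return sum(subset) / len(subset)

# A model trained WITHOUT the gender column still sees this signal:
print(f"historical hire rate, proxy present: {hire_rate(1):.1%}")
print(f"historical hire rate, proxy absent:  {hire_rate(0):.1%}")

with_proxy = [g for g, p, _ in rows if p == 1]
print(f"share of women among proxy-present CVs: {with_proxy.count('f') / len(with_proxy):.1%}")
# Any learner that fits the biased labels will use the proxy to do so --
# and the proxy carries gender. Deleting the column deleted nothing.
```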

Or consider Obermeyer et al.’s landmark 2019 study in Science, which examined a major US healthcare algorithm that used cost of care as a proxy for medical need. Technically, this seemed neutral. But because Black patients had historically received less care due to structural inequality, the algorithm systematically underestimated their medical needs. A technical audit of the model would have found nothing wrong. It was social science analysis — asking what “cost” means as a social variable, and whose history shaped its distribution — that revealed the injustice.

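The mechanism fits in a few lines of simulation. The numbers below are invented, not the study’s data or model; only the structure of label-choice bias is being illustrated:

```python
"""Label-choice bias in the spirit of Obermeyer et al. (2019): invented numbers."""
import random

random.seed(2)

patients = []
for _ in range(10_000):
    group = random.choice(["black", "white"])
    need = random.gauss(50, 15)                 # latent medical need (unobserved)
    access = 0.75 if group == "black" else 1.0  # structurally less care delivered
    cost = max(0.0, need * access + random.gauss(0, 5))  # observed spending
    patients.append((group, need, cost))

# The deployed proxy: rank by observed cost, enrol the top 10% in a care program.
cutoff = sorted((c for _, _, c in patients), reverse=True)[len(patients) // 10]
enrolled = [(g, n) for g, n, c in patients if c >= cutoff]

for grp in ("black", "white"):
    needs = [n for g, n in enrolled if g == grp]
    mean_need = sum(needs) / len(needs) if needs else float("nan")
    print(f"{grp}: enrolled {len(needs)}, mean need of enrolled {mean_need:.1f}")
# Black patients must be considerably sicker to clear the same cost threshold:
# the injustice lives in the choice of label, not in the optimization code.
```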

The Trap of Formalism: Social concepts like justice and fairness are procedural, contextual, and deeply contested. They cannot be reduced to mathematical formalisms without the critical perspective of social theorists, legal scholars, and ethicists. Indeed, formal impossibility results show that intuitive fairness criteria — for instance, calibration across groups and equal false positive rates — cannot, in general, be satisfied simultaneously; choosing between them is a political decision, not a mathematical one.

A further illustration: the COMPAS recidivism algorithm, used by US courts to predict reoffending risk. Standard accuracy metrics looked acceptable. ProPublica’s 2016 investigation, applying a social science framework of group fairness, showed that non-reoffending Black defendants were nearly twice as likely as non-reoffending White defendants to be falsely flagged as high risk. Engineers and judges were looking at accuracy; social scientists asked: accurate for whom, and with what consequences distributed across which populations?

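The distinction is easy to state in code. The confusion-table counts below are invented, chosen only to echo the pattern ProPublica reported (roughly 45% vs. 23% false positive rates):

```python
"""Same accuracy, different false positive rates: invented confusion tables."""

# (group, actually_reoffended, flagged_high_risk) -> count  [synthetic]
counts = {
    ("black", True,  True): 300, ("black", True,  False): 100,
    ("black", False, True): 180, ("black", False, False): 220,
    ("white", True,  True): 150, ("white", True,  False): 170,
    ("white", False, True): 110, ("white", False, False): 370,
}

def accuracy(group):
    correct = counts[(group, True, True)] + counts[(group, False, False)]
    total = sum(v for (g, _, _), v in counts.items() if g == group)
    return correct / total

def false_positive_rate(group):
    # Among people who did NOT reoffend, how many were flagged high risk?
    fp, tn = counts[(group, False, True)], counts[(group, False, False)]
    return fp / (fp + tn)

for g in ("black", "white"):
    print(f"{g}: accuracy {accuracy(g):.0%}, FPR {false_positive_rate(g):.0%}")
# Identical accuracy (65% for both groups), but non-reoffending Black defendants
# are flagged at roughly twice the rate -- a disparity a single aggregate
# "accuracy" number can never show.
```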

De-centering the “White Geek’s Burden”: Social science critiques the colonial dynamics embedded in AI development, highlighting how global tech corporations extract cheap human data labor from the Global South while maintaining high-value architectural control in the Global North.


Cognitive Proletarianization: Beyond bias, we must also reckon with what Bernard Stiegler called the proletarianization of cognitive capacities. When we systematically delegate judgment, memory, interpretation, and evaluation to algorithmic systems, we do not merely automate tasks — we risk atrophying the very human faculties that democratic life depends upon. Stiegler’s pharmacological insight is crucial here: the same technology can function simultaneously as remedy and poison. AI can augment human cognition and hollow it out. Social science is the discipline equipped to track which is happening, for whom, and under what conditions.


The Power of Public Perception and “Folk Theories”

To understand AI’s impact, we must study what people believe it does, which is sometimes more politically significant than what it actually does.


Folk Theories of Algorithms: Everyday users develop “folk theories” to make sense of opaque algorithmic systems. These theories guide behavior and serve as frameworks for moral evaluation, allowing users to assign blame, respect, and responsibility to the machines and the companies that build them.


The media scholar Taina Bucher studied how Facebook users developed sophisticated theories about how the News Feed algorithm determines what they see — and found that users actively modified their posting behavior based on these theories: timing posts differently, pruning their friend lists, choosing certain words. The algorithm’s perceived logic was shaping social behavior as powerfully as its actual logic. None of this can be captured through technical analysis alone. It requires ethnographic attention to what people believe, fear, and do in relation to systems they cannot see.

Perception Drives Deployment: The efficacy of algorithms is shaped by human perception. Discourses, fears, and tech-utopian dreams significantly impact the legitimacy, acceptability, and eventual scale of AI deployment in society.


Epistemic Resistance: The Social Sciences as Architects of Alternatives

The humanities and social sciences are sometimes cast purely as critics — necessary diagnosticians of AI’s pathologies, but with little to say about what comes next. This is wrong, and it undersells the field.


Around the world, communities are already building alternatives grounded in the values and frameworks that social science and humanities research makes possible.


Te Hiku Media in Aotearoa New Zealand is one of the most instructive examples anywhere of what indigenous-led AI development looks like in practice. Founded as a Māori iwi broadcaster in 1990, Te Hiku Media had accumulated over 1,000 hours of archival audio material — recordings of native speakers, some born in the late 19th century, whose te reo Māori remained untouched by colonial influence. In 2013, a meeting of kaumātua — tribal elders — made a pivotal decision: to put the language online. But rather than signing over rights to global platforms, as commercial terms would require, they built their own content distribution platform, Whare Kōrero — “house of speech.”


When they needed speech data to train an automatic speech recognition model, they ran a community crowdsourcing campaign governed by the Kaitiakitanga license — a license that guarantees the data will only ever be used for the benefit of the Māori people. Over 2,500 community members contributed more than 300 hours of labeled speech in just ten days. The result was an ASR model that outperforms anything the major tech companies have built for te reo — 92% accuracy, against the 73% word error rate OpenAI’s Whisper posted on the same test set — precisely because community relationship and data quality are inseparable. When data scientists have come to Te Hiku wanting access to the data, the organization has turned them away. The principle is non-negotiable: “Indigenous AI is about agency, not automation.”

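A note on the metrics, since the two figures are reported on different scales: ASR quality is usually measured as word error rate (WER), and “accuracy” is commonly quoted as 1 − WER, so 92% accuracy corresponds to roughly an 8% WER against the 73% cited for Whisper. A minimal reference implementation of WER (the sample phrase is just an illustrative string):

```python
"""Word error rate (WER): minimal reference implementation."""

def wer(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("ka pai te mahi", "ka pai te mahi"))  # 0.0  (perfect transcription)
print(wer("ka pai te mahi", "ka pai mahi"))     # 0.25 (one deletion, four words)
```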

The broader Papa Reo platform — a seven-year, government-funded data science initiative and the only such project in New Zealand not led by a university — now aims to enable other minority language communities worldwide to build their own tools while retaining full sovereignty over their data. This is social science and indigenous knowledge governance made operational.


Mukurtu CMS illustrates a different but equally powerful challenge to the default logic of digital infrastructure. Built originally with the Warumungu community in Australia — and named after a Warumungu word for a “dilly bag,” a safe keeping place for sacred materials — Mukurtu embeds cultural protocols directly into software architecture. The internet’s default assumption is open access: that making information freely available is inherently progressive. Mukurtu refuses this. Its system of cultural protocols allows communities to define fine-grained access levels based on their own values and traditions: some materials accessible only to women, only to elders, only to members of specific clans, only during certain ceremonial periods. Access rules can evolve as community norms evolve. The software does not impose a single model of sharing — it extends the community’s existing social and cultural systems into the digital environment. Today Mukurtu is used by hundreds of indigenous communities worldwide. It is not anti-technology. It is a demonstration that different values, brought to design from the beginning, produce fundamentally different architectures.

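What it means to embed protocols in the architecture is easiest to see in a sketch. This is not Mukurtu’s actual data model or API — Mukurtu is built on Drupal, and every name below is invented — it only illustrates the design move: access rules authored by the community, enforced by the software, and revisable as norms change:

```python
"""Community-authored access protocols: an illustrative sketch only.
Not Mukurtu's implementation; all class and field names are invented."""
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Member:
    name: str
    roles: Set[str] = field(default_factory=set)  # e.g. {"elder", "women"}

@dataclass(frozen=True)
class CulturalProtocol:
    """An access rule defined by the community, not by the platform."""
    label: str
    may_view: Callable[[Member], bool]

@dataclass
class HeritageItem:
    title: str
    protocols: List[CulturalProtocol]

    def accessible_to(self, member: Member) -> bool:
        # Most-restrictive-wins: every protocol must grant access --
        # the inverse of the web's open-by-default assumption.
        return all(p.may_view(member) for p in self.protocols)

# The community authors the rules, and can change them as norms evolve.
elders_only = CulturalProtocol("elders only", lambda m: "elder" in m.roles)
women_only = CulturalProtocol("women only", lambda m: "women" in m.roles)

song = HeritageItem("ceremonial song recording", [elders_only, women_only])
print(song.accessible_to(Member("A", {"elder", "women"})))  # True
print(song.accessible_to(Member("B", {"elder"})))           # False
```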

In Latin America, the critical tradition runs from theoretical frameworks to grassroots activism. The Latin American Initiative for Data Justice has demanded community ownership of data and framed data as a shared resource tied to cultural and territorial sovereignty — directly challenging the individualistic Western model of privacy that underlies most AI governance discourse. Scholars working in the Latin American decolonial tradition — drawing on Quijano’s coloniality of power, Mignolo’s coloniality of knowledge, and Escobar’s work on pluriversal design — have built a rigorous theoretical apparatus for understanding AI not as a neutral tool applied to pre-existing social conditions, but as a sociotechnical system that actively reproduces Eurocentric and capitalist logics. Networks like the Feminist AI Research Network’s Latin American hub and the Big Data from the South initiative are translating this theoretical work into concrete research agendas and policy proposals. The argument is not that the Global South should reject technology. It is that the Global South is not merely a source of data for Northern AI systems — it is a site of distinct knowledge traditions, governance practices, and social values that deserve technological expression on their own terms.


Taken together, these three cases make a progression worth sitting with for a moment. Te Hiku Media shows communities building their own tools. Mukurtu shows communities encoding their own values into software architecture. Latin America shows scholars and activists building the theoretical frameworks for what data justice looks like at a structural level. Practice, design, theory — and in each case, the humanities and social sciences are not standing on the sidelines. They are doing the foundational work.


Conclusion: The Call for Interdisciplinary Collaboration — and the Question of Responsibility

Ultimately, the social sciences act as a critical counterweight to the sheer speed and “technochauvinism” of the tech industry.


Slowing Down for Moral Reasoning: Social diagnostics provide the spaces for moral reasoning required to slow down the runaway, hegemonic “build-test-fail-iterate” logic of Silicon Valley.

Red-Teaming and Governance: Tasks like AI “red-teaming” (stress-testing models for safety) and broader AI governance are fundamentally sociotechnical challenges. They require deep collaboration between machine learning engineers and social scientists to anticipate societal impacts and build equitable digital futures.


The Dissolving Subject: But I want to close with a question that I think runs beneath every panel you will hear today. As agency migrates — from human subject to artificial agent, from deliberation to prediction, from decision to optimization — the philosophical architecture of moral responsibility begins to dissolve. When an algorithmic system makes a lethal targeting decision, denies a loan, or recommends a sentence, who is responsible? The engineer? The institution? The state that licensed the system? The data that trained it?


This is not an abstract philosophical puzzle. It is a live political crisis. And it has no technical solution. The question of responsibility — of what it means to be a moral subject in a world increasingly governed by artificial agents — is precisely the kind of question that the social sciences and humanities have spent centuries developing the tools to ask.


That is why we are here. Not to advise the engineers after the fact, but to insist, from the beginning, that building AI systems is an act of moral and political consequence — and that consequence must be owned.


