Thoughts on academic life (7)- “Projecilik”—When Funding Becomes the Product

I was recently at an academic conference when a colleague introduced herself: “I’m working on a TÜBİTAK-funded project on digital literacy.” Not “I’m researching digital literacy.” Not “I’m studying how social media affects learning.” No—the funded project itself was the primary credential, the opening gambit, the thing most worth mentioning.

This wasn’t unusual. Walk the corridors of any Turkish university whose administration is obsessed with rankings, attend any presentation, read any CV, and you’ll encounter constant invocations of TÜBİTAK funding. “TÜBİTAK 1001 project.” “Supported by TÜBİTAK 3501.” “Principal investigator of TÜBİTAK 1002.” The funding itself has become the achievement: not the research it supposedly enables, not the knowledge it supposedly produces, but the fact of being funded.

Welcome to what I call Tübitakçılık: the transformation of research funding from a means to an end into an end in itself, from a tool enabling scholarship into a status symbol signaling one’s position in academic hierarchies. This is, of course, a local chapter in a global process that treats funded-ness as “an externally-validated signal of accomplishment.”

In the social sciences, a TÜBİTAK grant now signals more than research capability. It signals insider status, network connections, institutional favor, and expertise at writing what the gatekeepers desire. Let me underline that I do not claim all projects are unproductive. In fact, many are productive, and thank God we have more funding for the social sciences. The critique here concerns a particular pattern, one that extends well beyond Türkiye’s borders.

Most of the studies cited below do not come from Turkish cases, but I believe they support my observations here.

The Productivity Gap: Getting Funded ≠ Producing Research

We would need local statistics to properly support my observations, but the data I have come from the United States, and not from the social sciences. Still, to give some idea: a comprehensive study of US National Institutes of Health (NIH) grants found that receiving an R01 grant worth approximately $1.7 million produced only one additional publication over five years, representing just a 7% increase in productivity (Jacob & Lefgren, 2011). Jacob and Lefgren note that researchers often shift between funding sources: losing one grant doesn’t stop research; researchers simply turn to alternative funding. In the meantime, the “Matthew effect” operates: success begets success, with funded researchers attracting additional funding more easily ($648,000 more in years 6–10 after the initial award). Yet much research happens regardless of specific grant funding. Over 90% of applicants in NIH studies published at least once in the five years following their application, whether funded or not. Overall, competitive funding systems create perverse incentives that distort research priorities.

The Application Game: QRPs and Gaming

Competitive funding doesn’t just reward good research. It rewards skill at navigating funding systems—what we might politely call “grantsmanship” but what increasingly involves questionable research practices (QRPs).

A survey of applicants, reviewers, and panel members from a major European funding agency found that QRPs in grant applications are “remarkably prevalent”: over 60% regularly engaged in at least one such practice, 40% engaged occasionally in half the practices queried, and only 12% reported not engaging in any (Conix et al., 2023).

Common QRPs in funding applications include:

Claiming false novelty: Making proposals appear more innovative than they actually are. One Dutch researcher candidly noted: “No one does exactly what is in the grant, right? You write a cool proposal and decide later what’s actually possible.”

Pre-conducting research: “Bad: Science is per definition not predictable. Competitive funding forces you to predict your science, i.e. first do experiments then write the grant. Afterwards claim success because all your ‘predictions’ turned out to be true.” Another researcher reported: “You have to have 2/3 of the paper already written to get the grant for the project.”

Strategic misrepresentation: Exaggerating feasibility, downplaying risks, overselling impact, inflating preliminary data significance.

Network manipulation: Citing reviewers’ work strategically, proposing collaborations with influential researchers primarily for credibility, emphasizing connections to prestigious institutions.

Universities themselves often encourage these practices by making acquired funding “an important factor in tenure decisions or salary negotiations,” contributing to “a very competitive research environment that is conducive to QRPs.”

Networks and Gatekeepers: It’s Who You Know

Research on funding success consistently finds that social networks matter enormously. Your position in a network, particularly whether you connect otherwise separate parts of it, significantly predicts grant success. The better-connected you are, the more likely you are to get funded.

The research world operates as what Bourdieu would call a distinctive habitus, “characterized by norms and rules which include the (written and unwritten) rules of the funding application process”. Understanding these unwritten rules—knowing the gatekeepers, speaking their language, signaling alignment with their priorities—often matters more than research quality itself.

This creates barriers for early-career researchers, for those outside major institutions, and for those lacking connections to established networks. The same mechanisms that supposedly identify “excellence” actually reproduce existing hierarchies and exclusions.

The Massive Time Sink

Perhaps the most serious problem: competitive funding systems waste enormous amounts of researcher time. In many cases, half of the research funding goes not to research but to the application process itself. This is staggeringly inefficient.

The Signaling Problem

What bothers me most is the constant signaling—the need to invoke TÜBİTAK funding in introductions, CV descriptions, email signatures, and conference presentations. This reveals what the funding has become: not a tool for scholarship but a marker of status, not a means but an end.

Bourdieu noted that symbolic capital requires “constant reinforcement through performance and acknowledgment”. Hence the repeated mentions, the regular reminders: “I am funded. I am legitimate. I belong to the club of those deemed worthy.”

This performance is exhausting and distorting. It shifts focus from intellectual contribution to institutional credentials. It makes the funded project—the bureaucratic entity, the approved proposal, the allocated budget—more important than the knowledge supposedly being produced.

What’s the Alternative?

Some research suggests that stable, less competitive funding produces more innovative research than competitive grants. Lottery-based allocation systems have been proposed to reduce opportunity costs while maintaining fairness. Others advocate for more diverse funding mechanisms that don’t all reward the same types of proposals or require the same networking and gaming. Alongside these reforms, a range of alternative funding arrangements has gained traction. Philanthropic foundations and charitable donors now provide a substantial share of research budgets in some systems, often with greater flexibility than state grant agencies but also with their own thematic priorities and geographies.

Many universities have expanded internal seed and bridge‑funding schemes, offering small, relatively low‑bureaucracy grants to launch risky projects, gather pilot data, or sustain research between external awards. Crowdfunding platforms dedicated to science and scholarship allow researchers to appeal directly to lay publics to support niche, early‑stage, or controversial projects that would struggle in conventional peer‑reviewed competitions. Industry partnerships and incubator‑style programmes, meanwhile, promise resources, infrastructure, and co‑funding in exchange for orienting parts of academic work toward commercial applications. Finally, philanthropic and public funders have begun experimenting with participatory and community‑led grantmaking, in which affected communities, civil society groups, or peer networks take on a formal role in deciding which research gets funded, explicitly tying resource allocation to democratic ideals and local accountability rather than solely to disciplinary prestige.

Further Readings

Conix, S., De Peuter, S., Block, A. D., & Vaesen, K. (2023). Questionable research practices in competitive grant funding: A survey. PLOS ONE, 18(11), e0293310.

Gigerenzer, G., et al. (2025). Alternative models of funding curiosity‑driven research. Proceedings of the National Academy of Sciences.

Barlösius, E., & Philipps, A. (2022). Random grant allocation from the researchers’ perspective: Introducing the distinction into legitimate and illegitimate problems in Bourdieu’s field theory. Social Science Information, 61(1), 101–123.

Jacob, B. A., & Lefgren, L. (2011). The impact of research grant funding on scientific productivity. Journal of Public Economics, 95(9–10), 1168–1177.

