Botland Empire
Manufacturing Consensus in the Age of Political Chatbots
“I think the potential of what the internet is going to do to society, both good and bad, is unimaginable. I think we’re actually on the cusp of something exhilarating and terrifying.”
- David Bowie in 1999
“An illusion shared by everyone becomes a reality”
- Erich Fromm
“We’ll know our disinformation program is complete when everything the American public believes is false.”
- William J. Casey (CIA Director 1981–1987)
Roughly fifty percent of global internet traffic is now estimated to be automated bot activity, and the proportion is growing. This article explores how increasingly sophisticated bots threaten the social, political, and informational foundations of modern society.
What exactly is a bot anyway?
Simply put, a bot is a piece of software designed to perform automated tasks at high speed, often across vast amounts of data. This is nothing new: mechanical automation underpinned the Industrial Revolution, and in theory automation should simply make our lives easier. Unfortunately, as we know and as Marx exposed in detail, a potentially useful advancement can be turned into a tool for exploitation and control. There are numerous types of bots. Google, for example, is powered by a web crawler bot, Googlebot. There are scalper bots, which buy up products like concert tickets the moment they go on sale in order to resell them at inflated prices. And in recent years we have seen the emergence of generative AI chatbots like ChatGPT and Grok.
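To make “automated tasks at high speed” concrete, here is a minimal sketch of scalper‑bot logic in Python. The URL, the string check, and the timing are hypothetical placeholders, not a real storefront or API; the point is only the loop: poll a page far faster than any human could refresh it, and act the instant stock appears.

```python
import time
import requests

# Illustrative scalper-style bot: poll a (hypothetical) product page and
# act the moment stock appears. The URL and matching logic are placeholders.
PRODUCT_URL = "https://example-store.test/tickets/12345"

def stock_available(html: str) -> bool:
    # Real bots usually parse structured APIs; a string check keeps this short.
    return "in stock" in html.lower()

def watch(poll_seconds: float = 0.5) -> None:
    while True:
        page = requests.get(PRODUCT_URL, timeout=5)
        if stock_available(page.text):
            print("Stock detected - an automated checkout would fire here.")
            break
        time.sleep(poll_seconds)  # hundreds of checks per minute, tirelessly

if __name__ == "__main__":
    watch()
```

Everything discussed below, from fake reviews to political sock puppets, is a variation on this same pattern: a script doing something relentlessly, at scale, that a human could only do slowly.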
The significance of bots becomes clear only when we consider the digital terrain they inhabit. Social media platforms, built around engagement maximisation and data harvesting, have provided the perfect environment for automated systems to amplify narratives, distort consensus, and influence behaviour on a broad scale.
(Anti-) Social Media
The initial emergence of the internet and, in particular, social media brought with it the feeling that this could lead to real social change. Although that promise didn’t materialise, it can at least be said that the early years of social media were more people‑centred. Things changed as Big Tech scrambled to work out ways to profiteer from it. Emerging social media platforms quickly established themselves as part of the Big Tech ecosystem, with a singular priority: maximising user engagement at all costs. Higher engagement translates directly into increased advertising revenue and the extraction of ever more user data - described by some as the new gold. Now we are seeing the construction of vast data centres to accommodate the coming Internet of Things (IoT), through which Big Tech will finally infiltrate every aspect of our lives.
Beyond the endless stream of algorithmically curated videos designed to keep users hooked, there are more serious consequences with tangible effects on societies worldwide. Polarising content consistently generates higher engagement, reinforcing echo chambers that spill over into real‑world attitudes and behaviour. It is fitting that rage bait was named Oxford’s Word of the Year for 2025. The apps so deeply integrated into our lives are designed to thrive on anger, frustration, offence and provocation. While the internet and social media have undoubtedly expanded access to information and helped raise awareness of social and political issues, this constant cycle of outrage raises questions about their capacity to translate reaction into sustained political action. In practice, social media often channels dissent into fleeting moments of visibility rather than durable forms of collective organisation. Why does it feel like we live in a world bereft of ideas and a way forward, despite being, in theory, more connected than ever?
It shouldn’t be controversial to say that the political direction of these platforms is less a product of neutral design than of who owns and governs them. Social media spaces central to public discourse are controlled by a small, unaccountable billionaire class whose economic interests are fundamentally opposed to any movement seeking structural redistribution of power or wealth. This dynamic is reinforced by a mainstream media landscape that embraces right‑wing ideas and routinely excludes credible socialist voices, further narrowing the boundaries of acceptable political debate - a form of Neo‑McCarthyism. At the same time, right‑wing parties have demonstrated a clear strategic focus on social media, recognising its influence and investing heavily in its use. When both digital platforms and mass media are aligned with entrenched economic power, it becomes difficult to view the global drift toward authoritarian and exclusionary politics as coincidental. The notion that these privately owned platforms can function as fair, democratic arenas for political organising, particularly for socialist movements, is untenable in the absence of meaningful accountability.
Manufacturing Consensus
Social media bots are designed to disseminate specific narratives, either to manufacture support or to disrupt discussion through trolling. Samuel Woolley, an expert on computational propaganda, describes this process as “manufacturing consensus.” The internet has become a propagandist’s ideal environment: those with sufficient resources can deploy vast networks of automated accounts to simulate popular support and create the illusion of widespread agreement, whether for political or commercial purposes. Data is central to this process, which helps explain why technology companies remain so focused on maximising user engagement despite the documented social harm. Their objective is not simply to keep users online, but to expose them to more advertising and to harvest data that can be monetised or sold.
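A toy simulation illustrates how little automation is needed to manufacture the appearance of agreement. All numbers below are invented for illustration: genuine opinion is split roughly evenly, yet a modest bot contingent posting a single sponsored line makes that line look like a comfortable majority to anyone reading the feed.

```python
import random
from collections import Counter

# Toy model of "manufacturing consensus": a small bot network drowns out
# genuine opinion. All quantities are illustrative assumptions.
GENUINE_USERS = 1000   # real accounts, split roughly 50/50
BOTS = 400             # automated accounts pushing one narrative

def observed_opinion() -> Counter:
    posts = [random.choice(["support", "oppose"]) for _ in range(GENUINE_USERS)]
    posts.extend(["support"] * BOTS)  # bots post only the sponsored line
    return Counter(posts)

tally = observed_opinion()
total = sum(tally.values())
for view, n in tally.most_common():
    print(f"{view}: {n / total:.0%}")  # 'support' now reads as ~64%, not ~50%
```

The observer never sees the split between genuine and automated posts, only the aggregate, which is precisely why the illusion works.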
Aza Raskin, who helped develop the concept of infinite scrolling, has since warned that “behind every screen on your phone, there are … a thousand engineers that have worked on this thing to try to make it maximally addicting.” This logic extends beyond social media into broader technological trends such as the IoT. Often presented as progress, this expansion offers little in the way of material benefit while dramatically increasing opportunities for data extraction. The result is a dystopian and increasingly likely future in which targeted advertising and surveillance are embedded into the most mundane aspects of daily life.
The political use of harvested data was most visibly exposed in the Cambridge Analytica scandal, where personal information was exploited for targeted political advertising. It would be naïve to assume such practices have ceased. On the contrary, the rise of AI‑driven platforms and chatbots suggests that new and even more intrusive methods of data collection are being normalised, further entrenching the power of those who control digital infrastructure.
All That Glitters Is Not Gold
Human nature doesn’t change. Long before the internet, there were bluffers, celebrities, and snake‑oil salesmen. What the internet has done is repackage these practices in a sleek, authoritative form, which we have come to trust almost instinctively. A clear example is Google and Google Maps. It is rarely questioned that these platforms possess detailed knowledge of where we live and how we move through the world. Reviews, in particular, have become a default source of trust, especially when navigating unfamiliar cities or countries. At first glance, this appeared to be a democratic improvement: a collective system for evaluating businesses and services. Yet this trust has proven misplaced. As Yasha Levine argues in Surveillance Valley: The Secret Military History of the Internet (2018), the internet’s foundations lie not in openness or democracy, but in state‑sponsored surveillance and counterinsurgency.
If this was indeed the original intention, it has been executed with remarkable success. Trust has been carefully cultivated, allowing platforms to embed themselves into everyday life. But there is little reason to assume that corporations such as Google deserve this trust. Minimal investigation reveals services such as buyreviewz.com, where Google Maps reviews can be purchased with ease. Google is far from unique in this regard; a 2021 Which? investigation exposed an entire industry dedicated to producing fake reviews on Amazon.
The HBO documentary Fake Famous (2021) illustrates how easily online credibility can be manufactured. The filmmakers attempt to turn three ordinary individuals into social media influencers using a single primary tactic: bots. By purchasing automated followers, likes, and comments through services such as famoid.com, they demonstrate how visibility and perceived popularity can be artificially constructed. While bots were once relatively easy to identify, advances in AI have made them increasingly indistinguishable from real users. These systems can now convincingly mimic human behaviour, turning them into powerful tools for political manipulation and commercial deception alike.
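The claim that bots “were once relatively easy to identify” is worth unpacking. Early detection relied on crude heuristics of the kind sketched below; the fields and thresholds are illustrative assumptions, not any platform’s actual rules, and an LLM‑backed account posting fluent, varied text at a human pace would trip none of them.

```python
import re

# Old-school heuristic bot detection: crude signals that once separated
# bots from humans. Thresholds and fields are illustrative assumptions.
def looks_like_bot(account: dict) -> bool:
    signals = [
        account["posts_per_day"] > 100,                   # inhuman volume
        account["followers"] < 10 and account["following"] > 1000,
        bool(re.search(r"\d{6,}$", account["handle"])),   # auto-generated name
        account["has_default_avatar"],
    ]
    return sum(signals) >= 2  # two or more red flags -> probable bot

print(looks_like_bot({
    "handle": "user84923017", "posts_per_day": 240,
    "followers": 3, "following": 1500, "has_default_avatar": True,
}))  # True - but a modern AI-driven account would evade every one of these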
What emerges is a digital environment in which trust, popularity, and consensus are no longer organic but engineered - Manufactured Consensus. As technological sophistication increases, so too does the capacity to manipulate public perception, raising serious questions about the reliability of online spaces that increasingly shape political opinion, consumer behaviour, and social reality itself.
Computational Propaganda and The Rise of Political Chatbots
The Cambridge Analytica scandal marked a turning point in public awareness of how personal data can be weaponised to influence electoral behaviour. The common, dismissive response is that data harvesting is largely harmless, yet the concentration of vast amounts of personal information in the hands of governments and private corporations is deeply concerning, whether for commercial or political ends. As digital technologies become increasingly ubiquitous, this intrusion into private life is only set to intensify. It would be absurd to assume that, following Cambridge Analytica, political parties and social media companies abandoned data‑driven persuasion. Rather, these practices have become more discreet, embedded behind layers of intermediaries and technical complexity.
Communications researcher Philip N. Howard describes this opacity in Lie Machines (2020), noting how firms such as Imitacja Consulting (a pseudonym used for privacy reasons) are contracted to manage online political messaging. These companies often operate as subcontractors to other subcontractors, providing “social media optimisation” to political strategy firms that themselves answer to large public relations agencies. The result is a fragmented chain of responsibility that makes it extremely difficult to trace propaganda efforts back to their original source.
Samuel Woolley reaches a similar conclusion in Manufacturing Consensus (2023), challenging the tendency to attribute computational propaganda primarily to foreign actors. His research suggests that mainstream political campaigns and their domestic subcontractors routinely rely on digital propaganda tools. As Woolley explains, such content often originates online, spreads through ordinary users across their social networks, and eventually filters into traditional media channels. In this way, automated accounts and coordinated messaging campaigns plant narratives that gain legitimacy through repetition and visibility.
This process creates a powerful bandwagon effect, where manufactured stories appear organic and widely supported. One of the most pressing concerns today is that advances in artificial intelligence have made it increasingly difficult to distinguish between human users and automated agents. A study conducted by researchers at the University of Southern California demonstrated that participants were unable to reliably differentiate between humans and ChatGPT‑powered bots in online political discussions. In a simulated election debate, the bots not only blended in seamlessly but were also able to adapt their arguments in response to others.
The implications are profound. Beyond the spread of disinformation, political discourse now faces the challenge of highly sophisticated AI systems capable of engaging in nuanced, persuasive debate at scale. These developments raise urgent questions about the integrity of online political spaces and how long such technologies have already been shaping public opinion beyond meaningful scrutiny.
Documented Cases of Bot‑Driven Electoral Interference
Recent elections across multiple regions demonstrate that bot‑driven disinformation and AI‑assisted political manipulation are no longer speculative threats but established campaign tactics.
Chile: General Election 2025
Controversy surrounded the lead‑up to Chile’s 2025 general election, the first round of which took place on 16 November. A Chilevisión investigation identified the operators behind two prominent X accounts, Patito Verde and Neuroc, accused of coordinating disinformation and harassment campaigns via bot networks targeting presidential candidates Jeannette Jara and Evelyn Matthei. Investigative reporting outlet CIPER revealed deep political and media connections. The operator of Neuroc, Ricardo Inaiman Barrios, admitted to maintaining contact with the campaign team of José Antonio Kast, the candidate of the Chilean Republican Party. Meanwhile, Patito Verde was traced to Patricio Góngora, a senior director at Canal 13, a major television network owned by Iris Fontbona, Chile’s wealthiest individual. Góngora denied involvement until users identified his reflection in a photograph posted by the account alongside former president Sebastián Piñera. He subsequently resigned while continuing to deny wrongdoing.
This case illustrates how coordinated online manipulation can involve actors embedded within both political campaigns and mainstream media institutions, blurring the boundary between digital disinformation and legacy power structures.
India: General Election 2024
During India’s 2024 general election, WIRED reported widespread use of AI‑generated propaganda, including audio deepfakes, manipulated images, and parody content. Of particular concern was the deployment of AI‑generated voice clones in political robocalls. These systems allowed campaign messaging to be translated and personalised across India’s 22 official languages and thousands of regional dialects.
According to industry sources cited by WIRED, more than 50 million AI‑generated voice clone calls were made in the two months preceding the election, with millions more during the voting period. Unlike traditional robocalls, recipients were often unaware they were interacting with synthetic voices, raising serious concerns about deception and informed consent. Generative AI enables mass‑produced, personalised political messaging at scale, making micro‑targeting both efficient and difficult to detect. In an environment saturated with information, such personalised content is more likely to capture attention and build trust.
India’s political technology ecosystem is characterised by a layered network of consulting agencies, subcontractors, and data specialists, many operating under strict non‑disclosure agreements. Graduates from elite institutions are frequently employed by firms contracted during election periods to monitor public sentiment and influence online discourse.
The Bharatiya Janata Party (BJP) has been particularly effective in leveraging digital infrastructure through its IT cells, using data‑driven strategies to promote Hindu nationalist narratives and marginalise opponents. Similar approaches have been adopted by right‑wing movements elsewhere, reflecting a broader understanding among certain political actors of how modern propaganda warfare operates in digital environments.
United States: Election Interference and AI Robocalls
Robocalls have long been a feature of U.S. elections, but recent advances in AI‑generated audio have sharply raised the stakes. Ahead of the 2024 New Hampshire primary, a robocall using a synthetic voice resembling President Joe Biden urged Democratic voters not to participate in the primary. The call was traced to political consultant Steve Kramer, prompting regulatory action. In response, the Federal Communications Commission declared the use of AI‑generated voices in robocalls illegal, acknowledging the threat posed by such technologies to electoral integrity.
United Kingdom: Social Media Amplification and Media Disparity
In the UK, multiple investigations have highlighted the disproportionate media visibility of Reform UK relative to its parliamentary representation. Data reported by UnHerd, drawing on analysis from BeBroadcast and Cast from Clay, showed that between January and September, Reform UK received significantly more broadcast mentions per MP than either Labour or the Conservatives. This imbalance extends to social media. Nigel Farage has amassed a TikTok following exceeding that of all other British MPs combined, underscoring the party’s strategic focus on digital platforms. Further analysis by journalist Don McGowan found that Reform UK was vastly overrepresented in mentions by major UK news outlets on X, while parties such as the Greens and Liberal Democrats received minimal coverage. Longer‑term studies by PR consultancy BeBroadcast reached similar conclusions.
These patterns suggest a feedback loop between social media amplification and legacy media exposure, reinforcing the visibility of right‑wing actors while marginalising alternative political voices. One plausible mechanism within this loop is the role of automated and coordinated account activity on social platforms. Bots and semi‑automated networks do not need to originate political narratives to exert influence; their primary function is to inflate engagement signals with likes, reposts, replies, and follower counts that platforms and journalists increasingly treat as indicators of public salience. Where such activity disproportionately amplifies Reform UK content, it can artificially elevate the party’s perceived prominence, increasing the likelihood that journalists and broadcasters identify it as “trending” or newsworthy.
This dynamic is reinforced by asymmetries in platform strategy and message design. Reform UK’s highly personalised, emotive, and leader‑centric communications are well suited to algorithmic amplification and easier to boost through automation than the more policy‑driven messaging of parties such as the Greens or Liberal Democrats. In this context, bot amplification does not create bias so much as magnify existing structural incentives within digital and broadcast media ecosystems, feeding a cycle in which inflated online visibility translates into disproportionate legacy media exposure, which in turn further legitimises and amplifies the original signals.
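A minimal sketch makes the mechanism concrete. If a naive “trending” ranking simply sums raw engagement, as assumed below with invented counts, a story with less genuine interest can still top the list once bot activity is added, and it is the ranking, not its provenance, that journalists and editors see.

```python
# Sketch of how inflated engagement skews a naive "trending" ranking.
# Story names and counts are invented for illustration.
stories = {
    "reform_clip":   {"real": 1200, "bot": 4800},  # amplified by automation
    "green_policy":  {"real": 1500, "bot": 0},     # more genuine interest
    "libdem_speech": {"real": 1100, "bot": 0},
}

def trending(stories: dict) -> list[str]:
    # Observers see only the total signal, never its origin.
    return sorted(stories, key=lambda s: sum(stories[s].values()), reverse=True)

print(trending(stories))  # ['reform_clip', 'green_policy', 'libdem_speech']
```

On genuine engagement alone, “green_policy” would lead; the automated boost quietly inverts the order, and everything downstream of the ranking inherits the distortion.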
The Philippines: Digital Autocratisation
The Philippines has emerged as a significant testing ground for digital political manipulation. Cambridge Analytica whistleblower Christopher Wylie has described the country as an ideal environment for experimentation due to high social media usage and comparatively weak regulatory oversight.
AI disinformation analysis firm Cyabra reported that up to 45% of online discussions surrounding recent Philippine elections were driven by inauthentic accounts, including bots and sock puppets. In 2025, OpenAI disclosed that it had banned accounts linked to marketing firm Comm&Sense for using ChatGPT to generate large volumes of pro‑government content. Subsequent updates revealed further coordinated campaigns targeting political opponents.
Academic researcher Dr Tetiana Schipper characterises these developments as “digital autocratisation,” noting how successive administrations have used online manipulation to suppress dissent and control political narratives. Responsibility for countering disinformation has largely fallen to civil society, academia, and independent media, with limited institutional support.
The use of bots and AI‑driven systems as political weapons is now widespread, and ongoing technological advances threaten to further degrade online discourse. The erosion of trust, combined with the scale and sophistication of digital manipulation, has profound implications for democratic participation. As economic and political instability intensifies globally, digital platforms increasingly serve as battlegrounds for ideological influence. The convergence of legacy media, social media, and data‑driven propaganda raises urgent questions about power, accountability, and the future of political agency in the digital age.
Controlling the Narrative
Gaza and Western Political Panic
The widespread circulation of images and testimony from Gaza has provoked visible anxiety among political elites in Western states. Rather than engaging with the substance of the allegations, the response has often focused on controlling the narrative. In an interview with Sean Hannity, U.S. Secretary of State Marco Rubio framed public sympathy for Palestinians not as a reaction to material conditions on the ground, but as the result of online manipulation:
“I think places like TikTok have become cesspools of this kind of misinformation and indoctrination. It’s actually brainwashing. It’s reflected in the polling, where Americans under a certain age… are amazingly pro‑Palestinian — pro‑Hamas in their views of what’s happening in the region.”
This framing shifts attention away from state violence and toward a perceived failure of information control. The issue is not the reality of civilian suffering, but the effectiveness of Palestinian and anti‑war narratives in reaching a global audience. By reducing the situation to a binary of “Israel versus Hamas” or “pro‑Israel versus pro‑Palestine,” political discourse obscures the asymmetry of power and responsibility involved. Anti‑war and anti‑genocide protesters are frequently dismissed as naïve, extremist, or ideologically driven, while those responsible for large‑scale violence are recast as victims of misinformation.
In highly polarised information environments, this inversion of reality becomes normalised. Evidence of atrocities is dismissed as fabrication, with terms such as “Gazawood” deployed to discredit journalists and eyewitnesses. Social media platforms, particularly TikTok, have been singled out as threats precisely because they bypass traditional gatekeepers. Israeli officials have openly acknowledged the strategic importance of controlling platforms such as TikTok and X, as well as the role of influencers in shaping public opinion. Reports that influencers have been offered financial incentives to promote pro‑Israeli messaging further illustrate how narrative dominance is pursued through both state and market mechanisms.
Criminalising Dissent and Targeting Journalists
The desire to control public perception also helps explain the systematic targeting of journalists in Gaza, where at least 248 media workers had been killed by September 2025. Beyond the conflict zone, journalists and critics of Israeli policy have faced increasing repression in Western countries. In the UK, figures such as Kit Klarenberg and David Miller have been arrested under terrorism legislation, while others have been subjected to surveillance, harassment, or professional sanctions.
A widely circulated incident involved journalist Gabriel Nunziati, who was dismissed after asking a European Commission spokesperson whether Israel should be held financially responsible for rebuilding Gaza. Similar cases have emerged in both the UK and the United States, involving journalists and activists including Sarah Wilkinson, Dr Rahmeh Aladwan, Asa Winstanley, Richard Medhurst, and George Galloway. Galloway’s account of being detained under the UK Terrorism Act highlights the erosion of basic civil liberties, where individuals are compelled to answer questions under threat of criminal sanction despite not being formally arrested.
Surveillance Technologies and the Suppression of Activism
These developments coincide with an expanding use of AI‑driven surveillance technologies to monitor and suppress political activism. During the spring 2024 student protests in the United States, more than 3,000 arrests were recorded. Following the return of Donald Trump to office, executive orders were introduced enabling the revocation of visas and deportation of non‑citizens deemed “Hamas sympathisers.” In the UK, the proscription of Palestine Action as a terrorist organisation has resulted in over 2,300 arrests, including that of Greta Thunberg.
At the technological level, governments increasingly rely on private firms to conduct mass surveillance. Software from companies such as Palantir is used across multiple agencies in both the US and UK, alongside tools provided by Clearview AI, Zignal Labs, and Paragon Solutions. Amnesty International has documented how systems such as Babel X, used by U.S. Customs and Border Protection, can aggregate vast quantities of personal data from a single identifier, including social media activity, location data, and advertising IDs. Live facial recognition has also been deployed at pro‑Palestinian demonstrations in the UK.
If such practices were associated with non‑Western states, they would likely provoke widespread condemnation. Instead, they are normalised under the language of security and counter‑terrorism. The result is a political environment in which dissent is increasingly surveilled, criminalised, and delegitimised, while the mechanisms of narrative control remain largely unaccountable.
Increasing Censorship and the Collapse of Due Process Online
One of the least discussed consequences of platform consolidation is the absence of due process in digital censorship. Practices such as shadow‑banning, deplatforming, and demonetisation operate without transparency, explanation, or meaningful avenues for appeal. Decisions that shape public visibility and economic survival are made unilaterally by private corporations, often through opaque algorithmic systems. Increasingly, these sanctions are no longer limited to online behaviour. There are now documented cases of individuals being removed from platforms for actions that occurred entirely offline, signalling a shift from content moderation toward behavioural governance. In effect, private companies are asserting the authority to police real‑world conduct and impose digital exclusion without legal standards, oversight, or accountability.
In the UK, the introduction of the Online Safety Act 2023 has further expanded the scope for content moderation and surveillance, while proposals to restrict or ban VPN usage signal an increasing willingness to regulate not just speech, but access itself. These measures are routinely justified in the language of safety and security, yet they concentrate extraordinary power in the hands of platforms and the state, blurring the boundary between public authority and private control.
Cyberspace vs Meatspace: When Reality Is Rewritten
The distinction between online and offline life has effectively collapsed. Algorithmic propaganda no longer merely reflects reality; it actively reshapes it. Narratives seeded online migrate into mainstream media, political discourse, and everyday social interaction, often detached from material conditions.
This inversion of reality is particularly visible in the rise of far‑right narratives, which are disproportionately amplified through bot networks, coordinated campaigns, and sympathetic media coverage. Figures such as Nigel Farage have demonstrated a sophisticated understanding of this ecosystem, leveraging social media virality alongside legacy media complicity. Farage’s long‑standing association with Steve Bannon, central to Cambridge Analytica’s operations, underscores how these strategies are neither accidental nor isolated.
Underground Truth in a Post‑Truth Landscape
As mainstream platforms become increasingly polluted by automation, manipulation, and censorship, credible information is pushed to the margins. Private Eye journalist Helen Lewis captured this shift succinctly when she suggested that the internet increasingly resembles a privatised sewer system, while truth retreats into smaller, human‑curated spaces. Her observation that we may be heading toward a future where people circulate underground publications simply to access reliable information is a warning that should be heeded.
This sentiment echoes a broader nostalgia for the pre‑platform internet, articulated by Reddit user Vipich in response to the question: ‘what was the greatest thing we almost had?’
“The ‘Old’ Internet.
Before everything consolidated into 4 or 5 giant corporate platforms (Facebook, Google, X, etc.), the web felt like the Wild West. It was personal blogs, weird niche forums, and creativity. Now it feels like everything is just a screenshot of a Tweet reposted to Instagram or TikTok. We traded community for an algorithm.”
Digital Colonisation and the Botland Empire
What emerges from this analysis is a form of digital colonisation: populations mapped, profiled, nudged, and pacified through data extraction and narrative engineering. The use of the Philippines as a testing ground for online propaganda exemplifies how entire societies can be treated as laboratories for behavioural manipulation. This Botland Empire is not governed by democratic consent, but by corporate interests, political expediency, and technological asymmetry.
Defending Reality
There is a battle underway for perception itself. While propaganda and psychological warfare are not new, the fusion of big data, generative AI, and corporate media has created an unprecedented capacity to manufacture consensus and invert truth. Across the Western world, fascist ideologies are resurging, scapegoating migrants and minorities while shielding the economic structures responsible for social collapse.
Political figures such as Trump or Netanyahu function less as architects than as avatars, useful faces for a system maintained by tech executives, media conglomerates, asset managers, and billionaires whose interests run fundamentally against the public good. Their presence at centres of power is not symbolic; it is structural.
When war narratives resurface, recycled myths of liberation, humanitarian bombing, and moral necessity activate the machinery with chilling efficiency. Dissent is marginalised, scepticism pathologised, and alternative interpretations erased. In such an environment, refusing the narrative becomes an act of resistance.
Final Word: Digital Colonisation and the Botland Empire
What is often described as a crisis of misinformation is better understood as the maturation of a system whose original purpose was never democratic. As Yasha Levine argues in Surveillance Valley, the internet did not emerge as a neutral space for free expression, but as a product of Cold War military research, designed for surveillance, counterinsurgency, and information warfare. It has become clear that its architecture was built to monitor populations, manage behaviour, and maintain strategic advantage. Seen in this light, the contemporary digital landscape is not a corruption of an otherwise emancipatory technology, but its logical evolution.
The privatisation of this infrastructure did not diminish its strategic function; it merely obscured it. Surveillance, behavioural profiling, and narrative control have been sold to us as convenience, connectivity, and engagement. Social media platforms now sit atop an underlying system that continues to operate in alignment with Western geopolitical and economic interests. This is the foundation of what can be described as the Botland Empire: a form of digital colonisation in which attention replaces territory, data replaces resources, and consent is manufactured rather than secured.
Recent coverage of Iran illustrates how this system functions in practice. Across Western media and digital platforms, narratives rapidly converged around calls for intervention, often without serious scrutiny of the veracity of claims or the ethical implications of external involvement. Emotional framing has dominated, dissenting perspectives have been marginalised, and social media has amplified a sense of inevitability. This convergence does not require central coordination. Shared infrastructure, aligned incentives, and algorithmic amplification are sufficient to produce uniformity. The propaganda machine no longer needs a command centre when the architecture itself rewards conformity and suppresses deviation.
Bots and AI‑driven systems play a crucial role in this process. They seed narratives, simulate consensus, and blur the distinction between genuine public opinion and engineered belief. Algorithms elevate what serves power and quietly bury what challenges it. In this environment, reality itself becomes contested terrain, and truth is increasingly forced underground.
Antonio Gramsci’s observation from the Prison Notebooks captures the moment with unsettling precision:
“The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.”
The old order of liberal democracy, a free press, and public accountability is visibly decaying, while no credible alternative has emerged in the West. In the vacuum, morbid symptoms proliferate: computational propaganda, digital authoritarianism, manufactured outrage, and the normalisation of surveillance. Fascist narratives thrive not because they offer solutions, but because the informational terrain is owned by those who support them.
Defending reality requires recognising digital colonisation for what it is: an extension of empire into the realm of perception, mediated through platforms that claim neutrality while exercising immense power. The struggle is not only over information, but over the conditions under which truth can still exist. Whether something new can be born depends on whether this empire of bots, data, and narrative control is confronted or allowed to continue business as usual.