DISINFORMATION AND MISINFORMATION: THE NEW WEAPONS OF MASS DESTRUCTION

Battling Weaponised Lies in an Ultra-Connected Digital World

In today’s hyper-connected digital world, disinformation has become a strategically powerful weapon. State and non-state actors are exploiting digital platforms, AI, and deepfakes to manipulate narratives, polarize societies, and undermine trust. This article explores how disinformation campaigns are orchestrated, their impact on national security, public perception, and enterprise risk, and what governments, media, and organizations must do to detect, counter, and build resilience against this growing threat.

The era of trustworthy news seems to have ended for good. In an age when people have little time to verify facts through credible sources, many now rely on social media for updates about the world. And with that dependence has emerged a new menace, a deluge of “breaking news” about events that never actually occurred. What’s worse, these fabrications spread rapidly across countless online groups and platforms, gaining the illusion of authenticity through repetition. Before long, fiction becomes accepted as fact, and truth becomes the first casualty.

Welcome to the big bad world of disinformation. During Operation Sindoor and its aftermath, multiple news items on social media backed Pakistan’s claim of having shot down seven Indian fighter aircraft. Details of the jets supposedly downed, along with pictures, floated across the internet, and the number appears repeatedly in Pakistani official statements and public remarks. India has categorically rejected these assertions as baseless. Seven fighter jets shot down would also mean pilots likely lost, and India suffered no such losses. Nor could India have hidden them: seven families would have been affected had it happened, and that is not something that can be brushed under the carpet.

Parallel to the military conflict, a surge of misinformation and disinformation proliferated online, shaping public perception and heightening tensions between the two countries. False reports of military victories, doctored videos purporting to show successful airstrikes, fabricated images of destroyed infrastructure, and unfounded rumors about the deaths or arrests of high-profile military and political figures proliferated across social media platforms, including X (formerly Twitter), Facebook, Instagram, and YouTube.

The Indian government’s Press Information Bureau (PIB) claimed to have countered at least seven major instances of misinformation, including altered images, recycled footage, and false attributions. Fact-checking agencies and independent researchers observed that disinformation exploited emotionally charged content to drive engagement, escalate nationalist sentiment, and manufacture support for an all-out war.

Analysis by the Indian news outlet The News Minute, in collaboration with Alt News fact-checker Mohammed Zubair, highlights that disinformation was strategically timed to intensify tensions, legitimize retaliatory military actions, and compel both governments to adopt increasingly belligerent stances. The online disinformation ecosystem fed into real-world escalation, shaping public opinion and diplomatic narratives.

The weaponization of misinformation and disinformation during this conflict is not an isolated phenomenon, but part of a broader global trend in hybrid warfare. A 2022 report by the Organization for Security and Co-operation in Europe (OSCE) Parliamentary Assembly highlighted similar tactics during the Russia-Ukraine conflict, where state and non-state actors systematically deployed propaganda and misinformation to polarize audiences, justify military operations, and manipulate international perceptions.

The India-Pakistan conflict of May 2025 highlights the evolving role of digital platforms as battlegrounds for narrative control, where emotionally charged content is leveraged to drive engagement, escalate tensions, and shape strategic outcomes.

The 1999 Sanjeev Nanda case, in which his BMW mowed down six people at a police checkpoint on Delhi’s Lodhi Road, set off a lasting media pattern: emphasizing the car brand over the cause or context. Since then, headlines like “BMW crash,” “Audi accident,” or “Mercedes mishap” have become shorthand for stories of privilege, recklessness, and elite impunity. The framing persisted most recently in September 2025, when a Finance Ministry official was killed in a BMW accident in New Delhi.

Over time, such reporting has nurtured a false narrative that luxury cars are inherently dangerous, an impression amplified by repetition rather than evidence. Here lies the fine line between misinformation (unintended inaccuracies) and disinformation (deliberate distortion to shape public perception). When media coverage consistently spotlights the brand instead of the behavior, it feeds bias and skews understanding. The result is a perception war, where luxury car logos become symbols of guilt, and nuanced truths about road safety and accountability are lost in the noise.

During the recently concluded SECURITY TODAY KNOWLEDGE SUMMIT 2025 in Mumbai, the subject was examined in depth in one of the sessions, where Dr. Tehilla Shwartz Altshuler (Senior Fellow at the Israel Democracy Institute) detailed the effects of this malaise.

Dr. Tehilla Shwartz Altshuler explained the critical distinctions between misinformation, disinformation, and malinformation. She said that ‘Misinformation’ refers to unintentional mistakes, such as inaccurate photo captions, wrong dates, faulty statistics, translation errors, or instances where satire is misunderstood as fact.

‘Disinformation’, by contrast, involves deliberately fabricated or manipulated content, whether audio, visual, or textual, including intentionally created conspiracy theories or rumors designed to mislead. ‘Malinformation’ is the intentional release or alteration of private information for personal or corporate gain rather than public interest, such as revenge pornography or the deliberate modification of genuine content, dates, or timestamps.

Understanding these distinctions, she noted, is crucial for governments, media, and enterprises to develop effective detection, countermeasures, and resilience strategies against the spectrum of information threats today.

In recent years, the increased flow of information resulting from the advent of new technologies, social media and artificial intelligence has triggered the phenomenon of disinformation, especially that generated by foreign actors. FIMI, or Foreign Information Manipulation and Interference, is the deliberate use of manipulative information tactics by state or non-state actors to negatively impact a target country’s values, political processes, or public trust. It is a form of hybrid threat that often involves a range of harmful activities like spreading disinformation, exploiting societal divisions, and using sophisticated techniques like AI-manipulated media or cyberattacks. The goal is to undermine democratic institutions and sow discord, often in ways that may not be outright illegal but are intended to deceive and destabilize. FIMI is a major concern for democratic countries, as it is a direct threat to the rule of law and the defence of the country’s interests beyond its borders.

Dr. Tehilla’s address delved into the dark underbelly of the digital ecosystem, what she termed “Cyfluence”: the growing phenomenon of poison machines and influence operations that do far more than spread lies. These campaigns weaponize the very architecture of the internet, turning digital networks against themselves. No longer confined to politics, such operations now target corporations, executives, and markets, manipulating narratives for strategic or financial gain. She highlighted how algorithms and machine-driven ecosystems amplify racism, hatred, radicalization, and white supremacy, framing it as a geo-strategic challenge where attribution and accountability are increasingly elusive. Citing the Times of India article “The New Jews? Why Hatred Against Indians Is Exploding Across the World,” she warned that bias and hostility are being algorithmically magnified. With the rise of agentic AI, chatbots, and automated content distribution, she cautioned that machine-generated narratives now possess the power to reshape perceptions, polarize societies, and distort reality itself.

Disinformation campaigns are understood as patterns of behaviour in the information domain, carried out in a coordinated and intentional manner to manipulate the information environment; their implementation and dissemination threaten the interests of the country, constitutional values, democratic processes, democratically constituted institutions and, therefore, national security.

Its effects can have far-reaching implications.
● Harmful: seeks to undermine a country’s national interests, its capacity for influence or its institutions.
● Manipulative: mixes truth and fiction to create a discourse that manipulates emotions and creates confusion.
● Coordinated: involves a variety of state and non-state actors coordinating and amplifying their actions.
● Intentional: it is strategic, responding to the specific interests of the promoting country or actors.

It is essential that nations have a plan in readiness for defending themselves. Knowledge is needed to detect and respond to disinformation campaigns:
● Know: starting from the common DISARM analysis framework and the ABCDE method, protocols and algorithms can be prepared that allow a rapid response (a minimal illustration follows this list).
● Respond and prevent: strategically planned communication activities that explain your organisation’s policies, together with proactive communication on key issues, help build a solid reputation in the face of disinformation campaigns. There is no single response to such campaigns, and the decision on how to respond is always difficult.
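
As a minimal illustration of the “Know” step above, the sketch below organises an incoming report along the ABCDE lens (Actor, Behaviour, Content, Degree, Effect) and turns it into a crude priority score. The field names, weights, and thresholds are assumptions made for illustration only; they are not part of the DISARM framework or of any specific national protocol.

```python
# A minimal, illustrative triage sketch for incoming disinformation reports,
# loosely organised along the ABCDE lens (Actor, Behaviour, Content, Degree,
# Effect). Field names, weights, and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class IncidentReport:
    actor: str        # e.g. "unknown", "state-linked", "commercial"
    behaviour: str    # e.g. "coordinated reposting", "bot amplification"
    content: str      # short description of the claim being pushed
    degree: int       # rough reach estimate (number of accounts/posts observed)
    effect: str       # observed impact, e.g. "trending hashtag", "none yet"

def triage_score(report: IncidentReport) -> int:
    """Return a crude 0-100 priority score; higher means respond sooner."""
    score = 0
    if report.actor == "state-linked":
        score += 40
    if "coordinated" in report.behaviour or "bot" in report.behaviour:
        score += 30
    # Scale observed reach into a 0-30 band (assumed cap of 10,000 posts).
    score += min(report.degree, 10_000) * 30 // 10_000
    return min(score, 100)

if __name__ == "__main__":
    incident = IncidentReport(
        actor="unknown",
        behaviour="coordinated reposting across cloned accounts",
        content="fabricated claim of a factory shutdown",
        degree=2_500,
        effect="picked up by two regional pages",
    )
    print("Priority:", triage_score(incident))  # prints: Priority: 37
```

In a real protocol, the score would feed a documented escalation path rather than a print statement, and the weights would be calibrated against past incidents.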

At STKS 2025, a panel discussion titled “The Age of Disinformation: Battling Weaponised Lies in an Ultra-Connected Digital World” further elaborated on the malaise and how it can have a devastating effect even on businesses and the corporate world.

The session was chaired and ably moderated by Mr. Rajan Luthra, Chairman’s Office, Head – Special Projects, Reliance Industries Limited. Panelists included Ms. Apeksha Kaushik, Principal Analyst at Gartner; Dr. Sameer Patil, Director at the Centre for Security, Strategy and Technology, Observer Research Foundation; and Dr. Tehilla Shwartz Altshuler.

Discussions emphasized practical mitigation strategies, highlighting specific actions that CSOs and CISOs should plan, implement, and rehearse to safeguard their organizations and stakeholders in an era of increasingly sophisticated influence operations.
Apeksha drew attention to Gartner’s emerging concept of “Trust Operations”, observing that we are rapidly moving into a world where truth itself is becoming negotiable. She emphasized the urgent need for a verification framework—a system capable of validating authenticity—where anything outside it could be identified as false or manipulated. Citing a recent Gartner survey of security leaders, she noted that 43% have encountered audio-based attacks, 37% have faced video-based attacks, and many have dealt with synthetic identity impersonations.
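
To make the “verification framework” idea concrete, here is a deliberately reduced sketch in which content whose digest matches an organisation’s registry of published assets is treated as verified, and everything else is flagged for review. The registry, file names, and digest values below are hypothetical, and real provenance schemes (for example, C2PA-style signed metadata) are considerably richer than a bare hash lookup.

```python
# Toy sketch of a "verify what is registered, flag everything else" check.
# The registry and file names are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Digests of material the organisation actually published (placeholder values).
AUTHENTIC_ASSETS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "press-release-2025-09-01.pdf",
}

def check_asset(path: str) -> str:
    """Label a file as verified only if its digest appears in the registry."""
    digest = sha256_of(path)
    if digest in AUTHENTIC_ASSETS:
        return f"verified: matches {AUTHENTIC_ASSETS[digest]}"
    return "unverified: treat as potentially manipulated until provenance is established"
```

In practice, such a registry would be populated automatically at publication time and backed by cryptographic signatures rather than bare hashes, but the underlying principle is the same: authenticity is established by what can be verified, not by what merely looks plausible.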

Apeksha also highlighted the disturbing rise of “influence-as-a-service” platforms, where Generative AI models can be hired to create false content aimed at causing reputational or financial harm. Such tools have made it easy for even unskilled actors to conduct sophisticated deception campaigns. The consequences are staggering—global cybercrime losses exceeded $10 trillion last year, with instances of market manipulation and share price distortion driven by coordinated AI-enabled misinformation.
Rajan asked: How do you see this issue evolving? Do enterprises perceive it primarily as a security threat or more as a brand and reputation risk?

Sameer’s answer to this was: There’s a clear disconnect in how corporates view the problem; they’re caught in a dilemma about whether to treat it as a corporate security concern or a national security challenge. In recent times, we’ve seen coordinated campaigns aimed at discrediting large enterprises to manipulate their stock prices for profit. As organized crime groups recognize the high returns from such tactics, it’s only a matter of time before mid-cap and small-cap companies become targets of similar influence-driven attacks.

Sameer was of the opinion that a proactive approach was needed, anticipating the kind of false narratives likely to be pushed, and noted that there were tools in the market to help prevent them.

The expert speakers agreed that the only way around this was to preempt the possible problems that could arise from disinformation campaigns and false narratives and take measures to stem their growth and effects.

Considering that India’s democratic setup ensures elections are virtually an annual affair in some state or the other, misinformation and disinformation campaigns gather steam in a desperate attempt not just to garner public support but also to queer the pitch for rival parties.

The Indian general election of 2024 saw a surge in the deployment of AI-based technologies, particularly deep fakes and disinformation campaigns. Political parties in India spent around $16 billion on what became the world’s largest election.

They used AI to create realistic albeit fake videos of people, such as Prime Minister Narendra Modi dancing to a Bollywood song or a politician who died in 2018 endorsing a friend’s campaign. With deepfakes and disinformation comes an inherent danger of changing the public agenda, influencing citizens’ decisions in elections, and undermining the legitimacy of political parties and candidates.

The use of AI in India’s politics has changed how campaigns are run and messages are spread. This impact matters not just because India is the world’s largest democracy but also because it is a fast-growing economy. Fake videos circulated during the seven-phase election, including misleading clips featuring Bollywood celebrities. Meta, which owns WhatsApp and Facebook, approved 14 AI-generated electoral ads with Hindu supremacist language calling for violence against Muslims and an opposition leader.

The rapid growth of generative AI technology is changing politics and creating challenges for electoral democracy. The Global Risks Report 2024 called misinformation and disinformation a significant problem that could destabilise society by making people doubt election results.

For democracy to function effectively, it relies on a transparent and trustworthy information environment. Voters need to see what politicians are doing, understand candidates’ promises, and grasp the proposed policies. Deepfakes pose several threats to this process.

The government, however, is now battling hard to counter these campaigns. To combat these threats, India has initiated several measures. The Digital India Media Literacy Initiative teaches critical thinking skills to millions of students annually. Technological solutions, such as AI-driven tools for detecting synthetic media and digital watermarking, are also being developed. Legislative measures, including the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aim to regulate social media platforms and digital news outlets. However, the success of these efforts depends on robust implementation and public trust.
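
As one example of what such detection tooling can look like under the hood, the sketch below uses a simple perceptual “average hash” to flag images that are likely old or lightly edited pictures being re-circulated as new, the kind of recycled imagery the PIB flagged during Operation Sindoor. It illustrates one common building block, not any specific government or vendor system; the file names and the match threshold are assumptions, and it requires the Pillow imaging library.

```python
# Illustrative average-hash ("aHash") comparison for spotting recycled or
# near-duplicate images. File names and the threshold are hypothetical.
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size greyscale thumbnail, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    known = average_hash("archive_photo_2019.jpg")    # hypothetical archive image
    candidate = average_hash("viral_post_today.jpg")  # hypothetical viral image
    # A small distance (here, under 10 of 64 bits) suggests the "new" image is
    # an old or lightly edited picture being re-circulated.
    if hamming_distance(known, candidate) < 10:
        print("Likely recycled or lightly edited image")
    else:
        print("No match against this archive photo")
```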

It’s important to understand how disinformation campaigns are designed to spread false information. These campaigns typically involve several phases, each tailored to maximise their impact:

DESIGN Phase: This phase involves crafting narratives that blend partial truths with falsehoods to create emotionally resonant stories. For example, during the farmers’ protests in India, legitimate concerns about agricultural reforms were exaggerated into claims that the government planned to abolish minimum support prices entirely, despite government denials.

BUILD Phase: This phase focuses on creating the infrastructure of deception. It includes setting up networks of fake accounts, cloned news websites, and coordinated bot armies. These assets often remain dormant until activated, making them difficult to detect pre-emptively. The sophistication of these networks has increased significantly, with some accounts developing authentic personas before pivoting to spread disinformation.

SEED Phase: Disinformation campaigns are quietly launched in closed digital ecosystems like WhatsApp groups or Telegram channels. During the early days of the COVID-19 pandemic, fabricated statistics about India’s response first appeared in local community groups before spreading to mainstream platforms.

COPY Phase: This phase involves repurposing content across multiple formats, such as transforming false claims into memes, videos, and infographics designed for specific platforms. This cross-platform approach ensures maximum reach and complicates fact-checking efforts.

AMPLIFY Phase: Both automated systems and unwitting human participants are used to boost visibility. During the farmers’ protests, international celebrities inadvertently amplified unverified claims, while coordinated bot networks pushed hashtags to trend globally. These operations blur the line between organic and manufactured outrage.

CONTROL Phase: This phase works to discredit critics and create illusions of consensus. Fact-checkers face coordinated harassment, while fake “grass roots” movements create the appearance of widespread agreement with false narratives.

EFFECT Phase: The final phase harvests real-world consequences, ranging from street violence to policy reversals driven by manufactured public pressure. A study found that districts with higher exposure to targeted disinformation experienced significantly more protest activity and communal incidents.
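
On the defensive side, even crude heuristics can surface the SEED and AMPLIFY behaviour described above. The toy sketch below flags message texts that appear verbatim from many distinct accounts within a short time window, one rough signal of coordinated rather than organic spread. The data shape, window, and thresholds are assumptions for illustration, not a production detection system.

```python
# Toy heuristic: flag texts posted verbatim by many distinct accounts within
# a short window, a crude indicator of coordinated amplification.
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=20):
    """posts: iterable of (account_id, text, posted_at) tuples. Returns suspicious texts."""
    by_text = defaultdict(list)
    for account, text, posted_at in posts:
        by_text[text.strip().lower()].append((posted_at, account))

    suspicious = []
    for text, events in by_text.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # Count distinct accounts posting this exact text inside one window.
            accounts = {acct for ts, acct in events[i:] if ts - start <= window}
            if len(accounts) >= min_accounts:
                suspicious.append(text)
                break
    return suspicious
```

Real platforms layer many more signals on top of such simple counts, such as account age, posting cadence, and network structure, but even this level of monitoring helps separate manufactured bursts from organic conversation.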

Governments, media, and organizations each have a critical role to play in countering the spread of misinformation, disinformation, and false narratives. Governments must invest in information integrity frameworks that combine regulation with education rather than censorship. This includes strengthening cyber and media literacy programs, supporting independent fact-checking networks, and enforcing transparency requirements for digital platforms that algorithmically amplify misleading content. Law enforcement and judicial systems must also evolve to recognize the real-world harm caused by viral falsehoods, ensuring swift accountability for deliberate manipulation without stifling free expression.

Media organizations, on their part, need to reclaim credibility through responsibility. Journalistic training must emphasize verification over virality; headlines should inform, not inflame. Integrating AI-driven content authentication tools, strengthening editorial oversight, and disclosing corrections transparently can restore public trust. Meanwhile, corporations and institutions should build narrative resilience by monitoring online discourse, responding swiftly to misinformation with verified facts, and cultivating a culture of internal awareness about how misinformation can affect reputation and policy. Together, these actions create an ecosystem where truth is not just defended but proactively reinforced, ensuring that facts, not falsehoods, shape public understanding.

SECURITY TODAY’s Knowledge Summit addressed the multiple ways in which this malaise can be contained, and experts advised the corporate world to stay constantly prepared because false narratives and their deep effects are not going away in a hurry. The experts cautioned that it was imperative to stay abreast of new technology that may help in keeping checks and balances on the problem. Campaigns at this level can have a massive impact on the corporate world, so it is essential for CSOs and CISOs to prepare defences against them too, just as they prepare defences against physical threats to the company. Only here, the enemy and threat perpetrator is unseen and unknown and may never have a physical form.
