Experts at UN Panel Warn of AI and Security Risks in Cybersecurity and Global Terrorism

At a United Nations panel organized by Pakistan, global security experts issued a stark warning: terrorist groups are increasingly exploiting artificial intelligence, encrypted platforms, and digital currencies. The risks of artificial intelligence are no longer theoretical; they are playing out in real time, reshaping how threats are launched, funded, and hidden from authorities.

The panel, held at UN headquarters in New York in collaboration with the UN Office of Counter-Terrorism (UNOCT), brought together diplomats, security researchers, and academics to examine the dangerous intersection of AI and security risks across the globe.

Background: A Shifting Threat Landscape

For decades, counterterrorism relied on tracking communication networks, financial flows, and physical movements. But that playbook is becoming obsolete. Artificial intelligence has handed extremist networks new tools: tools that are cheap, powerful, and easily accessible.

The risks of artificial intelligence in the security domain are now a central concern for governments and international bodies alike. From AI-generated propaganda to autonomous drone guidance, the technology is being turned against the very societies that created it. Experts warn that without urgent global cooperation, the dangers of AI in conflict zones and terror networks will only grow.

Full Details: How AI Is Being Weaponized

1. AI-Powered Propaganda and Radicalization

Terror groups are using AI to produce highly convincing propaganda content at scale. Deepfake videos, AI-written manifestos, and algorithmically targeted recruitment messages are spreading across social media faster than platforms can remove them.

This is one of the most visible AI and security risks in cybersecurity today: the ability to manufacture disinformation that looks completely authentic. Security analysts note that AI-generated content has already been used to recruit sympathizers in conflict regions across Africa and the Middle East.

2. Encrypted Platforms and Untraceable Communication

Alongside AI, the use of end-to-end encrypted platforms makes monitoring nearly impossible for intelligence agencies. When combined with AI tools that can auto-generate cover identities and evade keyword detection, these communications become virtually invisible to traditional surveillance.

The security risks and misuse of AI are especially acute here. Bad actors are using AI not just to communicate, but to plan operations, coordinate logistics, and move money through decentralized crypto networks, all outside the reach of conventional law enforcement.

3. AI in Cyberattacks

The risks of artificial intelligence in cybersecurity go beyond terrorism. Nation-state actors and criminal organizations are deploying AI to launch more sophisticated cyberattacks: probing network defenses, writing malware automatically, and bypassing security systems faster than human analysts can respond.

AI-powered attacks can adapt in real time, making them far more effective than traditional hacking tools. This is why AI and security risks in cybersecurity have become a top priority for defense agencies, corporations, and international regulators alike.

4. Autonomous Weapons and AI in Armed Conflict

One of the most alarming dangers of AI is its potential use in autonomous weapons systems. Drones and robotic platforms guided by AI can identify and engage targets with little to no human oversight. This raises serious ethical and legal questions under international humanitarian law.

Experts at the UN panel stressed that the absence of binding global treaties on lethal autonomous weapons leaves one of the great AI risks-and-benefits debates of our time unresolved. Some nations argue AI weapons improve precision and reduce civilian casualties, but critics warn they lower the threshold for conflict and can malfunction catastrophically.

Expert Quotes

Pakistan’s Permanent Representative to the UN, Asim Iftikhar Ahmad, told the panel that the global community must respond to these evolving threats with equal urgency and coordination. He emphasized that no single country can address the security risks and misuse of AI alone.

Security analysts at the event noted that the shift toward AI-enabled terrorism represents a move toward a “more decentralized and harder-to-detect” threat environment, one that requires new frameworks, new technology, and deeper international trust.

One cybersecurity researcher told attendees that the dangers of AI are compounded by the speed at which the technology evolves. “Regulations written today may already be outdated by the time they are enacted,” she said.

The Environmental Side of AI’s Danger

The dangers of AI are not limited to security. The question of why AI is bad for the environment is gaining serious attention from scientists and policymakers alike.

Large AI models require enormous computing power to train and operate. Data centers running these systems consume vast amounts of electricity, much of it still generated from fossil fuels. Studies have estimated that training a single large language model can emit as much carbon dioxide as several transatlantic flights.
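The flight comparison comes from simple arithmetic: multiply hardware power draw by training time and grid carbon intensity. The sketch below walks through that estimate; every figure in it is an illustrative assumption chosen for round numbers, not a measured value for any real model.

```python
# Back-of-envelope estimate of CO2 emitted while training a large AI model.
# Every number below is an illustrative assumption, not a measured value.

gpu_count = 1_000           # accelerators used for training (assumption)
gpu_power_kw = 0.4          # average draw per accelerator, kW (assumption)
training_hours = 30 * 24    # 30 days of continuous training (assumption)
pue = 1.5                   # data-center overhead factor (assumption)
kg_co2_per_kwh = 0.4        # grid carbon intensity (assumption)

# Total electricity: hardware draw scaled by cooling/overhead (PUE).
energy_kwh = gpu_count * gpu_power_kw * training_hours * pue

# Convert electricity to emissions, in metric tonnes of CO2.
emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1000

# A transatlantic round trip is often cited at roughly 1 tonne of
# CO2 per passenger (assumption), giving a rough flight equivalent.
flight_equivalents = emissions_tonnes / 1.0

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Emissions: {emissions_tonnes:,.1f} tonnes CO2")
print(f"Roughly {flight_equivalents:,.0f} passenger round-trip flights")
```

Under these assumptions the run consumes about 432,000 kWh and emits on the order of 170 tonnes of CO2; real-world figures vary enormously with model size, hardware efficiency, and the local grid's energy mix.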

Water usage is another concern. AI data centers require significant water for cooling systems, putting pressure on local water supplies in drought-prone regions. As AI scales globally, its environmental footprint is set to grow dramatically, adding another dimension to the risks of artificial intelligence examined in reports circulating among international bodies and research institutions.

Global and Regional Impact

The AI and security risks discussed at the UN panel have implications far beyond New York. For developing nations with limited cybersecurity infrastructure, the dangers of AI-powered attacks are especially severe. Governments in South Asia, Africa, and Latin America often lack the tools or expertise to defend against AI-driven threats.

Regionally, Pakistan’s initiative to lead this conversation at the UN signals a growing recognition that AI risks and benefits must be weighed carefully, and that the Global South must have a seat at the table.

For the broader international community, the challenge is regulatory alignment. The European Union has introduced AI governance frameworks, but a comprehensive global treaty on AI in conflict and terrorism remains elusive. Without one, the risks of artificial intelligence will continue to outpace the world’s ability to manage them.

Conclusion: What Comes Next

The UN panel in New York is unlikely to be the last such gathering. As AI capabilities continue to expand, governments, security agencies, and international institutions will need to move faster and cooperate more deeply. Experts expect future discussions to focus on binding international agreements, shared AI threat intelligence databases, and technical standards for “safe” AI development.

The AI risks and benefits debate is no longer an abstract philosophical exercise. It is a live policy crisis, with real consequences for global security, human rights, and the planet’s environment. The world’s response to AI and security risks in the coming years will define the safety architecture of the 21st century.

FAQs

What are the 4 risks of AI?

The four key risks of artificial intelligence are: (1) Security and misuse risks: AI being exploited by criminals, hackers, or terrorist groups to conduct cyberattacks, generate disinformation, or plan operations; (2) Privacy risks: AI systems collecting, analyzing, and potentially exposing vast amounts of personal data without consent; (3) Ethical and bias risks: AI models making discriminatory decisions in hiring, lending, law enforcement, or healthcare; and (4) Existential and safety risks: the long-term danger of AI systems becoming too powerful or unpredictable for humans to control safely.

What are the security risks and misuse of AI?

The security risks and misuse of AI include the use of AI to create deepfake videos and propaganda, launch automated cyberattacks, guide autonomous weapons, bypass encryption and surveillance tools, and enable untraceable financial transactions through AI-linked crypto systems. Terrorist groups are increasingly using AI-powered platforms to recruit, radicalize, and coordinate operations with a level of sophistication that traditional intelligence methods struggle to detect.

What is the biggest risk with AI?

Most experts agree that the biggest risk with AI is the lack of adequate global governance and regulation. AI technology is advancing faster than the legal frameworks designed to control it. This gap creates opportunities for malicious actors, from lone-wolf cybercriminals to organized terror networks and hostile nation-states, to exploit AI in ways that cause large-scale harm. The risks of artificial intelligence are magnified when powerful AI tools are accessible to bad actors who face no meaningful accountability or oversight.