Summary of "Cybersecurity Trends in 2026: Shadow AI, Quantum & Deepfakes"
High-level overview
- This is an end-of-year IBM Technology channel forecast and review of cybersecurity trends for 2026 (and beyond). It revisits prior predictions from 2023–2025 and uses data from IBM reports and other industry sources.
-
Central theme:
AI is both a force-multiplier for defenders and a powerful enabler for attackers.
-
Other major topics covered: autonomous agents, deepfakes, quantum threats to cryptography, and practical authentication improvements (passkeys).
Key technological concepts, findings, and impacts
Shadow AI
- Unapproved or unsupervised AI instances in cloud environments increase breach cost and risk.
- IBM Cost of a Data Breach finding: organizations with shadow AI paid about $670,000 more per breach.
- About 60% of organizations lack AI governance/security policies to control shadow AI.
Deepfakes
- Rapid growth in cataloged deepfakes: ~500,000 in 2023 → ~8 million in 2025 (≈1,500% increase).
- Detection is becoming a losing battle as deepfakes improve. Recommended approach: train users to evaluate requests and actions rather than rely solely on artifact detection.
AI-generated attacks and malware
- AI and agents can generate exploits and polymorphic malware (self-changing to evade detection), lowering attacker skill requirements and making defense harder.
- Examples include automated ransomware chains, polymorphic malware creation, and automated kill chains that perform reconnaissance, exploit development, data theft, and ransom collection.
Agents (autonomous goal-driven AIs)
- Two principal threat vectors:
- Attacks on agents: hijack or compromise agents to amplify damage.
- Attacks by agents: malicious automation that executes attacks autonomously.
- Specific risks:
- Rapid, amplified damage when agents are compromised.
- Zero-click / indirect prompt injections (agent reads email and executes embedded instructions without user interaction).
- Proliferation of non-human identities that require lifecycle and privilege management.
- Potential attacker use: hyper-personalized phishing, fully automated malware/ransomware campaigns, end-to-end automated kill chains, and enhanced social engineering using data and deepfakes.
Attack surface & LLM-specific vulnerabilities
- AI increases overall attack surface.
- OWASP’s Top Ten for Large Language Models ranks prompt injection as the top vulnerability (2023 and again in 2025).
- Defenses are needed that detect and mitigate prompt injection and other LLM-specific threats.
AI for defense
- AI is being used to detect and block AI-specific attacks (for example, an IBM-built product to detect prompt injections).
- Expect increasing adoption of adaptive, real-time AI-driven security tooling.
Quantum computing and post-quantum cryptography
- Quantum-capable systems will eventually break current public-key cryptography (the so-called “Q-Day” risk).
- Interest in quantum-safe / post-quantum algorithms is rising, but real-world deployments remain limited.
- Recommendation: begin implementing quantum-safe cryptography now.
Authentication & passkeys
- Passkeys (FIDO Alliance) are a practical, phishing-resistant alternative to passwords.
- FIDO report: about 93% of accounts with major providers (Amazon, Google, Microsoft, PayPal, etc.) are eligible for passkeys; roughly one-third of users have enabled them.
- IBM uses passkeys enterprise-wide; the presenter noted storing 17 passkeys in their password manager.
- Recommendation: adopt passkeys to reduce credential-based and phishing-related breaches.
Broader societal and industry impacts
- Education: stop trying to ban AI. Instead, teach students how to work with AI and AI-enabled workflows because workplaces will expect AI use.
- Arts, music, marketing, coding: AI will significantly change these fields (examples include AI-generated bands/music, AI-generated marketing copy, and AI-assisted code generation with fewer pure coding roles over time).
Data sources, tools, and studies referenced
- IBM Cost of a Data Breach Report (breach cost statistics and shadow AI findings).
- OWASP Top Ten for Large Language Models (prompt injection ranking).
- FIDO Alliance report on passkey eligibility and adoption.
- IBM product that detects prompt injections (example of AI defending against AI).
Practical recommendations
- Implement AI governance and security policies to control shadow AI.
- Deploy adaptive AI defenses to detect prompt injection and other AI-native attacks.
- Manage agent identities and privileges carefully (least privilege and lifecycle management).
- Train users to scrutinize requests coming from deepfakes or automated messages (assume deepfakes will keep improving).
- Begin planning and adoption of post-quantum (quantum-safe) cryptography now.
- Adopt passkeys to reduce credential and phishing risk.
Related content mentioned
- Annual IBM cybersecurity prediction videos (2023–2025).
- Prior video on zero-click attacks (indirect prompt injection through email agents).
- Upcoming video on AI’s impact on education (“AI in the future of education”).
- IBM Cost of a Data Breach Report (annual analytical source).
Main speakers and cited organizations
- IBM Technology channel presenter (IBM cybersecurity expert; also an adjunct professor at NC State University).
- Cited organizations and sources: IBM (Cost of a Data Breach Report; IBM product for prompt-injection detection), OWASP, FIDO Alliance, and industry adopters (Amazon, Google, Microsoft, PayPal, Target, TikTok).
Category
Technology
Share this summary
Is the summary off?
If you think the summary is inaccurate, you can reprocess it with the latest model.
Preparing reprocess...