
- Rapid adoption of AI is expanding cloud attack surfaces, raising unprecedented security risks, Palo Alto warns
- Excessive permissions and misconfigurations lead to incidents; 80% of cloud security incidents are tied to identity issues, not malware
- Non-human identities outnumber humans and are poorly managed, creating exploitable entry points for adversaries
Rapid enterprise adoption of artificial intelligence (AI) tools and cloud-native AI services is dramatically expanding cloud attack surfaces and exposing businesses to greater risk than ever before.
This is according to the “State of Cloud Security Report”, a new research paper published by cybersecurity firm Palo Alto Networks.
According to the paper, there are several major issues with AI adoption: the speed at which AI is deployed, the permissions granted to it, misconfigurations, and the rise of non-human identities.
Permissions, misconfigurations, and non-human identities
Palo Alto says organizations are deploying workloads faster than they can secure them — often without full visibility into how tools are accessing, processing or sharing sensitive data.
In fact, the report notes that more than 70% of organizations are now using AI-powered cloud services in production, a sharp rise year-over-year. The speed at which these tools are being deployed is now seen as a major contributor to the “unprecedented increase” in cloud security risks.
Then there is the issue of excessive permissions. AI services often require broad access to cloud resources, APIs, and data stores, and the report shows that many organizations are granting overly permissive identities to AI-driven workloads. According to the research, 80% of cloud security incidents last year were related to identity issues, not malware.
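To illustrate the kind of overly permissive grant the report warns about, here is a minimal sketch that flags wildcard permissions in an IAM-style policy document. The policy structure and the `find_overly_permissive` helper are hypothetical, modeled loosely on common cloud IAM formats rather than any provider's actual API:

```python
# Hypothetical sketch: flag overly permissive statements in an
# IAM-style policy. Structure is illustrative, not provider-specific.

def find_overly_permissive(policy):
    """Return statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Normalize single strings to lists
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        # "*" or "service:*" actions, or a bare "*" resource, are red flags
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

# Example: a scoped read grant next to the kind of blanket grant
# often handed to AI-driven workloads in a hurry
ai_workload_policy = {
    "Statement": [
        {"Action": "s3:GetObject", "Resource": "arn:aws:s3:::training-data/*"},
        {"Action": "*", "Resource": "*"},
    ]
}

print(find_overly_permissive(ai_workload_policy))
# → [{'Action': '*', 'Resource': '*'}]
```

In practice, checks like this are one small piece of a least-privilege review; the point is that a blanket `"Action": "*"` grant to an AI workload is trivially detectable, yet still common.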
Palo Alto also cited misconfigurations as a growing problem, especially in environments that support AI development. Storage clusters, databases, and AI training pipelines are often left exposed, and threat actors are increasingly exploiting these exposures directly rather than relying on malware.
Finally, research points to a rise in non-human identities, such as service accounts, API keys, and automation tokens used by AI systems. In many cloud environments, there are now more non-human identities than human identities, and many of them are poorly monitored, rarely rotated, and difficult to attribute.
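Since the report notes that non-human credentials are rarely rotated, one basic hygiene check is flagging credentials older than a rotation window. This is a hedged sketch; the inventory format, field names, and 90-day window are assumptions for illustration, not anything prescribed by the report:

```python
# Hypothetical sketch: flag non-human identity credentials (service
# accounts, API keys, automation tokens) not rotated within a window.
# Inventory shape and the 90-day threshold are illustrative assumptions.
from datetime import datetime, timedelta

ROTATION_WINDOW = timedelta(days=90)

def stale_credentials(inventory, now):
    """Return names of credentials whose last rotation exceeds the window."""
    return [
        cred["name"]
        for cred in inventory
        if now - cred["last_rotated"] > ROTATION_WINDOW
    ]

inventory = [
    {"name": "ci-deploy-token", "last_rotated": datetime(2025, 1, 10)},
    {"name": "model-training-sa", "last_rotated": datetime(2023, 6, 1)},
]

print(stale_credentials(inventory, now=datetime(2025, 3, 1)))
# → ['model-training-sa']
```

The harder problems the report raises, attribution and monitoring, are not solved by a script like this, but an age check is often the first signal that a token has been forgotten.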
“The emergence of large language models (LLMs) and agentic AI pushes the attack surface beyond traditional infrastructure,” the report concluded.
“Adversaries target tools and LLM systems, the underlying infrastructure that supports model development, the actions these systems take, and, most importantly, their memory stores. Each represents a potential compromise point.”