What happens if agentic AI falls into the wrong hands? [Q&A]


Agentic AI systems are increasingly taking on real-world roles: Gartner predicts that by 2028, 15 percent of day-to-day work decisions will be made by agentic AI.

But do we fully understand the potential harm that could be caused if these systems were weaponized by bad actors? We spoke to Keeley Crockett, IEEE senior member and professor in computational intelligence at Manchester Metropolitan University, to find out.

BN: What is agentic AI and what differentiates it from a standard AI assistant?

KC: Agentic AI refers to artificial intelligence systems with a high degree of autonomy that can act independently to achieve goals without constant human oversight. These systems can make decisions, perform actions, and adapt to situations based on their programming and the data they process, typically without human input.

AI assistants like Siri and Alexa are reactive systems. They respond to voice commands and currently have no independent goal-setting abilities. Instead, they perform single, simple tasks and, importantly, cannot take any meaningful actions without direct human input.

BN: What kinds of personal information would these systems need access to? How much more data is this compared to current apps?

KC: In human-centric agentic AI systems, large volumes of personal and often sensitive data have to be collected for autonomous decision making to take place, which understandably raises many privacy concerns. It is feasible that such a system could independently choose to collect more data than necessary without explicit consent from a person. This data could also be retained for longer than needed for the purpose for which it was collected, violating the GDPR data protection principles: an organization processing personal data must ensure it is adequate for the purpose, relevant and limited to what is necessary, and there must be a clear lawful basis for the processing to take place.
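As a rough illustration of the data minimisation and retention principles Crockett describes, here is a minimal Python sketch. The purpose names, fields and 90-day retention window are all hypothetical assumptions, not part of any real framework:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of GDPR-style data minimisation: the purposes,
# field names and 90-day retention window are illustrative assumptions.
DECLARED_PURPOSES = {
    "calendar_scheduling": {"name", "email", "calendar_events"},
}
RETENTION = timedelta(days=90)

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields declared as necessary for this purpose."""
    allowed = DECLARED_PURPOSES.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

def expired(collected_at: datetime) -> bool:
    """Flag records held longer than the declared retention period."""
    return datetime.now(timezone.utc) - collected_at > RETENTION

record = {"name": "A. User", "email": "a@example.com",
          "location": "53.48,-2.24", "calendar_events": ["dentist 10:00"]}
print(minimise(record, "calendar_scheduling"))  # location is dropped
```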

A typical app, subject to a person accepting its terms of service, may collect personal identifiers (e.g. name, email, date of birth), location data, financial and payment information, health and biometric data, usage data (clickstream within the app, frequency of use, etc.) and even your contacts. An agentic AI app may collect all of this plus data on user behavior, such as preferences inferred from actions and choices, and data about the user's environment (e.g. their movements). Furthermore, it could potentially extract data from multiple applications, including a user's email, calendar, smart home devices or social media activity. This data can then be synchronized for psychological or personality profiling, or for predictive modeling of future actions.
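To make the synchronization point concrete, the sketch below (with invented sources and signals) shows how individually mundane records from separate apps combine into a revealing profile:

```python
from collections import defaultdict

# Illustrative sketch only: the sources and signals are assumptions,
# showing how data from separate apps combines into one profile.
streams = [
    {"source": "email",      "user": "u1", "signal": "booked flight to Lisbon"},
    {"source": "calendar",   "user": "u1", "signal": "gym, Mon/Wed 07:00"},
    {"source": "smart_home", "user": "u1", "signal": "away 09:00-18:00 weekdays"},
]

profile = defaultdict(list)
for event in streams:
    profile[event["user"]].append((event["source"], event["signal"]))

# Three low-risk data points in isolation; combined, they reveal travel
# plans, routine and an empty house -- the profiling risk described above.
print(profile["u1"])
```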

BN: How would a hacker abuse an agentic AI system? What are the risks associated with them accessing an AI that manages someone’s personal life?

KC: Gaining unauthorized control of personal data could allow a hacker to influence a person's behavior by hijacking one or more agents within the system. For example, behavioral nudging would allow the agent to influence what content a person sees, ranging from misinformation and steered purchases of specific products and services to outright harmful content.

If the agentic system has been given permission by the user to act autonomously on their behalf for certain tasks, a hijacked agent could also impersonate the user, sending emails, text or voice messages posing as them. In smart home control, doors could be unlocked, alarms disabled and even home security cameras tampered with. In terms of financial transactions, purchases could be automatically authorized. All these abusive interventions have serious implications for people, opening the door to blackmail, harassment and identity theft.
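One common mitigation is to cap an agent's autonomy with an explicit action allowlist and require fresh human confirmation for high-risk actions. The sketch below is a hypothetical illustration; the action names and risk tiers are assumptions, not any particular product's API:

```python
# Hypothetical guardrail sketch: action names and risk tiers are
# assumptions, not any particular product's API.
LOW_RISK = {"read_calendar", "draft_email"}
HIGH_RISK = {"send_email", "authorize_payment", "unlock_door", "disable_alarm"}

def execute(action: str, confirmed_by_user: bool = False) -> str:
    if action in LOW_RISK:
        return f"executed {action}"
    if action in HIGH_RISK:
        # Autonomy stops here: impersonation-style actions need a fresh,
        # explicit confirmation rather than a blanket standing permission.
        if confirmed_by_user:
            return f"executed {action} after confirmation"
        return f"blocked {action}: human confirmation required"
    return f"refused {action}: not on the allowlist"

print(execute("draft_email"))      # runs autonomously
print(execute("unlock_door"))      # blocked pending confirmation
print(execute("export_contacts"))  # refused outright
```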

Technically, bad actors could also poison the training data by injecting malicious and potentially biased data, leading to model drift.
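A minimal first line of defence against such poisoning is to screen incoming training data for statistical outliers before retraining. The sketch below assumes numeric features and an illustrative three-standard-deviation threshold:

```python
from statistics import mean, stdev

# Minimal poisoning-defence sketch: screen incoming training values for
# statistical outliers before they reach the model. The threshold of
# three standard deviations is an illustrative assumption.
def screen(trusted: list[float], incoming: list[float], z: float = 3.0):
    mu, sigma = mean(trusted), stdev(trusted)
    clean, suspect = [], []
    for x in incoming:
        (clean if abs(x - mu) <= z * sigma else suspect).append(x)
    return clean, suspect

trusted = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
incoming = [10.3, 9.7, 42.0]  # 42.0 is an injected, anomalous value
clean, suspect = screen(trusted, incoming)
print(clean)    # [10.3, 9.7]
print(suspect)  # [42.0] -- quarantined for review instead of retraining
```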

BN: What’s the difference between how today’s algorithms collect our data versus how these future AI agents would handle our information? Is this really a bigger privacy risk?

KC: A real concern is the opacity of the data within agentic systems. Humans may not understand what data is being collected about them or how that data is being used. The degree of autonomy raises many ethical questions, and as 91 percent of global technology leaders agree that use of agentic AI to analyze greater amounts of data will grow in 2026, these are questions that organizations need answers to.

One such concern is around informed consent. Do people actually know what they are consenting to? That is doubtful when complex terminology is used in very long terms of service documents, which are often full of technical and legal jargon. Control is another factor. Vast quantities of data are passed between and acquired by different agents within the system as they work towards achieving a set goal. The real question is who, or what, has control of the data? This is exceedingly complex when agents are operating on a global scale, or if certain agents have been procured to achieve one specific task within the agentic AI system.

Accountability is the final concern when it comes to privacy risk. When it goes wrong, who is accountable? This aspect of AI governance is still a work in progress for many organizations, and the public often do not know their rights when it comes to AI.

BN: Are these AI systems more vulnerable to attacks than the technology we use now and, if so, why?

KC: Gartner predicts that by 2028 one in four company data breaches will involve agentic AI abuse, which suggests that agentic AI will prove vulnerable. Because agentic AI systems may operate in the cloud, data in transit may be improperly encrypted, increasing the risk that it could be intercepted.
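At the transport layer, one straightforward control is to refuse unencrypted connections outright and require a minimum TLS version. A minimal Python sketch, with a placeholder URL:

```python
import ssl
import urllib.request

# Sketch of a transport policy for agent traffic; the URL below is a
# placeholder. Plain HTTP is refused outright and certificates are
# verified against the system trust store.
def fetch(url: str) -> bytes:
    if not url.startswith("https://"):
        raise ValueError("refusing unencrypted transport: " + url)
    ctx = ssl.create_default_context()            # verifies certificates
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy TLS
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()

# fetch("http://agent.example.com/data")   -> ValueError
# fetch("https://agent.example.com/data")  -> encrypted request
```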

Agents within the agentic AI system could also be hijacked, allowing attackers to take over or impersonate an individual agent and gain access to data. For data at rest in the cloud, multi-tenancy vulnerabilities in cloud infrastructure create the possibility of data leakage between different agentic AI systems. If the agentic AI system uses third-party products or services, then third-party APIs increase the number of potential points of breach, especially if due diligence on the third-party provider has not been done thoroughly.
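One standard mitigation against agent impersonation is to authenticate inter-agent messages, for instance with HMAC signatures. The sketch below assumes the agents share a secret key provisioned out of band (key management itself is out of scope here):

```python
import hmac
import hashlib

# Sketch of one hijacking mitigation, assuming agents share a secret
# key provisioned out of band.
KEY = b"shared-secret-provisioned-securely"

def sign(message: bytes) -> str:
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(sign(message), signature)

msg = b'{"agent": "scheduler", "action": "read_calendar"}'
tag = sign(msg)
print(verify(msg, tag))                    # True: authentic agent
print(verify(b'{"agent": "rogue"}', tag))  # False: forged message
```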

Another issue is that agentic systems could autonomously initiate data transfers without explicit human approval, meaning personal and sensitive data could be transmitted without the user's knowledge.
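A simple countermeasure is a transfer guard that queues anything tagged sensitive for human sign-off rather than letting the agent ship it on its own initiative. The tags, destinations and queue below are illustrative assumptions:

```python
# Hypothetical transfer-guard sketch: the field tags and queue are
# assumptions, illustrating 'no silent data transfers' as a control.
SENSITIVE = {"health", "location", "contacts", "financial"}
pending_approvals: list[dict] = []

def request_transfer(payload: dict, destination: str) -> str:
    tags = set(payload.get("tags", []))
    if tags & SENSITIVE:
        # Park the transfer for explicit human sign-off instead of
        # letting the agent send sensitive data autonomously.
        pending_approvals.append({"to": destination, "payload": payload})
        return "queued for human approval"
    return f"sent to {destination}"

print(request_transfer({"tags": ["usage"], "data": "..."}, "analytics.example"))
print(request_transfer({"tags": ["health"], "data": "..."}, "partner.example"))
print(len(pending_approvals))  # 1
```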

BN: What steps can people take now to protect themselves as agentic AI becomes more common? Should we be worried about giving AI systems this much control over our lives?

KC: It is very easy to become complacent, especially when faced with the desire to have everything at our fingertips immediately. Nearly all technology leaders (96 percent) agree that agentic AI usage is going to continue growing at lightning speed in 2026. That's why it is key to be more aware and 'data-savvy'. Users should always read the terms of service and privacy notice to try and understand the purpose of the application collecting their data, who owns it and who controls it. App developers should also pull their weight; they need to change how they communicate the collection and use of data, with clear summaries that are accessible to everyone.

Respondents to a recent IEEE survey see agentic AI reaching mass or near-mass adoption by consumers in 2026. It is therefore more important than ever to gain control over what AI can see and what it can do. Transparency is needed when AI and agentic AI systems are being used, including what data they use and how it is being used. We should also prioritize systems that provide clear, easy-to-understand explanations of their automated decisions; there should be no secret data transactions. We can also choose not to use these systems for high-risk personal decision making.
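As an illustration of that 'no secret data transactions' principle, an agent could write every automated decision to an append-only log the user can inspect. The field names in this sketch are assumptions:

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of decision transparency: every automated action
# is appended to a log the user can inspect. Field names are assumptions.
def log_decision(action: str, data_used: list[str], rationale: str,
                 path: str = "decisions.log") -> None:
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "data_used": data_used,  # which personal data informed this
        "rationale": rationale,  # plain-language explanation
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("reordered groceries",
             ["purchase_history", "calendar"],
             "weekly staples running low before a busy week")
```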

Looking forward, we need to ensure that agentic AI systems are ethically deployed and used responsibly.

Image credit: grandeduc/depositphotos.com