With artificial intelligence (AI) rapidly making an impact on various industries and organizations in recent years, it has consequently made local businesses and organizations more susceptible to cybersecurity threats.
The growing footprint of AI continues to influence multiple facets of daily life, spanning from digital assistants that support everyday activities to advanced agents driving business automation. With this development, cybersecurity firm Trend Micro Philippines is raising awareness on the need for proactive security measures that will prepare defenders of AI systems, improve risk governance of AI, and develop new patterns for agentic AI security.
Vulnerabilities posed by AI-based solutions
While AI can act as a business enabler, it is also supercharging the capabilities of cybercriminals. This can be done in various ways, such as through indirect prompt injections in content that can lead to leaked user-uploaded data. One type of prompt injection technique is Pleak or Prompt Leakage, the risk that preset system prompts or instructions meant to be followed by the model can reveal sensitive data when exposed.
Aside from prompt-based attacks, deepfakes have become the leading type of AI-driven scam, with criminals no longer needing to rely on underground services but are instead exploiting publicly available—and sometimes even free—applications that enable deepfake creation, including real-time streaming manipulation, multilingual voice cloning, and image nudifying.
Cyberattack techniques leveraging AI are rapidly evolving, not just in sophistication but in their impact on the broader security landscape. The widespread adoption of AI is prompting both defenders and attackers to rethink their approaches: as organizations expand their use of AI-powered digital assistants and agentic applications, new vulnerabilities emerge that can be exploited by threat actors. Recent incidents highlighted at international cybersecurity competitions, such as Pwn2Own, have demonstrated that AI systems, especially those integrated with business infrastructure, can be compromised through overlooked or poorly secured components. This underscores the importance of securing every layer of the AI ecosystem, from third-party libraries to containerized deployments, and adopting proactive strategies such as regular security assessments and robust supply chain governance to stay ahead of evolving threats.
In the rush to develop and distribute new AI systems, the underlying software stack of custom AI systems may not undergo the same amount of testing and audit compared to non-AI and traditional software systems. This lack of stringent testing leads to increased vulnerabilities, and may accidentally expose the systems prematurely not only to the internet, but to external threat actors as well. “People are rushing into AI too fast and are not considering standard security practices to make things secure,” shared Morton Swimmer, Principal Threat Researcher, Trend Micro.
Future-proofing digital assistants
While AI-based services such as Microsoft Copilot, Grok and ChatGPT are transforming how organizations interact with technology by understanding natural language and automating complex workflows, future-proofing these tools should be a top priority for security teams as they continue to evolve. As these digital assistants advance and handle increasing volumes of sensitive data, they become more attractive targets for attackers seeking to exploit their unique capabilities and user interactions.
Last March, Trend Micro launched Trend Cybertron, an AI model and agent that can independently act on cyberthreats. Using GenAI, Cybertron can analyze complex environments, automate incident analysis and decision-making, and coordinate an organization’s response to attacks. Trend Micro also released an AI Security Blueprint that provides architecture recommendations for hardening AI systems which helps in bridging the gap between risk awareness and actionable steps.
In line with its efforts to be #EngineeredToDoGood, Trend Micro regularly publishes its Annual Risk Report as well as dedicated research and case studies publicly to its website. In making these resources available, Trend Micro welcomes the global cybersecurity community to collectively develop its capabilities and share threat intelligence. These efforts provide practical guidance for organizations and small businesses to identify the necessary investments, apply the right mitigation techniques, and strengthen their defenses against AI-driven cyber threats. Through consistent research and continuous system improvements, these seemingly small steps will ultimately help steer the broader trajectory of AI.
Maximize Momentum
In a society where technological innovations such as AI, machine learning, and quantum computing are rapidly evolving, it is vital for organizations and businesses, big and small, to take part in enhancing their cybersecurity strategies. DECODE 2025: Maximize Momentum explored how to take the cybersecurity innovations of the previous years and use it to keep going and stay ahead of risks brought about by emerging technologies, including Artificial Intelligence.
Since it was first held in 2017, DECODE has attracted an average of 1,000 attendees annually, with a significant number of return participants, highlighting its value and enduring relevance as a trusted platform for dialogue and upskilling cybersecurity professionals across the country.
To learn more about Trend Micro’s findings from its State of AI Security Report 2025, visit www.trendmicro.com.






















































































