SPECIAL REPORT: Still limited but AI can play important role in changing cybersecurity landscape

The cybersecurity landscape is changing fast, and businesses across all industries, as well as consumers, face evolving threats to their data and privacy. Cybercrime is expanding rapidly and comes in many forms: social engineering, phishing, ransomware attacks that infect computers or networks and pressure users into paying a ransom or losing their data, and massive automated botnets that infect consumer devices. These attacks are happening not only in the Philippines but around the world.

Cybercrime has become a major challenge as technology continues to advance. With the risk of cyber threats still high, finding ways to reduce security breaches is more crucial than ever. Cybersecurity Ventures estimates that cybercrime will cost the world $8-trillion in 2023, a significant increase from $3-trillion in 2015, and expects global cybercrime damage costs to grow by 15% per year over the next three years, reaching $10.5-trillion annually by 2025.

Role of AI in Cybersecurity

Businesses and individuals need to be proactive in protecting themselves from the damaging effects of cybercrime. Apart from relying on anti-virus, anti-malware, and firewall technologies, businesses need the latest security technologies to protect their data. One of these is artificial intelligence (AI), which plays an important role in cybersecurity.

AI refers to technology that can substitute for human intelligence in the performance of certain tasks. The idea is to use AI-enabled software to augment human expertise by rapidly identifying new types of malware traffic or hacking attempts. In cybersecurity, AI is increasingly critical in protecting online systems from attacks by cybercriminals and from unauthorized access attempts. AI systems can be trained to detect cyber threats automatically, generate alerts, identify new malware, and protect businesses’ sensitive data.
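
To make the idea concrete, here is a minimal sketch of the kind of trained detection model described above. The features, data, and alert threshold are purely illustrative assumptions, not any vendor's actual pipeline.

```python
# Illustrative sketch only: training a classifier on hypothetical file features
# (e.g. byte entropy, count of imported APIs, presence of a packer flag).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row of numeric features per file; 1 = known malware, 0 = benign (toy data)
X = [
    [0.91, 42, 1], [0.12, 3, 0], [0.88, 57, 1], [0.05, 2, 0],
    [0.79, 33, 1], [0.10, 4, 0], [0.95, 61, 1], [0.08, 1, 0],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Train the model on labeled samples so it can score previously unseen files
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new file; an alert is raised when the malware probability is high
new_file = [[0.85, 48, 1]]
p_malicious = model.predict_proba(new_file)[0][1]
print(f"Probability malicious: {p_malicious:.2f}")
if p_malicious > 0.8:  # illustrative alert threshold
    print("ALERT: file flagged for analyst review")
```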

A report from the Ponemon Institute (Cost of a Data Breach) found that organizations fully deploying security AI and automation incurred $3.05-million less in breach costs on average than studied organizations that had not deployed the technology. With AI and automation, analysts can process and analyze large volumes of data more quickly and identify patterns that indicate potential threats before an attack occurs. This significantly increases efficiency by automating repetitive and time-consuming tasks.

According to Vicky Ray, director of Unit 42 Cyber Consulting & Threat Intelligence, Asia Pacific & Japan, at Palo Alto Networks, AI services can detect new malware and identify malicious domains within minutes, while policy enforcement based on the results of those services prevents successful cyberattacks. The scalability and performance of AI enable these services to collect and analyze data from customers’ environments and respond in near real time.

She also said automated solutions for the Security Operations Center (SOC), the team that defends an organization against security breaches by identifying, investigating, and mitigating cybersecurity threats, use AI and machine learning (ML) to find important security events such as zero-day attacks without generating low-value alerts that consume analyst time, attention, and manual remediation.

For Yeo Siang Tiong, general manager for Southeast Asia at Kaspersky, human and machine interaction will remain essential and irreplaceable in security and all other industries. “AI remains dependent on humans for it to work effectively. For us at Kaspersky, we consider ML as the most relevant AI cybersecurity discipline there is. But, as with any other breakthroughs invented by humankind, keeping AI-powered technology in safe hands is essential to ensuring security.”

Yeo describes AI in security as working in two layers: one cuts down the initial set of threats analysts have to work on, and the other is the heuristics and behavior analytics built into the software, which detect and block an attack the moment it happens.

Yeo cited Kaspersky Internet Security for Android, a cybersecurity solution for mobile devices that uses AI, as an example. “In using AI in Kaspersky for Android, for example, our mobile products detect around 33% of all new Android threats. AI frees our researchers to work on the remaining 67% of threats, to detect them and perfect our solutions.”

Cybercriminals take advantage of AI

In recent years, the use of AI in the business sector has been shown to provide significant opportunities for more powerful threat detection than ever before. But while AI and ML can be important tools for cyberdefense, they can also be used by cyber threat actors for malicious purposes. In fact, Yeo said cybercriminals are taking advantage of the technology to automate their attacks, particularly through data-poisoning and model-stealing techniques. He cited ChatGPT as an example of AI that can be used by both cybercriminals and cyberdefenders to reach their goals.

Cybercriminals, mostly amateur virus writers, are now using it to create malware such as Trojans. But using the same AI chatbot, infosec analysts can reverse-engineer the pseudo-code or machine code to figure out how it works.

“We have seen how ChatGPT can generate convincing texts (which are not necessarily accurate) so it’s highly likely that automated spear-phishing attacks using chatbots are being launched. Unfortunately, this would translate to an increase in successful phishing attacks, particularly in email, social networks, and messengers,” noted Yeo.

Advertisement. Scroll to continue reading.

Ray also said that automation is already used by the adversary. “As the cost of computing continues to decline, adversaries will continue to adopt automation more prominently to orchestrate attacks at minimal cost. In fact, many free and open source tools are available online that enable repeated, successful attacks against poorly defended networks,” she said.

With the rise of generative AI like ChatGPT, Ray said their Unit 42 researchers have analyzed several ways cybercriminals take advantage of AI today, all of which can be used to attack both organizations and individuals:

  • Multiple phishing URLs attempt to impersonate official OpenAI sites. Typically, scammers create a fake website that closely mimics the appearance of the official ChatGPT website, then trick users into downloading malware or sharing sensitive information (a simple lookalike-domain check is sketched after this list);
  • Scammers might use ChatGPT-related social engineering for identity theft or financial fraud. Despite OpenAI giving users a free version of ChatGPT, scammers lead victims to fraudulent websites, claiming they need to pay for these services;
  • Scammers also exploit the growing popularity of OpenAI for crypto fraud; and
  • Many copycat AI chatbot applications have also appeared on the market. This could increase security risks such as the theft of sensitive confidential data.
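
As a small illustration of the first point above, the sketch below flags domains that look deceptively close to official OpenAI domains using a plain edit-distance check. The domain list and threshold are assumptions made for the example, not a real blocklist, and real phishing detection relies on far more signals.

```python
# Illustrative lookalike-domain check using Levenshtein (edit) distance.
def edit_distance(a: str, b: str) -> int:
    """Minimum number of single-character edits turning string a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

OFFICIAL_DOMAINS = ["openai.com", "chat.openai.com"]  # assumed reference list

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that are near, but not equal to, an official domain."""
    domain = domain.lower()
    return any(
        0 < edit_distance(domain, official) <= max_distance
        for official in OFFICIAL_DOMAINS
    )

print(looks_like_impersonation("openai.c0m"))   # True: one character swapped
print(looks_like_impersonation("openai.com"))   # False: the legitimate domain
print(looks_like_impersonation("example.org"))  # False: not similar at all
```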

Usage of AI somewhat limited

AI technologies can address current pain points in cybersecurity such as human error in configuration, limited human efficiency in repetitive activities, threat alert fatigue, slow threat response times, new threat identification and prediction, staffing capacity, and adaptability. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision.

AI has been significant in building automated security systems, natural language processing, face recognition, and automatic threat detection. With AI, analyzing massive quantities of risk data at breakneck speed has also become possible. Those who have benefited include large companies that deal with huge amounts of data as well as small and mid-sized companies with under-resourced security teams.

Despite these benefits, some companies are not ready to turn their cybersecurity programs over to AI, which has kept the implementation and use of AI in organizations somewhat limited.

Yeo said the challenges Kaspersky has been seeing in implementing AI are the high cost of deployment, complex infrastructure, and integration issues.

According to Ray, what is needed is a solution that learns from every deployment and leverages information from all of its users, not just a single organization; the bigger the pool of environments and users, the smarter the AI. To that end, a system must also be able to handle both large volumes and different kinds of data, along with an ML component to help analyze the information. These complexities bring their own challenges that can hinder the use of AI and ML in cybersecurity, including the steep learning curve required to understand ML models, talent scarcity, and the difficulty of securing the models themselves.

AI can monitor behavior of hackers

AI can also serve as a tool for analyzing data on the behavior of hackers to see whether there is a pattern to their attacks. Here, Ray cited Palo Alto Networks’ Cortex XDR solution, which uses AI and ML to learn from organization and user data, establish a baseline of normal behavior, and then alert on high-fidelity anomalies, efficiently detecting advanced, targeted, and insider attacks as well as non-malicious but risky behavior.

Ray noted that analytics lets you spot adversaries attempting to blend in with legitimate users. “For example, Cortex XDR uses ML models to detect whether or not an administrative connection to a server is expected. This solution helped our customers have better security outcomes, like 8x faster investigations and a 98% reduction in alerts.”
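
The quotes above describe behavioral baselining in general terms. The toy sketch below, an assumption-laden illustration rather than how Cortex XDR itself is built, shows the underlying statistical idea: learn what is normal for each account, then alert when activity deviates sharply.

```python
# Toy behavioral baselining: alert when today's activity deviates sharply
# from an account's historical norm. Data and threshold are hypothetical.
from statistics import mean, stdev

# Hypothetical history: servers each account connected to per day, last 7 days
history = {
    "alice": [3, 4, 3, 5, 4, 3, 4],
    "svc-backup": [1, 1, 1, 1, 1, 1, 1],
}

def anomaly_score(user: str, observed: int) -> float:
    """Distance of today's activity from the baseline, in standard deviations."""
    baseline = history[user]
    sigma = stdev(baseline) or 1.0  # guard against a perfectly constant baseline
    return abs(observed - mean(baseline)) / sigma

# A service account suddenly touching 40 servers stands out; normal use does not
for user, today in [("alice", 4), ("svc-backup", 40)]:
    score = anomaly_score(user, today)
    if score > 3:  # illustrative alerting threshold
        print(f"ALERT: {user} is {score:.1f} standard deviations from baseline")
```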

One of the applications of AI in cybersecurity, according to Yeo, is to model and monitor the behavior of system users. “By monitoring the interaction between a system and a user, a takeover attack is recognized and identified.”

Compared with traditional, signature-based anti-malware software, antivirus tools with AI or ML capabilities are able to detect network or system anomalies by identifying programs that show suspicious behavior and blocking them from accessing system resources.
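
To illustrate the contrast drawn above between signature-based and behavior-based detection, here is a deliberately simplified sketch; the hashes, behaviors, weights, and threshold are invented for the example and do not come from any real product.

```python
# Simplified contrast: a signature lookup catches only known samples, while a
# behavior score can flag an unknown one. All values here are illustrative.
KNOWN_MALWARE_HASHES = {"9f86d081884c7d65"}  # pretend signature database

SUSPICIOUS_BEHAVIOR_WEIGHTS = {
    "modifies_registry_run_key": 0.4,
    "encrypts_many_user_files": 0.5,
    "disables_security_service": 0.5,
    "contacts_unknown_domain": 0.2,
}

def signature_match(file_hash: str) -> bool:
    """Traditional check: is the file's hash already in the signature database?"""
    return file_hash in KNOWN_MALWARE_HASHES

def behavior_score(observed_behaviors: list[str]) -> float:
    """Sum of weights for suspicious actions the program was seen performing."""
    return sum(SUSPICIOUS_BEHAVIOR_WEIGHTS.get(b, 0.0) for b in observed_behaviors)

# A brand-new sample has no known signature, but its behavior gives it away
sample_hash = "0000aaaa1111bbbb"
behaviors = ["encrypts_many_user_files", "disables_security_service"]

if signature_match(sample_hash):
    print("Blocked: known malware signature")
elif behavior_score(behaviors) >= 0.7:  # illustrative blocking threshold
    print("Blocked: suspicious behavior, access to system resources denied")
else:
    print("Allowed")
```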

AI-based cybersecurity market expected to grow

Meanwhile, the rise in cyberattacks is helping fuel growth in the market for AI-based security products. According to the latest figures from a global technology research and advisory firm, the AI-based cybersecurity market is expected to grow by US$19-billion over the period 2021 to 2025. Organizations are increasingly turning to AI and ML to boost their security infrastructure as the volume of cybersecurity threats has become too challenging for humans to handle on their own.

According to Gartner, 80% of enterprises will have adopted a strategy to unify web, cloud services, and private application access using a secure access service edge (SASE) or security service edge (SSE) architecture by 2025, up from 20% in 2021. With increasingly complicated work environments, employees are now plugging their own devices into the network without realizing how dangerous that can be to the organization’s cybersecurity. With these complications, the attack surface has widened significantly, and cyber adversaries now have more entry points from which to launch an attack. A SASE solution can provide complete session protection, regardless of whether a user is on or off the corporate network.

With the rise of generative AI like ChatGPT and the risks it brings, more companies will have to look for security solutions that address these emerging threats. Palo Alto Networks offers what it calls the industry’s first AI Operations (AIOps) solution natively integrated into SASE, with Autonomous Digital Experience Management (ADEM). This feature helps companies leverage AI-based problem detection and predictive analytics to automate complex, manual IT operations, increase productivity, and reduce mean time to resolution (MTTR).

Future of AI in Cybersecurity

AI is playing an increasingly pivotal role in the fight against ever more advanced cyber threats. AI and ML are not omnipotent, but they open up many possibilities for cybersecurity.

“We can no longer rely on traditional security against targeted attacks but we can use AI and ML to bolster defenses and effectively respond to phishing attacks or breaches,” said Yeo, adding that “with defensive AI, rapid detection and containment of any emerging cyberthreat is possible as well as fighting back when AI is used as part of the attack method.”

Yeo further disclosed that AI and ML play a valuable role in information security. “Here at Kaspersky, we began using ML-based algorithms long before the Next-Gen buzz and these algorithms are used in many stages of our detection pipeline, including clustering methods to pre-process incoming file streams in-lab, deep learning models for cloud detection, and ML-created records in product databases.”

However, Kaspersky’s studies have revealed that ML algorithms can be vulnerable to many forms of attack. When using ML in security systems, the company suggests that ML methods should not be regarded as the ultimate answer; instead, they need to be part of a multi-layered security approach, where complementary protection technologies and human expertise work together, watching one another’s backs.

For Ray, businesses will continue to encounter a variety of difficulties navigating the AI cybersecurity landscape as the technology develops. Establishing corporate policies is critical to doing business ethically while improving cybersecurity.

“We need to establish effective governance and legal frameworks that enable greater trust in AI technologies being implemented around us to be safe, reliable, and contribute to a just and sustainable world. The delicate balance between AI and humans will therefore emerge as a key factor towards successful cybersecurity in which trust, transparency, and accountability supplement the benefits of machines,” Ray concluded.
