How AI, GenAI malware is redefining cyber threats and strengthening the hands of criminals

Source: Live Mint

Trojan threats – malware disguised as legitimate programmes – continue to plague India, but sophisticated artificial intelligence (AI) and generative AI (GenAI) attacks are increasingly rearing their heads.

Security firms caution that this combined force will continue to pose a huge risk to so-called endpoints: Internet of Things (IoT) devices, laptops, smartphones, servers, printers and other systems that connect to a network and act as access points for communication or data exchange.

The numbers tell the story. About 370 million security incidents were detected across more than 8 million endpoints in India in 2024 till date, according to a new joint report by the Data Security Council of India (DSCI) and Quick Heal Technologies. That works out to an average of 702 potential security threats every minute, or almost 12 new cyber threats every second.
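
For readers checking the maths, here is a quick back-of-the-envelope verification in Python, assuming the 370 million detections accumulated over roughly a full year:

```python
# Rough sanity check of the DSCI/Quick Heal figures, assuming the
# 370 million detections span roughly a full calendar year.
incidents = 370_000_000
minutes_per_year = 365 * 24 * 60            # 525,600 minutes
per_minute = incidents / minutes_per_year   # ~703.9
per_second = per_minute / 60                # ~11.7

print(f"{per_minute:.0f} threats per minute")  # ~704, close to the reported 702
print(f"{per_second:.1f} threats per second")  # ~11.7, i.e. almost 12
```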

Trojans led the malware pack with 43.38% of detections, followed by Infectors (malicious programmes or code, such as viruses and worms, that infect and compromise systems) at 34.23%. Telangana, Tamil Nadu and Delhi were the most affected regions, while banking, financial services and insurance (BFSI), healthcare and hospitality were the most targeted sectors.

Notably, about 85% of the detections relied on signature-based methods; the rest were behaviour-based. Signature-based detection identifies threats by comparing them to a database of known malicious code or patterns, like a fingerprint match. Behaviour-based detection, on the other hand, monitors how programmes or files act, flagging unusual or suspicious activities even if the threat is unfamiliar.
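
To make the distinction concrete, here is a minimal illustrative sketch in Python; the placeholder signature, event names and weights are all invented for illustration and do not correspond to any real product's detection engine:

```python
import hashlib

# Signature-based: compare a file's fingerprint to a database of known-bad
# hashes. The "signature" below is a placeholder, not a real malware hash.
KNOWN_BAD_SHA256 = {"deadbeef" * 8}

def signature_scan(file_bytes: bytes) -> bool:
    """Flag a file whose hash matches a known malicious signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

# Behaviour-based: score what a running program does, not what it is.
# Event names and weights are invented for illustration.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 3,
    "disables_antivirus": 5,
    "mass_file_encryption": 5,
    "opens_many_outbound_connections": 2,
}

def behaviour_scan(observed_events: list[str], threshold: int = 5) -> bool:
    """Flag a process whose cumulative suspicion score crosses the threshold."""
    return sum(SUSPICIOUS_WEIGHTS.get(e, 0) for e in observed_events) >= threshold

# A brand-new (zero-day) sample has no signature on file yet...
print(signature_scan(b"novel payload"))  # False: no fingerprint matches
# ...but its runtime behaviour can still give it away.
print(behaviour_scan(["disables_antivirus", "mass_file_encryption"]))  # True
```

The zero-day sample sails past the signature check simply because no fingerprint for it exists yet, which is exactly the gap described next.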

Modern-day cyber threats such as zero-day attacks, advanced persistent threats (APTs), and fileless malware can evade traditional signature-based solutions. And as hackers deepen their integration of large language models (LLMs) and other AI tools, the complexity and frequency of cyberattacks are expected to escalate.

Low barrier

LLMs assist in malware development by refining code or creating new variants, lowering the skill barrier for attackers and accelerating the proliferation of advanced malware. Hence, while the integration of AI and machine learning has enhanced the capability to analyse and identify suspicious patterns in real time, it has also strengthened the hands of cyber criminals who have access to these or even better tools to launch more sophisticated attacks.

Cyber threats will increasingly rely on AI, with GenAI enabling advanced, adaptable malware and realistic scams, the DSCI report noted. Social media and AI-driven impersonations will blur the line between real and fake interactions.

Ransomware will target supply chains and critical infrastructure, while rising cloud adoption may expose vulnerabilities like misconfigured settings and insecure application programming interfaces (APIs), the report says.

Hardware supply chains and IoT devices face the risk of tampering, and fake apps in fintech and government sectors will persist as key threats. Further, geopolitical tensions will drive state-sponsored attacks on public utilities and critical systems, according to the report.

“Cybercriminals operate like a well-oiled supply chain, with specialised groups for infiltration, data extraction, monetisation, and laundering. In contrast, organisations often respond to crises in silos rather than as a coordinated front,” Palo Alto Networks’ chief information officer Meerah Rajavel told Mint in a recent interview.

Cybercriminals continue to weaponise AI and use it for nefarious purposes, says a new report by security firm Fortinet. They are increasingly exploiting generative AI tools, particularly LLMs, to enhance the scale and sophistication of their attacks.

Another alarming application is automated phishing: LLMs generate flawless, context-aware emails that mimic those from trusted contacts. These AI-crafted messages are almost indistinguishable from legitimate ones, significantly increasing the success rate of spear-phishing attacks.
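
With the message body effectively flawless, defences have to lean on other signals. One general countermeasure (a standard technique, not one attributed to the Fortinet report) is to flag sender domains that sit a small edit distance from trusted ones. A minimal sketch, with a hypothetical allow-list:

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}  # hypothetical allow-list

def lookalike_score(domain: str) -> float:
    """Highest string similarity between the sender's domain and any trusted domain."""
    return max(SequenceMatcher(None, domain, t).ratio() for t in TRUSTED_DOMAINS)

def flag_sender(sender: str) -> bool:
    """Flag senders whose domain nearly, but not exactly, matches a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False                        # exact match: trusted
    return lookalike_score(domain) > 0.85   # near miss: likely spoofed lookalike

print(flag_sender("ceo@example.com"))   # False: legitimate domain
print(flag_sender("ceo@examp1e.com"))   # True: '1' swapped in for 'l'
```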

During critical events like elections or health crises, the ability to create large volumes of persuasive, automated content can overwhelm fact-checkers and amplify societal discord. Hackers, according to the Fortinet report, leverage LLMs for generative profiling, analysing social media posts, public records, and other online content to create highly personalised communication.

Further, spam toolkits with ChatGPT capabilities, such as GoMailPro and Predator, let hackers simply ask ChatGPT to translate, write, or improve the text to be sent to victims. LLMs can also power 'password spraying' attacks by analysing patterns in common passwords: instead of hammering a single account repeatedly as in a brute-force attack, the attacker tries a few likely passwords across many accounts, making the activity far harder for security systems to detect and block.
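
The detection difficulty is easy to see from the defender's side: a spray spreads its failures thinly across many accounts, so classic per-account lockout rules never fire. A minimal sketch with hypothetical log entries and thresholds (real systems would key on failure metadata rather than plaintext passwords):

```python
from collections import Counter

# Hypothetical failed-login log entries: (account, password_attempted).
# Brute force: many failures on ONE account -> per-account lockouts catch it.
# Spraying: ONE common password tried across MANY accounts -> each account
# records a single failure, so per-account lockout rules never trigger.
failed_logins = [
    ("alice", "Winter2024!"),
    ("bob", "Winter2024!"),
    ("carol", "Winter2024!"),
    ("dave", "Winter2024!"),
]

PER_ACCOUNT_LOCKOUT = 5  # classic rule: lock after 5 failures on one account
SPRAY_THRESHOLD = 3      # heuristic: same password failing on 3+ accounts

failures_per_account = Counter(acct for acct, _ in failed_logins)
print(any(n >= PER_ACCOUNT_LOCKOUT for n in failures_per_account.values()))
# False -> the spray slips under the per-account rule

# A spray-aware heuristic pivots on the password instead of the account:
failures_per_password = Counter(pw for _, pw in failed_logins)
print([pw for pw, n in failures_per_password.items() if n >= SPRAY_THRESHOLD])
# ['Winter2024!'] -> likely spraying campaign
```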

Deepfake attacks

Attackers use deepfake technology for voice phishing or 'vishing', creating synthetic voices that mimic executives or colleagues to convince employees to share sensitive data or authorise fraudulent transactions. Deepfake services typically cost $10 per image and $500 per minute of video, though rates can run higher.

Deepfake artists showcase their work in Telegram groups, often featuring celebrity examples to attract clients, according to Trend Micro analysts. These portfolios highlight their best creations and include pricing and samples of deepfake images and videos.

In a more targeted use, deepfake services are sold to bypass know-your-customer (KYC) verification systems. Criminals create deepfake images using stolen IDs to deceive systems requiring users to verify their identity by photographing themselves with their ID in hand. This practice exploits KYC measures at banks and cryptocurrency platforms.

In a May 2024 report, Trend Micro pointed out that commercial LLMs typically refuse requests they deem malicious, and that criminals are generally wary of directly accessing services like ChatGPT for fear of being tracked and exposed.

The security firm, however, highlighted a "jailbreak-as-a-service" trend in which hackers use complex prompts to trick LLM-based chatbots into answering questions that violate their policies, citing offerings such as EscapeGPT, LoopGPT and BlackhatGPT as cases in point.

Trend Micro analysts assert that hackers do not adopt new technology solely for the sake of keeping up with innovation but do so only “if the return on investment is higher than what is already working for them.” They expect criminal exploitation of LLMs to rise, with services becoming more advanced and anonymous access remaining a priority.

They conclude that while GenAI holds the “potential for significant cyberattacks… widespread adoption may take 12–24 months,” giving defenders a window to strengthen their defences against these emerging threats. This may prove to be a much-needed silver lining in the cybercrime cloud.
