security meets culture
Could your next Gaming Mouse from Amazon install Malware?
DarkComet Spyware Resurfaces Disguised as Fake Bitcoin Wallet
By Deeba Ahmed for Hackread
Old DarkComet RAT spyware is back, hiding inside fake Bitcoin wallets and trading apps to steal credentials via keylogging.
Old DarkComet RAT spyware is back, hiding inside fake Bitcoin wallets and trading apps to steal credentials via keylogging.
Credit: linkedin.com
Cybercriminals are constantly looking for new ways to steal money, and the world of cryptocurrency, especially Bitcoin, has become a major target. Recently, a new piece of old computer spyware, known as DarkComet RAT, was found cleverly hidden inside a file that looked exactly like a legitimate Bitcoin wallet or trading program.
The malware was discovered and analyzed by Point Wild's Lat61 Threat Intelligence Team. This particular software is a Remote Access Trojan (RAT), which allows a hacker to take full, secret control of a victim's computer. It's a highly capable tool, offering features that range from recording every single keystroke you make (keylogging) to stealing files, watching you through your webcam, and even controlling your desktop remotely.
Disguised and Dangerous
The DarkComet RAT, which was originally developed back in 2008 but later discontinued by its creator, is still widely available to criminals. The spyware was also mentioned in WikiLeaks' Vault 7 data leak, which revealed that the American CIA and the Syrian government under President Bashar al-Assad had both used DarkComet to hack the devices of their own citizens.
The latest sample analyzed was delivered inside a compressed RAR file, which is a common trick used by attackers to evade security filters and encourage users to open the file themselves. Upon extraction, the file was revealed as an application named "94k BTC wallet.exe".
Further probing revealed a key detail: the file was "packed" using a technique called UPX. This technique helps the malware remain disguised and much smaller in size, making it harder for simple security tools to detect it before it runs. As we know it, hiding the malicious code this way is a major challenge for computer defenses.
The Attackers' Goal
Once a victim is tricked into running the file, the DarkComet RAT immediately begins its attack. It copies itself into a hidden system folder and creates an autostart entry to ensure it loads every time the computer is turned on, successfully achieving persistence.
The malware then attempts to connect to a specific remote location-- kvejo991.ddns.net over port 1604-- to communicate with the attacker and receive commands. It is worth noting that the central goal of DarkComet was clearly seen in its keylogging activity, where it recorded all of the victim's keystrokes and saved them in a local folder called dclogs. This is a huge risk, as these logs could easily contain passwords, bank details, or, most critically, the credentials to access Bitcoin wallets, leading directly to financial losses.
The malware was discovered and analyzed by Point Wild's Lat61 Threat Intelligence Team. This particular software is a Remote Access Trojan (RAT), which allows a hacker to take full, secret control of a victim's computer. It's a highly capable tool, offering features that range from recording every single keystroke you make (keylogging) to stealing files, watching you through your webcam, and even controlling your desktop remotely.
Disguised and Dangerous
The DarkComet RAT, which was originally developed back in 2008 but later discontinued by its creator, is still widely available to criminals. The spyware was also mentioned in WikiLeaks' Vault 7 data leak, which revealed that the American CIA and the Syrian government under President Bashar al-Assad had both used DarkComet to hack the devices of their own citizens.
The latest sample analyzed was delivered inside a compressed RAR file, which is a common trick used by attackers to evade security filters and encourage users to open the file themselves. Upon extraction, the file was revealed as an application named "94k BTC wallet.exe".
Further probing revealed a key detail: the file was "packed" using a technique called UPX. This technique helps the malware remain disguised and much smaller in size, making it harder for simple security tools to detect it before it runs. As we know it, hiding the malicious code this way is a major challenge for computer defenses.
The Attackers' Goal
Once a victim is tricked into running the file, the DarkComet RAT immediately begins its attack. It copies itself into a hidden system folder and creates an autostart entry to ensure it loads every time the computer is turned on, successfully achieving persistence.
The malware then attempts to connect to a specific remote location-- kvejo991.ddns.net over port 1604-- to communicate with the attacker and receive commands. It is worth noting that the central goal of DarkComet was clearly seen in its keylogging activity, where it recorded all of the victim's keystrokes and saved them in a local folder called dclogs. This is a huge risk, as these logs could easily contain passwords, bank details, or, most critically, the credentials to access Bitcoin wallets, leading directly to financial losses.
This research was shared with Hackread.com. It clearly shows how old malware is being repurposed with modern lures, emphasizing the need for all cryptocurrency users to download wallets and trading tools only from verified and trusted sources.
The findings offer a critical warning for anyone involved in digital currency. As Dr. Zulfikar Ramzan, CTO of Point Wild, and Head of the Lat61 Threat Intelligence Team, explains: "Old malware never truly dies-- it just gets repackaged. DarkComet's return inside a fake Bitcoin tool shows how cybercriminals recycle classic RATs to exploit modern hype."
Fake Chrome Extension 'Safery' Steals Ethereum Wallet Seed Phrases using Sui Blockchain
By Ravie Lakshmanan for The Hacker News
The Hacker News
Cybersecurity researchers have uncovered a malicious Chrome extension that poses as a legitimate Ethereum wallet but harbors functionality to exfiltrate users' seed phrases.
The name of the extension is "Safery: Ethereum Wallet," with the threat actor describing it as a "secure wallet for managing Ethereum cryptocurrency with flexible settings." It was uploaded to the Chrome Web Store on September 29, 2025, and was updated as recently as November 12. It's still available for download as of writing.
"Marketed as a simple, secure Ethereum (ETH) wallet, it contains a backdoor that exfiltrates seed phrases by encoding them into Sui addresses and broadcasting microtransactions from a threat actor-controlled Sui wallet," Socket security researcher Kirill Boychenko said.
Specifically, the malware present within the browser add-on is designed to steal wallet mnemonic phrases by encoding them as fake Sui wallet addresses and then using micro-transactions to send 0.000001 SUI to those wallets from a hard-coded threat actor-controlled wallet.
The end goal of the malware is to smuggle the seed phrase inside normal looking blockchain transactions without the need for setting up a command-and-control (C2) server to receive the information. Once the transactions are complete, the threat actor can decode the recipient addresses to reconstruct the original seed phrase and ultimately drain assets from it.
The name of the extension is "Safery: Ethereum Wallet," with the threat actor describing it as a "secure wallet for managing Ethereum cryptocurrency with flexible settings." It was uploaded to the Chrome Web Store on September 29, 2025, and was updated as recently as November 12. It's still available for download as of writing.
"Marketed as a simple, secure Ethereum (ETH) wallet, it contains a backdoor that exfiltrates seed phrases by encoding them into Sui addresses and broadcasting microtransactions from a threat actor-controlled Sui wallet," Socket security researcher Kirill Boychenko said.
Specifically, the malware present within the browser add-on is designed to steal wallet mnemonic phrases by encoding them as fake Sui wallet addresses and then using micro-transactions to send 0.000001 SUI to those wallets from a hard-coded threat actor-controlled wallet.
The end goal of the malware is to smuggle the seed phrase inside normal looking blockchain transactions without the need for setting up a command-and-control (C2) server to receive the information. Once the transactions are complete, the threat actor can decode the recipient addresses to reconstruct the original seed phrase and ultimately drain assets from it.
"This extension steals wallet seed phrases by encoding them as fake Sui addresses and sending micro-transactions to them from an attacker-controlled wallet, allowing the attacker to monitor the blockchain, decode the addresses back to seed phrases, and drain victims' funds," Koi Security notes in an analysis.
To counter the risk posed by the threat, users are advised to stick to trusted wallet extensions. Defenders are recommended to scan extensions for mnemonic encoders, synthetic address generators, and hard-coded seed phrases, as well as block those that write on the chain during wallet import or creation.
"This technique lets threat actors switch chains and RPC endpoints with little effort, so detections that rely on domains, URLs, or specific extension IDs will miss it," Boychenko said. "Treat unexpected blockchain RPC calls from the browser as high signal, especially when the product claims to be single chain."
Beware of Security Alert-Themed Malicious Emails that Steal
Your Email Logins
By Mayura Kathir for GB Hackers
GB Hackers
A sophisticated phishing campaign is currently targeting email users with deceptive security alert notifications that appear to originate from their own organization's domain.
The phishing emails are crafted to resemble legitimate security notifications from email delivery systems.
These messages inform recipients that specific messages have been blocked and require manual release a premise designed to create urgency and prompt immediate action.
The attack leverages social engineering tactics by impersonating internal email delivery systems and directing victims to fraudulent webmail login pages designed to capture credentials.
What makes this campaign particularly insidious is that the emails appear to be sent from the recipient's own corporate domain, significantly enhancing their credibility and bypassing typical domain-based security checks.
Upon clicking the provided links, victims are directed to convincing replica pages of popular webmail platforms. Notably, these malicious pages come prefilled with the recipient's email address, further reinforcing the illusion of legitimacy.
This pre-population technique is a psychological manipulation tactic that reduces user hesitation by appearing personalized and authentic.
How Attackers Exploit This Information
Once victims enter their credentials on these fake login pages, attackers gain immediate access to their email accounts.
The phishing emails are crafted to resemble legitimate security notifications from email delivery systems.
These messages inform recipients that specific messages have been blocked and require manual release a premise designed to create urgency and prompt immediate action.
The attack leverages social engineering tactics by impersonating internal email delivery systems and directing victims to fraudulent webmail login pages designed to capture credentials.
What makes this campaign particularly insidious is that the emails appear to be sent from the recipient's own corporate domain, significantly enhancing their credibility and bypassing typical domain-based security checks.
Upon clicking the provided links, victims are directed to convincing replica pages of popular webmail platforms. Notably, these malicious pages come prefilled with the recipient's email address, further reinforcing the illusion of legitimacy.
This pre-population technique is a psychological manipulation tactic that reduces user hesitation by appearing personalized and authentic.
How Attackers Exploit This Information
Once victims enter their credentials on these fake login pages, attackers gain immediate access to their email accounts.
This represents a critical security breach with far-reaching consequences. Compromised email accounts serve as gateways to extensive sensitive information and become launchpads for further attacks.
Attackers can access confidential business communications, financial records, and personal identification information and potentially use the account to conduct business email compromise (BEC) attacks against colleagues and customers.
The fact that these emails appear to originate from the victim's own domain makes them substantially more convincing than traditional phishing attempts.
Security-conscious employees who typically scrutinize suspicious domains are more likely to trust messages appearing to come from their own organization.
This exploitation of domain trust represents an evolution in phishing tactics that security teams must actively address.
Defensive Measures
Organizations and individual users should implement multiple layers of protection against this threat. Email security solutions should flag messages containing credential collection links, regardless of the sender domain.
Multi-factor authentication (MFA) remains essential even if credentials are compromised, an attacker without access to the secondary authentication method cannot penetrate the account.
User education is equally critical. Employees should be trained to recognize suspicious characteristics even in messages appearing to come from internal sources.
Legitimate IT security notifications typically do not direct users to external login pages. Additionally, users should be encouraged to verify alert messages through alternative communication channels before taking action.
If you suspect you may have inadvertently entered credentials on a suspicious page, change your email password immediately and enable MFA if not already active.
Notify your IT security team, and monitor your account for unauthorized access or forwarding rules. Be aware that attackers may use compromised accounts to send additional phishing emails to your contacts, so inform your network of the potential breach.
This security alert-themed phishing campaign demonstrates how attackers continue to refine social engineering techniques by exploiting trust in internal systems.
Vigilance, proper security infrastructure, and rapid response protocols are essential defenses against these sophisticated credential theft attempts.
Zoom Workplace for Windows Flaw Allows Local Privilege Escalation
By Divya for GB Hackers
GB Hackers
A security vulnerability has been discovered in Zoom Workplace's VDI Client for Windows that could allow attackers to escalate their privileges on affected systems.
The flaw, tracked as CVE-2025-64740 and assigned bulletin ZSB-25042, has been rated as High severity with a CVSS score of 7.5.
Understanding the Vulnerability
The weakness stems from improper verification of cryptographic signatures in the Zoom Workplace VDI Client installer.
The flaw, tracked as CVE-2025-64740 and assigned bulletin ZSB-25042, has been rated as High severity with a CVSS score of 7.5.
Understanding the Vulnerability
The weakness stems from improper verification of cryptographic signatures in the Zoom Workplace VDI Client installer.
In simpler terms, the installer doesn't properly verify that installation files are legitimate before executing them.
This oversight creates an opportunity for attackers who have already gained local access to a system to escalate their permissions, moving from a regular user account to an administrator-level account.
This isn't a remote attack where hackers can infiltrate systems from the internet. Instead, it requires an attacker already to have authentication and local access to the target machine.
However, once inside, they can exploit this flaw to gain complete control, potentially compromising sensitive data or installing malware that affects the entire organization.
Security researchers at Mandiant, a leading threat intelligence firm owned by Google, discovered and reported this vulnerability to Zoom.
Mandiant's identification of this flaw highlights the importance of specialized security research in protecting enterprise software.
Organizations using Zoom Workplace VDI Client for Windows are at risk if they're running versions before:
- Version 6.3.14
- Version 6.4.12
- Version 6.5.10
The vulnerability affects all earlier versions across these respective tracks. VDI-- Virtual Desktop Infrastructure-- environments are critical in enterprise settings, making this discovery especially important for organizations that rely on virtual desktops for remote work and secure computing.
The CVSS score of 7.5 reflects the serious nature of this flaw. While it requires the attacker to have already local system access and user interaction to exploit, the potential impact is severe.
A successful attack could allow unauthorized privilege escalation, enabling attackers to execute arbitrary code with elevated permissions, access restricted files, or compromise system integrity.
Zoom has released patched versions addressing this vulnerability. Organizations should immediately update their Zoom Workplace VDI Client installations to the latest available versions.
Zoom users can download and install the latest security updates from the official Zoom download center.
For security teams managing VDI environments, prioritizing this update is essential. The combination of Mandiant's discovery and Zoom's quick patch release demonstrates the importance of staying current with security updates.
If your organization uses Zoom Workplace VDI Client for Windows, treat this update as urgent. While the vulnerability requires existing system access to exploit, the potential for privilege escalation makes it a significant security risk.
Update immediately to the patched versions to eliminate this attack vector and maintain your security posture.
Quantum Route Redirect PhaaS Targets Microsoft 365 Users Worldwide
By Bill Toulas for bleepingcomputer
bleepingcomputer
A new phishing automation platform named Quantum Route Redirect is using around 1,000 domains to steal Microsoft 365 users' credentials.
The kit comes pre-configured with phishing domains to allow less skilled threat actors to achieve maximum results with the least effort.
Since August, analysts at security awareness company KnowBe4 have noticed Quantum Route Redirect (QRR) attacks in the wild across a wide geography, although nearly three-quarters are located in the US.
They say that the kit "is an advanced automation platform" that can cover all the stages of a phishing attack, from rerouting traffic to malicious domains to tracking victims.
Attacks start with a malicious email made to appear as a DocuSign request, a payment notification, a missed voicemail, or a QR code.
The kit comes pre-configured with phishing domains to allow less skilled threat actors to achieve maximum results with the least effort.
Since August, analysts at security awareness company KnowBe4 have noticed Quantum Route Redirect (QRR) attacks in the wild across a wide geography, although nearly three-quarters are located in the US.
They say that the kit "is an advanced automation platform" that can cover all the stages of a phishing attack, from rerouting traffic to malicious domains to tracking victims.
Attacks start with a malicious email made to appear as a DocuSign request, a payment notification, a missed voicemail, or a QR code.
The emails direct targets to a credential harvesting page hosted on a URL that follows a specific pattern.
"Our researchers also observed that the domain URLs consistently follow the pattern '/([\w\d-]+\.){2}[\w]{,3}\/quantum.php/' and are typically hosted on parked or compromised domains," explains KnowBe4.
"The choice to host on legitimate domains can help to socially engineer the human targets of these attacks."
KnowBe4 says it has identified about 1,000 domains hosting QRR phishing pages.
A built-in filtering mechanism can distinguish between bots and human visitors, the researchers say, adding that QRR can redirect potential victims to a phishing page, while automated systems, such as email security tools, are sent to benign sites.
As the central traffic routing system on QRR performs its redirecting tasks automatically, operators can view the related statistics on the dashboard, where the number of real versus non-human visitors is logged in real-time.
KnowBe4 has observed the QRR phishing kit targeting Microsoft 365 accounts across 90 countries, but 76% of the attacks were directed at users in the US.
The researchers expect the use of Quantum Route Redirect to increase due to the methods used to evade URL scanning technologies.
Similar services that gained prominence earlier this year include VoidProxy, Darcula, Morphing Meerkat, and Tycoon2FA.
However, there are defense methods that can protect against this threat.
KnowBe4 analysts recommend implementing robust URL filtering that can detect phishing attempts, along with tools that can monitor accounts for signs of compromise if a user's credentials are stolen.
Microsoft Uncovers 'Whisper Leak' Attack that Identifies AI Chat Topics in Encrypted Traffic
By Ravie Lakshmanan for The Hacker News
Microsoft has disclosed details of a novel side-channel attack targeting remote language models that could enable a passive adversary with capabilities to observe network traffic to glean details about model conversation topics despite encryption protections under certain circumstances.
This leakage of data exchanged between humans and streaming-mode language models could pose serious risks to the privacy of user and enterprise communications, the company noted. The attack has been codenamed Whisper Leak.
"Cyber attackers in a position to observe the encrypted traffic-- for example, a nation-state actor at the internet service provider layer, someone on the local network, or someone connected to the same WiFi router-- could use this cyber attack to infer if the user's prompt is on a specific topic," security researchers Jonathan Bar Or and Geoff McDonald, along with the Microsoft Defender Security Research Team, said.
Put differently, the attack allows an attacker to observe encrypted TLS traffic between a user and LLM service, extract packet size and timing sequences, and use trained classifiers to infer whether the conversation topic matches a sensitive target category.
Model streaming in large language models (LLMs) is a technique that allows for incremental data reception as the model generates responses, instead of having to wait for the entire output to be computed. It's a critical feedback mechanism as certain responses can take time, depending on the complexity of the prompt or task.
The latest technique demonstrated by Microsoft is significant, not least because it works despite the fact that the communications with artificial intelligence (AI) chatbots are encrypted with HTTPS, which ensures that the contents of the exchange stay secure and cannot be tampered with.
Many a side-channel attack has been devised against LLMs in recent years, including the ability to infer the length of individual plaintext tokens from the size of encrypted packets in streaming model responses or by exploiting timing differences caused by caching LLM inferences to execute input theft-- aka InputSnatch.
Whisper Leak builds upon these findings to explore the possibility that "the sequence of encrypted packet sizes and inter-arrival times during a streaming language model response contains enough information to classify the topic of the initial prompt, even in the cases where responses are streamed in groupings of tokens," per Microsoft.
To test this hypothesis, the Windows maker said it trained a binary classifier as a proof-of-concept that's capable of differentiating between a specific topic prompt and the rest-- i.e., noise-- using 3 different machine learning models: LightGBM, Bi-LSTM, and BERT.
The result is that many models from Mistral, xAI, DeepSeek, and OpenAI have been found to achieve scores above 98%, thereby making it possible for an attacker monitoring random conversations with the chatbots to reliably flag that specific topic.
"If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics-- whether that's money laundering, political dissent, or other monitored subjects-- even though all the traffic is encrypted," Microsoft said.
To make matters worse, the researchers found that the effectiveness of Whisper Leak can improve as the attacker collects more training samples over time, turning it into a practical threat. Following responsible disclosure, OpenAI, Mistral, Microsoft, and xAI have all deployed mitigations to counter the risk.
"Combined with more sophisticated attack models and the richer patterns available in multi-turn conversations or multiple conversations from the same user, this means a cyberattacker with patience and resources could achieve higher success rates than our initial results suggest," it added.
One effective countermeasure devised by OpenAI, Microsoft, and Mistral involves adding a "random sequence of text of variable length" to each response, which, in turn, masks the length of each token to render the side-channel moot.
Microsoft is also recommending that users concerned about their privacy when talking to AI providers can avoid discussing highly sensitive topics when using untrusted networks, utilize a VPN for an extra layer of protection, use non-streaming models of LLMs, and switch to providers that have implemented mitigations.
The disclosure comes as a new evaluation of 8 open-weight LLMs from Alibaba (Qwen3-32B), DeepSeek (v3.1), Google (Gemma 3-1B-IT), Meta (Llama 3.3-70B-Instruct), Microsoft (Phi-4), Mistral (Large-2 aka Large-Instruct-2047), OpenAI (GPT-OSS-20b), and Zhipu AI (GLM 4.5-Air) has found them to be highly susceptible to adversarial manipulation, specifically when it comes to multi-turn attacks.
"These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions," Cisco AI Defense researchers Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, and Adam Swanda said in an accompanying paper.
"We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance."
These discoveries show that organizations adopting open-source models can face operational risks in the absence of additional security guardrails, adding to a growing body of research exposing fundamental security weaknesses in LLMs and AI chatbots ever since OpenAI ChatGPT's public debut in November 2022.
This makes it crucial that developers enforce adequate security controls when integrating such capabilities into their workflows, fine-tune open-weight models to be more robust to jailbreaks and other attacks, conduct periodic AI red-teaming assessments, and implement strict system prompts that are aligned with defined use cases.
Lost iPhone? - Don't Fall for Phishing Texts Saying it was Found
By Lawrence Abrams for bleepingcomputer
bleepingcomputer
The Swiss National Cyber Security Centre (NCSC) is warning iPhone owners about a phishing scam that claims to have found your lost or stolen iPhone but is actually trying to steal your Apple ID credentials.
When iPhone customers lose their phone or it is stolen, they can set a custom message in Apple's Find My app that appears on the lock screen. When lost, this message may include an email address or phone number to contact the owner.
According to the NCSC, threat actors may be using this information to send targeted phishing texts (smishing) through SMS or iMessage to the displayed contact information, claiming to be from Apple's Find My team and stating that their phone had been found.
"Losing your iPhone is always annoying. Not only is the device gone, but your personal data may also be lost," explains the NCSC.
"Once the initial panic has passed, most people are left hoping that someone honest will find it. But if scammers have your phone, they may try to exploit this hope. They send text messages or iMessages that appear to come from Apple, claiming that the lost iPhone has been found abroad."
The phishing message includes convincing details such as the phone's model, color, and any other information that can be extracted directly from the locked device.
"We are pleased to inform you that your lost iPhone 14 128GB Midnight has been successfully located," reads the phishing text.
"To view the current location of your device, please click the link below: <phishing url>"
"If you did not initiate a lost device report or believe this message was sent in error, please disregard it or contact our support team immediately."
When iPhone customers lose their phone or it is stolen, they can set a custom message in Apple's Find My app that appears on the lock screen. When lost, this message may include an email address or phone number to contact the owner.
According to the NCSC, threat actors may be using this information to send targeted phishing texts (smishing) through SMS or iMessage to the displayed contact information, claiming to be from Apple's Find My team and stating that their phone had been found.
"Losing your iPhone is always annoying. Not only is the device gone, but your personal data may also be lost," explains the NCSC.
"Once the initial panic has passed, most people are left hoping that someone honest will find it. But if scammers have your phone, they may try to exploit this hope. They send text messages or iMessages that appear to come from Apple, claiming that the lost iPhone has been found abroad."
The phishing message includes convincing details such as the phone's model, color, and any other information that can be extracted directly from the locked device.
"We are pleased to inform you that your lost iPhone 14 128GB Midnight has been successfully located," reads the phishing text.
"To view the current location of your device, please click the link below: <phishing url>"
"If you did not initiate a lost device report or believe this message was sent in error, please disregard it or contact our support team immediately."
The phishing message contains a link to the alleged Find My website that shows the device's location.
However, instead of leading to Apple's official website, it redirects to a phishing page with a login prompt that mimics Apple's Find My website. When victims enter their Apple ID and password, the credentials are sent to the attackers, giving them full access to the account.
The cybersecurity agency explains that the scammers' real goal is to remove Apple's Activation Lock. This security feature is used to link an iPhone to its owner's Apple ID and prevents others from erasing or reselling it.
Since there is no known method to bypass this lock, criminals rely on phishing attacks to trick users into giving their credentials.
The NCSC says it is unclear how the attackers obtained the target's phone number, but it could be from the SIM card in the device or from the custom message displayed on the lock screen when a device is marked as lost.
The agency also recommends the following:
- Never click links in unsolicited messages or enter Apple ID details on external websites.
- If a device is lost, immediately enable Lost Mode through the Find My app or iCloud.com/find to secure it.
- Use a dedicated email address if displaying contact details on a lost device's lock screen.
- Keep the device registered to your Apple account to keep Activation Lock enabled.
- Ensure your SIM card is protected with a PIN to prevent misuse of your number.
The NCSC advises users to ignore any text messages like these, stating that Apple will never contact customers via SMS or email to report a found device.
New LandFall Spyware Exploited Samsung Zero-Day
via WhatsApp Messages
By Bill Toulas for bleepingcomputer
bleepingcomputer
A threat actor exploited a zero-day vulnerability in Samsung's Android image processing library to deploy a previously unknown spyware called 'LandFall' using malicious images sent over WhatsApp.
The security issue was patched this year in April, but researchers found evidence that the LandFall operation was active since at least July 2024, and targeted select Samsung Galaxy users in the Middle East.
Identified as CVE-2025-21042, the zero-day is an out-of-bounds write in libimagecodec.quram.so and has a critical severity rating. A remote attacker successfully exploiting it can execute arbitrary code on a target device.
According to researchers at Palo Alto Networks' Unit 42, the LandFall spyware is likely a commercial surveillance framework used in targeted intrusions.
The attacks begin with the delivery of a malformed .DNG raw image format with a .ZIP archive appended towards the end of the file.
The security issue was patched this year in April, but researchers found evidence that the LandFall operation was active since at least July 2024, and targeted select Samsung Galaxy users in the Middle East.
Identified as CVE-2025-21042, the zero-day is an out-of-bounds write in libimagecodec.quram.so and has a critical severity rating. A remote attacker successfully exploiting it can execute arbitrary code on a target device.
According to researchers at Palo Alto Networks' Unit 42, the LandFall spyware is likely a commercial surveillance framework used in targeted intrusions.
The attacks begin with the delivery of a malformed .DNG raw image format with a .ZIP archive appended towards the end of the file.
Unit 42 researchers retrieved and examined samples that were submitted to the VirusTotal scanning platform starting July 23, 2024, indicating WhatsApp as the delivery channel, based on the filenames used.
From a technical perspective, the DNGs embed two main components: a loader (b.so) that can retrieve and load additional modules, and a SELinux policy manipulator (l.so), which modifies security settings on the device to elevate permissions and establish persistence.
According to the researchers, LandFall can fingerprint devices based on hardware and SIM IDs (IMEI, IMSI, the SIM card number, user account, Bluetooth, location services, or the list of installed applications.
However, additional capabilities observed include executing modules, achieving persistence, evading detection, and bypassing protections. Among the spying features, the malware counts:
- microphone recording
- call recording
- location tracking
- accessing photos, contacts, SMS, call logs, and files
- accessing the browsing history
According to Unit 42's analysis, the spyware targets Galaxy S22, S23, and S24 series devices, as well as Z Fold 4 and Z Flip 4, covering a broad range of Samsung's latest flagship models, excluding the latest S25 series devices.
It's worth noting that LandFall and its use of DNG images is another case of broader exploitation seen recently in commercial spyware tools.
There have been exploitation chains in the past involving the DNG format for Apple iOS, with CVE-2025-43300, and also for WhatsApp, with CVE-2025-55177.
Samsung also fixed CVE-2025-21043 recently, which also impacts libimagecodec.quram.so, after WhatsApp security researchers discovered and reported it.
Attribution murky
The data from the VirusTotal samples that the researchers examined indicate potential targets in Iraq, Iran, Turkey, and Morocco.
Unit 42 was able to identify and correlate 6 command-and-control (C2) servers with the LandFall campaign, some of them flagged for malicious activity by Turkey's CERT.
C2 domain registration and infrastructure patterns share similarities with those seen in Stealth Falcon operations, originating from the United Arab Emirates.
Another clue is the use of the "Bridge Head" name for the loader component, a naming convention that is commonly seen in NSO Group, Variston, Cytrox, and Quadream products.
However, LandFall could not be confidently linked to any known threat groups or spyware vendors.
To protect against spyware attacks, apply security updates for your mobile OS and apps promptly, disable automatic media downloading on messaging apps, and consider activating 'Advanced Protection' on Android and 'Lockdown Mode' on iOS.
Don't Fall for This 'Free' Toothbrush Scam - Unless Your Wallet
Needs a Good Flossing
By Ben Stegner for Make Use Of
Credit: Ben Stegner/MakeUseOf
I've seen a lot of phishing emails over the years, but I recently received one that used an unusual tactic. Instead of asking me to confirm my payment details or update my password, this one offered me a free premium toothbrush.
I can see how this would catch people off guard, but like all phony emails, this promises something you'll never get. Keep an eye out for this one hitting your inbox.
The "free" toothbrush offer
This email purports to come from a health insurance provider's rewards program and offers a free toothbrush to help improve your dental hygiene. Initially, this seems reasonable-- some healthcare plans include discounts on health and hygiene equipment. However, when you look at the details, the fake becomes clearer.
I can see how this would catch people off guard, but like all phony emails, this promises something you'll never get. Keep an eye out for this one hitting your inbox.
The "free" toothbrush offer
This email purports to come from a health insurance provider's rewards program and offers a free toothbrush to help improve your dental hygiene. Initially, this seems reasonable-- some healthcare plans include discounts on health and hygiene equipment. However, when you look at the details, the fake becomes clearer.
Mention of "United Healthcare Smile Rewards" is the first red flag. This isn't the name of UnitedHealthcare's (UHC) rewards program; the real one is called "UnitedHealthcare Rewards".
NOTE: The company stylizes its name as "UnitedHealthcare", which this email doesn't do.
The subject line is also strange; "November 1 Network Status Check" sounds like it would accompany a phishing email asking you to confirm your password, not offering a toothbrush.
It has telltale signs of other phishing emails: the sender's email domain is @smoothcubans.com, which has nothing to do with UHC. A strange address is CCed. The greeting uses "Member" instead of a specific name, and "United Healthcare Services" isn't a way the actual company refers to itself. The email formatting is generic, and there are no official logos embedded.
I didn't realize it at first, but this email uses the same format as the constant "cloud storage full" emails I've been getting for months. Unlike those, the address at the bottom does match a location associated with UHC. Gmail hasn't been sending those to spam, despite my marking them as such every time. This one initially hit my inbox, but was later moved to spam.
NOTE: While I was writing, I got a second, similar email offering a free toothbrush from another brand-- not sent to spam. This sender seems to have found a hole in Gmail's spam filters, as I've never gotten this much consistent garbage in my inbox before.
Checking out the "free" offer
Out of curiosity, I wanted to check out what the scheme was, so I could document it. Most phishing setups are obvious, but this one had me wondering.
I opened the link in a virtual machine for safety; it redirected to a website with a messy name. The page promised a "chance to win" a "dental kit", which was different than the free toothbrush the email promised.
Like with many fraudulent websites, there was an alert pressuring you to act fast because the offer expired today. The "reviews" all had today's date on them and were incredibly generic; amusingly, one review mentioned not getting a prize even though it was "such a cool survey". It's a clear sign you can't trust a review when the site controls them like this.
This site is a good reminder that just because a site has a "secure page" lock, it doesn't mean it's trustworthy. You can have a secure connection to a site that's lying to you-- the certificate only means your information is encrypted in transit.
If you scroll down, you'll see a copyright notice that doesn't name any company, which is suspicious. There's also a note saying "Third-party offers linked to this survey may have additional requirements, such as entry fees or subscription enrollment". And as we'll see, this is the crux of the scam.
I proceeded through the survey, which asked general questions like how you feel about UHC and how happy you are with its services. Upon completing it, I was offered a dental kit worth $522 for "free"-- I just had to pay shipping.
The terms of the "offer" going back and forth between a contest and a free giveaway were another sign of this being shady. Legitimate companies will have clear terms for their giveaways so they don't get sued for misrepresentation.
Just pay shipping - and a whole lot more
Clicking the button to claim my prize led me to a new website where I was asked to fill out my details. A widget showed 5 in stock, which slowly dropped as I waited-- to create a fake sense of urgency.
After the fake details, I was asked for my credit card to cover the cost of shipping. The box promised an additional $2.36 off when paying with Mastercard-- or "Master Card" as it was improperly stylized in the dropdown box. I imagine this is to help push people to finish the scam process, in case they suspect something is up at this point.
I tried random numbers for the credit card field and a check appeared, but I didn't want to go any further. Instead, I clicked the Terms & Conditions link at the bottom to investigate more.
Check the terms - or else
The terms laid out the full story of the scam, as the first two paragraphs explain what you're on the hook for.
An "exclusive welcome bonus" promises a $125 gift card to the "Best Consumers Gadget Club", but a quick search reveals that nothing by this name exists. Regardless, you'll be charged "full price" every month if you don't cancel within 3 days.
There's a second charge: a 45-day trial of the "#1 Fitness App" on the web-- which isn't named. If you don't cancel by calling the number, you'll start paying for a subscription to that bogus service, too.
Other than here, these scam subscriptions are only mentioned in that fine print I called out above. The scammers-- accurately-- assume that nobody is going to read the terms and conditions. If you enter your payment details, you've agreed to sign up for a bunch of garbage you don't need.
Notice how they spread the 2 subscriptions out, which is insidious. The first one fires after 3 days, at which point you might contact your card provider and complain about a fraudulent charge. But the fitness app charge doesn't occur for 45 days, at which point you're likely to have forgotten about this ordeal.
If you don't watch your card statements, you might even think you're paying for a legitimate subscription. You shouldn't pay for subscriptions you don't need, let alone fake ones that provide nothing.
Treat all random emails as shady
While the initial email sounds plausible, alarm bells should go off once you see the lousy survey website and are asked to enter your credit card details for an item you were initially told was free. Another common factor with dangerous sites is their bogus URLs; in this case, the survey URL was gibberish, and the "free item" page had a generic name.
Neither site's URL pretended to be from UHC, but that company isn't doing itself any favors in this realm. Rather than using subdomains of the main website for offshoot pages-- like my.uhc.com or chat.uhc.com-- it has a unique URL for every website-- like myuhcfp.com and uhcglobal.com.
This means there's no clear relation between sites, and it's much easier for fakes to blend in. With the above URLs, it's much harder to realize that a made-up one like "myuhcplan.com" is fake.
We discussed the warning signs that appear throughout this process, but the scammers hope you move quickly-- because you have to act now-- and ignore those. Remember that random emails promising free items are a huge red flag, and you should look around websites carefully to see if they're legitimate.
If you call for a scam like this, you should be able to contact your card provider to get the money back, since it was taken dishonestly. But it's better to identify these schemes and run away long before that stage.
Beware the 'Hi, How are You?' Text - It's a Scam - Here's How it Works
By Lance Whitney for ZDNET
Americans lost $3.5 billion to investment scams in early 2025. Here's how to avoid becoming the next victim.
Americans lost $3.5 billion to investment scams in early 2025. Here's how to avoid becoming the next victim.
Yuliya Taba / iStock / Getty Images Plus
Have you ever received a friendly "Hi, how are you?" greeting through a text or social media post from a person you don't know? That could be someone making an innocent mistake, or it could be a cybercriminal looking to run an investment scam on you.
Investment scams are now the fifth most common type of fraud in the US, with more than 66,700 reports during the first half of 2025, according to broker comparison site Broker Chooser. Over that period, unsuspecting victims lost a total of $3.5 billion to such fraud. Scammers earned a whopping $939 million in cryptocurrency, a rise of $261 million from the same period in 2024.
Savvy crooks who try to pull off this type of fraud know they can exploit people eager to make a quick buck. With that in mind, the median loss for these crimes reached $10,000 during the first half of this year, up from $9,300 for all of 2024. That's the highest median amount among scams recorded by Broker Chooser and 376% higher than the second-highest median loss of $2,100 from business and job scams.
People in certain states seem especially susceptible to investment scams. For the first half of 2025, Nevada topped the list with 211 reports for every 1 million residents, resulting in a total loss of more than $40.4 million. In second place was Arizona with 202 reports per million residents and total losses of more than $95.1 million. Florida took third place with 185 reports per 1 million people and losses of more than $241 million.
One tactic used by many scammers is known as "pig butchering." Here, the criminal approaches someone through a social network or dating site and tries to foster a relationship over the course of several months. When the time feels right, the scammer convinces the target to invest in phony cryptocurrency by showing them imaginary gains. As the crook further reels in their catch, the person is asked to invest more money. In the end, the criminal runs away with the digital loot, and the victim is all the poorer for it.
Social media is the most popular platform for investment scams, accounting for 13,577 reported cases and total losses of $589.1 million over the first half of the year. That's because scammers know that many people turn to social media for investment advice.
Websites and apps are the second most popular way to run these scams, resulting in 6,007 reports and $266 million lost over the first six months of 2025. With the help of AI, criminals can create convincing apps and websites that can easily trick victims into falling for the scam.
Many scammers also use text messages to approach potential victims. A seemingly innocent greeting can easily turn into a friendly, ongoing conversation until the criminal senses the right time to pull off the scam.
To protect yourself from investment scams, Broker Chooser senior broker analyst Brandon Bovey offers the following 6 tips:
1. Be wary of responding to unsolicited messages
Tread carefully if you receive a "Hi, is this John?" or "Hi, is this Jennifer?" message
That could be the start of a pig butchering campaign. Criminals often hunt for victims through texts, social media sites, and dating apps, pretending to have reached you by mistake. If you tell them that they have the wrong person, they'll still try to keep the conversation going, hoping to gain your trust or friendship. The topic eventually turns to investing, and that's when the scam takes off.
If you receive a text or other message not intended for you, your best bet is to just ignore it.
2. Look out for people trying to manipulate your trust
Once a scammer has built a relationship with you, even an online one, watch out if they convey an interest in helping you make money. No random stranger is going to suddenly come into your life looking to help you earn a profit.
One tactic is to ask you to invest a small amount of money and then increase that over time. If you then try to withdraw your funds, they'll create barriers that prevent you from accessing your account. Don't take the bait in the first place.
3. Watch out if they push you to an encrypted platform
If you're texting with a stranger and they ask you to move the conversation to WhatsApp or Telegram, don't do it. Many scammers turn to these encrypted platforms because the secure messages are more difficult for law enforcement to detect and trace.
4. Don't fall for so-called success stories
Scammers like to brag about their own alleged wealth by sharing stories about how they achieved their gains. The goal is to make you feel as if you're missing out on the profit party. They may share how they made money through cryptocurrency, forex-- foreign exchange-- or other investments.
Here, they try to reel you in through a FOMO-- fear of missing out-- ploy before they ask you to invest your own money, or you ask them how to get in on the investment.
5. Look out for urgent or high-pressure tactics
A scammer may try to pressure you into investing by creating a sense of urgency before the opportunity fades away. At that point, you may already trust them, so your spider sense won't necessarily be tingling. But that sense of urgency should be a red flag that this so-called opportunity isn't legit.
6. Avoid phony trading websites
Scammers often cook up fake trading sites and platforms that look like the real thing. Such sites may show you phony account balances, pretend profits, and make-believe customer reviews or ratings. When faced with such a website, check for its licensing and regulation approvals, and consult independent reviews.
Investment scams are now the fifth most common type of fraud in the US, with more than 66,700 reports during the first half of 2025, according to broker comparison site Broker Chooser. Over that period, unsuspecting victims lost a total of $3.5 billion to such fraud. Scammers earned a whopping $939 million in cryptocurrency, a rise of $261 million from the same period in 2024.
Savvy crooks who try to pull off this type of fraud know they can exploit people eager to make a quick buck. With that in mind, the median loss for these crimes reached $10,000 during the first half of this year, up from $9,300 for all of 2024. That's the highest median amount among scams recorded by Broker Chooser and 376% higher than the second-highest median loss of $2,100 from business and job scams.
People in certain states seem especially susceptible to investment scams. For the first half of 2025, Nevada topped the list with 211 reports for every 1 million residents, resulting in a total loss of more than $40.4 million. In second place was Arizona with 202 reports per million residents and total losses of more than $95.1 million. Florida took third place with 185 reports per 1 million people and losses of more than $241 million.
One tactic used by many scammers is known as "pig butchering." Here, the criminal approaches someone through a social network or dating site and tries to foster a relationship over the course of several months. When the time feels right, the scammer convinces the target to invest in phony cryptocurrency by showing them imaginary gains. As the crook further reels in their catch, the person is asked to invest more money. In the end, the criminal runs away with the digital loot, and the victim is all the poorer for it.
Social media is the most popular platform for investment scams, accounting for 13,577 reported cases and total losses of $589.1 million over the first half of the year. That's because scammers know that many people turn to social media for investment advice.
Websites and apps are the second most popular way to run these scams, resulting in 6,007 reports and $266 million lost over the first six months of 2025. With the help of AI, criminals can create convincing apps and websites that can easily trick victims into falling for the scam.
Many scammers also use text messages to approach potential victims. A seemingly innocent greeting can easily turn into a friendly, ongoing conversation until the criminal senses the right time to pull off the scam.
To protect yourself from investment scams, Broker Chooser senior broker analyst Brandon Bovey offers the following 6 tips:
1. Be wary of responding to unsolicited messages
Tread carefully if you receive a "Hi, is this John?" or "Hi, is this Jennifer?" message
That could be the start of a pig butchering campaign. Criminals often hunt for victims through texts, social media sites, and dating apps, pretending to have reached you by mistake. If you tell them that they have the wrong person, they'll still try to keep the conversation going, hoping to gain your trust or friendship. The topic eventually turns to investing, and that's when the scam takes off.
If you receive a text or other message not intended for you, your best bet is to just ignore it.
2. Look out for people trying to manipulate your trust
Once a scammer has built a relationship with you, even an online one, watch out if they convey an interest in helping you make money. No random stranger is going to suddenly come into your life looking to help you earn a profit.
One tactic is to ask you to invest a small amount of money and then increase that over time. If you then try to withdraw your funds, they'll create barriers that prevent you from accessing your account. Don't take the bait in the first place.
3. Watch out if they push you to an encrypted platform
If you're texting with a stranger and they ask you to move the conversation to WhatsApp or Telegram, don't do it. Many scammers turn to these encrypted platforms because the secure messages are more difficult for law enforcement to detect and trace.
4. Don't fall for so-called success stories
Scammers like to brag about their own alleged wealth by sharing stories about how they achieved their gains. The goal is to make you feel as if you're missing out on the profit party. They may share how they made money through cryptocurrency, forex-- foreign exchange-- or other investments.
Here, they try to reel you in through a FOMO-- fear of missing out-- ploy before they ask you to invest your own money, or you ask them how to get in on the investment.
5. Look out for urgent or high-pressure tactics
A scammer may try to pressure you into investing by creating a sense of urgency before the opportunity fades away. At that point, you may already trust them, so your spider sense won't necessarily be tingling. But that sense of urgency should be a red flag that this so-called opportunity isn't legit.
6. Avoid phony trading websites
Scammers often cook up fake trading sites and platforms that look like the real thing. Such sites may show you phony account balances, pretend profits, and make-believe customer reviews or ratings. When faced with such a website, check for its licensing and regulation approvals, and consult independent reviews.
ClickFix Malware Attacks Evolve with Multi-OS Support, Video Tutorials
By Bill Toulas for bleepingcomputer
bleepingcomputer
ClickFix attacks have evolved to feature videos that guide victims through the self-infection process, a timer to pressure targets into taking risky actions, and automatic detection of the operating system to provide the correct commands.
In a typical ClickFix attack, the threat actor relies on social-engineering to trick users into pasting and executing code or commands from a malicious page.
The lures used may vary from identity verification to software problem solutions. The goal is to make the target execute malware that fetches and launches a payload, usually an information stealer.
Most of the times, these attacks provided text instructions on a web page but newer versions rely on an embedded video to make the attack less suspicious.
Push Security researchers have spotted this change in recent ClickFix campaigns, where a fake Cloudflare CAPTCHA verification challenge detected the victim's OS and loaded a video tutorial on how to paste and run the malicious commands.
Through a JavaScript, the threat actor can hide the commands and copy them automatically into the user's clipboard, thus reducing the chances of human error.
On the same window, the challenge included a 1-minute countdown timer that presses the victim into taking quick action and leaving little time to verify the authenticity or safety of the verification process.
Adding to the deception is a "users verified in the last hour" counter, making the window appear as part of a legitimate Cloudflare bot check tool.
In a typical ClickFix attack, the threat actor relies on social-engineering to trick users into pasting and executing code or commands from a malicious page.
The lures used may vary from identity verification to software problem solutions. The goal is to make the target execute malware that fetches and launches a payload, usually an information stealer.
Most of the times, these attacks provided text instructions on a web page but newer versions rely on an embedded video to make the attack less suspicious.
Push Security researchers have spotted this change in recent ClickFix campaigns, where a fake Cloudflare CAPTCHA verification challenge detected the victim's OS and loaded a video tutorial on how to paste and run the malicious commands.
Through a JavaScript, the threat actor can hide the commands and copy them automatically into the user's clipboard, thus reducing the chances of human error.
On the same window, the challenge included a 1-minute countdown timer that presses the victim into taking quick action and leaving little time to verify the authenticity or safety of the verification process.
Adding to the deception is a "users verified in the last hour" counter, making the window appear as part of a legitimate Cloudflare bot check tool.
Advanced ClickFix Cloudflare CAPTCHA with video and timer. Source: Push Security
Although we have seen ClickFix attacks against all major operating systems before, including macOS and Linux, the automatic detection and adjustment of the instructions is a new development.
Push Security reports that these more advanced ClickFix webpages are promoted primarily through malvertizing on Google Search.
The threat actors either exploit known flaws on outdated WordPress plugins to compromise legitimate sites and inject their malicious JavaScript on pages, or "vibe-code" sites and use SEO poisoning tactics to rank them higher up in the search results.
Regarding the payloads delivered in these attacks, Push researchers noticed that they depended on the operating system, but included the MSHTA executable in Windows, PowerShell scripts, and various other living-off-the-land binaries.
The researchers speculate that future ClickFix attacks could run entirely in the browser, evading EDR protections.
As ClickFix evolves and takes more convincing and deceptive forms, users should remember that executing code on the terminal can never be a part of any online-based verification process, and no copied commands should ever be executed unless the user fully understands what they do.
Push Security reports that these more advanced ClickFix webpages are promoted primarily through malvertizing on Google Search.
The threat actors either exploit known flaws on outdated WordPress plugins to compromise legitimate sites and inject their malicious JavaScript on pages, or "vibe-code" sites and use SEO poisoning tactics to rank them higher up in the search results.
Regarding the payloads delivered in these attacks, Push researchers noticed that they depended on the operating system, but included the MSHTA executable in Windows, PowerShell scripts, and various other living-off-the-land binaries.
The researchers speculate that future ClickFix attacks could run entirely in the browser, evading EDR protections.
As ClickFix evolves and takes more convincing and deceptive forms, users should remember that executing code on the terminal can never be a part of any online-based verification process, and no copied commands should ever be executed unless the user fully understands what they do.
Multiple ChatGPT Security Bugs Allow Rampant Data Theft
By Jai Vijayan for Dark Reading
Attackers can use them to inject arbitrary prompts, exfiltrate personal user information, bypass safety mechanisms, and take other malicious actions.
Attackers can use them to inject arbitrary prompts, exfiltrate personal user information, bypass safety mechanisms, and take other malicious actions.
Source: PeachY Photograph via Shutterstock
In yet another "Your chatbot may be leaking" moment, researchers have uncovered multiple weaknesses in OpenAI's ChatGPT that could allow an attacker to exfiltrate private information from a user's chat history and stored memories.
The issues-- 7 of them in total-- stem largely from how ChatGPT and its helper model, SearchGPT, behave when browsing or searching the Web in response to user queries, whether looking up information, summarizing pages, or opening URLs. They allow attackers to manipulate the chatbot's behavior in different ways without the user's knowledge.
Millions of GenAI Users Exposed to Privacy Problems?
Researchers at Tenable who discovered the flaws described them as leaving millions of ChatGPT users as potentially vulnerable to attacks. "By mixing and matching all of the vulnerabilities and techniques we discovered, we were able to create proofs of concept (PoCs) for multiple complete attack vectors," Tenable researchers Moshe Bernstein and Liv Matan said in a report this week. These included exploits for indirect prompt injection, bypassing safety features, exfiltrating private user information, and creating persistence.
Tenable's discovery adds to a growing body of research exposing fundamental security weaknesses in large language models and AI chatbots. Since ChatGPT's public debut in late 2022, researchers have repeatedly demonstrated how prompt injection attacks, data leakage vulnerabilities, and jailbreaking techniques can compromise these systems in ways fundamentally different from traditional software vulnerabilities, and how they are a lot harder to mitigate. The new research is another reminder of the need for caution for enterprises that are integrating LLMs and chatbots into their workflows without much thought about the potential security implications.
In a nutshell, the 7 vulnerabilities Tenable uncovered stem from how ChatGPT ingests and processes instructions from external sources, including websites it browses, search results, blog comments, and specially crafted URLs. The security vendor showed how attackers could exploit the flaws by hiding malicious prompts in blog comments, poisoning search results to bypass ChatGPT's safety filters and taking advantage of how ChatGPT processes conversation history and stores memories.
One of the 7 flaws involves indirect prompt injection, where the researchers showed how an adversary could plant malicious instructions on a trusted Web page, like in its comments section. If later a user were to ask ChatGPT to summarize the contents of that page, the chatbot's Web browsing component would dutifully follow the malicious instructions-- which could, for instance, involve sending the user a link to a malicious site.
Another method for prompt injection -- a 1-click method-- that Tenable discovered attackers could use was through an OpenAI feature that allows users to prompt ChatGPT through URLs like https://chatgpt.com/?q={Prompt}. According to Tenable, because ChatGPT automatically submits whatever query is in that URL parameter, attackers can craft malicious links disguised as helpful ChatGPT queries. But when they're clicked on, they immediately inject a malicious prompt.
A third vulnerability the researchers uncovered involves the implicit trust that ChatGPT places in the bing.com domain. Tenable discovered attackers can index malicious sites on Bing, extract their tracking links-- which are wrapper links Bing uses to redirect users to the sites they want to visit-- and use those bing.com tracking links to bypass ChatGPT's safety filters.
A fourth involved conversation injection, which takes advantage of the fact that ChatGPT remembers entire conversations with a user when responding to input. Tenable found that when ChatGPT's Web browsing component, SearchGPT, reads and returns malicious instructions from a website-- via indirect prompt injection — ChatGPT reads those instructions in the conversation history and follows them, essentially prompt injecting itself in the process.
The most concerning issue that Tenable discovered was a zero-click vulnerability, where simply asking ChatGPT a benign question could trigger an attack if the search results include a poisoned website. "The zero-click and one-click vulnerabilities are the most dangerous for non-technical users because they require no special action," Bernstein says in comments to Dark Reading. "A user can be compromised by simply prompting ChatGPT or clicking a presumed harmless link."
Multiple AI Hacking & Exploit Avenues
Bernstein says it's very feasible for a high-resource attacker, like an advanced persistent threat (APT) group, to exploit one or all of the vulnerabilities to run a campaign targeting multiple users. "That being said, a more realistic scenario for an ordinary user could be as simple as an attacker planting comments on blog posts reviewing different products, which will inject a memory that the user prefers a specific product over others," he says. "Another example is an attacker injecting instructions to link to a phishing website, exploiting the high level of trust people have in ChatGPT, to steal their passwords or credit card information."
Tenable conducted most of its research on ChatGPT-4o but found that several of the vulnerabilities and proofs of concept, including the indirect prompt injection issue and the zero- and 1-click flaws, are valid on OpenAI's newer ChatGPT-5 as well. The company reported the issues to OpenAI in April. OpenAI acknowledged receiving Tenable's vulnerability disclosures, but it is unclear if the company has made any changes. While Tenable has had a hard time reproducing some of the vulnerabilities discovered and reported to OpenAI, others still persist, the security vendor said. OpenAI did not respond immediately to a request for comment.
"The main takeaway is how medium and high vulnerabilities can be chained together to create a critical severity situation," Bernstein says. "Individually, these vulnerabilities are concerning, but collectively they create a full attack path, spanning from injection and evasion to data exfiltration and persistence."
The issues-- 7 of them in total-- stem largely from how ChatGPT and its helper model, SearchGPT, behave when browsing or searching the Web in response to user queries, whether looking up information, summarizing pages, or opening URLs. They allow attackers to manipulate the chatbot's behavior in different ways without the user's knowledge.
Millions of GenAI Users Exposed to Privacy Problems?
Researchers at Tenable who discovered the flaws described them as leaving millions of ChatGPT users as potentially vulnerable to attacks. "By mixing and matching all of the vulnerabilities and techniques we discovered, we were able to create proofs of concept (PoCs) for multiple complete attack vectors," Tenable researchers Moshe Bernstein and Liv Matan said in a report this week. These included exploits for indirect prompt injection, bypassing safety features, exfiltrating private user information, and creating persistence.
Tenable's discovery adds to a growing body of research exposing fundamental security weaknesses in large language models and AI chatbots. Since ChatGPT's public debut in late 2022, researchers have repeatedly demonstrated how prompt injection attacks, data leakage vulnerabilities, and jailbreaking techniques can compromise these systems in ways fundamentally different from traditional software vulnerabilities, and how they are a lot harder to mitigate. The new research is another reminder of the need for caution for enterprises that are integrating LLMs and chatbots into their workflows without much thought about the potential security implications.
In a nutshell, the 7 vulnerabilities Tenable uncovered stem from how ChatGPT ingests and processes instructions from external sources, including websites it browses, search results, blog comments, and specially crafted URLs. The security vendor showed how attackers could exploit the flaws by hiding malicious prompts in blog comments, poisoning search results to bypass ChatGPT's safety filters and taking advantage of how ChatGPT processes conversation history and stores memories.
One of the 7 flaws involves indirect prompt injection, where the researchers showed how an adversary could plant malicious instructions on a trusted Web page, like in its comments section. If later a user were to ask ChatGPT to summarize the contents of that page, the chatbot's Web browsing component would dutifully follow the malicious instructions-- which could, for instance, involve sending the user a link to a malicious site.
Another method for prompt injection -- a 1-click method-- that Tenable discovered attackers could use was through an OpenAI feature that allows users to prompt ChatGPT through URLs like https://chatgpt.com/?q={Prompt}. According to Tenable, because ChatGPT automatically submits whatever query is in that URL parameter, attackers can craft malicious links disguised as helpful ChatGPT queries. But when they're clicked on, they immediately inject a malicious prompt.
A third vulnerability the researchers uncovered involves the implicit trust that ChatGPT places in the bing.com domain. Tenable discovered attackers can index malicious sites on Bing, extract their tracking links-- which are wrapper links Bing uses to redirect users to the sites they want to visit-- and use those bing.com tracking links to bypass ChatGPT's safety filters.
A fourth involved conversation injection, which takes advantage of the fact that ChatGPT remembers entire conversations with a user when responding to input. Tenable found that when ChatGPT's Web browsing component, SearchGPT, reads and returns malicious instructions from a website-- via indirect prompt injection — ChatGPT reads those instructions in the conversation history and follows them, essentially prompt injecting itself in the process.
The most concerning issue that Tenable discovered was a zero-click vulnerability, where simply asking ChatGPT a benign question could trigger an attack if the search results include a poisoned website. "The zero-click and one-click vulnerabilities are the most dangerous for non-technical users because they require no special action," Bernstein says in comments to Dark Reading. "A user can be compromised by simply prompting ChatGPT or clicking a presumed harmless link."
Multiple AI Hacking & Exploit Avenues
Bernstein says it's very feasible for a high-resource attacker, like an advanced persistent threat (APT) group, to exploit one or all of the vulnerabilities to run a campaign targeting multiple users. "That being said, a more realistic scenario for an ordinary user could be as simple as an attacker planting comments on blog posts reviewing different products, which will inject a memory that the user prefers a specific product over others," he says. "Another example is an attacker injecting instructions to link to a phishing website, exploiting the high level of trust people have in ChatGPT, to steal their passwords or credit card information."
Tenable conducted most of its research on ChatGPT-4o but found that several of the vulnerabilities and proofs of concept, including the indirect prompt injection issue and the zero- and 1-click flaws, are valid on OpenAI's newer ChatGPT-5 as well. The company reported the issues to OpenAI in April. OpenAI acknowledged receiving Tenable's vulnerability disclosures, but it is unclear if the company has made any changes. While Tenable has had a hard time reproducing some of the vulnerabilities discovered and reported to OpenAI, others still persist, the security vendor said. OpenAI did not respond immediately to a request for comment.
"The main takeaway is how medium and high vulnerabilities can be chained together to create a critical severity situation," Bernstein says. "Individually, these vulnerabilities are concerning, but collectively they create a full attack path, spanning from injection and evasion to data exfiltration and persistence."
Google Warns of New AI-Powered Malware Families Deployed
in the Wild
By Bill Toulas for bleepingcomputer
bleepingcomputer
Google's Threat Intelligence Group (GTIG) has identified a major shift this year, with adversaries leveraging artificial intelligence to deploy new malware families that integrate large language models (LLMs) during execution.
This new approach enables dynamic altering mid-execution, which reaches new levels of operational versatility that are virtually impossible to achieve with traditional malware.
Google calls the technique "just-in-time" self-modification and highlights the experimental PromptFlux malware dropper and the PromptSteal-- a.k.a. LameHug-- data miner deployed in Ukraine, as examples for dynamic script generation, code obfuscation, and creation of on-demand functions.
PromptFlux is an experimental VBScript dropper that leverages Google's LLM Gemini in its latest version to generate obfuscated VBScript variants.
It attempts persistence via Startup folder entries, and spreads laterally on removable drives and mapped network shares.
"The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software," explains Google.
The prompt is very specific and machine-parsable, according to the researchers, who see indications that the malware's creators aim to create an ever-evolving "metamorphic script."
This new approach enables dynamic altering mid-execution, which reaches new levels of operational versatility that are virtually impossible to achieve with traditional malware.
Google calls the technique "just-in-time" self-modification and highlights the experimental PromptFlux malware dropper and the PromptSteal-- a.k.a. LameHug-- data miner deployed in Ukraine, as examples for dynamic script generation, code obfuscation, and creation of on-demand functions.
PromptFlux is an experimental VBScript dropper that leverages Google's LLM Gemini in its latest version to generate obfuscated VBScript variants.
It attempts persistence via Startup folder entries, and spreads laterally on removable drives and mapped network shares.
"The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software," explains Google.
The prompt is very specific and machine-parsable, according to the researchers, who see indications that the malware's creators aim to create an ever-evolving "metamorphic script."
Google could not attribute PromptFlux to a specific threat actor, but noted that the tactics, techniques, and procedures indicate that it is being used by a financially motivated group.
Although PromptFlux was in an early development stage, not capable to inflict any real damage to targets, Google took action to disable its access to the Gemini API and delete all assets associated with it.
Another AI-powered malware Google discovered this year, which is used in operations, is FruitShell, a PowerShell reverse shell that establishes remote command-and-control (C2) access and executes arbitrary commands on compromised hosts.
The malware is publicly available, and the researchers say that it includes hard-coded prompts intended to bypass LLM-powered security analysis.
Google also highlights QuietVault, a JavaScript credential stealer that targets GitHub/NPM tokens, exfiltrating captured credentials on dynamically created public GitHub repositories.
QuietVault leverages on-host AI CLI tools and prompts to search for additional secrets and exfiltrate them too.
On the same list of AI-enabled malware is also PromptLock, an experimental ransomware that relies on Lua scripts to steal and encrypt data on Windows, macOS, and Linux machines.
Cases of Gemini abuse
Apart from AI-powered malware, Google's report also documents multiple cases where threat actors abused Gemini across the entire attack lifecycle.
A China-nexus actor posed as a capture-the-flag (CTF) participant to bypass Gemini's safety filters and obtain exploit details, using the model to find vulnerabilities, craft phishing lures, and build exfiltration tools.
Iranian hackers MuddyCoast (UNC3313) pretended to be a student to use Gemini for malware development and debugging, accidentally exposing C2 domains and keys.
Iranian group APT42 abused Gemini for phishing and data analysis, creating lures, translating content, and developing a "Data Processing Agent" that converted natural language into SQL for personal-data mining.
China's APT41 leveraged Gemini for code assistance, enhancing its OSSTUN C2 framework and utilizing obfuscation libraries to increase malware sophistication.
Finally, the North Korean threat group Masan (UNC1069) utilized Gemini for crypto theft, multilingual phishing, and creating deepfake lures, while Pukchong (UNC4899) employed it for developing code targeting edge devices and browsers.
In all cases Google identified, it disabled the associated accounts and reinforced model safeguards based on the observed tactics, to make their bypassing for abuse harder.
AI-powered cybercrime tools on underground forums
Google researchers discovered that on underground marketplaces, both English and Russian-speaking, the interest in malicious AI-based tools and services is growing, as they lower the technical bar for deploying more complex attacks.
"Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings," Google says in a report published today.
The offers range from utilities that generate deepfakes and images to malware development, phishing, research and reconnaissance, and vulnerability exploitation.
As the cybercrime market for AI-powered tools is getting more mature, the trend indicates a replacement of the conventional tools used in malicious operations.
The Google Threat Intelligence Group (GTIG) has identified multiple actors advertising multifunctional tools that can cover the stages of an attack.
The push to AI-based services seems to be aggressive, as many developers promote the new features in the free version of their offers, which often include API and Discord access for higher prices.
Google underlines that the approach to AI from any developer "must be both bold and responsible" and AI systems should be designed with "strong safety guardrails" to prevent abuse, discourage, and disrupt any misuse and adversary operations.
The company says that it investigates any signs of abuse of its services and products, which include activities linked to government-backed threat actors. Apart from collaboration with law enforcement when appropriate, the company is also using the experience from fighting adversaries "to improve safety and security for our AI models."
© vocalbits.com