Employees and Email: How Considering User Experience Strengthens Email Security

10 Apr 2023

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens on security can strengthen defenses, improve productivity, and prevent data loss.

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but granting them too much power and relying on unenforceable, trust-based policies to compensate.

However, just because email security should not rely on employees does not mean they should be excluded entirely. Employees interact with email daily, and their experiences and behaviors can provide valuable security insights and even influence productivity.

AI technology can support employee engagement in a non-intrusive, nuanced way that not only maintains email security, but also enhances it.

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are heavily involved, they become a point of unreliability. Employees cannot all be security experts on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that frequently leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time, cutting into their own productivity as well as that of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, leading to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions invite employees to participate in security, they do so at the cost of security.

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

So both options of historically conventional email security, including or excluding employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors interactions specifically to each employee, adding to security instead of detracting from it.

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 

The employee-AI feedback loop educates employees so that their feedback can serve as additional enrichment data. It determines the appropriate level at which to inform and teach users, while never relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but ignoring the employees who report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve, and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users: they learn from the AI’s explanations how to identify malicious components, and in turn report fewer emails with greater accuracy.

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.
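
As a purely illustrative sketch of this idea (and not Darktrace’s actual model), a per-user graymail filter can be expressed as a simple score over how that individual employee has historically treated mail from a given sender. The statistics, weights, and threshold below are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class SenderStats:
    """How one employee has historically treated mail from one sender."""
    received: int = 0
    opened: int = 0
    deleted_unread: int = 0

def graymail_score(stats: SenderStats) -> float:
    """0-1 score; higher means mail from this sender is likely clutter for this user."""
    if stats.received == 0:
        return 0.0  # no history for this user yet, so leave the message alone
    # Deleting without reading pushes the score up; opening pushes it down.
    return max(0.0, min(1.0, (stats.deleted_unread - stats.opened) / stats.received + 0.5))

def should_file_as_graymail(stats: SenderStats, threshold: float = 0.8) -> bool:
    """Move mail out of the inbox only when this user's own history is clear-cut."""
    return graymail_score(stats) >= threshold

# Example: a newsletter this particular employee has ignored 18 times out of 20.
print(should_file_as_graymail(SenderStats(received=20, opened=1, deleted_unread=18)))  # True
```

The point is that the decision is made per user: the same newsletter can be clutter for one employee and essential reading for another.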

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision-making cannot eliminate natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. The effects range from embarrassing to critical, with major implications for compliance, customer trust, confidential intellectual property, and data loss.

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
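
To make this concrete, here is a deliberately simplified sketch of one such signal: flagging a recipient the sender has never emailed before whose address closely resembles an existing contact, the classic auto-complete mistake. This is illustrative only, not Darktrace’s implementation, and the addresses and threshold are made up:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity between two addresses, from 0 to 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_possible_misdirection(recipient: str, known_contacts: set[str],
                               near_match_threshold: float = 0.8) -> bool:
    """Flag a recipient this sender has never emailed before whose address is a
    near-miss of an existing contact (a typical auto-complete mistake)."""
    if recipient in known_contacts:
        return False  # an established correspondent, nothing unusual
    closest = max((similarity(recipient, c) for c in known_contacts), default=0.0)
    return closest >= near_match_threshold

known = {"alice.wong@partnerco.com", "finance@partnerco.com"}
# Auto-complete picked a look-alike address at a different domain.
print(flag_possible_misdirection("alice.wong@partner-co.net", known))  # True
print(flag_possible_misdirection("alice.wong@partnerco.com", known))   # False
```

A real system would weigh many more signals, such as attachment names and the recipients’ relationships to one another, but the principle of comparing against the sender’s own history is the same.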

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens on security can strengthen defenses, improve productivity, and prevent data loss.

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.

ABOUT THE AUTHOR
Dan Fein
VP, Product

Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.

Carlos Gray
Product Manager

Carlos Gonzalez Gray is a Product Marketing Manager at Darktrace. Based in the Madrid office, Carlos engages with the global product team to ensure each product supports the company’s overall strategy and goals throughout its entire lifecycle. Prior to joining the product team, Carlos worked as a Cyber Technology Specialist, focusing on the OT sector and the protection of critical infrastructure. His background as a consultant to IBEX 35 companies in Spain also made him well versed in compliance, auditing, and data privacy. Carlos holds an Honors BA in Political Science and a Masters in Cybersecurity from IE University.


Thought Leadership

The Implications of NIS2 on Cyber Security and AI

05 Dec 2023

The NIS2 Directive requires member states to adopt laws that will improve the cyber resilience of organizations within the EU. It impacts organizations that are “operators of essential services”. Under NIS1, EU member states could choose what this meant. In an effort to ensure more consistent application, NIS2 sets out its own definition. It eliminates the distinction between operators of essential services and digital service providers from NIS1, instead defining a new list of sectors:

  • Energy (electricity, district heating and cooling, gas, oil, hydrogen)
  • Transport (air, rail, water, road)
  • Banking (credit institutions)
  • Financial market infrastructures
  • Health (healthcare providers and pharma companies)
  • Drinking water (suppliers and distributors)
  • Digital infrastructure (DNS, TLD registries, telcos, data center providers, etc.)
  • ICT service providers (B2B): MSSPs and managed service providers
  • Public administration (central and regional government institutions, as defined per member state)
  • Space
  • Postal and courier services
  • Waste management
  • Chemicals
  • Food
  • Manufacturing of medical devices
  • Computers and electronics
  • Machinery and equipment
  • Motor vehicles, trailers and semi-trailers and other transport equipment
  • Digital providers (online market places, online search engines, and social networking service platforms) and research organizations.

With these updates, it becomes hard to find industry segments not included within the scope. NIS2 represents legally binding cyber security requirements for a significant region and economy. The standout features that have garnered the most attention include the tight timelines associated with notification requirements. Under NIS2, in-scope entities must submit an initial report or “early warning” to the competent national authority or computer security incident response team (CSIRT) within 24 hours of becoming aware of a significant incident. This is a new development from the first iteration of the Directive, which used the vaguer requirement to notify authorities “without undue delay”.

Another aspect gaining attention is oversight and regulation – regulators are going to be empowered with significant investigation and supervision powers including on-site inspections.

The stakes are now higher, with the prospect of fines that are capped at €10 million or 2% of an offending organization’s annual worldwide turnover – whichever is greater. Added to that, the NIS2 Directive includes an explicit obligation to hold members of management bodies personally responsible for breaches of their duties to ensure compliance with NIS2 obligations – and members can be held personally liable.  

The risk management measures introduced in the Directive are not altogether surprising – they reflect common best practices. Many organizations (especially those that are newly in scope for NIS2) may have to expand their cyber security capabilities, but there’s nothing controversial or alarming in the required measures.  For organizations in this situation, there are various tools, best practices, and frameworks they can leverage.  Darktrace in particular provides capabilities in the areas of visibility, incident handling, and reporting that can help.

NIS2 and Cyber AI

The use of AI is not an outright requirement within NIS2 – which may be down to a lack of knowledge and expertise in the area, and/or the immaturity of the sector. The clue to this might be in the timing: the provisional agreement on the NIS2 text was reached in May 2022 – six months before ChatGPT and other generative AI tools propelled broader AI technology to the forefront of public consciousness. If the language were drafted today, it's not far-fetched to imagine AI being mentioned much more prominently and perhaps even becoming a requirement.

NIS2 does, however, very clearly recommend that “member states should encourage the use of any innovative technology, including artificial intelligence”[1].  Another section speaks directly to essential and important entities, saying that they should “evaluate their own cyber security capabilities, and where appropriate, pursue the integration of cyber security enhancing technologies, such as artificial intelligence or machine learning systems…”[2]

One of the recitals states that “member states should adopt policies on the promotion of active cyber protection”, where active cyber protection is defined as “the prevention, detection, monitoring, analysis and mitigation of network security breaches in an active manner.”[3]

From a Darktrace perspective, our Self-Learning Cyber AI is precisely what enables our technology to deliver active cyber protection – protecting organizations and uplifting security teams at every stage of the incident lifecycle, from proactively hardening defenses before an attack is launched, to real-time threat detection and response, through to recovering quickly back to a state of good health.

The visibility provided by Darktrace is vital to understanding the effectiveness of policies and ensuring policy compliance. NIS2 also covers incident handling and business continuity, which Darktrace HEAL addresses through AI-enabled incident response, readiness reports, simulations, and secure collaborations.

Reporting is integral to NIS2 and organizations can leverage Darktrace’s incident reporting features to present the necessary technical details of an incident and provide a jump start to compiling a full report with business context and impact.  

What’s Next for NIS2

We don’t yet know the details of how EU member states will transpose NIS2 into national law – they have until 17th October 2024 to work this out. The Commission also commits to reviewing the functioning of the Directive every three years. Given how quickly our understanding of both the dangers of AI and its power (perhaps even its necessity in the realm of cyber security) is changing, many member states may leverage the recitals’ references to AI to make a strong push for, if not a requirement that, essential and important organizations within their jurisdiction adopt AI.

Organizations are starting to prepare now to meet the forthcoming legislation related to NIS2. To see how Darktrace can help, talk to your representative or contact us.


[1] (51), page 11
[2] (89), page 17
[3] (57), page 12

ABOUT THE AUTHOR
John Allen
VP, Cyber Risk & Compliance

Inside the SOC

PurpleFox in a Henhouse: How Darktrace Hunted Down a Persistent and Dynamic Rootkit

27 Nov 2023

Versatile Malware: PurpleFox

As organizations and security teams across the world move to bolster their digital defenses against cyber threats, threat actors, in turn, are forced to adopt more sophisticated tactics, techniques and procedures (TTPs) to circumvent them. Rather than being static and predictable, malware strains are becoming increasingly versatile and therefore elusive to traditional security tools.

One such example is PurpleFox. First observed in 2018, PurpleFox is a combined fileless rootkit and backdoor trojan known to target Windows machines. It has consistently adapted its functionality over time, utilizing different infection vectors including known vulnerabilities (CVEs), fake Telegram installers, and phishing. It is also leveraged by other campaigns to deliver ransomware tools, spyware, and cryptocurrency mining malware, and is widely known for using Microsoft Software Installer (MSI) files masquerading as other file types.

The Evolution of PurpleFox

The Original Strain

First reported in March 2018, PurpleFox was identified as a trojan that drops itself onto Windows machines using an MSI installation package that alters registry values to replace a legitimate Windows system file [1]. The initial stage of infection relied on the third-party toolkit RIG Exploit Kit (EK). RIG EK is hosted on compromised or malicious websites and is dropped onto the unsuspecting system when a user browses that site. The built-in Windows installer (MSIEXEC) is leveraged to run the installation package retrieved from the website. This, in turn, drops two files into the Windows directory – namely a malicious dynamic-link library (DLL) that acts as a loader, and the payload of the malware. After infection, PurpleFox is often used to retrieve and deploy other types of malware.

Subsequent Variants

Since its initial discovery, PurpleFox has also been observed leveraging PowerShell to enable fileless infection and additional privilege escalation vulnerabilities to increase the likelihood of successful infection [2]. The PowerShell script had also been reported to be masquerading as a .jpg image file. PowerSploit modules are utilized to gain elevated privileges if the current user lacks administrator privileges. Once obtained, the script proceeds to retrieve and execute a malicious MSI package, also masquerading as an image file. As of 2020, PurpleFox no longer relied on the RIG EK for its delivery phase, instead spreading via the exploitation of the SMB protocol [3]. The malware would leverage the compromised systems as hosts for the PurpleFox payloads to facilitate its spread to other systems. This mode of infection can occur without any user action, akin to a worm.

The current iteration of PurpleFox reportedly uses brute-forcing of vulnerable services, such as SMB, to facilitate its spread over the network and escalate privileges. By scanning internet-facing Windows computers, PurpleFox exploits weak passwords for Windows user accounts through SMB, including administrative credentials to facilitate further privilege escalation.

Darktrace detection of PurpleFox

In July 2023, Darktrace observed an example of a PurpleFox infection on the network of a customer in the healthcare sector. This infection featured a slightly different method of downloading the PurpleFox payload. An affected device was observed initiating a series of service control requests using DCE-RPC, instructing the device to make connections to a host of servers to download a malicious .PNG file, later confirmed to be the PurpleFox rootkit. The device was then observed carrying out worm-like activity targeting other external internet-facing servers, as well as scanning related subnets.

Darktrace DETECT™ was able to successfully identify and track this compromise across the cyber kill chain and ensure the customer was able to take swift remedial action to prevent the attack from escalating further.

While the customer in question did have Darktrace RESPOND™, it was configured in human confirmation mode, meaning any mitigative actions had to be manually applied by the customer’s security team. If RESPOND had been enabled in autonomous response mode at the time of the attack, it would have been able to take swift action against the compromise to contain it at the earliest instance.

Attack Overview

Figure 1: Timeline of PurpleFox malware kill chain.

Initial Scanning over SMB

On July 14, 2023, Darktrace detected the affected device scanning other internal devices on the customer’s network via port 445. The numerous connections were consistent with the aforementioned worm-like activity that has been reported from PurpleFox behavior as it appears to be targeting SMB services looking for open or vulnerable channels to exploit.

This initial scanning activity was detected by Darktrace DETECT, specifically through the model breach ‘Device / Suspicious SMB Scanning Activity’. Darktrace’s Cyber AI Analyst™ then launched an autonomous investigation into these internal connections and tied them into one larger-scale network reconnaissance incident, rather than a series of isolated connections.
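
For illustration, the kind of signal underlying such a detection can be sketched as a simple heuristic over connection logs: one device contacting an unusually large number of distinct hosts on port 445 within a short window. This is a simplified stand-in rather than the logic of the Darktrace model itself, and the window and threshold values are arbitrary:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def smb_scan_suspects(connections, window_minutes=5, distinct_host_threshold=50):
    """connections: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples.
    Returns source IPs that contacted an unusually large number of distinct
    hosts on port 445 within a single fixed, non-overlapping time window."""
    buckets = defaultdict(set)  # (src_ip, window index) -> distinct destination IPs
    for ts, src, dst, port in connections:
        if port != 445:
            continue  # only SMB connections are of interest here
        window_index = int(ts.timestamp() // (window_minutes * 60))
        buckets[(src, window_index)].add(dst)
    return {src for (src, _), hosts in buckets.items()
            if len(hosts) >= distinct_host_threshold}

# Example: one device sweeping an internal /24 over SMB within a few minutes.
base = datetime(2023, 7, 14, 10, 0, 0)
conns = [(base + timedelta(seconds=i), "10.0.0.5", f"10.0.1.{i}", 445) for i in range(1, 200)]
print(smb_scan_suspects(conns))  # {'10.0.0.5'}
```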

Figure 2: Cyber AI Analyst technical details summarizing the initial scanning activity seen with the internal network scan over port 445.

As Darktrace RESPOND was configured in human confirmation mode, it was unable to autonomously block these internal connections. However, it did suggest blocking connections on port 445, which could have been manually applied by the customer’s security team.

Figure 3: The affected device’s Model Breach Event Log showing the initial scanning activity observed by Darktrace DETECT and the corresponding suggested RESPOND action.

Privilege Escalation

The device successfully logged in via NTLM with the credential, ‘administrator’. Darktrace recognized that the endpoint was external to the customer’s environment, indicating that the affected device was now being used to propagate the malware to other networks. Considering the lack of observed brute-force activity up to this point, the credentials for ‘administrator’ had likely been compromised prior to Darktrace’s deployment on the network, or outside of Darktrace’s purview via a phishing attack.

Exploitation

Darktrace then detected a series of service control requests over DCE-RPC using the credential ‘admin’ to make SVCCTL Create Service W Requests. A script was then observed instructing the controlled device to launch mshta.exe, a Windows-native binary designed to execute Microsoft HTML Application (HTA) files. This enables the execution of arbitrary script code, in this case VBScript.

Figure 4: PurpleFox remote service control activity captured by a Darktrace DETECT model breach.
Figure 5: The infected device’s Model Breach Event Log showing the anomalous service control activity being picked up by DETECT.

There are a few MSIEXEC flags to note:

  • /i : installs or configures a product
  • /Q : sets the user interface level. In this case, it is set to ‘No UI’, which is used for “quiet” execution, so no user interaction is required

Evidently, this was an attempt to evade detection by endpoint users as the package was surreptitiously installed onto the system. This corresponds to the download of the rootkit that has previously been associated with PurpleFox. At this stage, the infected device continued to be leveraged as an attack device, scanning SMB services on external endpoints. The device also appeared to attempt brute-forcing over NTLM using the same ‘administrator’ credential against these endpoints. This activity was identified by Darktrace DETECT; had RESPOND been enabled in autonomous response mode, it would have instantly blocked similar outbound connections, preventing the spread of PurpleFox.

Figure 6: The infected device’s Model Breach Event Log showing the outbound activity corresponding to PurpleFox’s wormlike spread. This was caught by DETECT and the corresponding suggested RESPOND action.

Installation

On August 9, Darktrace observed the device making initial attempts to download a malicious .PNG file. This was a notable change in tactics from previously reported PurpleFox campaigns, which had been observed utilizing .MOE files for their payloads [3]. The .MOE payloads are binary files that are more easily detected and blocked by traditional signature-based security measures as they are not associated with known software. The ubiquity of .PNG files, especially on the web, makes identifying and blacklisting the files significantly more difficult.

The first connection was made with the URI ‘/test.png’.  It was noted that the HTTP method here was HEAD, a method similar to GET requests except the server must not return a message-body in the response.

The metainformation contained in the HTTP headers in response to a HEAD request should be identical to the information sent in response to a GET request. This method is often used to test hypertext links for validity and recent modification. Here, it is likely a way of checking whether the server hosting the payload is still active, while avoiding connections that could be detected by antivirus solutions and keeping the activity under the radar.
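
The mechanism itself is easy to demonstrate. The sketch below, which uses Python's requests library and a placeholder URL purely for illustration, shows how a HEAD request confirms that a resource exists (returning only headers, never a body) before a full GET transfers the file:

```python
import requests

def resource_is_available(url: str, timeout: float = 5.0) -> bool:
    """Send an HTTP HEAD request: the server returns headers only, no message body,
    so the resource's presence can be confirmed without downloading it."""
    try:
        response = requests.head(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return False
    return response.status_code == 200

# Placeholder URL for illustration only.
url = "https://example.com/test.png"
if resource_is_available(url):
    # Only once the check succeeds does a full GET actually transfer the content.
    payload = requests.get(url, timeout=5.0).content
```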

Figure 7: Packet Capture from an affected customer device showing the initial HTTP requests to the payload server.
Figure 8: Packet Capture showing the HTTP requests to download the payloads.

The server responds with a status code of 200 before the download begins. The HEAD request could be part of the attacker’s verification that the server is still running, and that the payload is available for download. The ‘/test.png’ HEAD request was sent twice, likely for double confirmation to begin the file transfer.

Figure 9: PCAP from the affected customer device showing the Windows Installer user-agent associated with the .PNG file download.

Subsequent analysis using a Packet Capture (PCAP) tool revealed that this connection used the Windows Installer user agent that has previously been associated with PurpleFox. The device then began to download a payload that was masquerading as a Microsoft Word document. The device was thus able to download the payload twice, from two separate endpoints.

By masquerading as a Microsoft Word file, the threat actor was likely attempting to evade the detection of the endpoint user and traditional security tools by passing off as an innocuous text document. Likewise, using a Windows Installer user agent would enable threat actors to bypass antivirus measures and disguise the malicious installation as legitimate download activity.  

Darktrace DETECT identified that these were masqueraded file downloads by correctly identifying the mismatch between the file extension and the true file type. Subsequently, AI Analyst was able to correctly identify the file type and deduced that this download was indicative of the device having been compromised.
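
One conventional way to illustrate this kind of check, though not necessarily how Darktrace implements it, is to compare a file's leading "magic" bytes against the type its extension claims:

```python
# Minimal sketch: flag files whose on-disk content contradicts the type implied
# by their extension, using well-known "magic number" prefixes.
MAGIC_BYTES = {
    ".png":  b"\x89PNG\r\n\x1a\n",                  # PNG image
    ".docx": b"PK\x03\x04",                         # OOXML documents are ZIP containers
    ".msi":  b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1",   # OLE compound file (MSI installers)
    ".pdf":  b"%PDF",
}

def extension_mismatch(filename: str, content: bytes) -> bool:
    """True when the file's leading bytes do not match what its extension claims."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    expected = MAGIC_BYTES.get(ext)
    if expected is None:
        return False  # unknown extension: nothing to compare against
    return not content.startswith(expected)

# A payload named like a Word document but carrying an installer's header.
disguised = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1" + b"\x00" * 64
print(extension_mismatch("quarterly_report.docx", disguised))                # True
print(extension_mismatch("diagram.png", b"\x89PNG\r\n\x1a\n" + b"\x00" * 8)) # False
```

The file's content, not its name, determines its true type; the same principle applies whatever extension the attacker chooses.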

In this case, the device attempted to download the payload from several different endpoints, many of which had low antivirus detection rates or open-source intelligence (OSINT) flags, highlighting the need to move beyond traditional signature-based detections.

Figure 10: Cyber AI Analyst technical details summarizing the downloads of the PurpleFox payload.
Figure 11 (a): The Model Breach generated by the masqueraded file transfer associated with the PurpleFox payload.
Figure 11 (b): The Model Breach generated by the masqueraded file transfer associated with the PurpleFox payload.

If Darktrace RESPOND had been enabled in autonomous response mode at the time of the attack, it would have acted by blocking connections to these suspicious endpoints, thus preventing the download of malicious files. However, as RESPOND was in human confirmation mode, its actions required manual application by the customer’s security team; this did not happen, and the device was able to download the payloads.

Conclusion

The PurpleFox malware is a particularly dynamic strain known to continually evolve over time, utilizing a blend of old and new approaches that muddies expectations of its behavior. By frequently employing new methods of attack, malicious actors are able to bypass traditional security tools that rely on signature-based detections and static lists of indicators of compromise (IoCs), necessitating a more sophisticated approach to threat detection.

Darktrace DETECT’s Self-Learning AI enables it to confront adaptable and elusive threats like PurpleFox. By learning and understanding customer networks, it is able to discern normal network behavior and patterns of life, distinguishing expected activity from potential deviations. This anomaly-based approach to threat detection allows Darktrace to detect cyber threats as soon as they emerge.  

By combining DETECT with the autonomous response capabilities of RESPOND, Darktrace customers are able to effectively safeguard their digital environments and ensure that emerging threats can be identified and shut down at the earliest stage of the kill chain, regardless of the tactics employed by would-be attackers.

Credit to Piramol Krishnan, Cyber Analyst, and Qing Hong Kwa, Senior Cyber Analyst & Deputy Team Lead, Singapore

Appendices

Darktrace Model Detections

  • Device / Increased External Connectivity
  • Device / Large Number of Connections to New Endpoints
  • Device / SMB Session Brute Force (Admin)
  • Compliance / External Windows Communications
  • Anomalous Connection / New or Uncommon Service Control
  • Compromise / Unusual SVCCTL Activity
  • Compromise / Rare Domain Pointing to Internal IP
  • Anomalous File / Masqueraded File Transfer

RESPOND Models

  • Antigena / Network / Significant Anomaly / Antigena Breaches Over Time Block
  • Antigena / Network / External Threat / Antigena Suspicious Activity Block
  • Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block
  • Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Client Block
  • Antigena / Network / External Threat / Antigena Suspicious File Block
  • Antigena / Network / External Threat / Antigena File then New Outbound Block

List of IoCs

IoC - Type - Description

/C558B828.Png - URI - URI for Purple Fox Rootkit [4]

5b1de649f2bc4eb08f1d83f7ea052de5b8fe141f - File Hash - SHA1 hash of C558B828.Png file (Malware payload)

190.4.210[.]242 - IP - Purple Fox C2 Servers

218.4.170[.]236 - IP - IP for download of .PNG file (Malware payload)

180.169.1[.]220 - IP - IP for download of .PNG file (Malware payload)

103.94.108[.]114:10837 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

221.199.171[.]174:16543 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

61.222.155[.]49:14098 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

178.128.103[.]246:17880 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

222.134.99[.]132:12539 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

164.90.152[.]252:18075 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

198.199.80[.]121:11490 - IP - IP from Service Control MSIEXEC script to download PNG file (Malware payload)

MITRE ATT&CK Mapping

Tactic - Technique

Reconnaissance - Active Scanning T1595, Active Scanning: Scanning IP Blocks T1595.001, Active Scanning: Vulnerability Scanning T1595.002

Resource Development - Obtain Capabilities: Malware T1588.001

Initial Access, Defense Evasion, Persistence, Privilege Escalation - Valid Accounts: Default Accounts T1078.001

Initial Access - Drive-by Compromise T1189

Defense Evasion - Masquerading T1036

Credential Access - Brute Force T1110

Discovery - Network Service Discovery T1046

Command and Control - Proxy: External Proxy T1090.002

References

  1. https://blog.360totalsecurity.com/en/purple-fox-trojan-burst-out-globally-and-infected-more-than-30000-users/
  2. https://www.trendmicro.com/en_us/research/19/i/purple-fox-fileless-malware-with-rookit-component-delivered-by-rig-exploit-kit-now-abuses-powershell.html
  3. https://www.akamai.com/blog/security/purple-fox-rootkit-now-propagates-as-a-worm
  4. https://www.foregenix.com/blog/an-overview-on-purple-fox
  5. https://www.trendmicro.com/en_sg/research/21/j/purplefox-adds-new-backdoor-that-uses-websockets.html
ABOUT THE AUTHOR
Piramol Krishnan
Cyber Security Analyst
