
When Updates Backfire: RCE in Windows Update Health Tools

Nov 20, 2025
By: Martin Warmer

What if a Microsoft tool meant to protect Windows machines actually opened the door to remote code execution (RCE) by reusing abandoned Azure blob storage?

That’s exactly what we discovered in Microsoft’s Update Health Tools (KB4023057), a component designed to speed up security updates via Intune. While its aim of helping with fast roll‑outs and emergency patches is a good one, a flaw in its configuration channel left many devices exposed: attackers could trigger arbitrary code execution remotely.

In this post, we’ll walk you through how we found this issue, how Microsoft has responded, and what you can do if your devices are still vulnerable. We’ll cover the original version 1.0, the attack vector we leveraged, evidence from real‑world telemetry, and how newer versions have tried to plug the gap.

Azure blob storage

After reading watchTowr’s deep dive on abandoned AWS S3 buckets earlier this year, we started wondering: how many Azure blob storage accounts could be silently dangling out there, just waiting to be claimed? So we started looking, monitoring DNS traffic on our own Windows machines. And we found more than we expected.

Among the pile of findings, which will be covered in later blogs, one stood out: payloadprod0.blob.core.windows.net. This finding kicked off what would become a deep dive into remote code execution through a signed Microsoft tool.

Payloadprod0

Once we registered the storage account (payloadprod0), we began monitoring for inbound requests. Within hours, we were seeing hundreds of HTTP GET requests hitting the blobs, coming from all over the world. These requests targeted structured URIs like:

GET /<hash>/enrolled.json  
GET /<hash>/Devices/<hash>.json

All with a consistent user agent: UHSMAILBOX. What could that be?

Digging deeper, we queried EDR telemetry and found that uhssvc.exe, a Microsoft-signed binary known as the Update Health Service, was actively resolving these domains across multiple customer environments. This service lived in: C:\Program Files\Microsoft Update Health Tools\uhssvc.exe.

Later, we found out that the Azure storage accounts used followed a predictable naming pattern: payloadprod0.blob.core.windows.net through payloadprod15.blob.core.windows.net. When we checked, 10 of the remaining 15 accounts were still unregistered. So we claimed them and watched thousands of similar requests flow in from all over the world.
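Checking which names in such a predictable sequence are still claimable boils down to a DNS lookup per candidate: an Azure storage account that doesn’t exist has no `*.blob.core.windows.net` record. A minimal sketch of that check (a failed lookup is a hint, not proof, of claimability):

```python
import socket

def candidate_accounts(prefix="payloadprod", count=16):
    """Generate the predictable hostnames payloadprod0..payloadprod15."""
    return [f"{prefix}{i}.blob.core.windows.net" for i in range(count)]

def is_registered(hostname):
    """An unregistered Azure storage account name does not resolve in DNS."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for host in candidate_accounts():
        status = "registered" if is_registered(host) else "possibly claimable"
        print(f"{host}: {status}")
```
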

The obvious next step? Figure out what these endpoints were trying to fetch, and whether we could influence what they received.

Windows Update Health Tools 1.0 (KB4023057)

Screenshot of Windows Task Manager showing various processes running, including 'uhssvc.exe' from Microsoft Update Health Tools.

To understand uhssvc.exe, we first needed to trace how Update Health Tools actually works, starting with the original version 1.0. After some reverse engineering, we developed a hypothesis: the Microsoft team writing this tool probably needed a simple service to check which updates to install. They apparently decided to use Azure blob storage, with a container per tenant and a few JSON files to specify the configuration.

So what does uhssvc.exe do, exactly?

  • A new installation will start by checking if it’s Entra joined/registered. If not, it will simply stop as this is an enterprise tool.
  • The service checks whether this Entra tenant is enrolled into update management by downloading a file from /<tenant_hash>/enrolled.json and checking whether Enrolled is set to true in this JSON.
  • If the tenant is enrolled it will continue the process of enrolling itself. This means downloading another JSON from /<tenant_hash>/Devices/<device_id_hash>.json with only a single field containing the policy ID assigned to this computer.
  • After that, the Update Health Tools will start polling /<tenant_hash>/Policies/<policy_id>/<cpu>_<osbuild>.json.
  • It will then look at EnterpriseActionType to determine what to do.
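The client-side flow above can be sketched in Python. The endpoint paths and the Enrolled / EnterpriseActionType fields come straight from the traffic and reversing described here; the `fetch` callable and the `PolicyId` field name are our own illustrative stand-ins:

```python
import json

def check_in(fetch, tenant_hash, device_hash, cpu, os_build):
    """Sketch of the Update Health Tools 1.0 polling flow.

    `fetch(path)` is a stand-in for an HTTPS GET against the
    payloadprodN blob account; it returns the body or None.
    """
    # Step 1: is this Entra tenant enrolled into update management?
    enrolled = fetch(f"/{tenant_hash}/enrolled.json")
    if not enrolled or not json.loads(enrolled).get("Enrolled"):
        return None

    # Step 2: fetch the per-device JSON holding the assigned policy ID.
    device = fetch(f"/{tenant_hash}/Devices/{device_hash}.json")
    if not device:
        return None
    policy_id = json.loads(device)["PolicyId"]  # field name assumed

    # Step 3: poll the policy for this CPU architecture and OS build.
    policy = fetch(f"/{tenant_hash}/Policies/{policy_id}/{cpu}_{os_build}.json")
    if not policy:
        return None

    # Step 4: hand back the action to dispatch on (e.g. "ExecuteTool").
    return json.loads(policy).get("EnterpriseActionType")
```

With a dictionary standing in for the blob account, `check_in(responses.get, ...)` walks all three files and returns the configured action, which is exactly the knob an attacker holding the storage account gets to turn.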

Opening up the binary in IDA gives us an easy list of actions we can specify:

Our interest was immediately piqued by the “ExecuteTool” option. That sounds like an easy way to get code execution.

Scrolling through the WSD::ToolExecutor::Execute function, we see our first hurdle:

It looks like we can only execute a Microsoft-signed binary. Diving a bit deeper, we see that we actually need an executable with an embedded signature from Microsoft. These are rarer, since most default Windows executables are signed using catalog files. With catalog files, Microsoft can sign a list of executables instead of signing each one individually, which optimises signature checking and saves disk space.

Luckily there’s an easy target on every Windows installation: explorer.exe. But then we hit a new roadblock.

From 1.0 to 1.1

We were excited to have found remote code execution in v1.0 and wanted to test it. Unfortunately for us, Microsoft no longer offers version 1.0 from February 2021 for download; instead, it gives you v1.1 from December 2022. Still determined to get RCE in the latest version, we opened it up in IDA and found a second implementation for retrieving the config. 😃

No longer content with simple blob storage, the developers apparently implemented a real web service in v1.1 at devicelistenerprod.microsoft.com. Furthermore, they added new storage accounts specifically for EU customers and a second copy of the web service at payloadprodeudbX and devicelistenerprod.eudb.microsoft.com. We weren’t able to register any of these storage accounts, nor these domain names.

So apparently we won’t have RCE inside the EU, which means all of our European customers at Eye Security are safe! 😉

After some more reversing of v1.1, we found we could re-enable the old blob-storage-based communication by setting the configuration parameter UHS.ENABLEBLOBDSSCHECK to 1 in the registry. Changing UHS.STORAGEACCOUNTENDPOINTEUDB to a storage account we control also allowed us to test from the EU.
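As a sketch, the override could look like the .reg fragment below. The two value names come from the binary; the key path and the storage account name are purely illustrative placeholders, not the service’s real configuration location:

```reg
Windows Registry Editor Version 5.00

; Hypothetical key path -- the real location must be taken from the binary.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\UHS]
; Re-enable the legacy blob-storage configuration channel (v1.0 behaviour).
"UHS.ENABLEBLOBDSSCHECK"=dword:00000001
; Point the EU endpoint at a tester-controlled storage account (placeholder name).
"UHS.STORAGEACCOUNTENDPOINTEUDB"="examplestorageaccount"
```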

Remote Code Execution (RCE)

So, for the old-school experience of popping a calc, we created the following JSON payload.

{
  "RequestId": "00000000-0000-0000-0000-000000000001",
  "EnterpriseActionType": "ExecuteTool",
  "EnterpriseExecutableClientPath": "..\\..\\Windows\\explorer.exe",
  "EnterpriseExecutableClientParameters": "/root,C:\\Windows\\system32\\calc.exe",
  "EnterpriseExecutableClientPayload": []
}
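Two details make this payload work: the relative EnterpriseExecutableClientPath climbs out of the service’s directory tree to reach explorer.exe, which carries an embedded Microsoft signature, and explorer.exe’s /root, switch then launches the attacker-chosen target. The traversal can be checked with Python’s ntpath; note that both the install directory and C:\Windows\System32 (the usual working directory of a Windows service) happen to resolve to the same place:

```python
import ntpath

# The service's install directory (see C:\Program Files\Microsoft Update
# Health Tools\uhssvc.exe earlier in the post).
base = r"C:\Program Files\Microsoft Update Health Tools"

# Relative path supplied in EnterpriseExecutableClientPath.
rel = r"..\..\Windows\explorer.exe"

resolved = ntpath.normpath(ntpath.join(base, rel))
print(resolved)  # C:\Windows\explorer.exe
```
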

Which produced the expected result when testing:

Overall impact of this vuln

Of course, we didn’t try this on any machine we didn’t own, but we could use the Azure Blob storage access logs to see how many machines we could have reached. For this, we collected seven days of traffic logs for the 10 storage accounts we registered.

In that period, we saw 544,386 HTTP requests from the Update Health Tools, coming from 9,976 unique Azure tenants. Of these, 8,536 tenants asked whether they should enroll; for those requests, we can’t distinguish whether it’s a single machine in the tenant or a whole fleet. The devices looking for their configured policy, however, can be individually identified: these came from 3,491 unique tenants and 40,973 unique devices.

Given the enormous install base of Windows, this is of course only the tiny fraction of machines still running the old (1.0) version of Update Health Tools, or running a newer version with the backward-compatibility flag enabled.

Responsible disclosure

We reported this vulnerability to Microsoft on July 7th 2025 and they confirmed the behavior on July 17th. We successfully transferred ownership of these storage accounts to Microsoft on July 18th. Therefore all endpoints should be safe now.

Secure design

After seeing the impact of this issue, it’s worth reflecting on how secure design principles can prevent it in the future. The obvious mitigation is to never remove Azure storage accounts or domains that publicly released software connects to. You can keep storage accounts reserved and linked to your tenant, with all data removed and public access disabled. This ensures no attacker can register the account, while providing peace of mind that no data can leak and no unexpected bills will arrive.

Looking a bit further into the root cause, we see that the developers confused transport security with message security. It’s easy to be tricked into believing that, since Microsoft owns the storage account and the TLS certificates for the associated domain, the data received from the server can be trusted. In reality, this only means the data was securely transmitted from public Azure services. What they should have done is sign the messages themselves. That way, no matter who owns the storage account or can obtain certificates for it, the client can still verify that the commands to be executed were signed by the correct Microsoft team.
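To make that distinction concrete, here is a minimal sketch of message-level verification. For brevity and testability it uses an HMAC as a stand-in; a real deployment would use an asymmetric signature (e.g. ECDSA) so that clients only ever hold a public key and whoever controls the blobs cannot forge commands:

```python
import hashlib
import hmac
import json

def sign_policy(policy: dict, key: bytes) -> dict:
    """Publisher side: attach a signature over the canonical policy bytes."""
    body = json.dumps(policy, sort_keys=True, separators=(",", ":")).encode()
    return {"policy": policy, "sig": hmac.new(key, body, hashlib.sha256).hexdigest()}

def verify_policy(envelope: dict, key: bytes) -> bool:
    """Client side: the check holds no matter who serves the blob."""
    body = json.dumps(envelope["policy"], sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

key = b"publisher-secret"  # would be an asymmetric key pair in practice
env = sign_policy({"EnterpriseActionType": "ExecuteTool"}, key)
assert verify_policy(env, key)

# A hijacked storage account serving a tampered policy fails verification.
env["policy"]["EnterpriseActionType"] = "SomethingElse"
assert not verify_policy(env, key)
```

With this design, re-registering an abandoned payloadprodN account would only let an attacker serve policies that every client rejects.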