This blog is about how I got access to over 22 internal Microsoft services and how you might be vulnerable too.
After my last talk at the 38C3 conference in Hamburg, this was the top comment on YouTube.

Well, this story definitely falls into the category of “someone stumbling around and finding horrifying vulnerabilities”. Although this time I wasn’t even troubleshooting anything; I was just distracted from a boring task.
You see, I was writing some documentation the other day when my eye fell on one of those aka.ms links. For those of you who don’t know, aka.ms is the URL shortener service used by Microsoft.
My mind started to wander. Where would you end up if you just visited https://aka.ms, without any short link appended?

A login screen. This is probably where Microsoft employees manage their links. I did wonder though, what would happen if I simply tried logging in here myself with my own Microsoft account? Surely, that would not work, right?

Of course not! You need an account in the Microsoft tenant, otherwise, no luck. Back to writing that boring documentation. But what was that? An indexing service for aka.ms links? Interesting.

And one of those links points to an eng.ms domain. What is that?

Another login screen! It did not work the first time, but maybe I can log in here! The result surprised me.

I got a consent prompt, asking me to give consent to share my basic profile information with this application. Sure, why not? Does that mean this is an application I can access?
After clicking “accept”, I got a 500 Internal Server Error. I’m pretty sure that was the server trying to tell me that this application was not meant to be accessed by me. But now I was intrigued: what is eng.ms? I still had that documentation to write, but it had to wait.
I started subdomain enumeration on the domain eng.ms and found an interesting subdomain: rescue.eng.ms. This gave me another login screen and another consent prompt. But this time it gave me access using my own Microsoft 365 account! Notice my name in the upper right corner?

I quickly realized that I should not have access here. This Engineering Hub was some sort of portal for Microsoft Engineers. I must admit, my knee-jerk reaction as a red-teamer was to search for the phrase “password”. What can I say, I could not help myself. It returned 13,252 hits, all referring to internal Microsoft procedures and product groups. I was clearly not supposed to access this.

I stopped here and reported this to the Microsoft Security Response Center (MSRC). But what had happened here? Why was I able to access this internal portal, and might other services be vulnerable in the same way?
To answer that question, we need to have a closer look at how Entra ID handles authentication and authorization for multi-tenant applications.
Multi-Tenant Applications
When developers want to use Entra ID for authentication to a web application, the application first needs to be registered in Entra ID. This is called an “App Registration” or “Application Object”. These applications can be set to accept users from the tenant where they are registered (single-tenant) or users from any tenant (multi-tenant). They can also be configured to accept “personal Microsoft accounts”.

App developers are then tasked with configuring the authorization server (login.microsoftonline.com in most cases) in application logic, and all four options above have a corresponding desired value for the authorization server URL.

So single-tenant applications should be configured to use https://login.microsoftonline.com/<tenantid> or https://login.microsoftonline.com/<domain> as the authorization server, while multi-tenant applications that also accept personal Microsoft accounts should use https://login.microsoftonline.com/common.
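As a sketch of this mapping, the helper below returns the authority URL per supported account type. The account-type names and the placeholder tenant values are my own labels for illustration; only the URLs themselves come from the text and the Entra ID documentation.

```python
# Sketch: mapping Entra ID "supported account types" to the authority URL an
# application should use. Account-type labels are illustrative, not official.
AUTHORITY_BASE = "https://login.microsoftonline.com"

def authority_url(account_type, tenant=None):
    """Return the authorization-server URL for a given supported account type."""
    if account_type == "single-tenant":
        # Only accounts from the home tenant: pin the tenant ID or domain.
        return f"{AUTHORITY_BASE}/{tenant}"
    if account_type == "multi-tenant":
        # Work or school accounts from any tenant.
        return f"{AUTHORITY_BASE}/organizations"
    if account_type == "multi-tenant+personal":
        # Work or school accounts plus personal Microsoft accounts.
        return f"{AUTHORITY_BASE}/common"
    if account_type == "personal-only":
        # Personal Microsoft accounts only.
        return f"{AUTHORITY_BASE}/consumers"
    raise ValueError(f"unknown account type: {account_type}")
```
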
The Engineering Hub was configured as a multi-tenant application and redirected me to the /common endpoint.

This allowed me to log in (steps 1 & 2) with my own “work or school account”, which triggered a consent prompt (step 3), as the application had not been used in my own tenant before. When I gave consent, it was instantiated as a Service Principal or “Enterprise Application” (step 4) in my own tenant, and an access token was returned (step 5).
This access token was issued by my own tenant. The “iss” (issuer) and “tid” (tenant ID) claims were set to values corresponding with my own tenant. Any conditional access policies or user assignment took place in my own tenant according to my own configuration. I was authenticated & authorized, only by the wrong authority. And nowhere in application logic was this checked.
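To see this for yourself, you can base64-decode the payload of such an access token and inspect the “iss” and “tid” claims. A minimal sketch, using a fabricated and unsigned token (the tenant GUID is a placeholder), purely for inspection:

```python
import base64
import json

def token_claims(jwt_token):
    """Decode a JWT payload WITHOUT verifying the signature.
    For inspection only -- never use unverified claims for authorization."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj):
    """Base64url-encode a dict as a JWT segment (padding stripped)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Fabricated token for illustration: the issuer is the *attacker's own* tenant.
attacker_tid = "11111111-2222-3333-4444-555555555555"
token = ".".join([
    b64url({"alg": "none"}),
    b64url({"iss": f"https://login.microsoftonline.com/{attacker_tid}/v2.0",
            "tid": attacker_tid}),
    "",  # empty signature segment
])
claims = token_claims(token)
```

An application that only checks whether the token is validly signed by login.microsoftonline.com will happily accept such a token; the claims are correct, they just point at the wrong tenant.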

This made me wonder. What other Microsoft services could be impacted by this misconfiguration? Let’s find out!
Mapping the Microsoft Attack Surface
I enumerated subdomains for microsoft.com, azure.com, azure.net, office365.com, office.com, office.net, windows.net, and any .ms domains owned by Microsoft.
This resulted in 102,672 (sub)domains, of which 70,043 resolved to an IP address, 41,890 responded to HTTPS and 1,406 used Entra ID for authentication.
For all 1,406 applications, I checked the URL and parsed the “client_id” parameter to get the Application ID. Any multi-tenant application can be looked up at the Azure AD Graph API at
https://graph.windows.net/myorganization/applicationRefs/<APP_ID>/?api-version=1.6-internal
Single-tenant applications will not give any result, but multi-tenant applications do. It turns out that of the 1,406 applications, 176 were configured as multi-tenant.
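The per-application checks above can be sketched as two small helpers: one extracting the “client_id” parameter from an authorize URL, one building the applicationRefs lookup URL from the text. The sample authorize URL and app ID are hypothetical, and the actual HTTP request is left out:

```python
from urllib.parse import urlparse, parse_qs

# Lookup endpoint from the text; returns data for multi-tenant applications
# and nothing for single-tenant ones.
GRAPH_REF = ("https://graph.windows.net/myorganization/applicationRefs/"
             "{app_id}/?api-version=1.6-internal")

def client_id_from_login_url(url):
    """Extract the client_id (Application ID) from an Entra ID authorize URL."""
    params = parse_qs(urlparse(url).query)
    values = params.get("client_id")
    return values[0] if values else None

def lookup_url(app_id):
    """Build the Azure AD Graph applicationRefs lookup URL for an app ID."""
    return GRAPH_REF.format(app_id=app_id)
```
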
Interestingly enough, most of these redirected visitors to the /<tenantid> endpoint. Remember that this was supposed to be used only for single-tenant applications. Developers were probably not even aware that their application was configured as multi-tenant. But this URL is just a client-side setting; its only effect is which tenant you authenticate against. With /common and /organizations you authenticate against your own tenant; with /<tenantid> you authenticate against that tenant.
Previous Research
In 2023, Wiz discovered that for any multi-tenant application, if you replace /common or /organizations with /<tenantid> during authentication, you will receive an access token issued by the resource tenant.
Microsoft patched this and stopped issuing tokens if the user does not exist in the resource tenant. Today we are going to do it the other way around: we are going to replace /<tenantid> with /common to force authentication against our own tenant. This can easily be done in Burp with “match and replace” rules.
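The effect of such a match-and-replace rule can be sketched in a few lines: rewrite the tenant GUID segment of the authority to /common before the browser follows the redirect. The authorize URL below is hypothetical:

```python
import re

# Matches a GUID tenant segment in the authority path.
GUID = r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"

def force_common(authorize_url):
    """Rewrite the tenant-pinned authority to /common, mimicking a Burp
    match-and-replace rule on outgoing requests."""
    return re.sub(rf"login\.microsoftonline\.com/{GUID}/",
                  "login.microsoftonline.com/common/", authorize_url)

# Hypothetical authorize URL pinned to a specific (placeholder) tenant ID.
pinned = ("https://login.microsoftonline.com/aaaabbbb-cccc-dddd-eeee-ffff00001111"
          "/oauth2/v2.0/authorize?client_id=abc-123&response_type=code")
```
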

But there were still some hurdles to clear. Some applications required user assignment. This was now easily done in my own tenant.

Other applications gave error messages during consent.


Both of these errors occurred when application objects define resources (other application objects) they require access to. These other applications could be configured as single-tenant applications, which cannot be instantiated in my tenant. Or the application is configured in a way that already expects the service principal to be there. Required resource access can be found at the Azure AD Graph API at
https://graph.windows.net/myorganization/applicationRefs/<APP_ID>/?api-version=1.6-internal

Luckily, there is a workaround. By skipping the consent flow and instantiating a service principal directly, we can instantiate any individual application that is configured as multi-tenant.
# Instantiate the service principal directly (requires the AzureAD PowerShell
# module; $app_id holds the Application ID of the target application)
New-AzureADServicePrincipal -AccountEnabled $true -AppId $app_id `
    -AppRoleAssignmentRequired $true -Tags {WindowsAzureActiveDirectoryIntegratedApp}
This creates a service principal without asking consent or checking availability of required resource access. You do still need to assign the user to the application afterwards and give consent to all resources that are available.
Internal Microsoft Applications
We now had all the required tools to consent & compromise. It turned out that 22 internal Microsoft applications were vulnerable and exposed internal data. Some examples:

This “Risk Register” contained [redacted on request by Microsoft].
Another interesting application was the Security Intelligence Platform. This application contained security intelligence datasets with names that speak for themselves.


Unfortunately for us, access to these datasets still required admin approval. I wondered how those access requests work. It has a button “Ask Sippy”. Maybe Sippy knows?

At least this application allowed me to search the entire Microsoft tenant for Service Principal Names and user accounts.


I even found a logfile that contained Authorization Codes for all the users that had logged in.

How unfortunate that these can only be redeemed once…
The next application was actually a really big forest of connected applications.

This turned out to be the “Media Creation” service. It promised great things to come. Well, bring it on.

What media is it actually building? Is that… Windows?

Let’s dive in. There are logfiles.

And one of these even contains a private key.

ESD stands for “Electronic Software Distribution” and is used to generate licence keys. Could it be that we can now generate licence keys?
This application also gave us access to an interesting API.

And finally gave us RCE on their build infrastructure by defining a new tool.

Besides these services, we also gained access to several more internal Microsoft applications:
- Engage ACE Hub
- Responsible AI Ops Platform
- Billing Account of Microsoft Internal (BAMI) portal
- CPET webservice
- HxSDK Documentation
- Hardware Inventory API
- Electronic Label Management
- Quality Checkpoint
- Ready to Deploy app
- Bing ads SA Diagnostic Tool
- SBS tool (Copilot Human Correlation Tool)
- Secure Devices Portal
- Azure Subscription Hub SLM API
This research highlights the risk a single misconfiguration in a large environment can pose. The problem lies in shared responsibility when deploying these applications: when application developers rely on other teams to register their applications in Entra ID, both will assume they have set up their part correctly.
Are you vulnerable?
So is this vulnerability still out there? Yes! 2% of our own clients were affected. What should you do?
- Only use Multi-Tenant applications when necessary
- If using Multi-Tenant applications, make sure to check the “iss” or “tid” claim in the access token in application logic
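As a minimal sketch of the second recommendation, the check below rejects tokens issued by an unexpected tenant. It assumes you run it on claims that have already passed signature validation; the allow-list value is a placeholder for your own tenant ID:

```python
# Placeholder: replace with your own tenant ID(s).
ALLOWED_TENANTS = {"your-tenant-id"}

def validate_tenant(claims):
    """Reject access tokens issued by an unexpected tenant.
    Run this on *verified* claims, after signature validation."""
    tid = claims.get("tid")
    if tid not in ALLOWED_TENANTS:
        raise PermissionError(f"token issued by unexpected tenant: {tid}")
```

Without a check like this, a multi-tenant application accepts any validly signed token from login.microsoftonline.com, regardless of which tenant issued it.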
We have written a small PowerShell script to quickly identify all Multi-Tenant applications in your own Entra environment and their respective redirect URIs.
> .\ListMultiTenantApplications.ps1
Potentially vulnerable App Registrations found:
DisplayName AppId RedirectUris
----------- ----- ------------
Eye Security Secret App 8123db1e-3ae6-4068-abcd-f45acafee99c https://somepath.eye.security
Eye Security Research Blog 74561b55-4eee-4db9-dead-c80ababee56d /rest/oauth2-credential/callback
Check if any of the redirect URIs are reachable from the internet and make sure all listed applications check the “iss” or “tid” claim of the access token in application logic.
The PowerShell script can be downloaded here.
Timeline & Reward
I submitted the first four cases to the MSRC in November 2024 but got distracted by work for a while. In the meantime, Microsoft scaled up a project team, which resulted in a race between Microsoft’s internal Azure Security Variant Hunting Team and me to submit more vulnerable services. I won 17 of the 18 submissions I made in January 2025. Almost all cases were resolved within two months, and this earned me third place on the MSRC Q1 leaderboard.

Now as you can imagine, this research made me rich! The total amount of bug bounties was…

What?
After my last talk at the 38C3 conference in Hamburg one of the comments on YouTube said

So, what happened? Bug hunting at Microsoft was supposed to be an infinite money glitch!
Well, we’re not done yet.

This final application, titled “Rewards Support Tool”, allowed managing rewards. And I think I deserve a reward for this, no?
So, let’s navigate this interesting menu.

And go to the “Rebate” page.

Now just enter any amount, currency, PayPal ID, phone number and code, and hit “Payout”.
Turns out, hacking Microsoft still is an infinite money glitch.
About Eye Security
We are a European cybersecurity company focused on 24/7 threat monitoring, incident response, and cyber insurance. Our research team performs proactive scans and threat intelligence operations across the region to defend our customers and their supply chains.
Learn more at https://eye.security/ and follow us on LinkedIn to help us spread the word.