In previous articles (Part I and Part II) I’ve described how an Azure web application can be secured and isolated from Internet traffic using Private Endpoints.
If the security requirements are not that strict, a simpler solution can provide similar results: public Internet access to the backend web application is blocked and traffic between the web apps is routed only through the VNET.
This is a simple use case, where I have a frontend web app and a backend one. The frontend web app is publicly accessible, but the backend web app must be accessible only from the frontend, and traffic to it must not be routed over the public Internet.
Note: If you want additional protection for the frontend web app (like WAF, DDoS protection, rate limiting), an Azure Application Gateway can be placed in front of it, and in Access restrictions you allow traffic only from the Application Gateway IP.
The setup looks like below.
Traffic from the frontend web app to the backend web app is no longer routed through the Internet but over the Azure subnet, from within a VNET. At the same time, the backend web app will have access restrictions in place to only allow incoming traffic from the VNET (so no Internet traffic allowed). This way, the only publicly accessible web app is the frontend one, and traffic from the frontend to the backend will go only over Azure infrastructure.
Setup is pretty simple.
Create a VNET, if you don’t already have one.
In the above-mentioned VNET, create a subnet with enough IP addresses (for example, a /24).
Go to the created subnet and set up the following:
Service endpoints: Select Microsoft.Web from the list of service endpoints.
Subnet delegation: Select Microsoft.Web/serverFarms from the dropdown.
Click Save.
If you have an NSG for the subnet, make sure you have the appropriate Allow rules in place to allow inbound/outbound traffic within the VNET you have created, and also to allow outbound traffic to the Microsoft.Web service endpoint.
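The VNET and subnet steps above can be sketched with the Azure CLI; the resource group, names and address ranges below are placeholder assumptions:

```shell
# Create the VNET (skip if you already have one)
az network vnet create --resource-group my-rg --name my-vnet \
    --address-prefix 10.0.0.0/16

# Create a /24 subnet with the Microsoft.Web service endpoint
# and the Microsoft.Web/serverFarms delegation
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
    --name webapps-subnet --address-prefixes 10.0.1.0/24 \
    --service-endpoints Microsoft.Web \
    --delegations Microsoft.Web/serverFarms
```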
Once you have created the VNET and subnet, you can proceed to web app creation.
Create the frontend web app and the associated service plan.
Create the backend web app, using the same service plan as the frontend one.
For each of the web apps, go to Networking -> VNET Integration and integrate the web app with the VNET created at Step 1 and the subnet created at Step 2.
Once the web apps are created, proceed to access restrictions for the backend web app. From web app -> Networking -> Access restrictions, create a new Allow rule and select Virtual Network as Type. Select the Azure subscription in which you are working, the VNET created at Step 1 and the subnet created at Step 2. Give it a name (VNET Allow rule) and a priority (100) and click Save. Now, traffic to the backend web app is restricted to traffic coming from the VNET (when you create an Allow rule in Access restrictions, a default Deny All rule is created, so all other traffic is now denied).
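The VNET integration and access restriction steps can also be done from the Azure CLI; the web app and resource names below are placeholders:

```shell
# Integrate both web apps with the VNET/subnet created earlier
az webapp vnet-integration add --resource-group my-rg --name frontend-app \
    --vnet my-vnet --subnet webapps-subnet
az webapp vnet-integration add --resource-group my-rg --name backend-app \
    --vnet my-vnet --subnet webapps-subnet

# Allow only VNET traffic to the backend web app
# (a default Deny All rule then applies to everything else)
az webapp config access-restriction add --resource-group my-rg \
    --name backend-app --rule-name "VNET Allow rule" --action Allow \
    --vnet-name my-vnet --subnet webapps-subnet --priority 100
```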
At the same time, because both frontend and backend web apps are integrated with the same VNET and subnet, traffic from one to the other will not go over the Internet but over the integrated VNET and subnet.
If you have an Application Gateway in front of the frontend web app, then you can also restrict traffic to it by creating an Allow rule that permits inbound traffic only from the Application Gateway IP. The frontend web app will no longer be directly accessible from the Internet; all traffic to it will have to pass through the Application Gateway, providing additional protection (WAF, etc.) and load balancing.
This setup is simpler: backend web applications are isolated from being directly accessible over the Internet, and traffic is redirected over Azure infrastructure (VNET and subnet).
UPDATE:
When I needed a VM to access the backend web application, things got a bit complicated. You cannot deploy the VM in the same subnet created above, where you have integrated the backend web app, as the integration at the web app service plan level kind of “blocks” the subnet for other deployments. So, for a setup like below:
You need to create a second subnet and deploy the VM inside it. After creating the subnet, also add the Microsoft.Web service endpoint to it.
To force traffic from the VM to the web application over the service endpoint, the VM should have a Deny rule for all outbound Internet traffic. Otherwise, traffic will go over the Internet and never reach the backend web app, because we have configured it to deny all incoming traffic except traffic coming from the VNET (and its subnets).
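A sketch of the VM subnet and the outbound Deny rule with the Azure CLI; names, address ranges and priorities are placeholder assumptions:

```shell
# Second subnet for the VM, with the Microsoft.Web service endpoint
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
    --name vm-subnet --address-prefixes 10.0.2.0/24 \
    --service-endpoints Microsoft.Web

# Deny all outbound Internet traffic on the VM's NSG, so calls to the
# backend web app are forced over the service endpoint instead
az network nsg rule create --resource-group my-rg --nsg-name vm-nsg \
    --name DenyInternetOutbound --direction Outbound --access Deny \
    --priority 4096 --protocol '*' --source-address-prefixes '*' \
    --destination-address-prefixes Internet --destination-port-ranges '*'
```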
As OAuth 2.0 offers different flows, or grant types, to cover multiple authorisation scenarios, it can also be used to protect APIs published on Azure API Management Gateway (APIM). The flow that can be used to allow OAuth 2.0 authorisation between applications is the client credentials flow.
When exposing APIs on Azure APIM, we usually have service-to-service communication, without any form of user interaction, where APIs are consumed by other applications. The writing below covers how to use the client credentials flow to protect the APIs, how to configure Azure APIM with OAuth 2.0 to also allow access to the published APIs from the Developer Portal using OAuth 2.0 authorisation, and also C# sample code to connect to an API and authorise using OAuth 2.0.
Azure AD offers two common methods for a resource to authorise its clients in the client credentials flow: access control lists (ACLs) and application permissions. A resource can also choose to authorise its clients in other ways; each resource server can pick the method that makes the most sense for its application.
Instead of using ACLs, an API can expose a set of application permissions. An application permission is granted to an application by an organisation’s administrator, and can be used only to access data owned by that organisation and its employees.
The current setup will contain an Azure APIM instance, a published test API (the out-of-the-box Echo API), a policy for the API and two applications created in an Azure AD tenant.
The first step is to deploy an Azure APIM instance using the Azure Portal. After deployment is complete, we already have a standard test API, the Echo API, published.
To make things easier on the testing side, I created a simple policy in the inbound processing step to return a 200 OK message if the client call reaches the API.
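A minimal version of such a mock policy in the API’s inbound section could look like this; the response body text is arbitrary:

```xml
<inbound>
    <base />
    <!-- Short-circuit the call and return 200 OK to the client -->
    <return-response>
        <set-status code="200" reason="OK" />
        <set-body>Reached the API</set-body>
    </return-response>
</inbound>
```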
Using VS Code and the REST Client extension, I wrote the following request:
We will get the following result:
Which is perfectly fine, the API is alive and responsive.
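For reference, such a test call in REST Client syntax could look like this; the APIM hostname, API path and subscription key are placeholder assumptions:

```http
GET https://my-apim-instance.azure-api.net/echo/resource?param1=sample HTTP/1.1
Ocp-Apim-Subscription-Key: {{subscriptionKey}}
```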
Now I had to create two app registrations in Azure AD, which enable OAuth 2.0 authorisation with the client credentials flow. Follow the instructions from the Microsoft documentation here, as in the following steps I will need this setup to also enable authorisation for the API from the Developer Portal of Azure APIM.
I’ve created two app registrations, one representing the API proxy (backend) and the other one the API client.
ago-apiproxy-oauth-app
ago-apiclient-oauth-app
For both applications take note of application ID, as it will be needed later.
Now, for the API client application, a secret needs to be created.
Navigate to the API client app, in my case ago-apiclient-oauth-app.
Navigate to Certificates &amp; secrets and add a secret (I kept the default expiration date, as this is just a test setup).
Take note of the generated secret.
Now, test the setup in VS Code. In VS Code, these are the settings for the API calls.
Now we are ready to get a JWT token from the token endpoint. The request looks like below (VS Code):
The result of running the above request is a JWT token, like below:
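The token request in REST Client syntax would be along these lines; the tenant ID, client app ID, client secret and proxy (backend) app ID are placeholders:

```http
POST https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id={{clientAppId}}
&client_secret={{clientSecret}}
&scope=api://{{proxyAppId}}/.default
```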
Next step, as we have the apps registered and can generate a JWT, is to configure Azure APIM to validate the JWT and its claims to see if the client is authorized to call the API.
Granting permissions – The above-mentioned documentation explains how to grant delegated permissions, which apply when there is a signed-in user; that is not the case here, where we are dealing with an app-to-app flow. We need to use application permissions, applicable in app-to-app scenarios.
Enable user authentication in the APIM Developer Portal – If we do that, some functionality of the flow will no longer work for the app-to-app scenario, because in the APIM Developer Portal we are dealing with a signed-in user. I will explain this later.
Microsoft documentation suggests using a policy in the API inbound processing leg to validate the audience claim (aud).
The above definition means the audience claim is only used to validate that a token was issued targeting our application. It does not imply any permissions have been granted to the caller app.
Now we can test this by adding a validate-jwt policy to the API operation we are testing.
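A validate-jwt policy that checks only the audience claim might look like the following; the tenant ID and proxy app ID are placeholders, and the audience value must match the one issued in your token:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://{proxy-app-id}</audience>
    </audiences>
</validate-jwt>
```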
For the test to work, when we call the OAuth 2.0 protected endpoint we have to add the Authorization header with the bearer token generated when we called the token endpoint. The request would look like this:
We should get a 200 OK response. However, as we have not yet given any permission to the client app to call the proxy (backend) app, validating only the audience claim is not enough. One approach is to also validate the azp claim, to check whether the caller is authorised to call the endpoint.
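Extending the same policy with a required azp claim, so that only the known client app passes validation (the IDs remain placeholders):

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://{proxy-app-id}</audience>
    </audiences>
    <required-claims>
        <!-- azp holds the application ID of the calling client -->
        <claim name="azp" match="any">
            <value>{client-app-id}</value>
        </claim>
    </required-claims>
</validate-jwt>
```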
After testing this, we should get a 200 OK response.
A more advanced scenario is to add an Application Permission for the client app, for fine-grained control over which APIs and API operations the app can call. But if we want this setup to also work in the Developer Portal of Azure APIM (users being able to try and test the API from the developer portal), then the roles claim no longer works. In an app-to-app scenario the token contains the roles claim, but in the APIM Developer Portal we have a user already signed in (or asked to sign in) and the token is generated based on this user, so the roles claim is no longer available.
How to configure the APIM Developer Portal is described in detail in the Microsoft documentation, here. As we have already created the apps, continue from the “Grant permissions in Azure AD” step. Once configured, for the settings to take effect in the new portal, it must be published again. The result is the Authorisation field showing up in the Developer Portal when you go and try an API.
Now we have the API published on Azure APIM, protected by OAuth 2.0 tokens, and we can also test it with authorisation from the Developer Portal. As the policies are implemented at API level, without configuring APIM with OAuth 2.0 it would no longer be possible to use the Developer Portal, as you would always get an authorisation error.
Visual Studio C# code for accessing the API with token authorization
The code is pretty straightforward. To help with REST calls and JSON objects, I’ve used the RestSharp and Newtonsoft.Json packages.
using System;
using System.Net;
using Newtonsoft.Json;
using RestSharp;

namespace PCBQPCAPITokenAuth
{
    class Program
    {
        static void Main(string[] args)
        {
            // Make sure modern TLS is enabled for the HTTPS calls
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls;

            // Get the OAuth 2.0 token from Azure AD
            Token tok = GetTokenRestSharp();
            Console.WriteLine(tok.AccessToken);

            // Call the protected API, passing the bearer token in the Authorization header
            RestClient client = new RestClient("https://agotestauth.azure-api.net/echo-clone/resource?param1=sample");
            RestRequest request = new RestRequest(Method.GET);
            request.AddHeader("Ocp-Apim-Subscription-Key", "your subscription key");
            request.AddHeader("Authorization", "Bearer " + tok.AccessToken);
            request.AddHeader("cache-control", "no-cache");
            IRestResponse response = client.Execute(request);
            Console.WriteLine(response.Content);

            Console.WriteLine("Hit ENTER to exit...");
            Console.ReadKey();
        }

        private static Token GetTokenRestSharp()
        {
            // Client credentials flow against the Azure AD v2.0 token endpoint
            var client = new RestClient("https://login.microsoftonline.com/your azure tenant id/oauth2/v2.0/token");
            var request = new RestRequest(Method.POST);
            request.AddHeader("cache-control", "no-cache");
            request.AddHeader("content-type", "application/x-www-form-urlencoded");

            string gtype = "client_credentials";
            string cid = "f73ba7f2-5bf9-44b5-b82c-e4b191a80c41";                  // client app (application) ID
            string csecret = "your api app secret";                               // client app secret
            string scope = "api://4a41d081-77b5-4936-b87c-52b76b36dbc2/.default"; // proxy (backend) app scope

            request.AddParameter("application/x-www-form-urlencoded",
                "grant_type=" + gtype + "&client_id=" + cid + "&client_secret=" + csecret + "&scope=" + scope,
                ParameterType.RequestBody);

            IRestResponse response = client.Execute(request);
            return JsonConvert.DeserializeObject<Token>(response.Content);
        }

        internal class Token
        {
            [JsonProperty("token_type")]
            public string TokenType { get; set; }
            [JsonProperty("expires_in")]
            public int ExpiresIn { get; set; }
            [JsonProperty("ext_expires_in")]
            public int ExtExpiresIn { get; set; }
            [JsonProperty("access_token")]
            public string AccessToken { get; set; }
        }
    }
}
Well, everything described in Part I works perfectly fine when your VNETs are using the Azure-provided DNS server.
But if your VNETs are using custom DNS servers, as in my case, things get a little tricky. You use custom DNS servers because you are living in a hybrid cloud solution and resources from Azure must also find resources from on-premises data centers, and in this case the standard Azure-provided DNS servers are of no help.
In this case, when resources are trying to resolve names for your web apps with private integration activated, no DNS server will know about them. There are at least two options available:
A simpler approach (tested and working) is to add the IP of the Azure resolver (168.63.129.16) to your list of custom DNS servers for the VNET. Don’t add it in the first position, as this might cause trouble resolving on-premises resources, and also not in the third position or lower, because this will increase the time needed for name resolution and can potentially cause timeout errors. Adding it in the second position looks like a reasonable compromise.
As all Azure web applications are directly available over the Internet, hence public, most of the time I need some form of protection for them, like a Web Application Firewall (WAF). And in Azure, I have at least two options for that:
Using an Application Gateway with WAF in front of an Azure Web App
Using Azure Front Door with WAF in front of an Azure Web App
Both options suffer from the same basic problem. The web application is still public and can be directly accessed using its URL (appname.azurewebsites.net).
I know that I can restrict access to the web apps by IP addresses, but in a complex setup, with one web app calling another and so on, I would first like inter-web-app traffic to not go over the Internet, and second, a more elegant solution to restrict access to the web apps themselves, something like in the diagram below:
In preview, Azure now has the “Private endpoint connection” functionality, which allows the creation of a private endpoint for a web application. This means that once created, the web application is no longer accessible from the Internet but only from Azure networking resources (VNETs, subnets, and so on).
Also, traffic to the web app is directed over Azure Private Link, the private endpoint being assigned an IP address from the VNET the web app is integrated with, so there is no traffic over the Internet. More than this, if you have an ExpressRoute or VPN connection to your on-premises resources (such as a database), then traffic between the Azure web application and on-premises resources will go through Private Link and ExpressRoute or VPN. Advantages:
The web app can be secured, by eliminating public exposure
Secure traffic flow between web application and on premises resources, over VPN or Express Route
Avoid data exfiltration from VNET
The created Private Endpoint is used only for incoming traffic to your web application. Outgoing traffic will not use this Private Endpoint, but the VNET integration feature.
Note: The VNET integration feature cannot use the same subnet as the Private Endpoint; this is a limitation of the VNET integration feature.
OK, I can secure the web application by not allowing any public traffic to it, but I still want it to be Internet accessible and protected by a WAF. In this scenario, you simply put an external-facing Application Gateway in front of the web application, as in the diagram above, and the web application can accept traffic from the Internet, traffic which is filtered by the Application Gateway WAF.
If the web app needs to be accessible from other Azure VNETs or on premises networks then, instead of a public facing Application Gateway, you can put an internal facing one, with the same results (web app can also be accessed internally, from other VNETs using VNET peering).
I find this Private Endpoint feature especially useful when I have more than one web app, calling each other. Instead of setting IP restrictions on each web app (and making sure the IP of the calling web app is whitelisted by the called one), I can integrate them all with private endpoints, making sure traffic from one to another is allowed (because they are usually on the same VNET) while all public traffic is denied. And in front of the entry-point web app I can put an Application Gateway with WAF, to be able to securely access it from the Internet.
Note: Using an Azure Front Door instead of a public-facing Application Gateway will not do the trick. The web app will still reject traffic coming from Front Door.
Setting up Azure Private Endpoint integration is quite simple:
Assuming there is already a VNET with the corresponding subnets created, the first step is to integrate the web application with a VNET and a subnet.
As highlighted above, you will need a different subnet for the Private Endpoint than the one you have integrated the web app with.
Create the private endpoint; from that moment on, incoming traffic will be restricted to Azure-sourced traffic only. Incoming traffic will go through the Private Endpoint and outgoing traffic will go through the VNET integration subnet.
Once the Private Endpoint is created, the DNS provided by Azure will cease to work and a Private DNS Zone will have to be created. Microsoft has details on it, here. Basically, a private DNS zone named privatelink.azurewebsites.net will have to be created and linked to the VNET in which the Private Endpoint has been created.
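The private DNS zone step can be sketched with the Azure CLI; the resource group, VNET and private endpoint names are placeholder assumptions:

```shell
# Private DNS zone for the web app private link
az network private-dns zone create --resource-group my-rg \
    --name privatelink.azurewebsites.net

# Link the zone to the VNET hosting the Private Endpoint
az network private-dns link vnet create --resource-group my-rg \
    --zone-name privatelink.azurewebsites.net --name webapp-dns-link \
    --virtual-network my-vnet --registration-enabled false

# Let Azure manage the A record for the endpoint via a DNS zone group
az network private-endpoint dns-zone-group create --resource-group my-rg \
    --endpoint-name webapp-private-endpoint --name default \
    --private-dns-zone privatelink.azurewebsites.net --zone-name webapp
```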
Once the Private Endpoint is created, you can proceed with the Application Gateway creation and the web app will be secured.
Running less software sometimes is more when you consume it as a service.
“Run Less Software: If a component has become a commodity, we shouldn’t be spending precious development time on maintaining it, instead we should be consuming it as a service. In the history of enterprises this is controversial, but even containers are now run and operated as a service. If your engineers aren’t building data centers any more, why are they building container platforms? “