Have you ever been confused by all the load balancing options available in Azure? What’s the difference between regional and global load balancing? Why are there so many load balancing services? And… how do you set up an Azure Load Balancer completely from scratch?
This video was created to answer these questions.
If you’re pursuing Microsoft’s Azure role-based certifications this concept will keep coming up over and over again, so it’s good to understand the foundations.
Full Transcript:
Mike Pfeiffer:
Networking by far is one of the biggest challenges for anybody pursuing Azure certification. Most of the exams include some level of networking. When it comes to doing things like load balancing, that can be really confusing because there are multiple load balancer types, and there is global and regional load balancing. And it’s like, “When do I pick one over the other?” In this video, I’m going to break down the differences, and I’m going to actually show you how to implement the load balancer to load balance a couple of Ubuntu VMs across availability zones in Azure.
Mike Pfeiffer:
What’s up everybody. It’s Mike Pfeiffer, Microsoft Azure MVP, and host of the CloudSkills.fm Podcast. This video is actually from the AZ-104 certification training we did with Tim Warner and myself a couple of weeks ago. In this video, I spend almost an hour taking you through load balancing in Azure. Like I said, I’m going to break down the differences between regional and global load balancing, all that kind of stuff.
Mike Pfeiffer:
I actually walk you step by step through implementing an Azure Load Balancer to basically load balance a couple of different VMs running in two different availability zones in an Azure region. If you’re studying for Azure certification or you’re just trying to ramp up, either way, I think this content will really help you out. So without any further delay, let’s start the training.
Mike Pfeiffer:
Overview of load balancing options in Azure. Just wanted to point this out real quick. Number one, global versus regional. This is a fundamental thing that you want to know off the top of your head. This isn’t hard. Global load balancing services distribute traffic across regional backends, and regional load balancers essentially just work within a single region. Global would be, “Hey, I want to route or load balance across West US and East US.” Regional would be, “Hey, I want to load balance across availability zones potentially.” I’ll talk about the services here.
Mike Pfeiffer:
But coming up, we’ve got HTTP(S) versus non-web-based workloads. So there’s application-aware load balancing, meaning layer seven load balancing as well as layer four. And for anybody that’s not a networking person, we’re talking about the OSI model, the different layers of the network stack. That’s what we’re referring to when we talk about layer four and layer seven.
Mike Pfeiffer:
Essentially, layer seven, at the very top of the OSI model, is the application layer. That’s the layer where we can see things about the application. So if we wanted to load balance a workload and be able to see the path in the URL that the user is trying to get to, on a layer seven load balancer we can see that, and we can do things like path-based routing. For something that’s not HTTP(S), a layer four load balancing solution is just really looking at the ports, the protocols, and the source and destination addresses.
Mike Pfeiffer:
Okay. Looking at this table here, this is a nice one just to know off the top of your head. Azure Front Door is a global service, and the recommended traffic is web-based traffic. So Azure Front Door is a global… You can think of it as a DNS load balancing service. It’s a little bit more sophisticated than that because it acts like a layer seven load balancer, and you can basically have a single endpoint load balance across two different regions, which is really interesting.
Mike Pfeiffer:
There’s also other backend types that are supported, like external endpoints. So you could load balance between on-prem and Azure. Basically, have a single endpoint for your web-based applications. And so, Front Door is a newer service. Before we had Front Door, all we had for a global load balancer was Traffic Manager. Traffic Manager is simply an anycast DNS load balancing solution. And so, there are no layer seven capabilities with that one.
Mike Pfeiffer:
With Front Door, you could do things like SSL termination, cookie-based affinity to keep people connected to the same endpoint for the entirety of their session, things like that. Traffic Manager’s just DNS load balancing. And so, you’re completely reliant on the client’s ability to check in with Traffic Manager, query, and get the response for the right endpoint. For both of those, there are routing policies and all kinds of cool stuff that we could do. But what we need to really zoom in on today, and for this certification, is the regional load balancers.
Mike Pfeiffer:
When you go into the architect level, they’ll hit you up on these other global ones. For this tier, the admin level, we need to understand Application Gateway and Azure Load Balancer. Application Gateway is the layer seven regional load balancer. So if I wanted to load balance across availability zones and I’ve got a web workload, and I want to be able to do things like web application firewalling, and I want to be able to do cookie-based affinity, so per-session persistence. If you want to do any of those nifty things, Application Gateway is going to be the way to go.
Mike Pfeiffer:
Azure Load Balancer is the layer four regional load balancer. The nice thing about this one is there is a free tier. There’s a SKU that’s free, but it doesn’t work across availability zones. If you want to load balance across availability zones with either of these regional load balancing options, you’re going to be paying for this infrastructure. So those are the major differences.
Mike Pfeiffer:
And so, when you get into working with Azure Load Balancer and Application Gateway, a lot of the constructs are very similar. There’s a front end configuration, there’s a backend pool. The stuff I’ll show you today is going to be specific to Azure Load Balancer, but there’ll be a lot of parallels. And I’ll also show you the deployment process for Application Gateway. But the demo that I’m going to show you is going to be based on the Azure Load Balancer.
Mike Pfeiffer:
Like I said, an Azure Load Balancer operates on layer four. It does rely on health probes to determine the backend status. And you see that in concept with all of the load balancing solutions. We’ve got to make sure that wherever we’re sending connections, those backend systems are healthy. From there, as long as the backends are healthy, we’ll route traffic to those based on the load balancing rules that we configure. All right. So very simple so far.
Mike Pfeiffer:
Talking about Application Gateway, a step further, we already mentioned this. Again, it works the same way. Outside of the fact that this one’s a layer seven load balancing solution, it still needs probes, it still needs the backend pool. And so, once you understand Azure Load Balancer, you’ll understand the high-level architecture for Application Gateway as well. Here’s a visual on a little bit of a difference with Application Gateway. The architecture itself, having the load balancer between the users and the backend pool, is the same kind of concept, regardless. But notice here we’ve got multiple backend pools with an app gateway.
Mike Pfeiffer:
The nice thing about this is we can do things like path-based routing with a layer seven load balancer like Application Gateway. This is commonly done, especially in the world of microservices. Notice that we’ve got some paths, like the /images path. So under the contoso.com domain, /images takes the user to a different set of servers. If we go to contoso.com/video, that takes us to a different set altogether. So being able to route different paths to different groups of servers is very commonly done in the world of microservices.
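If you want to see what that path-based routing looks like outside the portal, here’s a rough Azure CLI sketch. This is not part of the demo: it assumes a hypothetical Application Gateway named ContosoAppGw in a resource group named Webfarm, with backend pools and HTTP settings already created under the names shown.

```shell
# Assumes an existing Application Gateway "ContosoAppGw" with backend pools
# "imagesPool", "videoPool", "defaultPool" and HTTP settings
# "appGatewayBackendHttpSettings" already defined (hypothetical names).

# Route /images/* to the image servers; everything else goes to the default pool
az network application-gateway url-path-map create \
  --resource-group Webfarm \
  --gateway-name ContosoAppGw \
  --name contosoPathMap \
  --rule-name imagesRule \
  --paths '/images/*' \
  --address-pool imagesPool \
  --http-settings appGatewayBackendHttpSettings \
  --default-address-pool defaultPool \
  --default-http-settings appGatewayBackendHttpSettings

# Add a second rule so /video/* goes to its own set of servers
az network application-gateway url-path-map rule create \
  --resource-group Webfarm \
  --gateway-name ContosoAppGw \
  --path-map-name contosoPathMap \
  --name videoRule \
  --paths '/video/*' \
  --address-pool videoPool \
  --http-settings appGatewayBackendHttpSettings
```

The path map then gets attached to a path-based request routing rule on the gateway; the pool and setting names here are placeholders, so adjust them to whatever your gateway actually uses.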
Mike Pfeiffer:
By the way, yesterday, when we spun up that AKS cluster, we were using an Azure Load Balancer by default, but you can also use Application Gateway. So as you’re coming into your Kubernetes implementation, it might be nice to be able to use Application Gateway in that model because the path-based routing can send people to different services. Okay. So let’s move forward here and I’ll show you just a couple other diagrams, some architectural components, and then we’ll start building this stuff out.
Mike Pfeiffer:
When you’re thinking about this, regardless of Application Gateway or Azure Load Balancer, you can deploy these load balancers either internet-facing or internal-facing. Tim showed a couple of reference architectures so far where we’ve seen that: a public load balancer for the web tier, and then on the internal VNet, load balancers for things like a database tier and stuff like that. Not a whole lot to that. You’ve got a public interface on the load balancer.
Mike Pfeiffer:
The cool part is you don’t have to put public addresses on all the machines behind it. When you go into the Azure portal, I think we’ve all seen that it just drives you down the path of putting a public address on every single VM you build. Obviously, we don’t want to do that in real life. So with something like this, we can have a public load balancer. We can have machines behind it in the backend pool. And those machines can just have private addresses, and we can get to them by going through the public interface on the load balancer.
Mike Pfeiffer:
The internal scheme is just the opposite. This is a good diagram on the bottom right there, showing the web tier connecting to an internal load balancer, taking it to the database tier. If you’ve ever had to get into the situation of doing SQL Server Always On availability groups in a backend pool like they’re showing here, I wouldn’t recommend it. It’s a lot of work. But if your DBAs need root access to the systems, then you’ve got to do it.
Mike Pfeiffer:
I’m a fan of Azure SQL, to be honest, but when you’re doing SQL clusters like this, with Always On availability groups, the best practice is to actually use an Azure Load Balancer. The Windows machines, of course, are Azure virtual machines. And so, we’d load balance the availability group listener IP addresses with an Azure Load Balancer. But you might have other workloads. And since Azure Load Balancer’s layer four, you can load balance any TCP or UDP traffic. So it’s not just restricted to web workloads. And so, for the scheme, when you’re provisioning these solutions, you can say public or internal.
Mike Pfeiffer:
All right. Last thing here and then we’ll jump into the portal. There’s two SKUs, public… Oh, sorry. On the Azure Load Balancer, two SKUs: basic and standard. Basic is the free version and works really well. But if you want to span availability zones for ultimate high availability, the best regional high availability you can get, go with the standard SKU. You do pay for that. But then you can basically target any virtual machine in any availability zone. All right?
Mike Pfeiffer:
Let’s jump into the portal and we’ll build this out from scratch. Let’s jump over here. I’ve cleaned up my environment, so I’m going to step through this manually, one thing at a time. Let’s first build the VNet. So we’ll say virtual network, and then get this new thing. We’ll call this the Webfarm resource group. The reason I’m building this manually is just so you guys can see how all this plays out. So we’ll say Webfarm VNet, and we’ll put this in West US 2 because I know West US 2 has multiple availability zones.
Mike Pfeiffer:
Coming to the next screen here, Tim mentioned this. I too like to design VNets using a /16 because it gives you lots of space for multiple subnets. So it’s a good design pattern. I was just going to go with the default. So the 10.0 network for the entire VNet address space and then just a /24 for the default subnet. So we’ll just go with that. That looks good. As we get into this, I think Tim touched on this. We’ve got DDoS protection at the network level.
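If you’d rather script the same VNet build instead of clicking through the portal, a rough Azure CLI equivalent looks like this. The resource group, VNet, and subnet names are just the ones used in this demo; adjust them to your own naming convention.

```shell
# Create the resource group, then the VNet with the defaults discussed above:
# a /16 address space for the VNet and a /24 "default" subnet
az group create --name Webfarm --location westus2

az network vnet create \
  --resource-group Webfarm \
  --name Webfarm-VNet \
  --location westus2 \
  --address-prefix 10.0.0.0/16 \
  --subnet-name default \
  --subnet-prefix 10.0.0.0/24
```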
Mike Pfeiffer:
The basic tier is on by default, so you’re just getting that for free. You can go to the standard version for DDoS protection, and that will light up all kinds of cool controls and different features like adaptive tuning, notifications, and stuff like that. But you’re getting it no matter what. And then also, Azure Firewall is basically the managed network virtual appliance.
Mike Pfeiffer:
When Tim was showing an architectural diagram of having a VM in a special subnet that was routing all the traffic, sitting between the on-prem world and the backend systems in Azure, forcing the traffic to flow through that network virtual appliance, you can do the same thing with Azure Firewall. So we’ll come back to that. Let’s build the VNet. Then what we’re going to do is build a network security group, and we’re going to pin that network security group to the VNet subnet, just like Tim was talking about.
Mike Pfeiffer:
So let’s create a network security group and drop this in the Webfarm resource group. We’ll call this WEB-NSG and put it in West US 2, and we’ll review and create that. I can tell you guys from my own experience on the AZ-104 beta that I got hammered with questions about NSGs. So this is good work for you to do, some double duty here, and you get to see it again. Here’s the default rules Tim was talking about.
Mike Pfeiffer:
Eventually, what I’m going to want to be able to do is have my load balancer establish a connection to the backend pool. So we’re going to keep this NSG connected to the subnet that the servers are going to be in. So we’ll go to subnets, and we’ll associate this with the Webfarm VNet’s default subnet. And right now, all we’ve got is the default rules. So for connections, I’d have to think about that. If I’m looking at the default rules, there’s AllowVnetInBound and things like that. This is one of those things we’ve got to think about.
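The NSG creation and subnet association above can be scripted the same way. This sketch also adds the inbound port 80 rule the web traffic will end up needing; the rule name and priority are arbitrary choices of mine, not anything the portal forces on you.

```shell
# Create the NSG and pin it to the default subnet of the VNet
az network nsg create --resource-group Webfarm --name WEB-NSG --location westus2

az network vnet subnet update \
  --resource-group Webfarm \
  --vnet-name Webfarm-VNet \
  --name default \
  --network-security-group WEB-NSG

# Explicitly allow HTTP in; a standard-SKU load balancer is "secure by
# default", so without an allow rule like this the web traffic gets dropped
az network nsg rule create \
  --resource-group Webfarm \
  --nsg-name WEB-NSG \
  --name Allow-HTTP \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80
```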
Mike Pfeiffer:
There is a rule there for the Azure Load Balancer. Let me zoom out a little bit. Source: AzureLoadBalancer, destination: any, allow. But we’ll have to validate and check that. So let’s see how this goes, because as we build an Azure Load Balancer and create load balancing rules, that’ll actually open up port 80 on the outside of the load balancer. But what about the inside network? We’re going to come back to this, okay, because I want you guys to see what happens here.
Mike Pfeiffer:
But right now, this NSG is connected to the subnet. So let’s deploy some servers into the subnet, okay? So we’ll go over. And what we’re going to do here is use Ubuntu so it’ll spin up quickly. We’ll put this in the Webfarm resource group. This will be WEB1, in West US 2. Then for infrastructure redundancy, again, Tim showed this yesterday when we were talking about HA and stuff like that. We’re going to pick the availability zone model, and we’re going to put WEB1 in availability zone one. Okay?
Mike Pfeiffer:
Then we’ll just use Ubuntu. I don’t need that much firepower for this guy, so we’ll just do one CPU, two gigs of RAM, and then we’re going to do password-based auth. All right. So down here, this is the interesting part. Once you start going through the portal build, it starts steering you down the road of doing things you might not want it to. In this configuration here, if I’m letting it open ports, I’m configuring an NSG on the NIC. I’m not going to do that, so I’m going to say none. I’m going to control all of my firewall rules with the NSG on the subnet.
Mike Pfeiffer:
So we’ll hit next here. The default disk config is good. Notice that on the networking screen, we’re going into the Webfarm VNet. It just picked that up by default and knows the right subnet. I am not going to put a public IP on here because this is going to sit behind the load balancer. So I’m going to say no public IP, and I’ll show you how we get to this in a little bit, all right? No security group on this NIC. Nothing. Next, I’m going in here. I’m not going to turn on boot diagnostics for these. Just turn that off and let’s go ahead and review and create that first VM. So that’s in availability zone one.
Mike Pfeiffer:
Then the second VM we’ll put in availability zone two. All right. While that one’s running, let’s jump back over, go to Ubuntu, put this guy, WEB2, in that resource group. We’ll say West US 2, infrastructure redundancy, availability zone. Put this guy in availability zone two, same virtual machine image. And of course, as you might suspect, I’d want to be thinking about, how do I get my application code on this server? Is it part of the image? Is it part of a bootstrapping script, like a custom script extension? All that kind of stuff.
Mike Pfeiffer:
I’m going to install the web server manually, and I’m going to show you how we do that here in a minute. So let’s make sure we use the same credentials. I’m going to do the same thing as before: no inbound port configuration. We’re going to rely on the NSG on the subnet. On the networking page, we’re going to turn off the public IP address. Then over here we’ll say no boot diagnostics, to match the other one, and review and create.
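Both VM builds above, no public IP, no NIC-level NSG, one VM per availability zone, can be approximated with the CLI like this. The image alias, size, and credentials are assumptions on my part: Standard_B1ms matches the one CPU / two gigs described, and you should swap in whatever Ubuntu image alias is current plus a real password of your own.

```shell
# WEB1 in zone 1, WEB2 in zone 2; the empty strings tell az vm create to
# skip creating a public IP and a NIC-level NSG
for i in 1 2; do
  az vm create \
    --resource-group Webfarm \
    --name "WEB${i}" \
    --location westus2 \
    --zone "${i}" \
    --image UbuntuLTS \
    --size Standard_B1ms \
    --vnet-name Webfarm-VNet \
    --subnet default \
    --public-ip-address "" \
    --nsg "" \
    --admin-username azureuser \
    --admin-password 'ChangeMe-ThisIsADemo1!'   # placeholder, replace this
done
```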
Mike Pfeiffer:
All right. So I’ve got two systems coming up right now. Let’s go and build the load balancer. So if we come over here, load balancer. By the way, in the marketplace, there are so many images from other vendors, Cisco and F5 and Kemp load balancers. There’s tons of stuff that runs on VMs, if you guys are interested. So let’s put the load balancer in the Webfarm resource group. I’ll show you the process for Application Gateway here in a bit. We’ll say this is our web load balancer. This is going to be in West US 2.
Mike Pfeiffer:
And so, here’s the type. Is it going to be public out on the internet, or is it going to be an internal load balancer? Well, in our scenario, this is going to be a couple of web servers we’re hitting from the internet, so we’ll make it public. For the SKU, remember, the basic version is free. And if we want a zone-redundant configuration, we want standard. And so, notice what it says here when we pick that. When you pick standard, it’s like, “Hey, standard load balancer is secure by default.” This means network security groups are used to explicitly permit and whitelist allowed traffic.
Mike Pfeiffer:
Then this is an important footnote: if you do not have an NSG on the subnet or NIC of your VM resource, traffic is not allowed to reach it. So there’s nothing stopping you from building a VNet and a VM with no NSGs. And if you do that, what they’re saying is it’s not going to work. We already have that, so we’re good. Now, coming down the list, we need to give the public IP address a name. So this is our Webfarm public IP address, if I have a naming convention there.
Mike Pfeiffer:
Now here is the interesting piece: which availability zone do you want to put the load balancer in? One, two, or three? Well, we want to load balance across one and two. So we’re going to configure this to be zone redundant. You’re probably thinking, “Well, how the heck can this thing be in multiple availability zones if those are actually different data centers?” Well, this thing’s got some magic behind it. It’s got interfaces and these different…
Mike Pfeiffer:
This is a managed service, so Microsoft’s doing some stuff to abstract that for you. So let’s go ahead and create the load balancer, and we’ll still have a fair amount of work to do. We need to configure a backend pool. We need to configure probes to make sure the application’s healthy, which won’t work initially because the servers don’t have any code on them. So we’ll figure that out as well. All right. The other thing that’s confusing about Azure Load Balancer, that still comes up on this exam potentially, and it’s been around forever, is this concept of NAT rules.
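Creating the zone-redundant standard load balancer and its public front end from the CLI would look roughly like this. The names are mine, carried over from the earlier sketches; a Standard SKU public IP in a region that has availability zones is zone-redundant by default, which is the “magic” being described above.

```shell
# Standard SKU public IP for the front end (zone-redundant by default
# in regions with availability zones)
az network public-ip create \
  --resource-group Webfarm \
  --name Webfarm-PIP \
  --sku Standard \
  --allocation-method Static

# Standard SKU public load balancer using that IP as its front end
az network lb create \
  --resource-group Webfarm \
  --name WEB-LB \
  --location westus2 \
  --sku Standard \
  --public-ip-address Webfarm-PIP \
  --frontend-ip-name WebFrontEnd \
  --backend-pool-name Webfarm-Backend
```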
Mike Pfeiffer:
Let me just explain this real quick, and I’m going to show this to you live. That was not the topic I was looking for. But basically, there’s this concept of port forwarding, or NAT rules, on the Azure Load Balancer. This is not for application traffic. This is for getting into an individual resource. There was a tutorial article about NAT rules I used to find on the first page of Google, but it’s not showing up now, so I’m going to show you this in practice. Anyways, let me just check this one last thing. Not showing up on here.
Mike Pfeiffer:
Anyways, I’ll show you in the portal. All right. The deployment is complete. Let me head over here. What I was getting at there is load balancing rules, like you see here in the settings, and then there’s these inbound NAT rules. The load balancing rules, of course, are for your application traffic. I want to load balance 80 and 443 for my web workload across all my VMs in the backend pool. Cool. The inbound NAT rules are a way for you to basically poke a hole in the load balancer so you, as an administrator, can get through the load balancer and target an individual backend instance. So you can RDP or SSH directly through the load balancer to a backend instance.
Mike Pfeiffer:
This pattern has been around since the early days of Azure. In fact, when Azure virtual machines first went GA, back in like 2013, I think it was, the default config when you spun up a VM was that it sat behind a load balancer. We had to set up NAT rules so we could RDP in and do stuff. And so, this construct is still around to this day. But we need to set up a backend pool. We need to set up a health probe. We need to set up load balancing rules. And then finally, I’m going to need to set up a NAT rule so I can SSH in and install a web server, because I didn’t have any automation preconfigured. Doing all that will teach us the basics of this stuff.
Mike Pfeiffer:
Let’s go to backend pools and click on add. Also note, based on the conversations yesterday about virtual machine scale sets: when you’re scaling out horizontally, a VM scale set will auto-attach your instances to your load balancer. So I’m doing it manually here, registering these instances with a backend pool, but if you’re doing a scale set, that can be dynamic. The attachment and detachment of your VMs on scale-out and scale-in doesn’t have to be micromanaged by you. So we’ll call this Webfarm backend. The virtual network is our VNet, and we can just pick our virtual machines from the list.
Mike Pfeiffer:
Web one. Notice that we have an IP configuration option. Just like Tim was talking about, we can have VMs with multiple NICs, and those NICs can also have multiple IP configurations. So just like on a machine where you can have a primary and a secondary IP, there are different configurations you can connect to. These servers are pretty vanilla, right? Easy to set this up. So I just pick the only NIC that’s on there. Then notice this thing does work with scale sets. So let’s go ahead and add that.
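Attaching each VM’s NIC to the backend pool is done per IP configuration, which is exactly what the portal is doing behind the scenes. A CLI sketch: the NIC and ipconfig names below follow the defaults `az vm create` generates (VM name plus “VMNic”, “ipconfig” plus VM name), so verify yours with `az network nic list` before running anything like this.

```shell
# Register each VM's NIC IP configuration with the load balancer backend pool
for vm in WEB1 WEB2; do
  az network nic ip-config address-pool add \
    --resource-group Webfarm \
    --nic-name "${vm}VMNic" \
    --ip-config-name "ipconfig${vm}" \
    --lb-name WEB-LB \
    --address-pool Webfarm-Backend
done
```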
Mike Pfeiffer:
One thing I’ll say about Application Gateway as well, and the reason that I didn’t spin that one up as a demo: it does take some time to deploy. It’s not as slow as it used to be, but it’s definitely not as quick as provisioning a load balancer. So I’ll show you the build process for Application Gateway, but since it does take a little bit of time, I’m not going to go through building it from scratch.
Mike Pfeiffer:
The other thing to keep in mind is any time you’re configuring the Azure Load Balancer like this, when I add something to a backend pool, I can’t do anything else. Even though this is an asynchronous process, going over to another menu and trying to do something else while this thing is running generally doesn’t work with this specific resource. But this should be done any second. I hope Azure doesn’t make a liar out of me. It seemed like it was running a little bit slower yesterday afternoon. Let’s see what happens here.
Mike Pfeiffer:
In the meantime, while I’m waiting for that to run, just a couple other things. Once we post all the slides and stuff, like I said, I would go through these in the days and weeks leading up to your test. Just some footnotes here about your backend pool and the SKUs on the Azure Load Balancer: when you’re on standard, that’s up to a thousand Azure VMs in the same VNet, including availability sets, scale sets, or across AZs. The basic tier is just a hundred VMs. That’s something to keep in mind.
Mike Pfeiffer:
Let’s see here. Do I have a better picture about NAT rules in here? I don’t. Multiple front ends is also an option, so you could have multiple front-end IP configurations. You might have different public IPs on a load balancer. With the Application Gateway, that’s also something that’s commonly configured, with multiple host names as well. All right, cool. So we’ve got our backend pool finally. Let’s do a refresh right here. There we go. We’ve got two machines in the backend pool.
Mike Pfeiffer:
Let’s go to health probes, and then we’ll click on add. We do need to make sure that these backend instances are healthy before we ever send any traffic over there. Taking a look here, we can tell Azure, “Hey, every five seconds, go out and make sure that you can get to port 80 on each server in the backend pool. And if that doesn’t work two times, if there’s two consecutive failures, let’s take that sucker out of service and we’ll check again until it’s healthy before we add it back.”
Mike Pfeiffer:
Now, here’s the big difference on this screen. Notice when it’s on TCP, you’re just doing a port ping. If you pick something like HTTP or HTTPS, you can actually check a path on the web server. And this is pretty useful in the world of software development and building applications, where you want a deep health check from a probe. It’s very common to implement the deep health check pattern; it’s not a Microsoft thing. For example, you could say healthcheck.aspx. So you could have a server-side page, maybe this is a .NET application. You can have a server-side script that the probe pings, and the developers can control the response.
Mike Pfeiffer:
So with the health probe, the load balancer, whether it’s Application Gateway or Azure Load Balancer, is expecting an HTTP 200 OK response. If you’re the developer and you want to be able to take the server out of service without having to mess with the Azure configuration or go in and try to rip a server out of the backend pool, in the app config you can just say, whenever we request this page on a certain server, we send a 500 error. That will effectively take the system out of the backend pool, and you’ll be good to go right until the [inaudible 00:26:58] change that config.
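Here’s what both probe styles look like in the CLI: the simple TCP port ping used in this demo, and the HTTP variant that implements the deep health check pattern by probing a path the developers control. The names are mine, the five-second interval and two-failure threshold mirror the portal settings discussed above, and the healthcheck path is the hypothetical example from the transcript.

```shell
# TCP "port ping" probe: check port 80 every 5 seconds; 2 consecutive
# failures pulls the backend out of rotation until it recovers
az network lb probe create \
  --resource-group Webfarm \
  --lb-name WEB-LB \
  --name HealthProbe \
  --protocol Tcp \
  --port 80 \
  --interval 5 \
  --threshold 2

# HTTP alternative: the backend stays in rotation only while this path
# returns a 200 OK (deep health check; the path is a hypothetical example)
# az network lb probe create \
#   --resource-group Webfarm --lb-name WEB-LB --name DeepHealthProbe \
#   --protocol Http --port 80 --path /healthcheck.aspx \
#   --interval 5 --threshold 2
```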
Mike Pfeiffer:
So anyways, I’m just going to do a TCP port ping here for this probe because I don’t have anything installed yet. Let’s let that build… and that’s done. All right. Next step: load balancing rules. When people hit the public IP address on the load balancer, what’s going to happen? Let’s click on add, and we’ll call this rule HTTP. We’ll just stick with IP version four here. Notice our front-end configuration is already defined, because when we created the load balancer, we told it we wanted a public IP. So it set that up for us.
Mike Pfeiffer:
In this configuration, we’re going to listen on port 80 on the outside and target backend port 80 as well. You could do different port mappings here; that would work just fine. If you decided to do 443, that’s cool. You’d just have to have a certificate on the backend servers, because there’s no SSL termination on this layer four load balancer. But anyways, outside and inside are going to match. If we were doing 443 as well, and we did have a certificate, but we also wanted port 80, we would just have two load balancing rules.
Mike Pfeiffer:
All right. And so, coming down here for this rule, for port 80, we’ve got our backend pool configured. It’s using the health probe that we set up. Then down here, session persistence is an option. This can be tricky. Actually, I have a slide on this. Yeah, session persistence. With the Azure Load Balancer, this is a layer four capability, and you’ve got to be very careful with it, because take a look at the diagram down at the bottom. When we’re talking about session persistence in a load balancing world, we’re talking about keeping the user with the same backend server that they started with for the entire lifetime of their session.
Mike Pfeiffer:
So if the client on the left there, on the laptop, hits the load balancer and connects to VM1, and we wanted them to always be on VM1 for that session, well, then we can try to do session persistence, or sticky sessions. But in the layer four world, really the only thing we can base that on is the client’s IP address, or the client’s IP address in combination with the protocol. This isn’t bulletproof, because we all know that NAT is a thing, and there could be a hundred people sitting behind the same internet connection…
Mike Pfeiffer:
Take a corporation. All the people are sitting on the same local network, and when they go to the internet, they’re seen by the outside world as the same client IP address. Well, then you now have giant groups of people being persisted to a backend instance because we can’t uniquely identify them. Session persistence in a layer four world isn’t really 100% practical in every scenario, and you’ve got to watch out for this. A better alternative is just to tell the developers to build stateless applications, where I, as a user, can come through the load balancer and just bounce around to any backend server and the app just works.
Mike Pfeiffer:
As cloud-native computing continues to mature, stateless applications are becoming more and more common. So you can push back on that. If you need better session persistence, Application Gateway can give the user a cookie. If you’re doing a web-based application, you can use that layer seven managed load balancer, and then it’s much easier to deal with, because every user can get a unique cookie specific to them. I know I’m preaching to the choir for a lot of my friends that have joined us from the Exchange world, who have done lots of load balancing over the years.
Mike Pfeiffer:
So anyways, I’m going to leave session persistence set to none. The load balancer uses a 5-tuple hash algorithm, like they’re showing here, to decide how to connect the user to these backend instances. So it’s not round-robin. It’s a combination of the source IP, source port, destination IP, destination port, and protocol. All right. So session persistence is set to none down here. Then you have this option for floating IP, direct server return. I’m not going to get into the weeds of that, but essentially, you would turn this on when you’re doing Always On availability groups and you’re load balancing SQL Server clusters. That’s more of a trivia nuance type of thing.
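The load balancing rule described above, port 80 outside mapped to port 80 inside, default 5-tuple hash distribution, no session persistence, maps to a single CLI call. The resource names match the earlier sketches and are assumptions, not fixed Azure names.

```shell
# HTTP rule: front-end port 80 -> backend port 80, gated by the health
# probe, default 5-tuple hash distribution (i.e., no session persistence)
az network lb rule create \
  --resource-group Webfarm \
  --lb-name WEB-LB \
  --name HTTP \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name WebFrontEnd \
  --backend-pool-name Webfarm-Backend \
  --probe-name HealthProbe \
  --load-distribution Default
```

Swapping `--load-distribution` to `SourceIP` or `SourceIPProtocol` is how you’d get the layer four sticky-session behavior discussed above, with all the NAT caveats that come with it.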
Mike Pfeiffer:
And so, we’re going to go with the defaults there for the load balancing rule. This would basically make it to where we can get to those servers if the web server’s up and running, but there isn’t one yet. So the big question is, how are we going to get Apache installed on these Linux servers? And that’s where NAT rules come into play. Later today we’ll talk about VPNs and stuff like that. We could VPN into the VNet and then just SSH over the private network to these guys and install the web server. But if you need to go through the load balancer, and it’s good to understand this pattern, especially since you might get a question on a test, you can create these NAT rules.
Mike Pfeiffer:
I can create one so that I can SSH to the public address on the load balancer on one port for web one and an alternate port for the other server. I'll show you in a second and it'll make more sense when I draw it out. But think about it this way: there's only going to be one public IP address in this scenario on the outside, and I can't listen on port 22 on the outside for both of my machines. So I'm going to have one IP, two different ports on the outside, that I can ultimately use to get to SSH on these backend servers, if this ever updates.
Mike Pfeiffer:
The other thing that can be slow to update, and if you've ever messed around, you might have noticed, is NSGs. Sometimes you can add an NSG rule to open up some kind of access, and it might take a minute before you see it kick in. So be advised that if you're messing around with NSGs and you create a rule to open up something, if you immediately go and test it, it may not work. You may need to let it burn in for a minute or two. This seems like it should have been gone already. Let me see. Do a hard refresh here. Yeah, there we go.
Mike Pfeiffer:
All right. So let's create a NAT rule. This is going to be for the first server, web one. On the outside, on this public IP address, we're going to basically set up a custom service. What I'll do here is I'll just use port 5000, which you see in the documentation. On this public address, on port 5000, we're going to map those connections to web one on this IP address. Then the default is not going to be what we want. We want to map to port 22 on the backend. I want to SSH to port 5000 and get to port 22 on web server one behind the load balancer. Hopefully that makes sense. So we'll do that for this one. That'll take a second. Then we'll create one for the second server as well.
Mike Pfeiffer:
Then after we've got that rule, we should be able to SSH in, install a web server, and then go and take a look and see if it works through the load balancer. So let's set this one to port 5001 on this public address. Then we'll map this over to virtual machine two, to its private IP address. Then the target port on the backend, if you're playing along at home, is going to be what? 22. Perfect. Now we've got two port forwarding rules, basically. And it's funny because you would think that none of this stuff would work because the health probe would fail and all that kind of stuff.
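For reference, the same two NAT rules can be created with the Azure CLI. This is a sketch with hypothetical names (resource group, load balancer, NIC, and frontend IP configuration are all made up here); the port mappings match what we did in the portal.

```shell
# Hypothetical names: resource group "rg-lab", load balancer "web-lb",
# frontend IP config "LoadBalancerFrontEnd".
# Port 5000 on the public IP forwards to port 22 on web one.
az network lb inbound-nat-rule create \
  --resource-group rg-lab --lb-name web-lb --name ssh-web1 \
  --protocol Tcp --frontend-port 5000 --backend-port 22 \
  --frontend-ip-name LoadBalancerFrontEnd

# Port 5001 forwards to port 22 on web two.
az network lb inbound-nat-rule create \
  --resource-group rg-lab --lb-name web-lb --name ssh-web2 \
  --protocol Tcp --frontend-port 5001 --backend-port 22 \
  --frontend-ip-name LoadBalancerFrontEnd

# Each rule then gets associated with the target VM's NIC IP configuration.
az network nic ip-config inbound-nat-rule add \
  --resource-group rg-lab --nic-name web1-nic --ip-config-name ipconfig1 \
  --inbound-nat-rule ssh-web1 --lb-name web-lb
```

The portal does that last NIC association for you when you pick the target VM in the NAT rule blade.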
Mike Pfeiffer:
That's true for the application traffic, but NAT rules don't care. We can still get through the load balancer, because they're not checking the health. If the servers aren't up, or we can't get to the servers, that's an issue. But these should work. We can go to the public address of this thing. Looking over here, we see the public address. You can even navigate over to the address, and you could move it to another resource later if you needed to. Let's copy the address and then let's see.
Mike Pfeiffer:
Now this may not work, so we have to test this out. So I'm going to SSH sysadmin@publicipaddress. Now I need to do this on an alternate port, so -p 5000. When I hit enter, here's the thing. It's just a flashing cursor. We always say it's DNS when we're troubleshooting issues. The second thing is usually NSGs in the Azure world. So if this isn't working, what do you think that tells us about the NSG?
Mike Pfeiffer:
Let's go look at the NSG. So web NSG is connected to the subnet. There are no NSGs on the NICs; we know that. But we can definitely see that this isn't working, so let's Ctrl+C out of this. In theory, it seems like we need a port 22 rule, if we could ever get this thing to load up. Let's go to inbound rules and add one. Source: any. Destination: any, because it's any destination inside the subnet. Destination port range: 22, TCP. Let's go and add that port rule.
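If you'd rather script that NSG change, here's the Azure CLI equivalent. The names are hypothetical; the rule itself mirrors the portal settings (any source, any destination, TCP 22).

```shell
# Hypothetical names: resource group "rg-lab", NSG "web-nsg".
az network nsg rule create \
  --resource-group rg-lab --nsg-name web-nsg --name allow-ssh \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges 22
```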
Mike Pfeiffer:
Like I said a minute ago, sometimes when you're adding those, it can take a minute to kick in. Even though it says green here, created security rule, sometimes you'll find you go off, you test it, and it still isn't working. You give it another minute or two, then it starts working. So that's something to watch out for. Let's take a look here. Let's clear the screen. Let's try it again. All right. Now we're getting prompted. That's a good sign. Type yes. Password. All right.
Mike Pfeiffer:
And so, now we're on web one. So we'll do a sudo su to root. Then we'll do an apt-get install apache2 -y, so it doesn't ask me for confirmation. Just like when you install IIS on a Windows box and it gives you that default splash page, that's what we're doing here. So while that's cooking, let's open up a new tab here and we'll do SSH sysadmin@publicIPaddress on port 5001. That's going to go to web two. So type yes. All right, cool.
Mike Pfeiffer:
Also, on the note of Linux, I don't know if you guys have heard about this, but somebody at Microsoft put out a statistic a couple months ago where they're like, "Yeah, over 60% of our VMs are Linux at this point." And by the way, at MVP Summit last year, I was sitting in a SQL session right next to Tim Warner, and the SQL team was talking about how they're not doing anything on Windows Server anymore. All their innovations from here on out are going to be on Linux. So I thought that was kind of interesting. apt-get install apache2 -y.
Mike Pfeiffer:
Yeah. And so, if you're thinking now would be a good time to start learning some Linux, you're definitely right. All right. On the first server, Apache is installed. So let's do this. Let's do a nano on /var/www/html/index.html. What I'm going to do here is put the name of the server in there. I'm going to violate the rules of web development and put the header tag at the very top of the page, because I want to be able to see which server I'm hitting when I'm going through the load balancer. We'll save that, and then we'll do the same thing on the other one. This one's almost done.
Mike Pfeiffer:
All right. Clear the screen. We'll do a nano on /var/www/html… Oops, html/index. Same thing again. This is web two. I think I did the right one last time. Ctrl+X. Yes. All right. Let's just double check that. That's not the right one. Sorry, I got lost. I've got too many tabs open here. Let me do another nano. Okay. I said web one there. All right. So everything's cool. I've got a web server running on each server now. And we've got the public IP address on the clipboard. So if we do http://publicipaddress and head over there… it's not working. So what does that tell us? Something's blocking the traffic coming inbound, not on the load balancer itself, but into the subnet.
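The index page edit on each box boils down to dropping a one-line HTML file into Apache's document root. Here's a runnable sketch; on the real VMs the target is /var/www/html and you'd be root, so the document root is parameterized here (a temp directory by default) purely so the sketch runs anywhere.

```shell
# On the VMs the real target is /var/www/html (as root). DOCROOT is
# parameterized so this sketch can run anywhere without sudo.
DOCROOT="${DOCROOT:-$(mktemp -d)}"

# Header tag at the top of the page so we can see which server we hit
# through the load balancer.
cat > "$DOCROOT/index.html" <<'EOF'
<h1>web1</h1>
EOF

cat "$DOCROOT/index.html"
```

On web two you'd write `<h1>web2</h1>` instead, which is the whole trick for telling the backends apart.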
Mike Pfeiffer:
So let's cancel this. Let's go back over and add an NSG rule. Source: any. Destination: any, anything in the subnet in this case, based on the way this is scoped. I'm going to say port 80. And again, adding that and being patient enough to know that it may take a minute to burn in. So let's let that run. But in the real world, with these VMs, I would either have a combination of a custom image and a bootstrapping script, or just a bootstrapping script. We could do the cloud-init bootstrapping script that Tim showed yesterday.
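And the matching CLI call for the HTTP rule, again with hypothetical names, at a different priority so it doesn't collide with the SSH rule:

```shell
# Hypothetical names: resource group "rg-lab", NSG "web-nsg".
az network nsg rule create \
  --resource-group rg-lab --nsg-name web-nsg --name allow-http \
  --priority 110 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges 80
```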
Mike Pfeiffer:
You could do the Custom Script Extension. These VMs could be in an auto scaling group that can be targeted by some other kind of CI/CD system. With auto scaling, typically, there's quite a bit of automation built in, and a lot of times there's a custom image involved and stuff like that. And actually, what I've noticed is people are doing fewer and fewer raw virtual machine deployments, auto scaling groups, virtual machine scale sets, and they're getting into more managed services.
Mike Pfeiffer:
But anyways, that didn't keep my IP address, so let's try it again. All right. There we go. Give it a try, you guys. Let me throw this in the chat. When you hit that URL, you should all get a variety of different results. Some people should get web one, some people should get web two. I'm hitting refresh over and over and I'm just getting web one, because it's not necessarily round-robin, it's that 5-tuple hash on the Azure Load Balancer.
Mike Pfeiffer:
All right. That's the big idea. I'm seeing, okay, some other people are getting web two, so we know this thing's load balancing, and that's good. So let's say that, for me, I'm just stuck on web one. Now I want to see it fail over. So put your architect and administrator hat on for a second. Think about all the things that you've learned over the last two and a half days or whatever. How can I break web one to make it fail over to web two, so that if I hit refresh, I'm going to see web two?
Mike Pfeiffer:
There are a few things we could do. You know at this point, well, you could go to the NIC and put an NSG on the NIC and not open [inaudible 00:42:25]. That would break it. You could shut the machine off. You could terminate it. You could go into the machine and shut down the web service, kill the Apache web server. The more ways you can think of to break stuff, the more you'll be able to troubleshoot later, because you'll be able to reverse engineer that. So I'm always thinking about, how could I break this sucker?
Mike Pfeiffer:
All right. But anyways, if I want to make sure it's going to say web two, I can kill web one by just going over to the web one SSH session here, which is this guy, right? Yeah. On port 5000. And we can do a service apache2 stop. So the service is dead. Come over here, drum roll, hit refresh. Web two. Kind of an anticlimactic demo, but there you go. So let's take a look at this. If we go back over to the portal, create a resource, and then we go to… Let's just look for Application Gateway. You could probably find it under networking over there.
Mike Pfeiffer:
A couple of things to point out about this. Now that you've seen the Azure Load Balancer, you understand the high-level mechanics for the most part. When you're doing Application Gateway, it's very, very similar. You come in, you deploy it, you give it a name, you pick a region, all that fun stuff. There are these different tiers: standard, standard V2, WAF, WAF V2. The Web Application Firewall is insanely interesting when you're doing web-based applications, filtering things like SQL injection and cross-site scripting and all those weird malicious patterns.
Mike Pfeiffer:
If you do the WAF version, you get that capability. There are a couple of different performance tiers, or pricing tiers, here to think about. There's also the concept of auto scaling and these scale units. Basically, Application Gateway, and even Azure Load Balancer, behind the scenes, obviously it's a managed service, but it's built using virtual machine infrastructure and Microsoft's custom networking stack.
Mike Pfeiffer:
Anyways, when you're using this guy, you're probably going big on your deployment. This is, we would assume, a fairly heavy web workload, a production grade kind of load balancer for something that might be servicing thousands of users. So there is this concept of scale units and having machines under the hood powering the Application Gateway.
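For context, a WAF v2 Application Gateway with autoscaling can be stood up from the CLI along these lines. This is a trimmed sketch with hypothetical names, not a complete production deployment; the v2 SKUs take min/max scale-unit capacity instead of a fixed instance count.

```shell
# Hypothetical names throughout; assumes the VNet/subnet and
# public IP already exist in resource group "rg-lab".
az network application-gateway create \
  --resource-group rg-lab --name web-appgw \
  --sku WAF_v2 --min-capacity 2 --max-capacity 10 \
  --vnet-name web-vnet --subnet appgw-subnet \
  --public-ip-address appgw-pip \
  --zones 1 2 3
```

The `--zones` flag is what gives you the zone redundancy mentioned below, and the min/max capacity is the scale-unit autoscaling.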
Mike Pfeiffer:
I know we've got a few AWS people in the house that have been asking questions. Azure Load Balancer and Application Gateway are very similar to the Elastic Load Balancing service. And if you ever had to work with the Elastic Load Balancer back in the day and prewarm the load balancer, you had to open a ticket and get them to do that for you. That was a manual thing.
Mike Pfeiffer:
So now with these systems, and in Azure specifically, when you're working with the Application Gateway, you can configure this yourself. You can set up zone redundancy, you have this concept of scale units, and you can power an Application Gateway for a very large, very busy web app. All right. But I'm not going to go through the rest of configuring this because I only have five minutes. Like I said, you should get the high-level architectural idea based on what I did with the Azure Load Balancer.
Mike Pfeiffer:
The cool thing with this is once I build it, I can upload a certificate to this thing to do SSL termination, path-based routing, multiple backends, all that kind of fun stuff. All right, you guys, that's it. Hope you enjoyed this video. Like I said, this is content from our AZ-104 workshop that we just did a couple of weeks ago. You can watch the replay by going to our website, cloudskills.io/az104. And if you liked this video, please do me a favor, hit the like button, subscribe to our channel here, and I'll see you in the next YouTube video.