
Modernizing .NET Apps for IT Pros Part 4


Hey, how you doing? I'm Elton, and this is
the fourth part in the Docker MTA series, Modernizing .NET Apps for IT Pros. In Part 3 I showed you how deployments and updates work using Docker Trusted
Registry as a private store for Docker images, and running containers in a
Windows VM in Azure. A single server is fine for basic system tests but you need
a more realistic environment for UAT or pre-production testing. In this video
I’m going to deploy my app to a staging environment running on Docker using
multiple servers, for a scalable and highly available cluster. The clustering
technology is built right into Docker – it’s called swarm mode and it scales
from a single server right up to thousands of servers. I’ll show you how
to use Docker for Azure to create the swarm, how to deploy and scale
applications in swarm mode, and how you can add monitoring to your apps so you
can see what's happening inside your containers.

You can create your own Docker swarm in the cloud by starting multiple VMs in the same network and running the `docker swarm init` and `docker swarm join` commands, but it's much easier with Docker Cloud.
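
If you'd rather build the swarm by hand, the commands look something like this; the IP address and token here are placeholders for your own values:

```
# On the first VM: initialize the swarm, advertising the VM's private IP
docker swarm init --advertise-addr 10.0.0.4

# On each additional VM: join using the token printed by the init command
docker swarm join --token <worker-token> 10.0.0.4:2377
```
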
You link Docker Cloud to your Azure subscription or your AWS account, then just click to create a swarm and select your cloud provider. You
fill in the basic details, like the region, the number of swarm manager nodes,
and the number of worker nodes, and the VM sizes – and then Docker Cloud creates
the swarm for you. Docker Cloud provisions a consistent
Docker environment no matter which provider you use, making the best use of
the features from that provider. Docker for Azure creates a virtual machine
scale set for the manager nodes and another one for the worker nodes. You can
manage capacity just by sizing the scale sets. Right now Docker for Azure only
provisions Linux nodes. Windows support is coming soon, but I’ll show you how to
add your own Windows nodes. I’ve created three VMs from the Windows Server with
Containers machine image, which I’ll add to the swarm. Docker Cloud makes it easy
to deploy a highly available swarm running Docker, and it also makes it easy
to manage that swarm from your local machine. Docker for Windows and Docker
for Mac integrate with Docker Cloud. I’ve signed in with my account and I can
see the swarms that I have running in Azure. I just click to manage the swarm,
and that opens a new window where the Docker command line is connected to the
swarm. I can run `docker node ls` locally and
see all the nodes in my remote swarm. These are all Linux nodes, and to join my
Windows nodes I need the secret token. `docker swarm join-token worker` shows me the command to join a new worker node to the swarm, and I can run that on my Windows VMs.
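
Running that on a manager prints the full join command, along these lines; the token and address are placeholders for the real output:

```
docker swarm join-token worker
# To add a worker to this swarm, run the following command:
#
#     docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377
```
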
Any node can join this swarm, as long as it has Docker installed and has a network connection to the other nodes. My Windows nodes are all joined now. I have three managers, one Linux worker
and three Windows workers. Worker 2 is a special node: it has more memory than the other VMs. You can add labels to nodes in Docker swarm to distinguish hardware like this, and I'm adding a label to say that this node has extra memory.
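
That step is a one-liner; the node name and the label key and value are my own choices for this sketch:

```
# Label the node that has extra RAM
docker node update --label-add memory=high worker2
```
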
You can deploy to swarm mode using commands to create services, or you can use a Docker Compose file. This is the compose file from the last video, with some extra configuration settings. The constraint tells Docker to run these
services as containers on Windows nodes. They’re all Windows images and this is a
shortcut so the swarm won’t try to schedule them on Linux nodes. Windows
doesn’t have all the networking features that Linux has and Docker has different
networking modes to support that. I’m using DNS round-robin for services to
locate each other, and in the web app I’m publishing port 8090 from the container
to port 80 on the host. The database service is using a volume for storage.
That means the SQL Server data and the log files are saved outside of the
container, directly on the host. The extra constraint here means the database container will always run on the node with extra RAM. This means the database is resilient to failure: if the container fails the data is still there, and when a new container starts it will attach the existing data files.
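
As a rough sketch, the database settings described here could also be expressed with a service command; the service name, volume name, mount path, registry address and image tag below are all placeholders, not the actual values from this series:

```
# Run the database only on Windows nodes, and only on the node with the
# extra-memory label; keep the data in a named volume on the host
docker service create `
  --name app-db `
  --constraint 'node.platform.os == windows' `
  --constraint 'node.labels.memory == high' `
  --endpoint-mode dnsrr `
  --mount 'type=volume,source=db-data,target=C:\mssql' `
  registry.example.com/app-db:v3
```
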
A distributed application like this, running in multiple containers, is called a stack in swarm mode, and I deploy it on the swarm with `docker stack deploy`, specifying the
path to the compose file and a name for the stack. The images are on my private
registry, and the `--with-registry-auth` flag means the worker nodes can use the credentials from the manager to pull private images.
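
The deployment itself looks something like this; the compose file name and stack name are placeholders:

```
# Deploy (or later, update) the stack from the compose file
docker stack deploy -c docker-stack.yml --with-registry-auth mta-app
```
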
The swarm managers schedule work to run on the worker nodes, and the nodes will pull all the
images they need and start containers. Docker for Azure creates a load
balancer in front of the scale set for the worker nodes and I’ve done the same
for my Windows nodes, using a health probe so traffic only
goes to the nodes which have containers listening on that port. I’ll browse to
the public DNS for the load balancer and here’s my application. This is the same
v3 image that I’ve used before so it’s the exact same application package that
I've tested locally, and on my single server in Azure. I've just got one
container hosting my app so the load balancer will send all traffic to that
node. In swarm mode you can scale services and have multiple containers
running, which are replicas of the application image, all handling traffic.
`docker service update` with a new replica value scales the web application to use three containers.
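
Assuming the service was created as part of a stack, so its name is <stack>_<service>, the command looks like this; the service name is a placeholder:

```
# Scale the web service out to three replicas
docker service update --replicas 3 mta-app_web
```
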
Docker swarm spreads containers across nodes, so I'll end up with one container on each Windows node. Any worker could get incoming requests
from the load balancer now, and the container on the node will respond. When
I refresh the site it works in the same way, and any one of the containers on the
Windows nodes could be handling the request. Services can run across many
containers, but you can still check the logs with `docker service logs`. That shows the replica ID with each log entry, so you can see the logs from all containers.
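
For example, something along these lines streams the logs from every replica; the service name is again a placeholder:

```
# Follow the logs from all replicas; each line is prefixed with the
# replica that wrote it
docker service logs --follow mta-app_web
```
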
That’s useful for debugging when you have a problem, but for ordinary
monitoring it’s better to have instrumentation in the containers. IIS
applications write a lot of metrics to Windows Performance Counters. That still
happens when you’re running in Docker, and you can see those metrics if you
export them from the container. Docker has a huge ecosystem of partner and
community products including a very popular instrumentation server called
Prometheus. You run Prometheus in a container, and
use it to poll other containers and collect their metrics. I’ve updated my
web application Dockerfile to publish Performance Counters as Prometheus
metrics. I’m using another Docker image from a sample project as the source for
an application which reads Performance Counters and publishes them as
Prometheus metrics. This is the setup for that exporter
program, and I’m using a config file with all the Performance Counters that I
want to capture. These are the main .NET and IIS counters. I’ve also
changed the startup command so the exporter app is running in the
background when you run a container, but I haven't changed the website at all.
This is still the same code that came from the Windows Server 2003 VM. I’ll build this
as version 4 of my app image and when I run it locally the app still works in
the same way, but there’s an additional endpoint I can browse to which shows me
the Performance Counter values as metrics, like the number of requests per second. This is the endpoint that Prometheus will monitor. Prometheus uses a simple configuration file to specify endpoints, and in this setup I'm checking the web app every 5 seconds.
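
A minimal configuration along those lines might look like this; the five-second interval matches this setup, but the job name, target DNS name and metrics port are assumptions for this sketch:

```
# Write a minimal prometheus.yml; Prometheus polls the web service
# through its swarm DNS name
@'
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: web
    static_configs:
      - targets:
        - app:50505
'@ | Set-Content prometheus.yml
```
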
In the compose file I've updated the web application to use v4, and added a service to run Prometheus. Normally you
wouldn’t make your metrics publicly available, but I’m doing that just for
this demo. I’m still connected to my swarm so I’ll run `docker stack deploy`
again, which updates the stack and brings it into line with the new compose file.
This update is a zero-downtime deployment, because Docker updates the
web application one container at a time, and the load balancer won’t send traffic
into a node that's being updated. When I refresh the website during the update
there’s no loss of service. This update also adds the Prometheus server so I can
browse to port 9090 and see the metrics being collected. This is the number of
requests currently being handled. I'll start some load running with a PowerShell loop, and in the graph view you can see that the load on the website changes over time.
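
The loop can be as simple as this; the URL is a placeholder for the load balancer's public DNS name:

```
# Request the home page in a loop to put some load on the site
while ($true) {
  Invoke-WebRequest -UseBasicParsing 'http://<load-balancer-dns>/' | Out-Null
  Start-Sleep -Milliseconds 500
}
```
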
You can do a lot more with monitoring in Docker, and Prometheus is just one option, but this is a simple way to get insight into your applications on a modern platform, without having to change code.

At the start of this video my app was
running in a single server in the cloud, which is fine for a basic test
environment. But now my app is running across multiple servers in a Docker
swarm which I created with Docker for Azure, and which I can manage and extend
using ordinary Azure resources. I’ve deployed my app as a set of highly
available services in the swarm, so if there is any interruption to my VMs the
app will keep running correctly. High availability works for application
updates too. I added monitoring to my web application by exporting the Windows
Performance Counters, and deployed a new version of the app with zero downtime. I
touched briefly on instrumentation, and running in Docker makes a huge
difference to how you administer and manage traditional applications. Every
application in a container has the same shape, and you can manage ten-year-old
.NET applications and brand-new Node.js applications on the same Docker swarm,
using the same tools. In the next video I’ll show you how production looks in
Docker – and it’s the same in the cloud and in the datacenter. I’ll run my app
on Docker Universal Control Plane, which is an enterprise-grade Containers-as-a-Service product built on top of Docker swarm. I'll focus on management
and security, showing you how role-based access control and content trust enable
you to have a secure software supply chain. That’s coming in the final part of
Modernizing .NET Apps for IT Pros.
