Dapr, Rudr, OAM | Mark Russinovich presents next gen app development & deployment (Microsoft Ignite)

(upbeat fast-paced music) (people applauding) – Hello, everyone, and welcome back to Microsoft Mechanics Live. Coming up, we’re joined by
Azure CTO, Mark Russinovich, to look at the future of
application development and provisioning, from the
ability to build portable apps that work across cloud providers and your on-premises
infrastructure, and even devices at the edge with
something new called Rudr, our implementation of the
new Open Application Model, to leveraging event-driven, microservices-based runtime building blocks to execute your apps, with the new Distributed Application Runtime, a.k.a. Dapr, which works with any programming language. And to help get started
with this, please join me in welcoming back Mark Russinovich. (people applauding) – Thanks, Jeremy, great to be back. (people applauding) – [Jeremy] Thank you. All right, so you’re doing
all this work around building and deploying apps with
both Dapr and Rudr. What are we solving for with this, in terms of evolving
application development, and kinda what’s the go
forward direction here? – Yeah, so, the goal of both of our projects is to make it easier for developers to create cloud-native applications. And not just cloud applications
that run in the cloud, but cloud-native applications
that can run anywhere, on-prem, on the edge, or in the cloud. – Right. Why don’t we start with deployment. I’ve got it in my Twitter handle, it’s one of my favorite topics. There are a lot of attempts,
I think, in the industry in the past, and people have
probably seen a few of these, where we try to get
applications to run anywhere. But what’s different now,
in terms of being able to run them across different clouds, and make sure they work
wherever you run them? – Yeah, so, one of the key developments in the last few years is
the ubiquity of Kubernetes. Kubernetes works on-premises,
it works in the cloud, you’ve got Managed Kubernetes Services like Azure Kubernetes Service, and ones on other clouds, as well. But, that said, there’s kind of a gap in the Kubernetes stack. It is infrastructure focused. And so, that means when
somebody’s creating a deployment for
Kubernetes, they’re mixing application developer concerns
in with application operator concerns and infrastructure operator concerns. And so, as a developer authoring
one of these manifests, you’ve got to know all of
these different concepts related to the infrastructure,
which distract you from your core goal, which is
to define the application. – Okay, so we built something called Rudr. How do we make this better? – So Rudr is an implementation of a specification for
describing applications called the Open Application Model. And that model is intended to be portable, and also to separate the different roles that are involved with
creating, deploying, and operating an application
or an infrastructure. The goal, again, for OAM, is to support an application model that can target different infrastructures, and Rudr happens to be the implementation of OAM for Kubernetes. – It’ll work both in, again,
public cloud or on-premises. How does this, I would imagine,
how does this actually work, in terms of being able
to separate this across the different platforms or infras, like we see here with Azure,
AWS, Alibaba, et cetera, that’ll just work?
– Yeah, exactly. So one of the patterns
that we see, of course, is the full stack developer,
the one that’s doing DevOps, creating the application,
deploying it and operating it. But we also see other patterns, as well. We see the one where there’s
a different operations team, or (mumbles) team that’s
operating the application, so the developer creates
it and hands it off. We also want to support
a marketplace scenario, where there’s a marketplace
of applications, or a service catalog for
an internal organization. – Right, so I can imagine
that, in the future, you might have something
like an app marketplace. We’ve got marketplaces
on Azure, obviously, but imagine one that really
went across any infrastructure. You can pick the app
that you want it to run. You can make sure that it
could run in Azure, AWS, and we’ve got your Alibaba,
other clouds, on-premises, on a Raspberry Pi device,
all of these things would work, right?
– Exactly. – [Jeremy] How do developers,
then, in organizations actually get on board with
using the model itself? – [Mark] Well, like I said,
there’s different roles involved with the creation of the application. There’s the developer,
the application operator, and the infrastructure operator. – Right, so why don’t we
start with the developer. What does a developer do in this case? – So the developer’s the one
that would author a manifest. In the case of Rudr, it’s
a YAML manifest in OAM, which gives them a
consistent model to describe the different characteristics
of the microservices, or components that make
up their application. Things like which ones are the back-end, which ones are the front-end,
which container images they reference, whether
they need a load balancer, what their auto-scaling needs are, what their health signals are, and so on, that are developer concerns. – Right, and this is good,
because as a developer, you don’t have to know Kubernetes, or be completely fluent
now to write all of that. You don’t have to even
target the infrastructure that you want, at all. – That’s right, you just
specify the concerns from an application perspective. – [Jeremy] All right, so,
why don’t you talk us through what the app operator would do. – So again, when you get this delineation of different roles, the
application operator’s responsible for taking that application
description, and then specifying the parameters and
behaviors that are specific to the particular instance that they want. So they might have one
that they’re deploying in one production environment
that is small scale. They might have another one
in a larger environment. They’re specifying things
like what identity, what ports to use, what
auto-scaling rules, whether the things
should scale up and down, and how far it should scale. – [Jeremy] Okay, so, what about
the infrastructure operator? What does their role then
look like in this case? – So, finally, the infrastructure operator is responsible for, of course, setting up and managing the infrastructure. In the case of a Kubernetes cluster, they would be responsible for setting up the ingress controller, or
controllers that are available in the cluster.
– Right. – The service mesh that’s
there, the auto-scalers. In the case of a public
cloud platform like Azure, Azure takes care of it. We specify the load balancer. We specify the other
infrastructure services, and manage them ourselves. – Okay, so the infrastructure operator is then kind of making it real
on their specific platform. In other words, I can see
how this might come together where, effectively, the
developer would say they want an ingress solution along
with a couple of other traits. The app ops team would
then kind of figure out what they want to specify from parameters, maybe more specific things, like the different SSL requirements or ports that they want open. Then the infrastructure
ops team kind of determines the specific services
that they want to use. Like, they might want to
use NGINX, for example, if it’s on-prem. But if you’re an infrastructure operator that runs Azure, you might want to use the Azure load balancer.
– That’s right. – All right, so it’s super
clear delineation of roles, so that operators really don’t
have to make guesses then about what the developer
would have in mind. That means, truly, that we
can finally write one app that kind of works everywhere. And they don’t need to
worry about infrastructure. Can you show us what this is
like in action, showing Rudr? – Yeah, absolutely. I’ve got here a very simple application, it consists of a front and a back-end. What it’s gonna do is
show us a bunch of cubes with the front-end, the
back-end managing the position of those cubes, the front-end
displaying them in 3D. – [Jeremy] Okay. – Here’s the code for the
back-end, it’s written in Go. I’ve got the code for the
front-end, also written in Go. – All right.
– And then I’ve got the application manifest
here, that describes how they relate to each other, as well as the parameters that a
developer would define, for an application operator to make concrete when they deploy it. – Got it.
– We’ve got two of these component schematics, each one to describe the different roles. Here’s the front-end role. You can see down here in the spec that we’ve got some parameters. So, this is where the developer said, hey, I’m allowing this thing
to be parameterized. And one of the parameters
they’re allowing to specify is a texture, and that
texture will be displayed by the front-end on the spinning cubes. – Okay.
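A component schematic along these lines gives a feel for what the developer authors. This is only a sketch: the image name is invented, and the fields approximate the Rudr-era OAM v1alpha1 schema rather than being taken from the demo:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: frontend
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: frontend
      image: example/cube-frontend:latest   # illustrative image name
      ports:
        - name: http
          containerPort: 8080
  parameters:
    - name: texture   # the parameter the operator later sets per environment
      type: string
      required: false
```

Everything here is a developer concern: what the component is, what it runs, and which knobs it exposes. Nothing says where it will be deployed.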
– Now, on the front-end, also, here you can see that OAM or Rudr takes those parameters and plumbs them into environment variables, so the code can just simply read them. – [Jeremy] Got it. – Then here on the back-end, you can see that the back-end
has some configuration for the ports that its
gonna be listening on, for the front-end to talk to it. – [Jeremy] Okay. So now, if we want to then configure this to run on another couple
of different environments, how does that look? – Yeah, so, I’m gonna go ahead and run this deployment script. This deployment script is gonna take that application manifest that
references those components, those front and back-end
that are containerized, and deploy them to three
different environments. – [Jeremy] Okay. – The first one is gonna be
Azure Kubernetes Service, or AKS, and here’s the
application operator’s configuration file, where
they are deploying it to AKS. And now, here’s where they’re specifying that they want a value,
AKS, for that texture. So what that’s gonna cause,
is the code to go read an AKS image, and
display that on the cubes. – [Jeremy] Got it. – And then, you can see
that here’s a trait. This is where they’re
saying, hey, for ingress, whatever ingress controller the infrastructure has, I want to use this domain name as the domain name for the application. – [Jeremy] Got it, okay. – Similarly, here we can see
another configuration file. This one is for EKS, which is the AWS Managed Kubernetes Service. So we’re gonna have it pick a texture that represents the EKS logo. And also use a domain name
that shows us that it’s in EKS. – [Jeremy] Right, but
otherwise it’s identical, the important part is
it’s, basically, identical to the AKS one.
– That’s right, yeah. And then, finally, I’ve got
this Raspberry Pi cluster here. This is to demonstrate
cloud and edge on-prem. – [Jeremy] Right. – I’ve got another
configuration file for this one. We’re gonna specify a
Pi logo as the texture. And then, we’re gonna specify
Pi as the domain name. Those deployments have finished, I’m gonna go to the browser now. And I’ve got three
browser pages open. One for each of these,
so this is the AKS one. I’m gonna refresh. And we’ve got our spinning
cubes starting up here with the Azure logos we can see. – [Jeremy] Nice. – And then, this is on
Elastic Kubernetes Service. And we can see the AWS EKS logo. And then, finally, right
here on these Raspberry Pis, (crowd conversing in background) there we go. – [Jeremy] Right, so you
basically deployed an app to three different platforms, effectively, with very, very minor changes. So everything’s consistent
across all of them, and this looks like a super-useful app. – [Mark] It’s actually more than it looks. I can actually zoom in, and out, and rotate.
(person laughing) – [Jeremy] Awesome, so
the app operator then, in this case.
– Yeah. – Basically, they wrote their manifest, they defined the required
environment variables, ingress traits, the host names. That means the developer
doesn’t need to know the difference between these
three different deployments. Also, it helps the delineation then, in terms of application
roles, the portability. Now, you mentioned removing
programming complexity. I think that’s another part, another kind of chapter in this book. What’s that about? – Yeah, so what we just talked
about was describing the app and deploying it,
managing its lifecycle. The other project we announced is Dapr, Distributed Application
Runtime, which is aimed at making it easy for
enterprise developers to create distributed microservice-based
cloud-native applications. This is how you’d write your application. One of the things that we realized is that there are primarily two programming models, or patterns, that actually take away a lot of the work that an enterprise developer would have to worry about, and put that into the framework. One of them is the functions model. So, functions as a service: stateless, event-driven microservices. – Right.
– Which the platform will scale out, handle retries and all of the kind of
resilience that you want. If one fails, it will
automatically restart it. – Yeah.
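The functions pattern Mark describes can be sketched as a plain stateless handler. The event shape and names below are illustrative, not from the demo:

```python
# A sketch of the functions pattern: a stateless, event-driven handler.
# A real FaaS runtime delivers the event and owns scale-out, retries,
# and restarts; the handler itself keeps no state between calls.
def handle_order_event(event: dict) -> dict:
    """Pure, stateless handler: everything it needs arrives in the event."""
    # Because no instance state is kept, the platform can freely restart
    # this function or run many copies of it in parallel.
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": total}
```

Statelessness is exactly what lets the platform provide the resilience Mark mentions: any instance can handle any event.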
– The other one is for stateful microservices,
and it’s the actor pattern, or actor model. That one is for when
you’ve got a microservice where you want some state associated with a particular instance, and for the state to be tightly bound to the compute; it’s kind of like an object-oriented microservice. – [Jeremy] Right. – That also takes care of
a lot of this plumbing, and work that an enterprise developer would have to normally worry about. – Dapr then, it’s something
new that basically provides all these additional services
here that you can leverage. For example, to be able to call into the various services and
things that you need to have as a part of your app, right?
– That’s right. So Dapr provides a bunch
of building blocks, building blocks that support
the functions pattern, the actor pattern. It overcomes, through
a sidecar architecture, a limitation of the existing implementations: the frameworks that exist today are very language-integrated. So, for example, most actor
models only support C# and Java. But with Dapr’s runtime building blocks, you can implement actor
models in any language, if you program to Dapr directly. Or you can use a language-integrated model on top of it, if you want to.
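As a sketch of what "programming to Dapr directly" looks like, here is the shape of a service-invocation call through the sidecar over plain HTTP. The port is Dapr's default sidecar port; the app ID and method name are made up for illustration:

```python
import json
import urllib.request

# The Dapr sidecar listens next to the app on localhost; 3500 is the
# default HTTP port the Dapr CLI assigns.
DAPR_BASE = "http://localhost:3500/v1.0"

def invoke_url(app_id: str, method: str) -> str:
    """Build the sidecar URL for calling a method on another service."""
    return f"{DAPR_BASE}/invoke/{app_id}/method/{method}"

def invoke(app_id: str, method: str, payload: dict) -> bytes:
    """POST a JSON payload through the sidecar. This only succeeds when a
    Dapr sidecar is actually running on this machine."""
    req = urllib.request.Request(
        invoke_url(app_id, method),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Because the contract is just HTTP on localhost, any language with an HTTP client can use the same building blocks, which is the point Mark makes about language independence.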
– Right. So why don’t we talk about an example of how this might work. I’ll show a current way of doing things. So let’s say, for example, an application that’s running at the edge. It might be a retail store, for example. And the application needs
to track store inventory, it needs to allow customers to be able to check-out at registers. So it needs a front-end, it
needs to be able to scan items, and really communicate
with other services. So then, as you develop an app like this, there might actually be considerations, challenges that you need to worry about. Think common things, like,
where do you save state? How do you configure your queues? How do you talk to or
invoke other services? How do I activate actors
for different workloads? As a developer, I spend a lot
of time then thinking about the plumbing, the different
services I need to invoke, all these different kinds of bespoke things, instead of actually coding the app. But, how does this change then with Dapr? – Yeah, so before Dapr, you’d
have to actually take care of all of the responsibilities yourself, which typically meant
writing a lot of custom code, or leveraging a bunch of different SDKs, having to learn each one of them, to put that kind of system together. Now, with Dapr, you’re
basically requesting these kinds of services or capabilities
directly from localhost. Because, through the sidecar pattern, through HTTP, or gRPC if you need higher performance, you just ask Dapr for the
functionality, the service, and you get it. – Right, so can you show us, then, what it would look like to actually build a Dapr-based application,
given that all the, as a developer, you’re kind of giving Dapr the connection info,
maybe, like end points. But not all the different, kind of, all the drilled-in resource bindings and those types of things, right? – Yeah, so one of the goals of Dapr is to make it completely
open, so you can leverage the backing services you want to. So, in the case of, for
example, state management, you can plug in a state store. It might be Cosmos DB in Azure, it might be Redis on-premises. That way, you can leverage the technologies you want in the
cloud environment you’re in, or the on-prem environment you’re in. – All right, so I’d love
to see how this can work, in terms of building out
one of these applications. One thing that I love
about this is, basically, like having your own in-house concierge. You’re writing to localhost, but it’s kind of getting rid of all the different things that you’d have to normally worry about. Because it’s kind of, again, saying, okay, you need cache? We’re gonna give that to you. You need this service? We’re gonna give that to you, as well. It actually attends to all of your different needs, kind of like a butler, in a way. Hence the hat in the logo, right? – Yeah.
– Ah! (chuckles) That said, why don’t you give
us an example, and show us what it looks like to actually
build out a Dapr-based app. – Sure, so, I’ve got an example
of that retail app scenario you described earlier. I’ve got four different
microservices in this app. One of them is this
check-out microservice. You can see it’s written in Python. It’s using the Flask web framework. It talks to Dapr, again, through HTTP, and leverages Flask to do the routing for it into the app. And you can see that it’s got
several different methods. Let’s go take a look at the
methods that it implements. One of them is add item to cart. So, this would be when
you’re walking around and you’re saying I want
one of these, one of those. You can see that product and the quantity, the number that you want,
come in as parameters. Now, what this microservice
is doing is updating a cash register here, a register actor, with a register ID that it’s mapped to, calling it at add items method, and passing it the change in the state, and the product quantity. You can see there’s a get cart. Again, it’s gonna be using
the local Dapr state store to call in to the register actor and retrieve that state. And then, finally, check-out. Again, it’s calling the register
actor to do that check-out, and say I want all these items. – [Jeremy] Got it. – Now that, let’s turn our attention to the second microservice,
the register microservice, which is implemented using the C# SDK for actors on top of Dapr. – Okay.
– Those of you that are familiar with Service
Fabric reliable actors, one of the other actor frameworks that are out there, – Right. – will see that the shape of this interface looks very familiar. In fact, all we have to do, if you’ve got a Service Fabric reliable actor, is change the namespace right there
from Microsoft.ServiceFabric to Dapr, and your code
will just work in Dapr. – [Jeremy] All right, okay, nice. – So let’s take a look
at an implementation of that interface here. Here’s the register actor
we were just talking about. And you can see here’s the get cart API. It’s calling the SDK’s
local get state service, which is a Dapr provided service. – Yeah.
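The same get/save-state calls can be made from any language by hitting the sidecar's state endpoint directly. A sketch, assuming Dapr's default port and the early /v1.0/state path (later Dapr releases add the state-store name to the path), with the keys invented for illustration:

```python
import json
import urllib.request

STATE_BASE = "http://localhost:3500/v1.0/state"  # default sidecar port

def save_state_body(key, value) -> str:
    """Dapr's save-state API takes a JSON array of key/value documents."""
    return json.dumps([{"key": key, "value": value}])

def save_state(key, value):
    """POST state through the sidecar; Dapr writes it to whichever state
    store component is configured (Redis, Cosmos DB, ...)."""
    req = urllib.request.Request(
        STATE_BASE,
        data=save_state_body(key, value).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

def state_url(key: str) -> str:
    """A GET on this URL returns the stored value for the key."""
    return f"{STATE_BASE}/{key}"
```

The app never learns which store is behind the endpoint; that binding is the operator's choice, as the deployment demo shows later.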
– Here’s the check-out. Now, the interesting thing about this check-out is that it’s also gonna be notifying
an inventory microservice about the fact that these items are now being checked out of the store. So the inventory service
can update the inventory. – [Jeremy] Okay. – Then there’s the save data, which is how that serializes its
state, again, using Dapr. Finally, the inventory microservice here, also a Flask microservice. This one, as we already saw, is the one the register microservice is gonna call. So here’s the get inventory
for a particular product. And it’s gonna call in
to the Dapr state store, where it’s just saying,
Dapr, you take care of saving the information,
I’m gonna retrieve it here. Here’s where it’s setting inventory. So this is where the register
is saying, for this product, add or subtract this quantity. You can see the math for that right there. You can see if the
quantity is less than five, calls this function, notify inventory, we’ll take a look at in a second. And then here’s where you
can see it’s putting together the inventory and saying
to Dapr, hey, go save that. Here’s the state API for Dapr. Just you go take care of the
details, and how that’s saved. And then, here’s the notify
inventory function we just saw. Here’s the Dapr URL. It’s using now Dapr’s pub/sub capability. Just saying, hey, Dapr,
you let anybody know that wants to know about
an inventory change, where inventory is low, that
it’s low for this product, and here’s how much is left. And so Dapr, then, just by this HTTP post, Dapr goes and takes care of that. – [Jeremy] Okay. – Now, what I’m gonna do
is run this all locally. Dapr can work anywhere, it can
work on this local dev box. This run command’s gonna use the Dapr CLI to launch those four microservices. And now, I’ve got them running locally. So, you can use Dapr capabilities, whether you’re building a single service, or a distributed app, all of
these are available for use. Now, let me open up the browser. It should be started here. You can see I’ve got a
local host end point here. So this is where Dapr is
launching those microservices, and this is the front-end for the store. We didn’t look at the code for that, but this is where somebody,
the admin would see, hey, here’s my inventory. – Okay.
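The notify-inventory call walked through a moment ago boils down to one more HTTP POST to the sidecar. A sketch of that pub/sub publish, with the topic name and payload shape invented for illustration (the publish path has also changed across Dapr versions):

```python
import json
import urllib.request

PUBLISH_BASE = "http://localhost:3500/v1.0/publish"  # default sidecar port

def low_stock_event(product: str, remaining: int) -> dict:
    """Payload an inventory service might publish when stock runs low."""
    return {"product": product, "remaining": remaining}

def publish(topic: str, event: dict):
    """POST to the sidecar's publish endpoint; Dapr fans the event out to
    every subscriber of the topic, whatever language they're written in."""
    req = urllib.request.Request(
        f"{PUBLISH_BASE}/{topic}",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

The publisher never knows who is subscribed; Dapr and the configured message broker take care of delivery.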
– And here’s the check-out app microservice. So this is where I can
scan banana, scan apple, and check-out. And that would update the
inventory, as we saw in that flow. – Got it, so now, we’ve seen kind of how we can build out the
application, calling all the different services from
a local host from Dapr. And now, we see, but, we’ve got also Rudr, it’s about deploying apps, as well. Without changing the platforms, can we bring these two things together, kind of like peanut butter and jelly, or peanut butter and chocolate,
to have not only an app built in a way that will kind of work anywhere, but also deploy it anywhere? – Yeah, so I wanna emphasize, they are two different projects, you can use them completely separately. You can use Dapr in a project,
and not use Rudr or OAM. In another case, you can use
OAM and Rudr, but not use Dapr. But, they obviously can go great together, because you can describe the
structure of the app using OAM, and then deploy onto
Kubernetes using Rudr. But, you can implement the microservices using Dapr capabilities. – [Jeremy] Okay, let’s see it. – What I’m gonna do, is show
you that I’ve got this app now wrapped up in an OAM manifest,
an application manifest. – [Jeremy] Okay. – There’s gonna be four microservices. You can see if I search
for a component schematic, there’s four of them. One for each of those
microservices I talked about. You can see parameters here. This describes the full application. Now, I want to deploy to
two different environments. And this might be reflecting
a real-world case where I’m gonna do my dev test up
in Azure Kubernetes Service. But I’m gonna do my prod here,
because this is in my store, so I’m gonna deploy the
prod environment here. – Okay.
– If I go take a look at the config file for my dev
test environment, you can see that I’ve got an environment
variable parameter here, dev, that the app can read out and know that it’s the dev instance. And then at the bottom,
you can see that I’ve got some metadata here, which is the binding for the state store. When the app is in Azure, I’m gonna use Cosmos DB. So those state store APIs I saw, – Yeah.
– Dapr’s gonna use a Cosmos DB.
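The Cosmos DB versus Redis swap comes down to pointing the same named state-store component at a different backing type. A sketch, with field names approximating Dapr's component schema of the time and the connection details left as placeholders:

```yaml
# Dev/test in Azure: back the "statestore" component with Cosmos DB.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  metadata:
    - name: url
      value: "<cosmos-endpoint>"   # placeholder, not a real endpoint
---
# Prod on the Raspberry Pi cluster: same component name, Redis behind it.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
    - name: redisHost
      value: "localhost:6379"
```

Because the component name the app asks for doesn't change, the application code is untouched when the operator swaps stores.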
– Makes sense, yeah. – Now, when I deploy it to prod, though, I’m gonna want it to know
that it’s the prod version, – [Jeremy] Okay. – And I’m also going to– – Use Redis, it looks like.
– Why not use Redis, because this is what
I’m gonna get on-prem. So I’m gonna kick those
off, those deployments. AKS, and there’s the Pi. And then, in a few seconds here, I’ve got some URLs here, there’s the dev URL, so this is gonna be up in Azure Kubernetes Service. Let’s see if it’s up and ready. And there we go, same portal. (crowd conversing in background) And so, what Dapr has done
with some of the metadata, is inject its sidecar into those Kubernetes pods as they got deployed. And now, I’ve got (crowd conversing in background) the app, again, local dev, one box, if I want to deploy it there. Or in Kubernetes, using OAM and Dapr. And so, this thing is now portable across different environments. – Awesome, so I’ll have super-portability. It’s really a game-changer,
I think, in terms of being able to
delineate the right roles, and have anything run, whether
it’s local, in another cloud, in your on-premises data center,
all these different things are enabled through this, very cool stuff. – Yeah, you’ve got these two
new cools at your disposal. One’s really kind of helping the developer
interact with an IT pro that’s managing infrastructure. The other one’s, of
course, developer-oriented. I think Dapr really is revolutionary in cloud-native programming. – Right, and it’s a great
first look into what’s coming for developers, for IT, and for cloud-native applications. But, how can folks get started? – So both of them, like I said, are open-source projects. They’re gonna be going
under open-source governance at some point. You can check them out
at their landing pages. For Dapr, it’s dapr.io. And for OAM, which will take you to Rudr, it’s openappmodel.io. – Thanks, Mark, and, of course, you want to keep watching Microsoft Mechanics. If you haven’t yet, hit subscribe. That’s all the time we have for this show. Thanks, for watching, and
we’ll see you next time. (people applauding)
(upbeat fast-paced music)
