Automating Application Modernization on Google Cloud Platform (Cloud Next ’19)

[MUSIC PLAYING] ASHOK BALASUBRAMANIAN: Good afternoon, everyone. If you really look at it, applications today are the backbone of digital. In today's session, we will walk through how to create an application-centric cloud, and how to look at an application-centric cloud from an enterprise context. So, about Atos. We're a global leader
in digital services. We’re about 12 billion
euros in revenue. And we have about 120,000 of our
associates working with clients across 73 countries
providing digital services across all layers of the stack. So that’s who we are. And if you look at
it, Atos and Google announced a very strategic partnership, what we call the Atos Google Enhanced Alliance, just about a year back. And we're essentially focusing
on areas of data, intelligence, cloud, applications,
all integrated in a common secured platform to
provide digital transformation to our customers. And we've taken a very unique approach– a labs approach. So we have digital and AI
labs all across the world, across Europe and North America,
to help our clients accelerate their digital transformation. And when you look
at the Google Cloud and specifically how enterprises
adopt the Google Cloud, there are six
tracks that we feel are extremely critical that help
enterprises scale when they do their digital transformation. So first one is
application agility. So we’ll be talking
a lot about it today. Second, there is a tremendous amount of ERP workloads, and how do you enable platforms like SAP on a cloud-native platform on GCP. IoT is one of those amazing
use cases where you really see the need for
ubiquitous access across billions of
devices providing real-time performance across
both the edge, swarm, and back in the cloud. So investing in our
ecosystem of IoT solutions, partnering with
Google to provide enterprise adoption of IoT. Collaboration, again, a great
use case, along with GCP. So the Atos Unify platform is
working closely with Google to provide a seamless global
collaboration to bring in a digital workplace. AI, all the data that comes
in from the applications, all the data that
comes in from IoT. So we’re really
collaborating to bring in industry solutions that
connect data and provide intelligence back to
the decision makers. And all of it are
secured through what we call as a secured hybrid cloud. I talk about it in a bit. So if you look at Atos on cloud,
we’re about 15,000 people. A little over 10%
of our associates are cloud experts supporting
our customers globally. And we’ve taken a very
industry-focused approach. While cloud is a
large utility, we have taken an industry-focused
approach to say, what does cloud mean from a health care
perspective, what does cloud mean from a capital markets,
from a trading perspective, what does it mean from
a payments perspective? We’ve invested,
along with Google, in creating industry
solutions, and we’re kind of home to one of the
world's largest hybrid clouds today. And we continue to
be a leader in cloud, cited by leading analysts
across Europe and the world. And we today are
uniquely positioned as an end-to-end provider
in terms of cloud solutions. And what do we
mean by end-to-end? Atos today, along
with Google, is one of the very
few providers that can provide the entire stack. Because enterprises, if you
really look at it today, have a blend of
legacy infrastructure. You would have some
sort of a private cloud, you would have some
sort of a public cloud that you would want to get into,
so you would need an integrated provider that can provide
that seamless experience for a customer who is embarking
on digital transformation, which essentially says, if you
are going through a journey, do you have the right
infrastructure that can scale, do you have the right
technology patterns that can help you develop faster,
and is that secure, and is it scalable? So if you really look at it, we
are powered right from a legacy infrastructure stack all
the way to private clouds, which we are launching
along with Google– the Google Kubernetes Engine
on Atos infrastructure, which is managed– and public clouds, which are
a managed service from us. On top of it, there is
a seamless DevSecOps layer in the middleware,
and on top of it, through our cloud studios, we
provide digital transformation solutions right from
modernizing things that are in legacy to actually
refactoring applications, building cloud
native applications, and from a cultural
perspective, how do you bring in an agile workplace? And all of it is managed through our automation platform, SyntBots, to provide always-on applications and infrastructure from a business perspective. So coming to the
topic for today, I think it’s the
time in many decades where technology is core to
the entire transformation that is around us. If you really look at consumers
and digital disruptors, we are re-imagining
the way that, right from transportation to
health to retail and whatnot, people are reimagining their
experiences through technology to say, hey, how do I do
something through mobile? How do I do something
through omnichannel? How do I do something through
a different platform or IoT? So we are seeing this
ubiquitous access across multiple
different ways in which consumers and digital disruptors
are reimagining things. So that’s a great scenario where
technology can play a key role. And on the other
side, if you look at it providers like
Atos and Google, we’re providing an
entire ecosystem that says, right from your cloud
infrastructure to applications, middleware, IoT, blockchain,
AI, user experience– all of it is available today. And if you really think about it, I think we're at the cusp of a world of changing technologies where, if you could take
some of these experiences and apply some of these
transformation technologies, and monetize them, that is
digital transformation, right? It looks so easy,
but unfortunately, from a large
enterprise perspective, the picture looks
slightly different. So I would look at it this way– where there are a lot of ideas,
and each division wants– some division wants to
go on an IoT journey, someone wants to
go omnichannel, someone wants to do a
real-time payments platform, someone wants to
do something else. So we have a lot of ideas
in the enterprise today, and I think enterprises today
have as rich or better ideas than digital disruptors today. But there is something
called as the legacy, which is the legacy way
of thinking, legacy way of doing app dev, legacy
way of how your technology is. So what we essentially
call this legacy burden, this kind of holding
back a lot of these ideas from becoming digital realities. And within certain groups,
we do see certain breakaway groups which are
simply says, hey, I don’t care about the legacy– I have a phenomenal idea. I’m an independent
digital group. So you would see some untethered
balloons that go away, and create a certain app, or
create a certain experience– but unless it’s all tied
back as a common approach, you would see those balloons
either popping or hitting the ceiling waiting
for a critical mass. So what we’re essentially
seeing is this whole idea and concept of a
legacy way of working, and legacy as a
technology, is kind of slowing down and impeding
the way that enterprises could really transform. But when we talk to a lot of
our customers in this segment, if you really look at
it, the way enterprises have differentiated over
the last 10, 20, 30, 40, and some are even
100 years old, is by understanding their partners,
understanding their consumers, and how they behave. And that is something
that has made them extremely successful
over the last many years. And if you really look at
it, that differentiation is today encoded within
the legacy applications and the legacy rules which
are there within what is called as a legacy burden. So if you were to
really look at it, what we think in
one side of the coin as a burden is also what
truly has differentiated the enterprise today
and what made them win in the marketplace today. So we really looked at
it to say, how do we bring best of both together? You have a beautiful cloud
and all those technologies on one side, and
you have a wealth of information
hidden within what is called as a legacy burden– how do you bring both so
that the enterprise can truly get into what we call
a digital liftoff? We didn't want multiple ideas trying to float off on their own, trying to do something well while hampered by this legacy. So we looked at it to say: first thing, if you really look at
the legacy as knowledge, then you can really convert what
is the deadweight that is there into an equivalent of a booster
rocket that can actually propel an enterprise into
where they need to go? And second, is key part
of it is automation– and we’ll talk about
it in a while– on how the whole
idea of automation can be a bridge to legacy. Because at the end of the day, most banks are 60%, 70%, 80% on legacy infrastructure
and applications today. And it’s not going to go away
tomorrow– we can wish it away, but it’s not going to happen. So we’re really
looking at, how do you leverage automation as
a bridge between that legacy world and the new world? Automation really
becomes a backbone in bringing a unified experience
between the two worlds. And the last is agile, on how
can the payload really keep track of how the
business is changing, and then go where the
business needs to go? So we’re seeing this
combination of, how could you convert what was
your business rules and knowledge hidden
within a legacy burden as a true knowledge
input to the future, automation to bridge gap
between the old and new worlds, and an agile way of working
on the new age platforms to increase the velocity
of how you would deliver. And if you were to really look
at it, what’s stopping them? What’s stopping us
from really looking at applications and knowledge
tools which are there today in legacy, and converting them
into agile ready applications? So there are multiple
layers, and we’ve had different theories. We started off with a
15-factor application, we had then 12-factor,
and multiple factors. We said, let’s call it
X Factor, because we’re competing with each other to
create n number of factors that are really required
to make an application scalable and available. At the bottom-most layer, if you look at it, if you remove the infrastructure dependency, you make your application really portable– you can put it on any cloud. That's the first step, where you could literally move it to Google Cloud.
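The infrastructure-independence layer described here is essentially the 12-factor config principle: settings are injected from the environment rather than baked into the build, so the same artifact runs on any cloud. A minimal Python sketch, where the variable names and defaults are illustrative assumptions:

```python
import os

# 12-factor style configuration: all deployment-specific settings come
# from the environment, never from hardcoded constants, so the same
# build is portable across on-prem, private cloud, and GCP.
# DB_HOST / DB_PORT / CACHE_URL are hypothetical names for this sketch.
DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379")

def database_dsn() -> str:
    """Build a connection string from the injected environment."""
    return f"postgresql://{DB_HOST}:{DB_PORT}/app"
```

Because nothing cloud-specific is hardcoded, pointing the application at a new platform becomes a matter of setting different environment variables at deploy time.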
Then, if you look at it: will it really scale? Will it seamlessly work with the business as the business grows– will your underlying apps and infrastructure grow? This is where we
really are looking at a really decoupled
architecture– how good are your contracts between
their services? Are they consumer-driven
contracts? Are they rigid contracts, and
how can they be decoupled? And on top of that, again: proprietary frameworks. Over different points
of time, enterprises have invested into different
proprietary frameworks– how could you really get off
the proprietary frameworks into a stateless architecture,
is the next level of maturity. On top of it, I think two areas– telemetry and resiliency– go hand-in-hand. Can you really measure how your application is performing? And if you can measure how your application is performing against the different business needs, can you then architect it from a resiliency perspective, scaling up and down?
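The telemetry-feeds-resiliency idea can be made concrete with a toy sketch: record per-request latency, and expose a health signal that a scaler or orchestrator could act on. The threshold and the simple averaging scheme here are illustrative assumptions:

```python
import time

class Telemetry:
    """Toy telemetry: collect latency samples and derive a health signal
    that a resiliency layer could use to scale up or down. The latency
    budget below is a hypothetical value, not a recommendation."""

    def __init__(self, max_latency_ms: float = 500.0):
        self.max_latency_ms = max_latency_ms
        self.samples = []  # (timestamp, latency_ms) pairs

    def record(self, latency_ms: float) -> None:
        self.samples.append((time.time(), latency_ms))

    def healthy(self) -> bool:
        """Healthy while the average observed latency stays in budget."""
        if not self.samples:
            return True
        avg = sum(ms for _, ms in self.samples) / len(self.samples)
        return avg <= self.max_latency_ms
```

The point of the sketch is the coupling: without the measurement, the resiliency decision (scale out, shed load, fail over) has nothing to act on.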
And if you really look at it, it's all interconnected from bottom to top: unless one layer is sorted, the layer above doesn't work well. And at the very top comes
the deployment architecture, which essentially says
can you really isolate a certain business change,
and can you deploy that? And we have had
even customers where it takes two days to deploy
a single application– and that might not be a unique situation, but such cases do exist. So while you would do
agile, you would do DevOps. You would do a lot of
things, but the DevOps train takes two days to build, deploy,
test, roll back, and go on. So the deployment
architecture becomes critical to isolate
business functions and ensure that you can provide
that real-time deployment from a business perspective. And security,
privacy, is critical– so while in a
legacy enterprise we would have done all that, how
do you ensure that you can actually port it into
the cloud as well, which becomes equally secure and
as regulated as it needs to be? So when do we do it? It’s not a point
in time situation, so when an enterprise
goes through, there are point in time
situations as well, which means you would look at maybe a
couple of data centers and say, hey, this data center’s state
now needs to be modernized. So we go through situations
where we do a business audit and analysis of a
set of applications, and then say now you
would make it GCP ready, you would upgrade it, you
would modernize it, and then go through it. So the one side of it,
the first two boxes are a point in time where you
really look at where you are, how you would need to do an
uplift, and modernize it. And the last one
is something as– how would you integrate
a DevSecOps lifecycle that ensures that anything
that you build new, is that X factor
compliant, so that bad code and bad architectures
don’t get into what we are building for the future? And so I think that’s a fairly
straightforward approach. I think if you look
at the approach, we said simple things. We said, hey, you
need to decouple, you need to improve
your architecture, you need to make it
scalable, and you could do it in a structured way,
but then the complexity comes in because it’s
not one application, it’s not one situation,
or one factor. You have millions
of lines of code, and thousands of workloads
across the enterprise, and tens of patterns– I think if you’re
very lucky, we will get into tens of
patterns versus thousands of patterns that exist today. So idea is to get it
to tens of patterns, and a few hundred developers
who might be trying to run at that speed to
essentially say, hey, how can I get all of this
running into a new cloud native platform? So we have taken efforts
to solve that issue, and to walk through
that I would invite Pawan Trivedi, my colleague, a
principal consultant at Atos. He will walk through on
how we’ve actually made that an automated solution. Pawan? PAWAN TRIVEDI: OK. So continuing from
where Ashok left off, as you look at the problem
of cloud migration, there are really three problems
we are trying to solve. Number one is scale. Number two is complexity,
and number three is flow. What do I mean by that? We spoke about
tens of developers trying to migrate maybe 5, 10,
15 components at the same time. We already know that
the apps are complex. Now given those two dimensions,
imagine all of that thing is also changing
at the same time. The way we have
solved this problem, and we are trying to
present it to you today, is based on creating an
automation workbench. So essentially, what
we would do is say, can I solve these three problems
and these three dimensions of migrating to a
cloud, be it cloud ready or cloud native, through an
automation platform, right? And the sample that
we have chosen today is a typical heavyweight to
a lightweight container demo application, where
typically, you would have changes around
the [INAUDIBLE] spec or some configuration changes. And we’ll show you
how an app that can be taken from an
on-prem environment, and can automatically
be ported to the cloud. So the way we have
solved this problem– and Ashok spoke about– we call it the cloud
lifecycle migration platform. A typical series of steps in
doing a brownfield application is starting with, where
is your code base sitting? What are the configuration
elements that you need to do? What is the target
environment, whether it is a containerized environment,
whether it’s an absolute VM environment, what
do you do with that? You take all of that,
and then once you begin preparing the application
environment to migrate, you design and say, look,
these are the things that we need to do in order
to migrate to the cloud. So it may be five
things, 15 things, you may need to look
at what the code is doing, what the configuration is
doing, and so on and so forth. So what we’ve come up with
is a very unique approach– and some of the components
are patented as well here– is we run a set of
rules on the code base that articulate the complexity
dimension that I spoke about. Meaning we would have
reference architectures, we would have
non-functional requirements, we would have
compliance toll gates. So all of those things are
captured within the code analyzer, and then once the code
analyzes sniffs and introspects the code, it would then tell
you what is the effort required in order to migrate it. And then, once that is
complete, you get a sense of– here is the effort
required to do it, and here maybe 25,000 lines
of code that need to change. What we have also built as a next step is an auto-remediation framework. So not only do we define
and design what is needed to change,
we also [INAUDIBLE] the capability to automatically
remediate that code so that you can deploy to the cloud. And once those two steps
are completed we then execute the deployment routine. So we are building adapters
with the GKE environment with all of Google
Cloud infrastructures, so that once your
application is changed, it can actually directly
be built and deployed to the cloud. And the whole thing– all of these steps, all
of the automated steps that I just spoke about– are orchestrated by a
Process Orchestrator. We call it the SyntBots [INAUDIBLE]. And I'll actually try and showcase some of the elements of the demo here. So essentially, we start with modeling the whole process in a workflow. From left to right, you can see the process that I spoke about in the previous slide modeled in a formal workflow: right from the time the code is loaded, to the time an application is checked out and the configuration rules are provided, up to the time the code is built and deployed to the cloud.
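That left-to-right workflow can be pictured as an ordered pipeline run by the orchestrator. The step names below paraphrase the slide and are assumptions, not the actual SyntBots workflow definition:

```python
# Hypothetical ordered stages of the migration workflow, paraphrased
# from the slide; a real orchestrator would add retries, approvals, etc.
WORKFLOW = [
    "load_code",
    "load_profile",
    "scan_rules",
    "remediate",
    "build",
    "deploy_to_gke",
    "smoke_test",
]

def run_workflow(handlers: dict, context: dict) -> dict:
    """Run each step in order, threading a shared context through, so
    every component migrates via the same standardized pipeline."""
    for step in WORKFLOW:
        context = handlers[step](context)
    return context
```

Because the sequence is data, many teams can run the identical pipeline over many components, which is exactly the standardization argument made next.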
This is again addressing the scale dimension, where you're looking at not just one component– you're looking at multiple components that can be deployed by multiple teams at the same time, right? So a mechanism like this is very useful to standardize the approach to cloud in a typical enterprise. So the first step–
now this is the first– this is the automation
workbench we spoke about. We begin– the first
step is loading the code and the configuration
required for you to figure out whether the application is
suitable to go to the cloud, right? So in this case, we go
select the application and say, hey, what are
the number of applications that you need to select? And in this case, we have
taken a sample application. So the application is
selected from the trunk, and the first step is to
actually load the profiles. So Profile is an interesting
concept that we have created– it's a central concept within the workbench. We have about 800 standard rules that encapsulate all the 12-factor principles. Number one, standard
cloud target principles that are required. In addition to that, any
basic elements that you would need to do from a software
engineering practice perspective. Now typically, it’s
like– you know, you should not
have hardcoded IPs, you shouldn’t have
file data, you shouldn’t have in memory cache,
and those things like that. Some of the samples
are already there. And then once the profile is loaded, you can always see them– a good example that we always see is FTP, for example. So this is an example of a rule in [AUDIO OUT] There are about 800 rules that the platform will introspect your code with.
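Rules like "no hardcoded IPs" or "no FTP usage" boil down to pattern checks over the code base. A deliberately simplified Python sketch, where the rule names and regexes are illustrative assumptions rather than the platform's actual rule set:

```python
import re

# Hypothetical migration-readiness rules: each maps a rule name to a
# pattern that flags cloud-unfriendly constructs in source text.
RULES = {
    "hardcoded-ip": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "ftp-usage": re.compile(r"\bftp://", re.IGNORECASE),
    "local-file-path": re.compile(r"[A-Za-z]:\\|/var/data/"),
}

def scan(source: str):
    """Return (rule_name, line_number) pairs for every violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                violations.append((name, lineno))
    return violations
```

A real analyzer would of course parse rather than grep, and attach severity and remediation hints to each rule, but the shape of the output (rule, location) is what drives the effort estimate and the pie chart shown later in the demo.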
And once we run it through, we can select and configure each of the rules and adjust the priority of each rule that you want to apply. So for example, if it's a client-facing compliance application, the intensity and the severity of the rules can be increased; if it's an internal application, you can obviously reduce it. Now the second step
is actually selecting what is the type of migration. What we have done is we have
encapsulated some patterns that are available to you. In this case, we would query
and select Dev WebLogic. We select what is the
type of binary that is going to be
used, so we go ahead and select the Windows
build for this one. And then once we select that,
the last element remaining is the configuration that is
required for me to connect to GKE, in this case. So we could then configure elements like the number of clusters, the number of nodes required, service endpoints, security– all of those things are present and can be configured as part of the deployment script step.
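Those configuration elements (clusters, node counts, service endpoints, security) might be captured in a structure like the following. The field names and values are illustrative assumptions, not the workbench's actual schema; the helper just shows the equivalent gcloud call for the cluster part:

```python
# Hypothetical GKE deployment configuration for the demo application.
gke_config = {
    "project": "demo-project",
    "cluster": {"name": "bank-app-cluster", "zone": "us-central1-a", "num_nodes": 3},
    "service_endpoints": [{"name": "bank-app", "port": 80}],
    "security": {"network_policy": True, "private_nodes": True},
}

def to_gcloud_command(cfg: dict) -> str:
    """Render the cluster section as the equivalent gcloud CLI call."""
    c = cfg["cluster"]
    return (f"gcloud container clusters create {c['name']} "
            f"--zone {c['zone']} --num-nodes {c['num_nodes']}")
```

Keeping this as declarative data is what lets the workbench apply the same deployment step across many components and environments.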
Now once the configuration is complete, the next step is figuring out how all of these elements can now [AUDIO OUT] be applied to the code base. So now this is the
second element– I wanted to introduce the
concept of a blueprint. So as you can see, I’ve started
executing a blueprint, which essentially is a mechanism
where you can visualize a team of 10 to 15 people,
or even 300 people, working across migrating
multiple components. They can each have
their own profile for an environment,
for production, for UAT and so on and so forth. And the blueprint also
gives you an ability to work around your
team constraints. So maybe you want to
schedule it in the morning, maybe you want to
schedule a UAT run in the afternoon–
a blueprint allows you to do all of those steps
in a very consistent manner. So as you can see the process
of scanning the code has begun. And as you can see, there
are about eight steps that are required by the tool. So this is just a
log– so what you see before was the progress
button, making sure that it’s working there,
and this is the log view in which you can
actually visualize and monitor all these steps that are
required for migration. Now the remediation
is the key step– so it is trying
to figure it out, and as [AUDIO OUT] as
the remediation begins, each element of the code
base is then shown to you in a nice little pie chart. What we see here is that the number of code violations is about 22 in this case. So 22 violations is what we have been presented with– now we have two options– whether we fix it
manually or we try to do– and you can actually
drill down to the file, and I’ve picked up one
file to showcase to say, the amount of changes
required– and it actually points to the very specific
line and the type of rules that are being violated that
are required to be fixed. Now in order to
automatically remediate, there are two options. One is you check in the
code, so the next run of the code check-in
typically would initiate the
automatic remediation of all these standard stubs
that are required to fix it. So usually, what we have seen is that the success rate is about 80%. So 80% of the standard changes required for you to go to the cloud are already modeled in the code base, and that can be given to you in a standardized pattern. And we also provide an option to say, no, you know what– if there's a complex change, if there is something custom you want to put in, you can then make a manual change, and then check in the code.
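To make the idea of a standard remediation stub concrete, here is a toy Python transformation that rewrites a hardcoded-IP assignment into an environment lookup, keeping the original address as the default so behavior is unchanged. It is an illustrative assumption, not the patented remediation engine:

```python
import re

# Matches lines of the form:  name = "1.2.3.4"
IP_ASSIGNMENT = re.compile(r'^(\s*)(\w+)\s*=\s*"(\d{1,3}(?:\.\d{1,3}){3})"')

def remediate(source: str) -> str:
    """Rewrite hardcoded-IP assignments into os.environ lookups,
    preserving the original value as the default."""
    fixed = []
    for line in source.splitlines():
        m = IP_ASSIGNMENT.match(line)
        if m:
            indent, name, ip = m.groups()
            line = f'{indent}{name} = os.environ.get("{name.upper()}", "{ip}")'
        fixed.append(line)
    return "\n".join(fixed)
```

This is the sense in which standard changes can be "already modeled": each rule violation has a mechanical rewrite, and only the custom 20% needs a human.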
So typically, at the next check-in, the remediation is triggered, and it automatically goes and fixes the code base within the application. So in this case, I have
initiated a checkout, and the checkout actually
starts the scan process. So what we are expecting here is that the 22 violations that were listed are fixed as they go through this next step. You should be able
to come to a point where there are no
violations found. So I think it should not– OK, fine. So at this point in time, it is scanning for code violations– the application, obviously, is about 20,000 lines of code, not a huge code base. But the code rule
violations have been fixed, and that is a
signal for us to say we’re good to go for
the cloud deployment. OK, so at this point in time, we
again check in the application. So understand till
now, what we have done is we prepared the
application, we have configured the environment. We have run through the entire
set of reference architecture principles, compliance
principles, to the code, and we have got those
violations fixed. So that now we set that
this particular module or an application is ready
to be deployed to the cloud. So we check the application back in, and before the
code gets deployed, there are two components–
the application server itself, and the database
server, get migrated. And in this case,
it has shown you about five or six
configuration rule fixes that are done on WebLogic. OK, so I think now it
will initiate about a build process that
would deploy and run it directly on Kubernetes. Right, so the build
is initiated now. Obviously, the build engine in this case we have chosen to be Jenkins, but the platform has come up with adapters for most of the standard CI platforms, and you can trigger the build with any other pipeline mechanism that you need to.
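Triggering a Jenkins build from an external orchestrator typically goes through Jenkins' remote-access API: an HTTP POST to /job/&lt;name&gt;/build. A minimal sketch; the server URL and job name are hypothetical, and a real setup would also need authentication (an API token, and usually a CSRF crumb):

```python
import urllib.request

def build_trigger_request(base_url: str, job: str) -> urllib.request.Request:
    """Construct (but do not send) the POST request that starts a build
    via Jenkins' remote-access API. base_url and job are placeholders."""
    url = f"{base_url.rstrip('/')}/job/{job}/build"
    return urllib.request.Request(url, data=b"", method="POST")

# Sending it would be: urllib.request.urlopen(build_trigger_request(...))
```

Any CI engine with an HTTP trigger can be slotted in the same way, which is what the "adapters for most standard CI platforms" point amounts to.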
And once the build is done, we should be able to switch back and see the application in the Google console. Yeah, there it is. So now, from on-prem,
the third concept that I wanted to introduce
was adapters. The platform has built-in
adapters for every cloud platform, including
the containerized and the VM-based
deployment patterns, and now the apps should be
deployed onto the Kubernetes environment, including
all three layers. So in some cases, you want to just deploy a service endpoint– that can be configured. In the case of this demo, we have gone through the cluster, the workload, and the service endpoint deployment.
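The three layers map naturally onto Kubernetes objects: the cluster itself, a Deployment for the workload, and a Service for the endpoint. Sketched below as manifests in Python dict form; the app name, image, and ports are illustrative assumptions, not the demo's actual values:

```python
# Workload layer: a Deployment running the (hypothetical) bank app image.
workload = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "bank-app"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "bank-app"}},
        "template": {
            "metadata": {"labels": {"app": "bank-app"}},
            "spec": {"containers": [
                {"name": "bank-app", "image": "gcr.io/demo/bank-app:1.0",
                 "ports": [{"containerPort": 8080}]}
            ]},
        },
    },
}

# Service endpoint layer: exposes the workload outside the cluster.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "bank-app"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "bank-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

Once both objects are applied, the external endpoint the demo checks out at the end is simply the Service's load-balancer address.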
And hopefully, this should show up in a bit– yes. So the bank application is
there at the workload level, and then the last step remaining
is the service end points. When the service
end points are done, the application should be
available for checkout. OK, so I think the apps being
deployed in the service end point is coming up. All right, there it is. All right. So switching back to
the scan– if you see, the entire process
has been executed, and it’s a clean run,
a happy workflow. Now, obviously there is no
happy workflow in real life– maybe 10% to 20% of runs. But each one of the steps you can actually troubleshoot. There is a user-based profile– an administrator– that can stop at each one of the steps and say, where is the problem? Can it be introspected? There are logs
available that you can go in and stop and start
every step as you want. So each one of these steps is modeled uniquely, so that you can control it for a team, or for a large portfolio of applications. So once a quick smoke
test is run– yes, we get the standard UI
screen that pops up– and that proves at least
that the service endpoint is working, and the
application is deployed. So I think that essentially
completes the deployment process. So in summary, I’ll just
summarize the entire process. Starting from applying the
profile of all the rules that you want to
encapsulate– architecture, reference architecture
principles, compliance principles, and all
other enterprise principles– right into a central console. Run a scan to understand
what is the effort, what is the amount of change that
you need to make to a system. And then once that
change is sized up, you can then deploy
and remediate the code using our
standard stubs, which typically we have
seen about 70% to 80% of the changes required for
[INAUDIBLE] applications to go to the cloud
are already built in. And then the third part
of it is the integration with the underlying
cloud providers. We have an automated
mechanism where, based on a DevOps
principle, it will run the entire
sequence of CI/CD test to deploy to the
GKE environment. So I think that hopefully
gives you insight into our approach in
terms of managing scale. Managing scale is with respect
to the ability of the platform to manage the
multiple blueprints. Complexity, again, being
managed by the amount of rules that you can
articulate, and put it as part of a central
system that can be managed by this centralized
architecture team, for example. And then the flow,
which means I should be able to make
those changes when I want, because the
workflow that you saw is just one happy path, right? But the same thing is done day in and day out while the actual application is going from UAT to staging to production. So what we see in
real life is there are multiple runs of the same
automation workbench that help developers to make the
changes in a consistent manner. So imagine doing all
of this manually– most of the customers that
we see do all of these steps manually, and then
at every step, they end up introducing
some manual errors. So a workbench, an
automated solution, will help you make the same
changes in a repeatable, in a consistent, fashion. [MUSIC PLAYING]
