Java-Based Microservices, Containers, Kubernetes - How To

Ray Tsang & Arjen Wassink

Recorded at GOTO 2016


My name is Ray. I'm not here to talk about whether microservices are good or bad; I'm not here to convince you one way or the other, or to tell you what you should be doing. But if you do want to explore a microservices architecture, there are a few things I'd like to share. Again, this is going to be mostly a how-to. The first thing I want to call out is that when you decompose your system into so many services — if you break your system or application apart into, say, a dozen services — what that really means is that your application now consists of a dozen deployments rather than just one. And of course you want more redundancy, so say you have about four instances of each of those services; well, now you're looking at a multiplication problem.
You have three different services, you deploy a few instances of each, and that's already a dozen or so, and it's probably not enough: you might scale some of these services out, one way or another, and eventually you're going to see maybe thirty, forty, or even a hundred instances that you have to maintain. And in the traditional way of managing servers, what's going to happen? You have to request the servers to be procured long before your project even starts. Back when I was working as a consultant, I had to order the number of servers even before the project started, so that by the time procurement finished they would be available.
Then what you typically do is write documentation, at a very, very low level, on how to stand up the servers, put the right components onto them, and then finally deploy your application: how you configure the hostname, how you set up the firewall, how you lay down the JVM, how you configure everything, and then you deploy your application onto it. And very quickly you find out, like me, that it doesn't work the very first time. How many people have had that happen? Yeah. The other problem you're going to find is that without automating this, you're going to run into trouble: a complicated procedure gets dumped onto somebody who actually has to create the environment by hand, and if any person has to follow fifty manual steps, they are going to make a mistake. That's right: then you end up with environments which may even run differently from each other.
So one of the first things that we need to solve is how you actually deploy the same application multiple times, in a repeatable fashion, and that means capturing it in a form that is self-contained in terms of what you want to run, so that it can be deployed anywhere else. Very quickly the other problem you're going to see is that if you have so many services, you don't want to deploy each of them individually, one per virtual machine. What you want is for them to be packed as efficiently as possible onto a single machine, or the fewest machines possible, so that you waste fewer resources. But if we do that, we run into issues where potentially you have multiple services on the same machine, and, as I have personally experienced, you get port conflicts; you want to avoid those as much as possible, and there are other
challenges as well: how do you make sure that they're up and running? There are so many to monitor; how do you do that, and automate it a little bit? Of course you need to be able to detect the health of these systems and individual services, so that when they're having issues, maybe you need to restart them, and in the worst case page somebody as well. And one big issue that I'd point out, and it's not madness if you have, say, twelve or twenty services: they all have to be managed, and you have to be able to reproduce these environments in a consistent fashion. Remember, deploying one application is hard enough when you don't have the right tooling; now you're going to be dealing with many of them, and you have to think about this seriously. So today is about the tools to solve this; there are many tools to do it, including from Google, and it is partially what makes a microservices architecture successful. If you actually go into this decomposition without quite knowing the mechanics, you may find yourself in a situation where you just simply cannot manage any of these at scale. So at Google, just to set the context:
everything we do, all of the services that we offer, including Search, YouTube, and Gmail, runs in containers within Google. We're not using Docker containers, but we are using the same fundamental container technology; in fact, Google contributed the technology that makes containers possible, which is called cgroups, into the Linux kernel many, many years ago. There's a world of difference in scale here, but we had to learn how to manage all of these services. This is what we do at Google as engineers: we cannot possibly deploy containers the traditional way. If I have a service, rather than deploying it onto individual machines ourselves, or onto machines specifically designated for that service, we deploy to a target, which is a cell: essentially a cluster of machines, and a single cluster can go up to about 10,000 machines. We don't deal with each individual machine, because there are way too many. What we do is ask the system to do it for us: we don't say which machines to use; we let the scheduler figure that out. We specify the binary:
these are potentially static binaries that can just be copied and deployed onto any of the servers. We can specify arguments, but most importantly, we are able to specify the resource requirements, and this is where containerization is going to be really helpful: because we are running multiple services, multiple applications, you don't want them to step on each other's toes when they are fighting and competing for resources, right? You don't want one runaway service to take a hundred percent of the CPU so that all of the other applications just don't work anymore. With resource isolation you are able to contain them so that they don't exceed their boundaries. And then we can say how many instances of this service we want to run: we can say I need five replicas, meaning five instances, or at Google, if something is super popular, maybe we need a little bit more. We want 10,000 instances? We just say 10,000. And that's all: you just say how many you want and what you want to deploy, and we deploy it. And this is how it
really works: behind the scenes, we will copy the binary into a central repository of some sort, like a shared file system where other machines can get to it, okay? Then we're going to send that configuration file to our internal tool. The internal tool is called Borg, and there is a master node which is called the Borg master. The Borg master understands how to receive the YAML file, the configuration file, and knows how to deploy it. Once the Borg master receives it, it's going to consult the scheduler, which then will be asking all the machines the same question: do you have enough resources to run this application? It's going to check all of the available server nodes to see who's able to run this application; if a node cannot run it, it will just be skipped, and if it can, then what it's going to be told to do is go ahead, download that image, and then start it, okay? And we can do this
very very quickly and very quickly
you're going to see my hello world
application running in the Google data center, simply by deploying this one descriptor, and we can do this so fast
that we can deploy about 10,000
instances in about 2 minutes and 30
seconds and that's partially because we
also have very very fast internal
network that we can actually copy these
large images around very efficiently
And you do get that kind of benefit if you're using the Google Cloud Platform as well. So that's how we do things within Google, but that's not enough, right? Because if you want to try this out yourself, well, this is where Kubernetes comes in. Kubernetes is the open source project that is designed to orchestrate containers: if you're running your applications inside of containers, and you need to deploy at scale, then you can use Kubernetes to orchestrate them across multiple machines, in a similar fashion to what we do within Google. Kubernetes is actually based on the experience and the scale that Google has had over the past many, many years of deploying so many containers. We know how to do this at scale, we know how to deal with the common issues, so we actually open-sourced the Kubernetes project. It's all open source, and it's written in Go. The important takeaway on this slide is that it will run in many different environments, in multiple clouds and also on-prem, and that's super important as we go into the demo in a few minutes. The community is very, very vibrant: we are on version 1.2, I think 1.3 is coming out very soon, and we have many, many contributors, many companies, and many GitHub stars. Well,
basically the gist is if you want to try
it out please get involved with the
community as well: there's a Slack channel, there's a forum, and the repository is on GitHub, so please go check it out if you'd like to contribute or learn more about Kubernetes in detail. But today I'm going to just give you a taste of it. So this is how it works, and it's a very easy slide for me to make, if you haven't noticed: all I had to do was copy and paste the previous one and do a string replace. So this is how it works: rather than a static binary, we are building a container image, which is inherently a static binary anyway.
It's a Docker image, and I can actually push this image to a central repository; in the Docker world this is known as a Docker registry, right? You can have a registry online, you can have a public one or a private one; you can store it wherever you want, as long as the machines can get to it. Then you're going to be able to write a similar configuration file that says what you want to deploy. You push it to the master, the master asks the scheduler, and then it checks with all the machines in the cluster to see if they have enough resources to run the application, and then it's going to pull down the container image and start it. Easy, right? It's a very simple concept, but
it's very powerful. Because, first of all, it allows you to describe what you want to deploy in a simple file, just like that. You can specify the resource limits as well, but you can also specify how many instances of something you want, very similar to what we do internally. But here's the catch, or the most important thing that you want to remember with Kubernetes: you're really viewing your entire cluster as a single machine, in a way. All the resources on the individual machines, all those CPUs and memories, are available to you, and you manage them through a single pane of glass, through Kubernetes. There's just one Kubernetes cluster to you: you deploy to this one single managed cluster, and it will take care of how to put things on the actual machines for you behind the scenes. So enough of the talking; I'm just going to do the demo very
quickly. So first, a quick show of hands: how many people here are Java developers? Oh, I'm in the right place; okay, fantastic.
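To make the earlier point concrete before the demo: the kind of file you hand to Kubernetes is a short YAML descriptor along these lines. This is a hypothetical sketch; the names, image, and resource limits are illustrative, and the API group shown is the one Deployments used around Kubernetes 1.2.

```yaml
# Hypothetical descriptor: what to run, its resource limits, and how many replicas
apiVersion: extensions/v1beta1   # Deployment API group as of Kubernetes 1.2
kind: Deployment
metadata:
  name: helloworld-service
spec:
  replicas: 4                    # how many instances we want
  template:
    metadata:
      labels:
        app: helloworld-service
    spec:
      containers:
      - name: helloworld-service
        image: gcr.io/my-project/helloworld-service:1.0
        resources:
          limits:
            cpu: 500m            # keep one runaway service from starving the rest
            memory: 256Mi
```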
So you probably already know how to containerize your Java applications; you can use a Dockerfile to do it. But I just want to point out one thing, which is that there are some really, really nice plugins you can use, if you haven't used them already. Let me see here: Spotify actually produced a couple of plugins for containerizing Java applications with Maven. They have two of them; I'm not exactly sure why, but they are both very good, and they do things a little bit differently. The beauty here is that you can actually capture your Docker image creation process inside of your pom.xml, and the beauty of that is you can tie it into the Maven execution phases: whenever you're packaging your JAR files, you can also build the container image at the same time. This is really useful. And then you can also tag it with the version number of your Java application, or, if you want, you can tag it with your Git hash. So let me go back to this service right here.
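As a rough illustration of tying image creation into the Maven lifecycle, here is what a configuration of Spotify's docker-maven-plugin might look like; the version, image name, registry, and Dockerfile directory are all illustrative, not taken from the demo.

```xml
<!-- Illustrative: binds Spotify's docker-maven-plugin to the package phase -->
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.13</version>
  <executions>
    <execution>
      <phase>package</phase>   <!-- build the image whenever the JAR is packaged -->
      <goals><goal>build</goal></goals>
    </execution>
  </executions>
  <configuration>
    <!-- Tag the image with the project version (a git hash works here too) -->
    <imageName>registry.example.com/helloworld-service:${project.version}</imageName>
    <dockerDirectory>${project.basedir}/src/main/docker</dockerDirectory>
  </configuration>
</plugin>
```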
Now, another question: how many people here have heard about Kubernetes before? All right — well before I even talked about it, obviously. But how many people here have actually used it? Ah, well, I'm glad you're here. And how many people have seen it in action? Quite a few, okay; so many of you haven't seen this, so this is going to be new. Okay, so first of all, what I have done already is that I created this image and pushed it into a registry. So now I have this image in the registry somewhere, and I just want to run it on many different machines, and the way I can do that is with kubectl run. Hopefully you can see this at the top, yes? So, to run a container image on a cluster of machines:
this is very easy with Kubernetes. Here I have a cluster of machines in the Google Cloud Platform: I have four different nodes that can actually run my workload, my application. So all I have to do is say kubectl run; you can name this however you want, and I just call it helloworld-service. Then I can specify the image: this is the image that I want to deploy, and it can be located anywhere that the machines can get to. I'm using the private Docker registry that comes with the Google Cloud Platform itself, so I just push my image there and it can be downloaded from my project in Google Cloud. And then here's the important part: following that, I can specify a bunch of key-value pairs, and this is very important, because these key-value pairs are labels. That's a very important concept in Kubernetes, because everything in Kubernetes can be labeled, and the labels, the key and value pairs, you can name however you want. For example, if you like, you can label this deployment with a version; I can say that this is the environment of staging; maybe I can also say that this is for the conference, GOTO Amsterdam, right? I can name these however I want. The important takeaway here is that with labels, in the future, you can query Kubernetes and say: please tell me all of the applications that have the label app equal to helloworld-service and version equal to one. You can query this later via the API, and it's also very important when you want to route traffic to these application instances.
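The steps just described can be sketched as commands; the image path and label values here are illustrative, not the exact ones from the demo.

```shell
# Run a container image on the cluster, attaching labels (names are illustrative)
kubectl run helloworld-service \
  --image=gcr.io/my-project/helloworld-service:1.0 \
  --labels="app=helloworld-service,version=1,env=staging"

# Later, query by label to find the matching pods
kubectl get pods -l app=helloworld-service,version=1
```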
Okay, so that's all I have to do. I'm just going to run this command line to start the service in my Kubernetes cluster, and as you can see, it just got deployed. I haven't done any manual scripting here; I haven't said anything about which machine to deploy to. I've got four different nodes here, and I can see that this got deployed onto one of the nodes. The two boxes you see here are very, very important. The box in gray is what we call a pod, p-o-d. A pod is the atomic unit that Kubernetes actually manages. Now you may be asking: hold on a second, I thought we were all talking about containers here; I thought containers were the atomic unit that we should be managing. But in Kubernetes, the unit is called a pod. So what is a pod? A pod can be composed of a single container or multiple containers, okay? And they're guaranteed to have the same IP address: a pod has a unique IP address. They are guaranteed to live and die together, so when the pod dies, all the containers within the pod will go away. And they're guaranteed to be scheduled onto the same physical machine. Now, what would you actually run within the same pod as different containers? If you have an application with a front end and a back end,
so tightly coupled together, would you want to run them inside the same pod? The answer is actually no; you probably don't want to do that. Why? Because if you do, you cannot scale the front end and the back end independently from each other, okay? So you want them to run in separate pods in this case. So what would be a good use case for a pod? Well, maybe you have a Java application that's exposing metrics via JMX, and your system has another collector that needs to collect the metrics in a different format. Rather than rewriting and changing your application to talk in that format, what you can do is run a sidecar container in the same pod that's able to understand JMX metrics and push them to the metrics server in the format the metrics server understands. So you can actually compose your application out of multiple tightly coupled components if you want to. Now, the box in
blue: that is what we call a deployment, and it does a few things for you. Very importantly, when I told it to deploy the application, what it was actually doing was deploying copies of the pods.
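The pod-with-sidecar idea from a moment ago could be sketched as a manifest like this; both image names and the sidecar itself are hypothetical, just to show two tightly coupled containers sharing one pod (and therefore one IP address and one machine).

```yaml
# Hypothetical pod: app container plus a JMX-exporting sidecar sharing one IP
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-with-metrics
  labels:
    app: helloworld-service
spec:
  containers:
  - name: app
    image: gcr.io/my-project/helloworld-service:1.0
    ports:
    - containerPort: 8080
  - name: jmx-sidecar              # reads JMX metrics, pushes them to the metrics server
    image: gcr.io/my-project/jmx-pusher:1.0
```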
What you can also do is scale. I can say kubectl scale, the deployment, helloworld-service, and how many do we want? I can just say replicas equals four, and it's going to tell the deployment that I need four instances. Now the deployment is going to check against something else, called a ReplicaSet, and say: hey, do I have four instances now? If I don't, then I need to spin up more. If one of these instances goes away, it will actually notice and say: oh no, I have three but I need four; let me go ahead and start another one for you. So that's very easy to do, and we're going to see deployments in more detail in a second. Now, notice that
every pod here, every box in gray, has a unique IP address. These IP addresses are unique to the pod, and the pods can talk to each other even if they're not on the same machine. But they come and go; they are ephemeral. So the question is: how do you actually get to these services, how do you actually consume them? If you need to know all the IP addresses, that's probably not the best way to do it. Typically, what you do today in your infrastructure is create a load balancer in front of them, right? And then you configure the load balancer to know which are the back-end endpoints. But these back-end endpoints can come and go, and the IP addresses can change, so you don't want to configure those manually. But in Kubernetes we have a
first-class concept, which is called a service, okay? A service is really almost like a load balancer: once you provision it, it gives you a stable IP address that will then load-balance your requests. So to expose all of these pods as a service, all I have to do is kubectl expose. Sorry — expose, here we go:
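The command being run here is roughly the following, with the ports as described in the demo:

```shell
# Put a stable, load-balancing service in front of the deployment's pods
kubectl expose deployment helloworld-service --port=80 --target-port=8080
```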
kubectl expose the deployment, which is the box in blue, and I can say the port number is 80 — that's the port I want to expose to the external world — and the target port is 8080, because that's where my application is running. So I can expose this application by putting a load balancer in front, and that is the box in green. The way it routes the traffic is by using labels: when a request comes into this green box, into the service, it's going to see which of the gray boxes, which of the pods, match its label selector, which says it has to route to the application that's called helloworld-service. And now I can actually get to this service.
this service now another very big
question that you're going to be run
into is how do you discover your
services if you have multiple of these
things how do you actually know the IP
address that you need to connect to well
many people actually run a separate
registry of some sort well companies
actually have this right out of the box
as well so there are multiple ways to do
this the first way is potentially using
the API so I can access everything that
I do from the command line they all make
API requests behind these things so even
to know which services are actually
available I can use the API I can
actually get back either yellow or JSON
payload I can see all of the services
that's running here so if I go and look
for hello world service I can have all
the details about it and I can also get
its IP address right but you don't want
to do this every single time crap
enemies actually expose the service as a
DNS host entry for you run out of the
box yeah that's really nice so for
example, let me just do one more thing here. I want to get inside of the cluster, and the way I'm going to do it is by running a bash shell directly inside of the Kubernetes cluster: I'm going to do kubectl exec -ti, the name of the container, and bash. And if the internet cooperates, this will connect — there we go. It's really slow; everyone, stop watching YouTube videos, please. So I'm inside the Kubernetes cluster right now, and what I'm going to do is curl this URL. Of course I can do it with the IP address and port 8080: hello, Ray. That worked, a little slowly, but like I said, you don't want to do this all the time with the IP address, and like I said, Kubernetes actually exposes the DNS name, so this becomes very, very easy. I can just say helloworld-service, and there we go: it just resolves it right behind the scenes for you, so you don't have to run a separate registry if you don't want to. When instances come and go, Kubernetes will update the endpoints behind the scenes, and it will always route to the right, available instances. Now, if you really want to know which endpoints are behind a particular service — this is for helloworld-service — I can do kubectl get endpoints with the name of the service, and I can actually see a list of endpoints that are available and ready to serve. So if you don't want to do server-side load balancing, if you want to do client-side load balancing, you can still get to these IP addresses.
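The discovery steps just shown look roughly like this as commands; the pod identifier and URL path are illustrative stand-ins for what was typed in the demo.

```shell
# Open a shell inside a pod running in the cluster (pod name is illustrative)
kubectl exec -ti helloworld-service-<pod-id> bash

# Inside the cluster, the service name resolves via DNS; no separate registry needed
curl http://helloworld-service:8080/hello/ray

# See which pod IPs currently back the service (useful for client-side load balancing)
kubectl get endpoints helloworld-service
```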
Yeah, it's pretty easy, right? It's very easy to do. But what I have just done is really just deploy a stateless service, and of course your application probably has state. So to show you a little bit more of how this works with state, I'd like to invite my co-speaker, Arjen Wassink, to the stage.
Yeah, where's the mic? Am I on? Yeah, okay. Yes, this one is working. Okay, now I see it.
Yeah, so remember, this is the beauty of Kubernetes: it runs in many, many different places. It's not something that's limited to the Google Cloud Platform, of course, although that's probably the best place to run it. But if you want to, you can also run it on something far less powerful: you can actually simulate an entire data center, with similar components, at a small scale, and play with it. So let me do this deployment with MySQL, which actually has state. Typically, if you start it in a Docker container without a volume mount, that's not good for running MySQL. Why? Because when you start MySQL it's going to write some data; when you shut it down, it's gone; and when you start it again, it's going to start off fresh without any data. Yeah, and that's a problem if you try to run a stateful application without keeping state.
Yeah, great. So with the Raspberry Pis, or if you are in your own data center, you can share different drives, different volumes, in different ways: via NFS, iSCSI, an RBD cluster, and so on. There are so many different options, and many, many of them are actually supported in Kubernetes. The first thing I need to do is register the volume inside of Kubernetes, okay? Even if you have the physical device available somewhere, you have to register it, because Kubernetes actually needs to know how much storage is being offered by this particular volume, which is in the capacity column right here. And the second thing is how you actually connect to the volume: different types of shared file systems have different ways of connecting to them. In this case we're using NFS, so I specify the server and also the path; if you're using something else, say GlusterFS, you do it differently, okay? And we support many different ways.
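Registering an NFS-backed volume might look like the following manifest; the volume name, server address, and export path are illustrative, while the one-gigabyte capacity matches what's described in the demo.

```yaml
# Hypothetical PersistentVolume: the capacity offered plus how to reach the NFS share
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
spec:
  capacity:
    storage: 1Gi        # how much storage this volume offers
  accessModes:
  - ReadWriteOnce       # read-write from a single node
  nfs:
    server: 10.0.0.10   # illustrative NFS server address
    path: /exports/mysql
```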
So the first thing I need to do is register it. I have created this file, and I'm going to say kubectl create -f with the persistent volume YAML, and that is going to — oh, it already exists. Let me check: kubectl get pv, okay. So let me delete that; it's really easy to delete a volume as well, so I can say delete pv. Let me do that: I'm going to delete the first one and also the second one, okay? Now they're deleted, but the data is still there; it's just that they're not registered with Kubernetes anymore. So I'm going to register it again so that it will work properly for me, okay? So I have
have created the volume here and it has
the capacity of one gigabyte and it is
available now the second thing I have to
do is to to lay down a claim because all
of these volumes are just resources to
communities and these resources could
potentially be shirt or reused right
when the volume is actually being
released you don't want it to just sit
there without being used if it's not
important anymore so you want to be able
to reuse this the the disk with somebody
else if they need the capacity and so
now what I interview is to I need to say
lay down a claim to say I need to use
the volume please give me one that best
describe my need and to do that I need
to create what we call a persistent
value claim a PVC okay persistent while
you claim and notice he
all I need to do is to describe how much
capacity do I need and the type of
access money to do whether it's
readwrite from just a single container a
single pot or is it the multiple rewrite
for multiple parts so but notice here
I'm not specifying which volume I want
to use as you say how much I need
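Such a claim might look like this; the claim name is illustrative, and note that no volume is named — only the size and access mode are requested.

```yaml
# Hypothetical claim: describe what you need, not which volume you want
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce       # read-write from a single pod at a time
  resources:
    requests:
      storage: 1Gi      # Kubernetes finds a registered volume that best fits this
```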
Why? Because Kubernetes will then find the volume that best fits my need and assign it to me. So to lay down the claim and say, hey, I need to use a volume, I can do kubectl create -f mysql-pvc.yaml, and that's going to make the claim. Once I do that, if I do a get pvc, what I can see is that it's actually bound to one of the available volumes that best fits my need, and now, through the mysql-pvc claim, I can access
this volume. Now what I need to do is update my deployment, okay, to make sure that I am mounting the right volume, and that's really easy to do. All I need to do is specify it: here, volumes references the PVC claim that I created — it's like a ticket for me to use the physical volume — and then I need to specify the volume mount, which is here: I need to mount this disk into /var/lib/mysql. And the beauty of it is that Kubernetes will actually take care of mounting this volume for you behind the scenes.
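The relevant fragment of the pod spec might look like this; the container name, image tag, and volume name are illustrative, with /var/lib/mysql being MySQL's usual data directory.

```yaml
# Hypothetical pod spec fragment: mount the claimed volume into MySQL's data path
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql   # MySQL's data directory
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc        # the "ticket" to the registered physical volume
```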
So if I run this MySQL server — creating the MySQL service and also the deployment — what this is going to do is, there we go, let me refresh, run MySQL on one of these nodes. And before it starts the application, before it starts MySQL, it's going to mount that NFS volume for me, then start the container, and make sure that the volume is mounted into the path I specified. So now this MySQL server has somewhere to write its data. Okay — oh yes, we have sound, yeah. You can access it directly as well? Yeah, I can, but I can also access it through the application, by deploying the front end and the back end. Okay, do you want to see the back
end running? Yeah. So just remember, all of these things are running inside the Raspberry Pi cluster. I'm going to do something very quick because of the time: I'm going to just do kubectl create -f with the whole directory, and that's going to deploy everything for me in one shot. Oh, sorry, I think I used the wrong one. So now MySQL is actually deployed here; there we go. Now I have MySQL, the front end, and the back end here, okay? Now, when I created the front end, how do I get to it? Yeah, we have two services running here on the Raspberry Pi, but normally a service uses an internal cluster IP address, which can be used inside but not outside of the cluster. So you want something to expose the service to the outside. In Google Cloud you have a load balancer for that, as Ray already showed you; one easy thing to do on this small micro data center is to use a NodePort. What Kubernetes does then, when creating the service, is dynamically assign a port number on each node for that service, so we have a certain port number at which we can reach the service on the cluster. In this particular case it dynamically generated a port number for me, and that way you avoid all sorts of port conflicts that you'd get if you were to run everything directly on the host, right?
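Exposing a service via a NodePort, and finding the assigned port, might look like this; the deployment name and ports are illustrative.

```shell
# Expose the frontend with a dynamically assigned port on every node (names illustrative)
kubectl expose deployment cddb-frontend --port=8080 --type=NodePort

# The NODE-PORT column shows the generated port, reachable on any node's IP
kubectl get service cddb-frontend
```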
So it's on this port, and if I go there: all right, that works pretty well, so the application is up and running. And as you can see, there is already data in it, the CDDB database. So, he likes Air; I don't know Air, but yeah, there we go: you can click into an album and then you can see all the tracks. It runs on the Raspberry Pi, so it's a little slow, but you can modify these things as well. So we have the full application running. Fantastic; that's too easy. Yeah, too easy — what else have you got? Well, a product owner just came by, and we have a
new version we have to ship. Marketing has decided that the color scheme was not that nice, so they have delivered some new styling: we have a version 10 available, and we want to roll that out into production. Okay, so typically what you might do is deploy the new version and then shut down the old one, but what I want to do is a rolling update. What I want Kubernetes to manage for me is rolling out the new version one instance at a time, or multiple instances at a time, shutting down an old instance once a new instance is up and running. And you can do this very easily with Kubernetes too. Remember the deployment, the box in blue? That can also manage the rolling update for me. So all I need to do is kubectl edit the deployment, the cddb-frontend, and down here I have the image that I can change. If I want to update this to version number 9, I just update it and save the file.
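Editing the image in place is what triggers the rolling update; the same change can also be scripted. The deployment and container names and the version tag below are illustrative, and kubectl set image may require a newer kubectl than the one in this demo.

```shell
# Open the live deployment spec in $EDITOR and change the image tag
kubectl edit deployment cddb-frontend

# Equivalent non-interactive form on newer kubectl versions (names illustrative)
kubectl set image deployment/cddb-frontend cddb-frontend=gcr.io/my-project/cddb-frontend:9
```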
Now look, this is really cool: I'm looking at the state inside of Kubernetes via my local text editor, and I can modify that state just by saving the file. As soon as I save it, it's going to perform a rolling update: you can see the new instance coming up, and once it's ready, it's going to shut down the other instances. And while this is running, because we have readiness checks and liveness checks set up in Kubernetes, you can actually just keep refreshing the page, and it kept rendering throughout the rolling update. Yeah — did I use the wrong version? I think I did; it's now a different color here. Yeah, it doesn't really work with the rest of the color scheme, so I think we should probably — yeah, we have a problem in production now. Customers don't like the new color scheme, so we want to roll back, yeah, as fast as possible.
The rollback is really easy. I can actually see a history of my deployments as well: I can do kubectl rollout history deployment cddb-frontend, and I can see a list of the deployments I have made. Now, if I had deployed this with the --record flag, you would actually see what caused each change, what caused each new deployment. You can roll back to any of the revisions that you see still in the history, which is awesome. Of course it doesn't keep all of the history; there's a limited amount of it. But if I need to roll back, all I need to do is go back one deployment, so let me do that: I say rollout undo on my deployment, cddb-frontend, and, check this out, it's going to do another rolling update. It's super simple, and again, because we have the health checks and readiness checks, as we're doing this rolling update all the connections will be routed to the instances that are ready to serve.
yeah and now we just roll back that's
that's also too easy
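The rollout commands from this part of the demo, sketched with a hypothetical deployment name:

```shell
# Show the revision history of a deployment (deploying with --record
# captures the causing command in the CHANGE-CAUSE column).
kubectl rollout history deployment/frontend

# Undo the last rollout; this is performed as another rolling update.
kubectl rollout undo deployment/frontend

# Or roll back to a specific revision from the history:
kubectl rollout undo deployment/frontend --to-revision=2
```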
OK, what else have you got for me? I'm really, really curious. Well, we now have our micro data center here, and... how many people have been in a data center themselves? Quite a few, yeah. And did you ever want to pull a plug somewhere in the data center and see what happens? Oh yeah, yeah, you'd love to. Well, you can't actually do that here at Google, but you can do it today, on this Raspberry Pi cluster. Is that what you want to do? Yeah. So we already have a volunteer; come up on stage! All right, give this gentleman a hand, a brave person trying to break my cluster. Come back here so people can see. To make it really nice, I want to show you where MySQL is running; you want to see my setup. Because if something goes wrong, I'm the one who will be called, and it's in the middle of the night, and I don't want that. You don't want that. OK, so let me just show you
that my MySQL server is up and running, as you saw earlier. I can connect with `mysql -p`, with the host set to the service IP. OK, and what is the password? Root, yeah, of course, everything is root. It's not super secure, that's true. All right, so I can show databases, and we have the data here: we have the Quintor database, which has the audience favorites in it, and it is running on... node four. So it's running on the fourth node. The fourth node, yes. Whatever you do, do not pull the first one! The fourth one, that's the bottom one. Yeah, there we go.
Are you ready for this? Yeah? All right... and it's gone. It's gone, and now... nothing? Nice. So what Kubernetes has been configured to do is also check the health of the machines, and we configured it to check every 30 seconds or so. So in about 30 seconds, which is right about now, if it all works, you're going to see node 4 turn red. Turning red... wait, wait, that's too easy. That's too easy: of course it turned red, it went down. But what's happening now? Check it out: it actually restarted MySQL for me as well. Yeah, that's not bad: MySQL is now up and running again. Very good. And this is what Kubernetes actually does behind the scenes. Remember the volume mount? Well, the volume is no longer mounted on the machine that died, because that machine went away; Kubernetes manages the volume mounts for you. So now I can go into the node it was rescheduled to: SSH into root@10.1.50.4... oh boy. By the way, I get really nervous about this demo, because I'm pulling the plug on MySQL. It's not something that you should be doing; I do not recommend trying this at home. But let me go back here.
Definitely don't do it. Wait... what? Oh yeah, sorry, it's .44, isn't it? Yeah, that's the node. Sorry, the numbers confused me for a second. So if I go there, check this out, this is really cool. Let me connect... wait... there we go. I log in as root, and if I actually look at the mounts, if I check for the NFS mount, it's actually here, so that's a good sign. The other thing I want to make sure of is that I can actually connect to it. Now remember, MySQL just got rescheduled to a different machine, but I'm going to use the same command to connect to it, with the same IP address, because it has a stable IP: it's exposed as a service. Now if I connect as root... it connects. That's not bad. Is the database still there? Let me see: show databases... yes, it's still there. But do we have the right data? Are we cheating? Do we have the right data, on a different volume? Yeah, there we go. I just refresh my application, and as you can see, it connected back to the right database, because it retries and reconnects. So it reconnected, and we've got all the same data here. Very good, I guess it all worked. Thank you very much; thanks to our real-life chaos monkey. Yeah, and if you plug the node back in, it will be marked as schedulable and ready for deployments again.
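The reason the rescheduling works is that the pod mounts its storage through a volume claim rather than binding to one machine, so Kubernetes can re-attach the data wherever the pod lands. A minimal sketch with hypothetical names (the demo used an NFS-backed volume):

```yaml
# Hypothetical sketch: a MySQL pod template mounting storage via a claim.
# Because the claim, not the node, owns the volume, Kubernetes can
# reschedule the pod to another machine and re-mount the same data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root          # demo-only; use a Secret in real life
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-pvc
```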
Well, we now have the application running on my micro data center, and that's really nice, but can we go into production with that? Yeah. If you don't want to use Raspberry Pis, sure: if you want to run it on-premises on more powerful machines, you can, but we can also run it on Google Cloud as well, like I showed earlier. And the beauty of it is this: if you want to reach a state where you can not only manage your services efficiently, just like we have shown, but also have a hybrid deployment across multiple data centers or multiple providers, whether it's cloud or on-premises, you can use the same set of descriptors. Like here: this is running locally on my machine, and I have the same deployment YAML files, which is nice, because you can check your architecture into version control. The only thing I'm doing differently here is the volume mount. Why? Because in the cloud, rather than using NFS, I can mount a real disk from the cloud. I can provision a disk, and I can mount it. So all I have to do is provision that volume, register it as a different persistent volume, and here I'm just saying that I want to use a GCE persistent disk. So I can go ahead and create the disk, register it, and then lay down the claim, so I create the MySQL PVC. And once I have done that, by the way, this is all happening in the cloud now, what I can finally do is deploy the application with `kubectl create`. And there's one very big difference here, which is the load balancer: rather than exposing node ports directly on each individual machine, I can instruct Kubernetes to create a real load balancer, directly from the YAML file. And now the application is being deployed. Let me just go back and take a look. It's the same application; the only thing that I really had to do was make sure it works in the x86 environment, so I had to change the base image: rather than using an ARM Java binary, I'm using an x86 Java binary. And once this is up and running, we can actually go and see it. Now, what is it doing right now? It's just waiting for the external load balancer to be created. So if I say `kubectl get service`, you can see it's creating a real load balancer with a real external IP address. And there we go: that's the external IP address, and I can go there... too easy, right? And there we go, the same application, running in the cloud with the same descriptors. It's very easy to do. So all of a sudden you can deploy to multiple environments in exactly the same way, and that's beautiful. Yeah, it's really nice: one set of configuration files, and you can use them to set up different environments. That's really nice.
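The two differences between the on-prem and cloud descriptors mentioned above, sketched with hypothetical names: a persistent volume backed by a GCE disk instead of NFS, and a service of type LoadBalancer instead of NodePort:

```yaml
# Hypothetical sketch: same claim name as on-prem, now backed by a GCE
# disk, so the rest of the descriptors stay unchanged.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  gcePersistentDisk:
    pdName: mysql-disk   # created beforehand, e.g. with `gcloud compute disks create`
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Changing the service type asks the cloud provider for a real
# external load balancer instead of a port on every node.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # was: NodePort on the Raspberry Pi cluster
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```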
So yeah, if you're interested in this technology, please give it a try. If you want to learn more about Kubernetes, go to kubernetes.io, and if you want to try Google Cloud Platform, you can provision a Kubernetes cluster very, very easily with the click of a button; it will install everything for you and manage everything for you as well. If you want to try it on a Raspberry Pi cluster, check with Arjen: there's a really, really good blog post that he wrote, so you can buy the right components and play with this as well. So thank you very much for your time.
All right, thank you very much. Yeah, do we have time for questions? There's no time for questions, but there are very interesting questions, so we will make some time. OK. All right, before anyone decides not to wait for the questions and leaves: please vote. I see that we have massively enjoyed this presentation, but I also see that the actual headcount is much higher than the number of votes, so please vote for this session. Yeah, there are many questions, and we can't handle them all, but since you're a Google guy, here's an interesting one.
Oh no... How well does Kubernetes fit Amazon Web Services? How well does it work? It actually works. In fact, at one of the conferences I went to, more than half a year ago, one of the attendees came over during lunch time and said: this is so awesome, I want to show my boss how to do this, but we're using a different cloud. So, what do you need to deploy this on Amazon? Well, you can actually do it over a lunch break: with the right tools set up, downloading the right binaries, it just installs, and you can provision a cluster there as well. And it actually works with their load balancers and their disks too, so do give it a try. But if you're running on Google Cloud Platform, of course, we also have really good support: with a click of a button, with a single command line, it can provision the cluster for you. Yeah.
All right, last question: what's your experience with database performance when running on NFS volumes? Yeah, that's a great question. It worked nicely for a demo setup like this one, but just remember that NFS is something we're using for the demo. Some people still use it for a variety of things, but if you want something faster, you can: you can use RBD, iSCSI, and a bunch of other things as well. Yeah. All right, great, well, thank you very much. Thank you. That's it, thank you very much!