The Big Friendly Monolith

Frans van Buul

Recorded at GOTO 2017


Let me very quickly introduce myself and AxonIQ, in just a few minutes. My name is Frans van Buul. I'm officially known at our company as "not the sales guy", as it says on my mug, which kind of means that I am responsible for sales in some sense, and for telling the world about our products, but I'm a developer as well: I spend about half of my time doing Java coding and the rest of the time telling the world about what we do.

AxonIQ is a new company, started in July of this year, so just a few months ago, and the core of AxonIQ is Axon Framework. Axon Framework is an open-source framework that by itself is not new; it has been around for about five years. The reason for starting this new company around it is that its adoption is increasing rapidly: the download chart we get from Maven Central is exponential, with 50k downloads last month. The main reason it's becoming so popular now is that people have rediscovered some of the principles behind Axon Framework in the context of microservices, but we'll be talking a lot more about that later. The business model of AxonIQ is to provide services around Axon Framework (consulting, training, that kind of stuff), and we provide a number of commercial software products in addition to the open source. This talk, though, will just be about the open-source stuff that we do, so it basically has not that much to do with the company.

The agenda for today: the first thing I want to do is go over the typical layered architecture, the phenomena and problems associated with it, and some stuff about DDD and how that relates to microservices. After that I want to jump into CQRS and event sourcing and how these concepts are implemented in Axon Framework, and then finally I will show you some code examples, just to give you a flavor of what it looks like to work with Axon Framework.

Let's start by considering the layered architecture. A very familiar picture; I guess everybody has implemented this a ton of times. You have a number of layers, the main ordering principle is that dependencies only go one way, and the layers have a shared language that we call the domain model. Sometimes there may actually be business logic in the domain model, but more often than not those are just very anemic, logic-free data holders that are really processed by the service layer.
Now let's zoom into that domain model a little bit. Domain models start simple in the beginning of a project, and then they evolve as you add more classes to them. To make that concrete, let's have a look at your typical web shop example. Everybody knows it: you can place orders, orders have a number of order lines, order lines relate to particular products and refer to a quantity of that product that you're ordering, and products have a price. Quantity times price, added up over all the order lines, gives you the value of the total order. Simple; it will work.

Now, a web shop is not just about orders; it's also about maintaining the list of products that people can buy, organized in a set of product categories, so there is a product catalog perspective as well. An interesting thing happens if prices change, and prices will change, go up, in the web shop. In this model, once you start updating that price, you will also modify already existing orders, which is not what you want, because they were ordered at that particular price point. So this model very quickly brings you into trouble, and you need to solve that.
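To make the problem concrete, here is a minimal sketch of that naive model (hypothetical names, not from the slides): because order lines hold a reference to the shared catalog product, updating the catalog price silently rewrites the value of historical orders.

```java
import java.math.BigDecimal;

// Naive shared domain model: the catalog and the orders use the same Product.
class Product {
    BigDecimal price; // mutated whenever the shop changes the price
}

class OrderLine {
    Product product;  // shared reference into the catalog
    int quantity;

    BigDecimal value() {
        // Reads the *live* price, so a catalog price change retroactively
        // changes the value of orders that were placed long ago.
        return product.price.multiply(BigDecimal.valueOf(quantity));
    }
}
```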
One way of doing it (I'm not recommending it, but it is one way) would be to say: well, actually, products have multiple prices, because there's a historical evolution of the price of each product. Each order line then refers to a product and to a historical version of its price, and my products of course also have a current price, the price that I'm currently displaying in my store. That might work, but it's spectacularly complex for something that seems so easy, right? It's a very simple use case. And if this simple use case already brings you logically into this kind of mess, then what happens once your program really starts to evolve is that you end up in something like this, also known as the big ball of mud; my colleague Allard always says that's the architect sitting on top.
Interestingly, if we go back to this picture: we've seen the complexity in this domain model, and the main thing we're doing in this model to manage complexity is to use those layers. But those layers fundamentally have nothing to do with that complexity. You can add layers all you want; adding more layers is not going to solve the problem, because the problem is not in the layers.

This is, I think, where microservices have a very interesting role to play, because they are a fundamentally different approach to managing the complexity. Instead, you say: I'm going to manage complexity by breaking my system up into very small units that by themselves don't carry that much complexity. That's an attractive idea, and once you're there, you can also scale the units separately, deploy them in all kinds of different ways, and upgrade them separately. So it's a good proposition.
But there is some interesting stuff going on with microservices projects. These are some quotes from Martin Fowler, and he says: many successful microservices projects have started as a monolith that was broken up into microservices later on, while on the other hand, if you start out with a microservices architecture because it's such a great pattern, chances are that your project will fail. So what should you do? You should actually start with that monolith and then break it up. But at the same time (this is more a quote from myself, having heard it from many people) it's really difficult to do that: if you have a legacy monolith, there is no easy way to split it up into microservices.

So what you would like to do, ideally, is start simple and then evolve, but avoid the big ball of mud scenario, and instead arrive at a structured monolith that you can later split into microservices, or even more microservices, as needed. That is what we call the big friendly monolith: it's a monolith, but a well-organized one, and one that can easily be split up into microservices as needed. This is the kind of thing that we want to support with Axon Framework, and that many of our users have actually done successfully; we have some case studies on that scenario at our booth as well.
So what is Axon Framework? It's an open-source framework that started out as an open-source implementation of the CQRS principle, something we will discuss in more detail, and it's actually the thing that makes these evolutionary microservices possible. The essential part of the framework is that it's messaging oriented: you have components that send messages to one another, and we recognize three different types of messages: events, commands, and queries. Not just events; that's a very important part of it. It enables location transparency, so components don't really care where other components are, and that's the thing that makes it easy to split up an existing application. And it's highly customizable: you can start out with very simple technology (if you have a monolith, you don't need fancy message buses or anything), but later on, if you want to integrate with Kafka or Rabbit or whatever, that's also possible without changing your business code.

The core pattern behind it is CQRS: Command Query Responsibility Segregation. As the name suggests, that means you have separated your application into two sides, one handling commands and one handling queries. Commands are things that change state, so a command is a request to do something, but it doesn't provide any information back; maybe a confirmation that it has been executed, but that's it, no more information. Queries are exactly the opposite: they are questions, and you get answers from them, but they don't change the state; they leave the state unchanged. The one side is called the command model; the other side is called the read model, or projections, a term you will often find in the literature about this.

Now of course this can only work if the projections are up to date with the state of the command model, which means there is a third component here: the events. Whenever a command leads to a change in the command model, the command model will raise an event, the event will be picked up by the projections, and then you get good answers from your next queries.
This is a pretty complex thing to do, right? If you compare it to having just one model and using that both for transactions and for queries, this introduces a huge amount of complexity. So a very good question to ask is: why would you do this? Well, there are many technical reasons. One of them, for instance, is that the throughput characteristics may be totally different on the two sides. If we consider the web shop example again, changes in your product catalog happen quite infrequently compared to the number of times that the catalog is actually being accessed by consumers, so you may wish to optimize those two things in very different ways. If you optimize your write side for processing transactions, you would probably normalize it, and that might lead to very complex queries on your read side to actually produce the data again, which will be bad for performance. So there are many different technical reasons, but the overarching reason you would want to do this is simplification in the long run. For a very simple application, it introduces some additional complexity in the beginning, because you have those commands, those events and everything, but in the long run you will find that these individual command handlers and event handlers are simple objects that are easy to maintain and expand.
Zooming in a little bit on those two sides: this would be the command model. A very important aspect of the command model is that you start thinking in aggregates. Aggregates are like units of consistency and persistence, and you split up the command model into a number of different aggregates, where you stop having connections between everything. If you do a typical data modelling exercise in a database, you might have foreign key relations between all objects, making it one big thing; when you think in aggregates, you use small units. To make that concrete, going back to the order and catalog example: we might have an order aggregate that has a number of order lines, with quantities and prices and everything, and separately from that a product aggregate which has the category and the current price. What happens now is that when you place a new order, the order aggregate gets created and records the price at that point in time, while the product aggregate is updated when the price changes in the web shop, but that does not affect an existing order. So you have solved your modeling problem in a much more elegant way.
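A minimal sketch of those two aggregates (hypothetical names): the order copies the price at the moment of ordering, so a later catalog update cannot touch it.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Product aggregate: owns only the *current* catalog price.
class Product {
    private BigDecimal currentPrice;

    void changePrice(BigDecimal newPrice) {
        this.currentPrice = newPrice; // affects the catalog, not past orders
    }

    BigDecimal currentPrice() {
        return currentPrice;
    }
}

// Order aggregate: no reference to Product, just a copy of the price
// as it was when the line was added.
class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    void addLine(String productId, int quantity, BigDecimal priceAtOrderTime) {
        lines.add(new OrderLine(productId, quantity, priceAtOrderTime));
    }
}

class OrderLine {
    final String productId;
    final int quantity;
    final BigDecimal price; // frozen at order time

    OrderLine(String productId, int quantity, BigDecimal price) {
        this.productId = productId;
        this.quantity = quantity;
        this.price = price;
    }
}
```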
Looking at the read side, the key thing is that you have many different read models, segregated; typically it might be one per use case, or one per screen in your application, each optimized to deliver the data needed for that particular use case very efficiently and very simply. You may also use very different technologies there. In the web shop example again, you might want full-text search over your products, so people can find what they need; for that you may use an Elasticsearch engine, for instance. Other stuff may better fit a relational database model, and yet other stuff may fit better in a document store. This concept of CQRS and read models makes it very easy to implement those various technologies alongside each other without your application architecture becoming one big mess.
If you put those things together, this is roughly what an Axon architecture looks like. You have command handling components that have their own storage model, the domain model; if something changes there, they publish events; those get picked up by event handling components, which do updates on separate storage, and that's where you do your reads. The application user interface connects with both parts, so it's still just one application. Looking at that again as a sequence diagram: the product manager creates a product, that raises something like a ProductCreatedEvent, which ends up in the product projection; that's when it becomes available for shoppers. A shopper, a consumer, would do a get-product-info query, get the info including the price, and then the product would be added to an order, which would end up in your projections. That's the kind of flow.
Now, one thing where you have a choice in this model is how to persist aggregates. These aggregates are persisted in the command model and read back, of course, whenever a new command needs to be processed. The traditional way of doing that is just to persist the current state into a database; in Java many people choose JPA, and other approaches are possible as well, but it's always the current state. There is one interesting alternative, which is in fact used by most Axon Framework users (it's not mandatory, but it's very popular), and that's event sourcing. With event sourcing, you don't store the current state; instead you store all the events that have happened that changed the state, and whenever you need to read back the aggregate, you just replay all those events. Again, that seems like some added complexity, and there is some performance hit, of course, as well.
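The mechanics are essentially a left-fold over the history. A minimal, framework-free sketch of the idea (illustrative names; Axon does this for you):

```java
import java.math.BigDecimal;
import java.util.List;

interface OrderEvent {}

class OrderCreated implements OrderEvent {}

class OrderLineAdded implements OrderEvent {
    final BigDecimal price;
    final int quantity;

    OrderLineAdded(BigDecimal price, int quantity) {
        this.price = price;
        this.quantity = quantity;
    }
}

class OrderState {
    BigDecimal total = BigDecimal.ZERO;

    // Current state is never stored; it is rebuilt by replaying the
    // stored events, in order, from the event store.
    static OrderState replay(List<OrderEvent> history) {
        OrderState state = new OrderState();
        for (OrderEvent event : history) {
            state.apply(event);
        }
        return state;
    }

    private void apply(OrderEvent event) {
        if (event instanceof OrderLineAdded) {
            OrderLineAdded added = (OrderLineAdded) event;
            total = total.add(added.price.multiply(BigDecimal.valueOf(added.quantity)));
        }
    }
}
```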
But there are some great benefits in doing it, and they fall into two categories. First there are business benefits, and those are the most important ones motivating organizations to do this: there is a lot of value in capturing all that history. It may be value for compliance purposes, if you ever need to show exactly what happened in a particular case. For instance, we have some online casinos that use Axon Framework, and for them it's very important that if a dispute arises, they can show exactly what happened; they're using event sourcing for that reason. Another reason is more about machine learning, analytics, and data science: it's about exploiting the simple fact that data is valuable. It's very valuable to know what happened, and to optimize your business based on that.

Just to show you an example: suppose again that we have this web shop and it sells fruit. You could order two bananas and one peach, and that would just be the order aggregate. But in an event-sourced world, you might actually see that something else happened: apples were added to the order, and the bananas, and then the apples were removed and a peach was added before the order was actually committed. For order fulfillment, the apples are totally irrelevant, because they were not ordered. But from a business optimization perspective it's very interesting to learn that I was on the brink of buying apples and then decided not to. What could the shop do differently to make me buy those apples next time? That's an interesting thing, so you don't want to forget about it, and that's what event sourcing gets you.
Another reason to do event sourcing is that it's interesting to capture intent. Suppose I have a customer database and I'm updating an address with simple update statements. That will work, but it won't record any information as to why the address was updated. In an event-based system, what you do is create events that specify what happened. They would both have the result that the address gets updated in the read models, but you can distinguish between a mere correction of an address and an actual customer relocating to a new address, which from a business perspective are different things that you might react to differently. If you get the second event, you might send a nice postcard to your customer saying congratulations on your new house; that doesn't make any sense for the first event. So it's good to make the distinction, and that's what event sourcing can do.
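For instance, two hypothetical events carrying identical data but different intent:

```java
// Both events cause the same update in the address read model, but they
// capture different business intent, so other components can react
// differently (only the second one triggers the postcard).
class CustomerAddressCorrectedEvent {
    final String customerId;
    final String address;

    CustomerAddressCorrectedEvent(String customerId, String address) {
        this.customerId = customerId;
        this.address = address;
    }
}

class CustomerRelocatedEvent {
    final String customerId;
    final String address;

    CustomerRelocatedEvent(String customerId, String address) {
        this.customerId = customerId;
        this.address = address;
    }
}
```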
So this CQRS stuff, and especially event sourcing, are really interesting ingredients for the big friendly monolith, because the splitting is already there in the concept. These aggregates have small boundaries, so different aggregates can very easily live in different microservices. The read models are separate (you have read models per screen or per use case), and read models don't have to live on the same server as the command models; they just have to be able to exchange events. So there's a very natural mapping between these concepts and microservices. There's just one other thing that you also need, and that's location transparency. If those components assume that they're all on the same server, then you cannot move to microservices; if they always have to communicate via a bus or something, then you're back in the place where you're starting with microservices from the beginning. So you want to abstract that away and allow evolution in that dimension.
The way we do that in Axon is that all communication between those components takes place through a bus API, just a set of Java classes. Components do not make any assumptions about where other components are, and then in the Axon configuration you can either choose a very simple implementation of those buses that just lives inside a JVM and passes all messages directly, or you can choose an implementation that actually goes over the network to different systems.

Importantly, in our messaging concept and our bus concepts, it's not just about events. We sometimes see architectures being proposed that are so-called event-driven, where all components speak to one another purely in the language of events. We believe that this is not a very good idea. The reason is that events lead to a particular dependency direction between components. In this example, we have component A raising events that are processed by component B, and potentially by other components as well. This creates a dependency from component B on component A, because A determines what the event looks like: A is raising the events, and B has to adapt itself to the model that A has created. Now, if events also flow in the other direction, from B to A, the same mechanism ensures that there is a dependency from A on B. The two components are now interdependent; you have a cyclic dependency, and the chances are that if you need to change something, you need to change both components, which is exactly what you do not want in a microservices architecture, where you want to update them separately.

Now consider what happens if you use commands. If A sends a command to B, then A has to adapt itself to the command language that B has defined; A is a client, so that's a dependency from A on B. And if events are then sent back from B to A, that's again a dependency from A on B, because A has to adapt itself to the events produced by B. The dependency is now only in one direction, and you can change A without making any change in B. That's a fundamentally better place to be.
So this is one very important reason why we believe it's not just about events. The other thing is that if you look at these three concepts, commands, events, and queries, they have quite different routing patterns. A command always has to be processed exactly once, and you need some confirmation that this happened. Ideally you also want consistent routing: if you have many instances of command processors, it's a good thing to send commands that target the same aggregate to the same instance, because then the caches will be warm and it will work a lot faster. Events are a totally different routing story: events are spread to everyone, and you don't get any confirmations on them, but it's very important that they are processed in the right order. Suppose you're writing a read model and you get an order-created event and then an item-added-to-order event, but you have multiple instances of the read model, and the order-created event is handled by one of them while item-added-to-order is handled by another. It doesn't work if the order of events doesn't match anymore; if you get the item-added-to-order before the order was created, things will not work smoothly. So events have their own specific routing requirements. And queries are different again: usually you have one query handler giving an answer to a query, but there are cases where you have scatter-gather type patterns, or competing queries. Think for instance of a pricing service where you put a second pricing service next to it for special promotions or Christmas campaigns, with discounts and everything; your logic with queries may then be that you want the best answer to your price query, which is the lowest one. So there is specific routing possible with each of those message types, and that's again a reason why we believe this kind of architecture should not just be about events: messaging is richer than that.
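At the time of this talk, queries weren't handled over a bus in Axon yet (more on that below), but the scatter-gather idea itself can be sketched in plain Java; the names here are illustrative, not an Axon API:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Scatter-gather: send the same price query to all pricing services in
// parallel, gather the answers, keep the best (lowest) one.
class BestPriceQuery {

    interface PricingService {
        BigDecimal quote(String productId);
    }

    static BigDecimal bestPrice(List<PricingService> services, String productId) {
        List<CompletableFuture<BigDecimal>> answers = services.stream()
                .map(s -> CompletableFuture.supplyAsync(() -> s.quote(productId)))
                .collect(Collectors.toList());
        return answers.stream()
                .map(CompletableFuture::join)   // gather
                .min(BigDecimal::compareTo)     // lowest price wins
                .orElseThrow(() -> new IllegalStateException("no pricing service answered"));
    }
}
```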
Let's have a look at a few examples. If you use this style of architecture, you of course need to define a lot of commands and events as classes, and they all tend to be immutable and very boring to write. Even though you can write them all in Java directly, almost nobody does that. It's quite popular to use Project Lombok for it: you annotate your classes and a lot of the stuff you need for these immutable classes is generated automatically. It's even more popular nowadays to use Kotlin for this. Even if you don't use Kotlin for your entire Axon project, it may be very useful to use it just to define those data classes: you can define many of them in a single file, you just declare the variables you need, and you get a bunch of immutable classes with equals, getters, constructors, and everything. That's the most popular thing to do.
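As a sketch of that style, here are command and event classes for the order example written with Lombok (names are illustrative; in Kotlin each of these would be a one-line data class):

```java
import java.math.BigDecimal;

import lombok.Value;

// @Value generates the constructor, getters, equals/hashCode and toString,
// and makes all fields private and final, giving us immutable messages.
@Value
class CreateOrderCommand {
    String orderId;
}

@Value
class OrderCreatedEvent {
    String orderId;
}

@Value
class OrderLineAddedEvent {
    String orderId;
    String productId;
    int quantity;
    BigDecimal price;
}
```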
Another thing that's relevant to note about this example is that in the add-order-line command we're using the annotation @TargetAggregateIdentifier. That's the thing that helps Axon know which aggregate you're targeting with your command. It's not needed when you're creating a new aggregate, because then it doesn't have an ID yet, of course, but when adding an order line you need to specify which order it should be processed in. The flow is that when this command is processed by Axon, Axon starts reading all the events related to that existing order and replays them; then you have the state of the order in memory, and then you can actually execute the command. That's how it works.
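A sketch of that command (the annotation is Axon's; the package shown is its Axon 3 location, and the field names are illustrative):

```java
import java.math.BigDecimal;

import lombok.Value;
import org.axonframework.commandhandling.TargetAggregateIdentifier;

@Value
class AddOrderLineCommand {

    // Tells Axon which aggregate instance to load (replay) before handling.
    @TargetAggregateIdentifier
    String orderId;

    String productId;
    int quantity;
    BigDecimal price;
}
```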
Sending commands in the client code looks like this. You have this command gateway object, which is your entry point into Axon. Based on input you get from the user, you set up one of those commands, here a create-new-order command. We choose an ID for it; most of the time people use UUIDs, because they can be generated without any concerns about uniqueness. Then it's sent through the command gateway, and that's where Axon picks it up. Depending on the configuration, that may mean it's actually handled in the same runtime, the same JVM, or it may go over the network, but this code wouldn't know about that at all.
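Roughly like this (a sketch using Axon's CommandGateway; the surrounding controller class is hypothetical):

```java
import java.util.UUID;

import org.axonframework.commandhandling.gateway.CommandGateway;

// The CommandGateway is the entry point into Axon; whether handling happens
// in the same JVM or across the network is purely configuration.
class OrderController {

    private final CommandGateway commandGateway;

    OrderController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    String createOrder() {
        String orderId = UUID.randomUUID().toString(); // client-generated ID
        commandGateway.sendAndWait(new CreateOrderCommand(orderId));
        return orderId;
    }
}
```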
Inside the aggregate, your code may look like this. You have, of course, an aggregate identifier that you need to annotate, so that Axon knows what you mean by it, and then there are command handlers. They do a bunch of checks to determine whether or not the command is allowed to be processed, but they never, ever change state directly, because every change always goes through events. So if everything is OK, they call the apply method, which is a static method in Axon Framework, and include in the applied event all the information that's needed to update the aggregate. The actual updates are done inside an event handler; the order-line-added event may be used there to change the state of the aggregate.

Now, what's interesting, and a little bit counterintuitive, is that you don't need to store all state in the aggregate. The only reason to store state in an aggregate is if you somehow need that state to make command decisions later on. If you just want to remember it for querying, to show that data to users, you don't need to have it in the aggregate, because that will be done by the read models, and the read models get a chance to process these events as well. In this case, the only business logic implemented here is that we validate that the quantity in a command cannot be less than 1; nothing else is validated. If that were truly all my business logic, I wouldn't need to store anything in my aggregate; we could even drop the event handler altogether. Now suppose I have slightly more complex business logic, and we check that the total order value can never exceed 10k. Then of course you need to start maintaining some state, but to do that check, the only thing you need to remember is the total value of the order, and that would be updated in the event handler. There's no need to store all the order lines, for instance. Of course, this is just to illustrate the principle; if you worked this out in a bit more detail, there would probably be a good reason to keep those order lines around. But it's important to know that you don't have to store all information in the aggregate.
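Putting those pieces together, a sketch of what such an aggregate can look like, using the command and event classes from above (Axon 3 package names; the exact code on the slides may differ):

```java
import java.math.BigDecimal;

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.commandhandling.model.AggregateIdentifier;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.spring.stereotype.Aggregate;

import static org.axonframework.commandhandling.model.AggregateLifecycle.apply;

@Aggregate
public class Order {

    private static final BigDecimal MAX_ORDER_VALUE = new BigDecimal("10000");

    @AggregateIdentifier
    private String orderId;

    // Only the state needed for command decisions: the running total
    // (for the 10k rule), not the full order lines.
    private BigDecimal totalValue;

    protected Order() {
        // Required by Axon: used when replaying events to rebuild state.
    }

    @CommandHandler
    public Order(CreateOrderCommand command) {
        apply(new OrderCreatedEvent(command.getOrderId()));
    }

    @CommandHandler
    public void handle(AddOrderLineCommand command) {
        // Checks only; no direct state changes in command handlers.
        if (command.getQuantity() < 1) {
            throw new IllegalArgumentException("quantity must be at least 1");
        }
        BigDecimal lineValue = command.getPrice()
                .multiply(BigDecimal.valueOf(command.getQuantity()));
        if (totalValue.add(lineValue).compareTo(MAX_ORDER_VALUE) > 0) {
            throw new IllegalStateException("total order value may not exceed 10k");
        }
        apply(new OrderLineAddedEvent(command.getOrderId(), command.getProductId(),
                command.getQuantity(), command.getPrice()));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        this.orderId = event.getOrderId();
        this.totalValue = BigDecimal.ZERO;
    }

    @EventSourcingHandler
    public void on(OrderLineAddedEvent event) {
        // State changes happen here, both on first execution and on replay.
        this.totalValue = totalValue.add(event.getPrice()
                .multiply(BigDecimal.valueOf(event.getQuantity())));
    }
}
```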
The read models would look like this: you have an event handler there as well, and basically how you do the rest is up to you; Axon in its current form doesn't do a lot for that. This is an example with JPA, but it could be any type of technology. The core thing is that you're capturing those events and updating your read model accordingly. Something that will be included in the new version of Axon, which will come out in a couple of weeks, is that queries will be handled over a bus as well. Currently, queries are just direct invocations, in this case of the find-orders method; we don't yet have a way to easily put those on a bus or create location transparency for queries as such, but that will be included in the new version.
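A sketch of such a JPA-backed read model (Axon's @EventHandler plus plain JPA; the entity and query are illustrative):

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
public class OrderSummaryProjection {

    @PersistenceContext
    private EntityManager entityManager;

    @EventHandler
    public void on(OrderCreatedEvent event) {
        entityManager.persist(new OrderSummary(event.getOrderId()));
    }

    @EventHandler
    public void on(OrderLineAddedEvent event) {
        // Denormalized update, optimized for the screen this model serves.
        entityManager.find(OrderSummary.class, event.getOrderId()).lineCount++;
    }

    // At the time of the talk, queries were plain method calls like this one.
    public List<OrderSummary> findOrders() {
        return entityManager
                .createQuery("select o from OrderSummary o", OrderSummary.class)
                .getResultList();
    }
}

@Entity
class OrderSummary {
    @Id
    String orderId;
    int lineCount;

    protected OrderSummary() {
    }

    OrderSummary(String orderId) {
        this.orderId = orderId;
    }
}
```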
So far we've seen how to program with Axon, looking at your domain model side and your command handling side; of course, you also need to set up the infrastructure. Assuming you're using Spring Boot, and you just want a local command bus and a local event bus, use event sourcing, and use all the defaults, there is essentially nothing to configure: you can just use the Axon Spring Boot starter, and it will just work; all these examples will just run.
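A minimal sketch, assuming the Axon 3 `axon-spring-boot-starter` dependency (group `org.axonframework`) is on the classpath:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// With the starter on the classpath, Axon auto-configures a local command
// bus and an event bus/event store, and picks up @Aggregate and
// @EventHandler beans; no further configuration is needed for the defaults.
@SpringBootApplication
public class WebshopApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebshopApplication.class, args);
    }
}
```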
Now, once you start to evolve, once you go on that journey into microservices, you are going to change those things. One of the simplest changes you could make is to go to an asynchronous command bus. The default that you get without configuring anything is a simple command bus; if you're using Spring, you can define an alternative command bus bean, the asynchronous command bus, and that's sufficient to get all your commands processed asynchronously. If you want to actually distribute it, there are various options. One option is to use JGroups: what you need to do to enable that is put the right dependencies in your project, so they're on the classpath, and then enable one configuration property, and then you have a distributed command bus. That's very easy to do, so without making any changes to your application logic, you can evolve from a monolith into microservices. You can also use Spring Cloud; that's a little more complex to configure, and I won't go into all the details, but it's the same concept: you create some additional beans. The examples that I use are all in Spring, and most Axon Framework users use Spring, but the framework isn't tied to Spring; you can use it without Spring as well.
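The asynchronous variant is just an alternative bean definition, sketched here with the Axon 3 class name. For the JGroups option, the "one configuration property" refers to the distributed command bus auto-configuration; its exact property name depends on the Axon release, so check the reference guide for your version.

```java
import org.axonframework.commandhandling.AsynchronousCommandBus;
import org.axonframework.commandhandling.CommandBus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CommandBusConfiguration {

    // Overrides the default SimpleCommandBus; command handling now happens
    // on a thread pool, with no change to any business code.
    @Bean
    public CommandBus commandBus() {
        return new AsynchronousCommandBus();
    }
}
```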
To summarize: we believe that these concepts of CQRS, potentially event sourcing, and DDD are really important ingredients for having these evolutionary microservices; they help you get started quickly without ending up in the big ball of mud scenario. Apart from that evolutionary aspect, event sourcing has some really interesting benefits of its own, because of keeping history, exploiting the value of data, and compliance, so it's a valuable concept by itself. And Axon Framework is a really easy way to implement this in Java. You don't have to use it at all; mostly there is no magic there, and you could do everything that Axon Framework does yourself; it's just a waste of your time, because it's already there, and it's free. If you're interested and want to learn more about this, we will have some Q&A of course, and I will also be at the booth, together with a colleague who actually has a lot more Axon experience than me, so please ask us anything. If you want, you can also register for our mailing list, and we will send you some updates occasionally. We would very much love for you to do that: we have about 50,000 downloads a month, and we personally know only a couple of hundred Axon Framework users, so it's very important for us to get a lot more connections into the Axon Framework community. That's why it's so valuable to us. Thanks a lot.
Thank you very much. I've got a bunch of questions, and we have some time left, so I'll try to ask most of them; otherwise, the booth is just outside this room, so that helps. Can you elaborate on how to guarantee consistency? Do you query the read model in the write model? How do you start simple but leave a lot of room for improvement?

OK, so in this architecture, consistency requirements are usually relaxed: you have eventual consistency, not consistency at every point in time. If you do want that, you could use a synchronous event bus and command bus, and then you can actually process everything in a single database transaction, and then you would have guaranteed consistency. But the problem with consistency over an entire system like this is that it doesn't scale well: you cannot really move to various microservices and then do big database transactions across all of them. So the key strategy is to use eventual consistency and simply accept the fact that your business model will not be consistent at every millisecond. That's how it's usually dealt with.
What do you think about having chatty commands in the context of building REST APIs, returning the newly created resource, for example? So, the thing that is directly supported by our command handling is that it returns the ID of a newly created object, but only the ID. That may or may not be enough for your users. If it's not enough, the way we recommend doing it is to create a synchronization layer on top of your actual commands and read models: you would first create something, then read it back from the read side, and then give that result back to your users, while still keeping the same architecture pattern in the backend.
This question might relate a bit to the previous one: how do you guarantee event delivery? Is there a way to resynchronize the model? Yes; there are multiple ways of handling events. One would be to just listen to new events coming in and act directly upon them. The other is what we call a tracking event processor, which means that events get stored in an event store, which can be a database or something else, and these tracking event processors read from that store and keep track of where they are, hence the name. So if they're not online for a while, they can just start reading again when they come back online and continue from the last place they had been. Also, if you create new read models, you need to read back all the events to initialize that read model, and this can also be done with those tracking event processors. So the basic way of guaranteeing that events don't get lost is storing them, essentially.
How do you deal with potential time delays between a command executing and being able to read the data from the query model? First of all, you should evaluate whether that's a real problem; sometimes it is, sometimes it's not. In many cases it's totally acceptable that it takes a few milliseconds before your read model gets updated. If it's not acceptable, there are a few solutions. The simplest one, not very elegant, is simply to wait. The other thing, where you have more stringent requirements, is to include some kind of timestamp with your commands, and this timestamp would then be replicated to the read model as well, so you can send a query to the read model saying: I want to read as soon as you have processed this timestamp. That's one way of doing it; it's kind of involved, so it's not the default thing you should always do, but it's a mechanism that works and is sometimes used.
Here's one that's pretty relevant with GDPR coming up: for event sourcing, how do you handle the fact that you may need to delete some data, for example for privacy reasons, which may lead to inconsistency? The funny thing is that, as I told you in the beginning, we have a number of commercial software products, and one of those products is designed exactly to deal with this issue. The issue is that under the GDPR you sometimes have to erase data, if people ask for it, while events are immutable, so it's hard to erase data from events. Now, one way of dealing with that is to exploit the fact that events are conceptually immutable but physically just data, so you can actually change them. That goes against the model, but it will work. There are some big disadvantages there, though: it's kind of hard to do, and it diminishes the value of your event stream as an audit log for compliance purposes. The other thing you can do is use cryptographic erasure: you encrypt personal data fields and store the key somewhere else, not in the event stream but in a separate key database, and then you can throw away that key if you get an erasure request. That makes the personal data fields effectively erased, because you only have them in encrypted form. You could implement that yourself; what we have implemented is our GDPR module on top of Axon, to enable this specific scenario.
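A minimal sketch of the cryptographic erasure idea in plain Java (toy cipher configuration and in-memory key store, purely illustrative; this is not the commercial module):

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Crypto-shredding: personal data in events is encrypted with a per-person
// key kept *outside* the event store; deleting the key makes the field
// unreadable forever, while the immutable events stay untouched.
class CryptoErasure {

    private final Map<String, SecretKey> keysBySubject = new HashMap<>(); // separate key store

    byte[] encryptForSubject(String subjectId, byte[] personalData) throws Exception {
        SecretKey key = keysBySubject.computeIfAbsent(subjectId, id -> newKey());
        Cipher cipher = Cipher.getInstance("AES"); // toy setup, no IV handling
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(personalData);
    }

    void erase(String subjectId) {
        // "Right to erasure": throwing away the key effectively erases the
        // encrypted fields in all stored events for this subject.
        keysBySubject.remove(subjectId);
    }

    private static SecretKey newKey() {
        try {
            KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(128, new SecureRandom());
            return generator.generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```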
Can you elaborate on potential downsides you've experienced using event sourcing? That's a good question; what do you think about it? I think it is applicable to a lot of applications, but I have to say, one downside might be that your database gets really big, and that's something customers do experience; of course there's now a commercial product, an event store, but yes, you end up with really big sets of data. For the rest, I think it's pretty useful. That's actually a pretty good point: event stores tend to grow, and relational databases don't really like tables with billions of records. As long as you're producing a thousand events per day, or ten thousand per day, it's a total non-issue, but some of our users are producing ten thousand events per second, and then these tables grow very fast and there are some management issues there. Yes, and of course, if you have applications that are so simple that you don't need all the events, or you're not going to use the events, then you should not apply event sourcing. It's not perfect for every application, but most applications would fit.
Is it possible to migrate existing historical data, not event-sourced, to an event store in retrospect? If yes, how? Yes. The only thing that you cannot do is suddenly discover history that you haven't persisted in the past; it's not magic, of course. But you can migrate. What you would probably do is create a special set of events that represent the migration, then replay your old store into these new events, and then make sure that your event handlers can deal with normal business events as well as these migration events. That just works.

Can you compare Axon to other CQRS-based frameworks for creating microservices, like Lagom? Honestly, I can't. I'm aware of Lagom, of course, but I don't know a lot about the details. When I talk to people who are not using Axon, or who have used Axon but went to something else, it's usually Akka, so people moving to Scala and Akka, which I think is closely related to the Lagom stuff. That's honestly just a different abstraction: there are actors, and if you want to do CQRS on top of that, it's just a lot more involved. That's all I really can say.
Is there a built-in monitoring tool for the message bus in Axon? At the moment there's not, but there will be; that's again one of those commercial products that we're developing. If you look at the entire roadmap of what we're doing: of course we will continue to evolve the framework itself, one of the things in there being better support for queries, with the location abstraction. Other things we're developing are an event store for processing super large numbers of events, and a new routing platform. You can do all the routing that's needed for commands and events and messages with traditional technology like Rabbit or Kafka or whatever, and that works, but to implement the more advanced message routing use cases, it gets really involved; it's not that easy to do. So we're developing this new messaging platform, which will let you do that very easily and also give you better monitoring capabilities.

I think that's it for the questions from the app. Does anyone else have a question, as we still have time? Thank you very much.