Rugged Reperimeterisation

Chris Swan

Recorded at GOTO 2015



Good morning everybody. I'm Chris Swan, CTO at Cohesive Networks. I spent about 15 years of my life working in financial services, and before that the military, so I've dabbled around with security a fair bit over the years. I sometimes talk about security as being the Hotel California of IT: I occasionally check out, but it always gets me back eventually. If you've got questions then please use the application to ask them, and hopefully Adrian will be keeping an eye on those so that we can answer the questions at the end of the session.
So I wanted to start out with this news flash from earlier in the year. The big news was that Google was saying it was going to move all of its corporate apps onto the internet. If you get into the detail that's not quite exactly what they were doing, but the point was that Google had looked at the whole concept of having an intranet, an internal network, and said: this just isn't working for us anymore. We're just as capable of securing applications that are public facing and hosted on the internet, in their case in their data centers, as we are applications that sit on this different thing we call our internal network. So they're going through this process of saying there's no such thing as the internal network anymore: it is all just applications hosted on our platform that we selectively choose to open up, whether those are accessible to just Google employees, or just Google users, or the whole of the world.
So I want to set the scene a little bit in terms of what I'm talking about here, and I'm going to start out with a traditional application architecture. If we look at a typical application, and I see our customers with thousands of applications like these, a business application is generally a collection of servers, and because we do stuff on the cloud we normally see virtual servers. You'll have a bunch of servers at the front end doing web serving, in a typical n-tier architecture, another bunch of servers in the middle doing the application tier, and another bunch of servers doing the database. I've squeezed those together on the diagram because people are normally doing some sort of clustering, or they're using bigger servers for their database, so they're different from the other servers. Then maybe there are some message queues off to the side if you've got some asynchronous stuff going on. This sort of architecture ended up raising the question of what is the right traffic that should be going between all of these things, and we've answered that question quite often in the wrong way, by just saying: oh, it's all behind the firewall, and it doesn't matter what the right traffic is; any traffic is allowed to move across our network, because it's the internal network, and we're safe and happy with that, and it's secure.
Now, at some point in the last few weeks, somebody changed the title on the GOTO website and said something about microservices. I never really planned to talk about microservices, but I'll take that diversion. I would say that in many ways modern architectures don't actually change very much of these aspects, especially when we're thinking about stuff in terms of network traffic. If I look at a microservices-based application, then rather than being a collection of servers it's a collection of services, and I might name them different things, so instead of the database tier I might have a persistence service, but it kind of ends up still being the same diagram, and we still end up asking the same question of what is the right traffic going from one thing to another. Now, I think the key difference here is if we look at exemplars of microservices architectures, and we've got Adrian here in the room, who can talk at great length about what went on at Netflix: the way that they ended up building these things dictated a different network architecture, and hence a different approach to network security.
So if we take something like the persistence services: those persistence services never had a life on Netflix's corporate intranet. They only ever had a life on Amazon EC2, in the context of the virtual private clouds and the security groups that existed there, so it would have been ridiculous for those services ever to be opened up to anything other than the intended traffic. The whole premise of a cloud security architecture, where we're whitelisting things, rather than an enterprise network architecture, where we're saying good guys inside, bad guys outside, the firewall protects us, is entirely different. And really it's that change in premise I want to spend a good chunk of this talk exploring, along with the very rapid evolution that we've had, over the course of less than a decade, in the approaches that we take to doing security, particularly security at a network level. I'll explore why we do security at a network level, what's good about that, and what can be bad about that.
So if we look at an enterprise data center: a moment ago I said here's a typical enterprise application, and an enterprise data center is just X by X of typical applications, lots and lots of them. If we look at the spending that happens on security, then eighty percent of that security spend goes on the perimeter, and about twenty percent goes on all the rest of the stuff on the interior: things like identity management, the directories that go along with that, data leakage prevention, and so forth. So there's a huge imbalance there, and I think this was touched upon in the introduction to the Rugged track by a number of the speakers: we're spending our money in the wrong place. And that hard-on-the-outside, soft-on-the-inside model has consequences.
When we have a breach, and I think to a certain extent breaches are inevitable, and a big part of rugged is being able to recover from the inevitable breach in a sane manner, then with this network security model one penetration becomes the stepping stone to east-west traversal. I'm not going to name and shame, but I'm sure everybody in the room can think of countless examples, over the last year even, of organizations where an application has been penetrated, and then the attackers have got into the email, they've got into the CRM, they've got into the crucial business applications, and they've taken all of that sensitive data and posted it somewhere public. It's very embarrassing, and everybody keeps saying something must be done about this. So what should be done about it?
Cloud architectures, as I already mentioned with the Netflix example, have been different, and they've evolved very quickly, and I want to take a quick journey through that evolution to describe where we've got to. In 2006, Amazon Web Services, which had already existed by that time for about a year, launched this thing called Elastic Compute Cloud, and Amazon Web Services went from being an actual web service, where you could do a SOAP request or an HTTP request, ask about a book, and it would return some data based upon a unique identifier, to this thing where you could have coin-operated VMs on demand. Ten cents an hour bought you an m1.small, and the network security model was that every single one of those VMs sat lonely and proud out on the internet, and the only thing that stood between that virtual machine and the big bad internet was a set of Xen security groups in the hypervisor. That turned out to be actually enough to do some really useful stuff. The Xen security groups were tough enough: we didn't have countless examples of people just busting straight through them and taking over virtual machines. There were lots of security issues with EC2 in those days, but most of them actually revolved around key management and stuff being left behind on storage, that kind of thing. So the firewall stood up, but a single VM was kind of difficult to work with as soon as you were trying to build applications out of that.
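As an aside, security groups remain the basic whitelisting primitive today. Here's a minimal sketch of that deny-by-default model using the boto3 library against the modern EC2 API; the group name, VPC ID, and port are illustrative assumptions, not anything from the talk:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A new security group allows no inbound traffic at all by default.
sg = ec2.create_security_group(
    GroupName="app-web-tier",           # hypothetical name
    Description="Whitelist-only perimeter for one application's web tier",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)

# Explicitly whitelist the one flow we intend: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)
# Anything not listed above is dropped: deny by default, allow by exception.
```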
If we go a couple of years on, people were starting to try to do bigger, better, more adventurous things with Amazon, and we saw the arrival of overlay networks. You'd get one of those VMs and say: you're my special VM that's creating me a network, and then you'd plumb all of the rest of the VMs into that using encrypted network connections, and it ends up forming this conceptual thing where you've got a single network virtualized across the cloud. All of those VMs can talk to each other, so you can make a functional application out of many VMs, and you also secure the data in motion: as these VMs are talking to each other, nothing's going around in plain text, even though there's no cloud network concept at this point and everything's going over the public internet. The VM standing at the front of that starts looking like a bastion host, a concept that we've had from demilitarized zones and layered network security models in the enterprise, so it immediately gets plumbed in there.
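The overlay products themselves were proprietary, but the principle, encrypting every hop between VMs rather than trusting the wire, is easy to illustrate. Here's a minimal sketch using Python's standard ssl module; the peer host and port are illustrative assumptions, and a real overlay encrypts at the IP layer rather than per-connection like this:

```python
import socket
import ssl

# A standard TLS client context: verifies the peer's certificate and
# encrypts everything on the wire, so the provider's network (or the
# public internet) only ever sees ciphertext.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

with socket.create_connection(("db.internal.example", 5432)) as raw:  # hypothetical peer
    with context.wrap_socket(raw, server_hostname="db.internal.example") as tls:
        print("negotiated:", tls.version(), tls.cipher())
        tls.sendall(b"ping")  # data in motion is now protected
```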
In 2009, Amazon got to the point of saying: okay, actually something does need to be done about how we do networking in the cloud, and they introduced the concept of the Virtual Private Cloud. There were a whole number of reasons why VPCs came along. The joke at the time was that Amazon was running out of 10/8 address space in us-east, and that actually wasn't true at the time: there were 16 million IP addresses in that network range, and AWS, even in us-east, was nowhere near as big as that. It's debatable whether it's even crossed that line now, but it's certainly got a whole lot bigger since then. The idea of a virtual private cloud was that it would provide containment: you could have virtual machines inside of Amazon's network, within contained private addressing. One of the drawbacks of Virtual Private Cloud is that it exposes Amazon's fault design just a little bit too intentionally in your face: VPCs have to map to an Amazon region, so when you make a VPC it has to be in a region, and VPCs force you to have subnets, and subnets have to be aligned with availability zones. So that fault boundary that you have with availability zones and regions is right there in your face in your network design, and if we look at how some of the other clouds are now doing network architecture, it's become one of those things where a graceless aspect of using Amazon is the fact that you're confronting those fault boundaries so much in your network design.
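You can see that fault design leaking through in the API itself: every subnet you create has to name an availability zone. A hedged sketch with boto3 (the CIDR blocks are illustrative assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A VPC gives you a contained private address range inside one region.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Subnets cannot span availability zones, so a fault-tolerant design
# means one subnet (and one set of VMs) per AZ, right there in the layout.
zones = ec2.describe_availability_zones()["AvailabilityZones"]
for i, az in enumerate(zones[:3]):
    ec2.create_subnet(
        VpcId=vpc["VpcId"],
        CidrBlock=f"10.0.{i}.0/24",
        AvailabilityZone=az["ZoneName"],
    )
```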
So people started putting their VMs into VPCs, and you now had containment of traffic, and you now had the capability to connect between VPCs and enterprise networks using VPN gateways and things like that. But containment often wasn't enough: you still had traffic going around in plain text on the cloud service provider's network, and to that extent the overlays hung around, because people still wanted encryption of the data in motion, and that gave the overlays a reason to stay. So lots of people did something like this: they take their application, they put it inside of a VPC, and then they put an overlay over the top of it to secure all of their traffic, and maybe they have the overlay network manager at the edge of that, providing connectivity into different places, or sometimes acting as a bastion host.
Some even did this, which is a typical example of a fault-tolerant design: you've got a bunch of VMs in different availability zones, in different VPC subnets, so if you've got a single outage then the Amazon fault boundaries should save you. If you were really large, and as paranoid as Adrian is, then you'd start doing something like this: you're not going to just rely on availability zones, and you maybe want regional dispersal anyway, in order to serve customers in different places of the world with lower latency, so you start to replicate that cloud architecture across multiple regions. You can even take the overlay network and stretch it across multiple clouds, whether that's different regions or even different cloud service providers. The good news was that thankfully almost nobody was stupid enough to do this: to take VPCs and recreate the intranet in the cloud, because that's a bad idea. Intranets, I think, if we go back, were probably a bad idea in the first place; that's just an accident of history, and as people have been moving workload into the cloud, lots of them have looked at that as an opportunity to not repeat the mistakes.
So I'm going to take a quick detour into the worlds of unified threat management and application delivery controllers, and hopefully answer the question of what a perimeter is actually made of these days. Initially, perimeters were just firewalls: people would go and buy a firewall, put it in front of part of their network, and that was it; that would keep the bad guys out. Firewalls turn out to be actually pretty lousy at keeping bad guys out, particularly once we reached a point in history where we weren't connecting things together with lots of magical new pure-sockets-based protocols, but in fact everything was coming in and out over HTTP or HTTPS or a handful of other well-known protocols. What happens there is the firewall becomes kind of useless, because you're saying: firewall, to do your job you've got to let HTTP through, and all of this stuff's gushing past on HTTP, and you really don't know whether it's HTTP that's good or HTTP that's bad. That's why you then start having things like network intrusion detection systems and network intrusion prevention systems: okay, let's look at this traffic flying by, do some analysis of it, and try to look for signatures of things that we know are bad. Antivirus gets thrown into that mix too: let's look for some antivirus signatures. Neither of these things work, by the way, because signature-based detection is always just looking in the rear-view mirror, and it can actually be quite damaging: you end up not relying just on signatures, you get some heuristics in there as well, but the heuristics get triggered by good traffic as well as bad traffic, and now you've got a ton of false positives to deal with. Then you get specific things like anti-spam coming along as part of your email perimeter, and then people want remote office connectivity, so VPN devices arrive. Data leakage prevention is spotting not bad stuff coming in but good stuff leaking out: much of the same technology, much of the same drawbacks. And then things like load balancers were just there to help as part of the network; you try to do load balancing as early as possible. These were all individual boxes, and they were expensive things, and they had to have very specialized people to own and manage them.
What happened over the course of the last decade was that people like Fortinet came along and said: let's just tip all of these different functions into one box, and we'll call that unified threat management, and it will be much cheaper to just buy one box, and it'll also be much cheaper to manage, because you only need one specialist in that box to look after all of these different functions. So that was kind of nice. Then, if we think about the applications that we were presenting out to the world, we had a whole bunch of networking things that we wanted to do with those. So again, the load balancer comes into that picture.
SSL was slow 15 years ago. People still kind of go on about SSL being slow; it's not anymore. Eleanor just showed you how trivial the code is for doing SSL nowadays, and when you look at the impact that has on a modern CPU, it will hardly notice; but for a little while SSL had been slow enough that people bought special boxes to do TLS offload. Compression can be useful at that edge. Web application firewalls we touched on earlier in the week as well: the idea that there are bad types of requests, and that we can get in the way of those and filter them out. It's one of those things that looks very elegant: you can get an open source WAF, just grab the OWASP rule set and drop it on there, and it'll immediately protect you from all of your bad things. And then you'll find that your application isn't working like you thought it was going to, because it was relying upon those bad things, and at that point you've got a choice: you can either fix your application, which is hard, or you can go and fix the WAF rule set, which is also hard. So it's a bit of a Hobson's choice at that point. Multiplexing and traffic shaping go into the mix there as well.
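To make that false-positive trade-off concrete, here's a deliberately naive WAF-style filter in Python. The patterns are toy assumptions in the spirit of the OWASP rules, not real ModSecurity rules; note how the injection signature also trips on an innocent request, which is exactly the "your application was relying upon bad things" problem:

```python
import re

# Toy signatures in the spirit of a WAF rule set (illustrative only).
RULES = {
    "sql_injection": re.compile(r"('|--|\bunion\b\s+\bselect\b)", re.I),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect(request_path: str) -> list[str]:
    """Return the names of any rules this request trips."""
    return [name for name, rx in RULES.items() if rx.search(request_path)]

# A real attack trips the rule...
print(inspect("/items?id=1 UNION SELECT password FROM users"))
# ...but so does a legitimate search for a song title (a false positive).
print(inspect("/search?q=don't stop believin'"))
```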
These things were all standalone boxes for a little while, and then they went through the same process that happened with UTM: they all got squeezed into one thing, and a different set of companies, like F5, would sell you a thing called an application delivery controller. This has been the delivery model for UTM and ADC: somebody sells you a metal box, you go and rack and stack it in a data center, and the traffic flows through the box. The problem with that model is that these things become choke points, so you then have to fixate on performance, because this choke point is the thing that's mediating all of the traffic in and out of your network; it has to scream along at 10 gigs, or even 40 gigs these days, to be able to cope with all of that. And if we look at it from a ruleset perspective, it has to have the kitchen sink of rules, because it needs to deal with every conceivable piece of infrastructure, application, and supporting component that lives beyond that choke point. So it's quite a constraint. But the world is turning: we now have these fancy new things of software-defined networking and network function virtualization, and I'll try to explain the difference, though there is in fact huge overlap between these things.
these things so I see software-defined
networking as the ability to configure
network through API so configure network
through things had that a software a
network function virtualization as the
ability to make networks out of software
and of course there has to be a huge
overlap there because who in their right
minds today would make a network out of
software and then have a command-line
interface in front of it now needed to
have a human operator you would put an
API on that wouldn't you so that's why
we get sdn and NFV coming together so
when we're doing this what it means is I
can take a whole bunch of networking
functions switch root of firewall VPN
that kind of stuff and I can just place
them onto a virtual machine
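Configuring the network through an API rather than a console session is the crux. Here's a hedged sketch of what that can look like in Python against a hypothetical REST endpoint for a virtual firewall; the URL, token, and payload shape are all assumptions for illustration, since every vendor's API differs:

```python
import requests

API = "https://nfv-manager.example/api/v1"     # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

# The firewall policy is just data: reviewable, diffable, versionable.
policy = {
    "name": "orders-service-perimeter",
    "rules": [
        {"action": "allow", "proto": "tcp", "port": 443, "source": "0.0.0.0/0"},
        {"action": "deny", "proto": "any", "port": "*", "source": "*"},
    ],
}

resp = requests.put(f"{API}/firewalls/orders-service",
                    json=policy, headers=HEADERS, timeout=10)
resp.raise_for_status()  # fail loudly if the network change didn't apply
```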
I can go further than that as well, because we've got containers now. I can take a whole bunch of other network functions, my NIDS and NIPS, load balancers, caching and TLS offload, and even a WAF, throw those into containers, and put them alongside the core networking functions, and I've now got a networking virtual appliance. I can take that networking virtual appliance and deploy it alongside the virtual machines that are running my application, and make myself an application-centric perimeter. This looks and feels pretty much like my old perimeter, but it's different in character and nature. I'm making lots of these things, which gives me lots of things to manage, but that shouldn't be a problem, because that's why we've got APIs, and that's why we've got orchestration at scale. And I can be much more fine-grained now about what all of these things do. I don't have my kitchen-sink problem anymore: I don't have to have a rule set that's dealing with every conceivable application and every conceivable piece of infrastructure that's running on it, with lots of conflicts because the traffic going into one thing might look bad for another application. I can just focus on that app, its data, and the risks that go along with it, and that allows a different concept around how I think about rules, how I do blacklists and whitelists and things like that. The other piece of it is that it doesn't necessarily need to be screaming fast, because I'm only dealing with the traffic for that one app. So although I've made this thing out of virtual machines, and virtual machines aren't doing 10-gigabit line-speed networking functions, they don't necessarily need to, because I'm not trying to run an entire enterprise behind this, just one app.
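As a sketch of what a rule set per application might look like, here the app's allowed flows are declared as data that ships with the app and get rendered into deny-by-default rules at deploy time; the tier names, ports, and rule format are illustrative assumptions:

```python
# Allowed flows for one application, declared alongside its code
# (hypothetical tier names and ports).
FLOWS = [
    {"from": "web-tier", "to": "app-tier", "proto": "tcp", "port": 8080},
    {"from": "app-tier", "to": "db-tier", "proto": "tcp", "port": 5432},
]

def render_rules(flows):
    """Turn the per-app flow list into deny-by-default firewall rules."""
    rules = [
        f"allow {f['proto']} {f['from']} -> {f['to']}:{f['port']}"
        for f in flows
    ]
    rules.append("deny all")  # anything not declared above is dropped
    return rules

for rule in render_rules(FLOWS):
    print(rule)
```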
And this stuff refactors pretty well into microservices. As we're building a microservices architecture, I can take those networking functions and either embed them into my microservices as I'm creating them, or deploy them alongside my microservices, so the networking functions are just different services in my microservices architecture, building out a richness to that microservices deployment.
So I want to touch upon a thing that I've been talking about for a little while now, which I call the audit paradox, and explain some of the thinking behind it. When we talk about security, you can talk to any security expert and they'll tell you it's better to build in security at the beginning than it is to bolt it on later. So, just like this elegant piece of brickwork here, you should build the security right into your code; that's the good thing to do. And what does that look like? It looks hard. Here is a screenshot of a tiny fraction of the Microsoft Developer Network guidance for API security of web services; this is like one part of one page. Getting a developer to follow that guidance in the first place is a tricky process, but the audit paradox comes about because even if the developer has gone through all 64 pages of that stuff and done it all completely correctly, how can you tell? And then, when you're making changes to the code, how can you tell that you've not inadvertently gone along and broken one of those rules? You get somebody new onto the team, they weren't quite up to speed with what they were supposed to do, they made that one-line change, and now how do you know that it didn't break what you were trying to do?
That's why we've had so much bolting on of security, because if we look at the mechanism, what does bolting on look like? Bolting on looks like exactly those UTMs and ADCs that we had before, and you know what auditors like about these? Each is a standalone thing: they can go along to it, have a look at its configuration, and say: yes, I'm happy with that; I watched that configuration take shape; I saw that there was a change management process that arrived at that configuration; I've got an external view of what's going in and out. I think that's why we've ended up in the situation where so many of our security controls have been bolted on rather than built in: because of this thing that I'm now calling the audit paradox, it's easier to audit security controls that are standalone than it is to audit security controls that are built in.
Platforms, I think, give us an opportunity to have a bit of the best of both worlds. If we look at what we can get out of platform as a service, that gives us a chance to build in: the maker of the platform as a service can take those 64 pages of security guidance for doing web service APIs and make them part of the framework, and that can be audited the hard way, but it only needs to be done once, because the developer writing code on top of that framework has had it done for them; they're just concentrating on their application functionality, and you've got that independent verification piece and the auditability of it. But, and if you caught Alexis's talk yesterday you'll have seen this, the problem is that nobody, well, not nobody, but a large community, doesn't want an opinionated platform. Many developers do not want an opinionated platform as a service. Docker has set people free, or containers in general have given people the ability to take whatever it was that they liked, package it up, and move it around, so the popularity of Docker for me shows that there is a distinct movement against opinionated platforms, which kind of drives us away from the building in of security.
I also just wanted to touch upon monitoring. If a security event happens and it doesn't go into a monitoring system, then I would contend that you might as well not have had the security control in the first place. I've seen situations where the best case for a security control was that it was going to tell you what bad thing happened after it had become obvious by some other means, when you went back and looked, and that's really not very much good at all. An hour or so ago Eleanor was up here talking about logs: have people pick their way through them. So make sure that there are logs, make sure they're being funneled into a processing system that makes them human-reactable, make sure there is actually a human taking action, and put the test cases through to prove that.
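Here's a minimal sketch of that funnel, assuming a stream of JSON security events; the event shape and severity threshold are illustrative assumptions. The point is that an event either becomes something a human will act on, or it might as well not exist:

```python
import json

# Severity above which a human must be paged (assumed 1-10 scale).
PAGE_THRESHOLD = 7

def handle(raw_line: str) -> None:
    event = json.loads(raw_line)
    if event.get("severity", 0) >= PAGE_THRESHOLD:
        # In a real system this would page the on-call human; an alert
        # that nobody acts on is as good as no control at all.
        print(f"PAGE ON-CALL: {event['rule']} on {event['host']}")
    else:
        # Lower-severity events are kept for correlation and audit.
        print(f"archive: {event['rule']}")

# Push test cases through the pipeline, as the talk suggests.
handle('{"severity": 9, "rule": "ssh-brute-force", "host": "web-01"}')
handle('{"severity": 3, "rule": "port-scan", "host": "web-02"}')
```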
So some challenges remain on this stuff. If we think about a concept of SecDevOps, and I think that's kind of where we've been driving over the last few days, saying not just let's have DevOps, but let's do DevOps securely and have secure DevOps, then it falls out to this. This is my sort of to-do list for SecDevOps. APIs, I think, are necessary but not sufficient. I've talked about being able to do network security with virtual machines and software-defined networking, and it's the ability to have APIs that allows us to do that: we can take network operations, things that were difficult and expensive because we bought boxes and had them managed by people on the end of command lines, and turn them into virtual machines that are managed by orchestration systems. That completely changes the game, because my change to the network now isn't a ticket into the NOC that takes them a week to reject for some flimsy reason so that I have to redo it again; my change to the networking environment is part of my push to production. There was some config in there, it went into the orchestration system, the orchestration system configured my virtualized network for me, and off I go. So the point is that it needs to be integrated into the overall system: APIs give us the starting point, but APIs are only useful when they are integrated.
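As a sketch of the network change riding along with the push to production, here's a deploy-time step that reconciles a security group with the rules declared in version control, using boto3; the group ID and desired rules are illustrative assumptions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
GROUP_ID = "sg-0123456789abcdef0"  # placeholder security group

# The desired state lives in the repo and deploys with the app.
desired = [{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]

current = ec2.describe_security_groups(GroupIds=[GROUP_ID])[
    "SecurityGroups"][0]["IpPermissions"]

# Crude reconcile: revoke whatever is there, then authorize the
# declared rules, so production always matches what was pushed.
if current:
    ec2.revoke_security_group_ingress(GroupId=GROUP_ID, IpPermissions=current)
ec2.authorize_security_group_ingress(GroupId=GROUP_ID, IpPermissions=desired)
print("network perimeter now matches the pushed config")
```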
The second point is about control metadata. I have to have visibility and understandability of my control metadata; this is the stuff that the auditors end up wanting to look at. I should also ask how mutable my control metadata is: what are the mechanisms that make changes to control metadata, and what control do I have over those? At a point level, the auditor needs to be able to come along, inspect the config of something, and decide whether it's right; at a management level, the auditor needs to have confidence in the control flows that created that config in the first place, and I think this is an area where there's definitely more work to be done.
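One way to give the auditor that point-level view is to export the control metadata and compare it with what change management approved. A hedged boto3 sketch; the group ID and the approved-state file are illustrative assumptions, and a real check would normalize the two representations before comparing:

```python
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
GROUP_ID = "sg-0123456789abcdef0"  # placeholder

# The control metadata as it actually is, straight from the API.
actual = ec2.describe_security_groups(GroupIds=[GROUP_ID])[
    "SecurityGroups"][0]["IpPermissions"]

# The state that went through change management (assumed repo file).
with open("approved/orders-service-ingress.json") as f:
    approved = json.load(f)

# Any drift between approved and actual is an audit finding.
if actual != approved:
    print("DRIFT: running config does not match the approved change")
else:
    print("config matches the approved change record")
```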
I was talking a little while ago about the attraction of being able to have a different rule set for every application, and that's great: we can go all the way back to the business and say, what are the things that keep you awake at night about how your data might be exploited if an attacker went after it? They can tell us, and then we can design rules to help defend against that. But that's a very large, distributed human processing task: to collect that in the first place, to build the threat models, to build out the rule sets, and then to have control over getting those rule sets into production, to have confidence that we've got the right rule sets in production, and that we're responding appropriately to messages that come out of the system when it sees things flowing by that tickle those rules.
And then lastly, those security events do need to be captured, and they need to be turned, ultimately, into something that humans can take action on. A lot of security monitoring systems are dealing with a flood of false positives, and one of the reasons for the flood of false positives that we cope with at the moment is these choke points and the kitchen sinks of rules inside them. We can tune our way out of the flood of false positives to a certain extent by having more discrete rule sets for our given applications or our given services, but it's still not going to be perfect, so we're still going to have a fire hose of security event monitoring data, and we still need to be able to determine whether something coming out of that fire hose is actually an actionable attack, or an actionable compromise of a system, where something needs to be done. Worst case, you're actually going to take yourself offline, because remaining online is worse for your business than stopping, going back, and fixing what's going on. So that's the to-do list.
What I wanted to conclude with here is the call to action: for us to get to SecDevOps is to work on that to-do list. To make APIs for security controls, but furthermore to make those APIs integrated into an overall system; to have the control metadata and the visibility over it; and to have actionable security events ultimately emitted from our systems that humans can take the right action on. So thank you very much for listening; please, as ever, remember to rate the session, and I've finished pretty much on time, so: any questions?
[Audience question about threat modeling] Yes, so if you look at this threat modeling exercise, I think the people that have best popularized it have been Microsoft with their Secure Development Lifecycle, and there are a lot of things I could dislike about the SDL, because it's somewhat still tied to an old waterfall model, but the threat model consists of two key aspects at the beginning, as you're thinking about your application: it asks you to think about the data that you're managing, and it asks you to think about the roles of the people interacting with that data, and those two things are unique to every single application. What you then get into is: who are the bad guys that would want to go after this data? That has less variability in it.
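Those two aspects are easy to capture as a lightweight artifact that lives with the code. A sketch, where the assets and roles are illustrative assumptions rather than anything from a real threat model:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    sensitivity: str  # e.g. "public", "internal", "regulated"

@dataclass
class Role:
    name: str
    may_access: list[str]  # names of assets this role may touch

# The two per-application questions: what data, and who touches it.
assets = [Asset("order-history", "internal"),
          Asset("card-numbers", "regulated")]
roles = [Role("customer", ["order-history"]),
         Role("payments-service", ["card-numbers"])]

# A role referring to an undeclared asset is a modeling error.
known = {a.name for a in assets}
for role in roles:
    assert set(role.may_access) <= known, f"unknown asset for {role.name}"
```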
So if you look at some of the stuff that Shostack says, you've got this spectrum of attackers, and he goes to a two-by-two matrix on it: you've got the unintentional and the intentional attackers, where the intentional attackers are coming after you for the thing you have that they value, and on the other side they don't care, they're just going to attack somebody. And then you've got the level-of-sophistication dimension. Josh sometimes talks about HD Moore's law: you need to be able to protect against today's version of the script kiddie kits, and actually compliance tends to move much slower than that; the script kiddie kits are evolving day by day, and the most basic script kiddie can just go along, pull those down, and have a go at you. At the top end of the scale is probably something like Stuxnet, though one of the problems with Stuxnet is that once it got out there into the wild, it provided a toolkit for anybody else to build a Stuxnet, which is kind of super unhelpful. So you've got that sophistication-and-intent matrix to think about.
And then if we think about how that maps into vulnerabilities, that's just a question of which software you're using and what vulnerabilities it potentially has. That comes from your stack: you're going to make architectural choices in how you implement the software, which leads to a stack, which leads to a set of pre-existing vulnerabilities in some cases. Josh spent a lot of time saying there's so much stuff that's known about, and some of it's patched and some of it's not, and of course you should be trying to be on the patched stuff, but you might not actually have choices. Sometimes you might have known vulnerabilities in your code and you have to live with them; that's why we have mitigating controls. A lot of security controls are put in specifically for the purpose of dealing with known vulnerabilities elsewhere in the system, to try to prevent them from being attacked. And, as in Nick's presentation this morning, a lot of the time the vulnerabilities are in the security software itself, so your controls are just introducing extra complexity and more things that can go wrong.
[Audience] You mentioned the idea that developers don't want opinionated platforms. Is the answer to change that mindset, and if so, how? Or is the answer to come up with opinionated platforms packaged in convenient tools that developers like, such as Docker?
I think this actually comes from a sort of spectrum of skill and trust. If I look at the organizations that seem to be keen on opinionated platforms, it's organizations that don't have much trust in their developers, because they consider them to be of lower skill, and that can come about because they're hiring kids straight out of college, or they're outsourcing development to places where they aren't actually paying for highly skilled developers. Opinionated platforms look attractive to those types of organizations, because they're constraining the adventure space, and they're constraining the need to develop skills in order to be productive. Where we look at more high-trust, high-skill environments, people feel more confident allowing their developers to go off and do whatever it is they think is best, because the trust is there. So this trade-off between opinionated platforms and, let's call them, less opinionated approaches becomes one that at its heart is about the trust and skill level of the developer. Now, if we're saying there's a security trade-off there, because we can build security into the opinionated platform but we rely on the developer to build security themselves into the less opinionated approach, then we answer our own question by saying the higher-skilled, more trusted developer should actually be capable of building the security in when they're doing that. It's different with a more disaggregated platform: you just have different security tooling, and it's enforced in a different way.
[Adrian] Yeah, at Netflix every service that gets launched gets penetration tested with every new version, automatically, so it's trust-but-verify rather than putting you in a straitjacket up front: we let you do whatever you want, and if you do something stupid, the tools will spot it and shut you down over time.
[Audience] This skill thing, though: some of those people are very skilled, but what about the people copying the ones who are skilled? Do we just have to write off basically, like, ninety percent of them?
So to a certain extent this only matters in the face of success. If you've got cowboys hacking together something, following in the footsteps of more skilled people, then yes, they're able to copy the pattern and maybe even cut and paste some of the code, but they're not really understanding what they're doing, and they're introducing vulnerabilities. It only matters in the face of success, where people start really using that service, and they start putting sensitive data into it, or it becomes financially important. And the good thing about success in that context is that it tends to bring in more resources, and it tends to bring in more skill, which can be used to deal with the outcomes. So I think if we look at these things as ecosystems that have feedback loops within them, you end up with the feedback loops being built to correct some of those problems that might take place early on in the development of something. Again, when Adrian talks about the evolution of Netflix, it is an evolutionary story: they started out with a monolith, and it changed into this, and then it changed into this other thing, and then it kept on changing, and the security story that's buried within that is, I think, equally fascinating. Actually, they began being pretty good at security, but they got better and better, and security became part of it: alongside the Chaos Monkey there's a Security Monkey, isn't there, as part of the Simian Army. These automated tools came along as part of professionalizing an approach to security, and that gives a template for other organizations, taking a journey that might have a similar destination to Netflix, to look at and try to emulate. But I think you can be iterative, and you can rely upon feedback loops to be corrective here. I think a lot of the problems come along where you've got organizations that have a portfolio of stuff, and you've just got some really bad things within that portfolio, and especially with the one-perimeter-around-it-all model, those then become the weak spots that make the other things more vulnerable than they ought to have been.
[Audience] Question at the back: has the reduction in false positives helped with detecting APTs?
I'd say it's far too early days for that, and also I'd return to a point that's been stressed a lot over the past couple of days, which is that most attacks aren't APTs: most attacks are relatively unsophisticated, usually taking advantage of known vulnerabilities that have been in existence for years, if not decades. So that makes APTs somewhat mythical beasts. Let's take a look at something like the RSA attack. That's interesting because it actually really was an APT, and there's a certain amount now of public, on-the-record examination of how exactly it went down in the first place: how they didn't notice it for most of the duration of the attack, and how they did eventually see it with a new network monitoring tool that they'd been trying out and ended up acquiring, because it just happened to be there in the right place at the right time. Part of the coda to that story is that if they'd not been planning that potential acquisition, they'd probably never even have noticed. So once you've got a whole ton of false positives, and whilst you've got an occasional piece of forensic evidence about an APT, there's an unknown-unknowns piece to this: there are probably not that many APTs actually going on, because of the degree of sophistication involved in doing them and the cost that goes along with that, but where APTs are successful, you're not even detecting them. So we don't actually have very much in the way of data to go on about what the comparative position looks like when you start squeezing out a lot of the false positives.
[Audience] There's an interesting problem here, which is: assume that most companies have been compromised at some level, and you're trying to sell a product that detects that the company has been compromised. The CISO is trying to buy this, but is probably going to get fired when they run the tool. So there's a kind of problem in the sales process, where the team that runs the tool discovers that they've been breached for the last six months and all that data has been sent to China or whatever; there have been a few cases where a product ended up being kicked out because it worked. So you have to figure out how to get through that part of the sales process. It's actually interesting times.
So my friend Ian Grigg did a brilliant paper about this, in about 2008, where he described the security market as a market for silver bullets, and it was all about information asymmetry. If we think about classic markets, then in a perfect market the buyer knows what they're buying and the seller knows what they're selling: there's equal information about the goods on both sides, money changes hands, goods change hands. We can also think about what we term lemon and lime markets. In a lemon market, like used cars, the seller knows that he's clocked the car and put some bad parts in it and so on, while the buyer is just seeing the car, thinks it's okay, and thinks it has an honest mileage on it. Lime markets are the opposite quadrant, and can happen in things like insurance: I've been feeling a bit ill and I just bought a life insurance policy, and then I go to the doctor and he tells me I've got cancer; the life insurance company took on a risk where I had some information that they didn't. And he described security as a market for silver bullets because neither the buyer nor the seller knows whether what they're buying and selling works, and there's an interesting third dimension to this, which is that quite often the attacker does know: a sophisticated attacker can breeze straight past the controls, and they're not revealing that information. So a huge chunk of our present security industry is a market for silver bullets, because money's changing hands, and neither the buyer nor the seller knows whether the goods are effective; only the attackers do, and they're not saying.
[Audience] Questions on the right, let's see. Apologies if you've already addressed this question: is there a major difference in network traffic between traditional three-tier apps and those based on microservices, in terms of north-south traffic, and whether this entails more east-west traffic?
Again, there's not a ton of data on this, but if we look at the journey from the front end back to persistence: the whole idea of north-south is that we're starting off at the front end, normally a web front end, and we're ending up in a database of some kind, which we might call a persistence service, and I don't think there's actually anything fundamentally different when you go to a microservices architecture versus a more traditional, maybe monolithic, architecture. Microservices architectures are generally designed to scale, so you end up with more east-west traffic where you are doing replication of state, but that tends to push the east-west traffic into the state management layer, if we think about it in those terms. So you might have gone from having some clustered Oracle instances to having a bunch of Cassandra that's now synchronizing across a region in AWS, but your east-west traffic is still really just within your state management piece, and everything else, top and bottom of it, is going north-south. So I don't think it makes a huge difference.
[Audience] In that model most of the traffic is inside the app, so the traffic inside the app is all east-west traffic between different microservices: there's additional request traffic as your request explodes out into 50 different services, five layers deep.

Yeah, and that, I think, is where it starts to attract attention. I've seen some people starting to worry about that and trying to figure out, given that there's so much more traffic, where to put some kind of firewall: it needs to look at things in a lot more detail, but it needs to be much more efficient. I was talking to some AWS engineers the other day about what can be done at line speed and what has to be funneled through VMs, and they're starting to see application architectures break upon what cannot quite be funneled through a VM and needs to be made line speed somewhere else.
[Audience] That person was having trouble moving to VPC from EC2-Classic, because they have some legacy system.
Yeah, a question about WAFs: surely the enforcement of WAF best practices on applications is good? Yes, absolutely: you can get the whole of the OWASP thinking as a set of WAF rules. My point was that if you've already written your application and tested it, you might find that difficult to live with as a bolt-on afterwards; if you begin your process with those WAF rules in front of it, and everything you ever test lives with those WAF rules, you will end up in a different place, and you'll be able to live with them quite easily.

The notion of taking UTM and ADC and virtualizing it: doesn't that just spread the complexity? Yes, it absolutely does. In terms of complexity it makes things horrendously worse: you're taking what has been logically at least a single choke point, which might physically have been just a handful of boxes, and you're potentially exploding that out into thousands of VMs. The only way you can cope with that is by moving away from a manual configuration model to basically a DevOps model, where you're relying upon infrastructure as code to take care of the configuration of those thousands of things, rather than the handful you managed with the traditional model. So yes, it is kind of: what one hand gives, the other takes away, but it does allow progress towards a different model. So that's it. Thank you.