Mocking made easy | Dave Cooper | #AngularConnect

Hello, everyone. Thank you for having me. I've been to a couple of AngularConnects, but never presented. This is my talk, Mocking Made Easy. The title doesn't mean too much; the more literal title would be "making your applications easy to test and run locally, so you don't have to spend loads of time on mocking solutions and can spend more time writing your application". Yeah, rolls off the tongue. The title was actually going to be Mocking and Glocking, but it's hard to find a picture of a solid gold gun, so I had to improvise.

Anyway, I'm Dave, and I'm a software developer at Uber Energy. I like open source stuff, I really like testing, and I really like mocking, as you're about to find out, and you'll probably be sick of hearing about it after this. One really quick disclaimer: I have massive baby brain. I have a five-week-old daughter who is just here, so if I can be really cringey and get a photo up on stage, we can all go "awww". There we go. Thanks. Excellent. Okay, cool. Let's actually cover something that you paid to come for.
So, I want to talk to you about testing and running web applications locally, specifically about mocking and how we do it, and what I think the problem is. I'll probably make a few different points over the course of this, but I think this is a topic that's been stagnant for quite a while. So I've tried to think about things a little bit differently, to explore some of our current mocking solutions, and to show what I think a slightly different way of doing this looks like. You can form your own opinions and let me know how much you love or hate it afterwards.

Anyway, running a web application locally on your laptop, whether we're developing features or improvements, trying to recreate bugs or, in my case, cause bugs, should be a really painless process. Testing the paths your application can take, whether they're happy or unhappy experiences, should also not be a pain. If you need to do mocking, or feel mocking is appropriate, that should not be a pain either. And while all of this is happening, your application state needs to be able to bend to whatever you want it to do while maintaining the integrity of the data you're working with.

By happy and unhappy paths I mean things like: a user signs up and can do so successfully, or makes a payment and it goes through, et cetera. Then there are the things that aren't quite happy paths: the customer is in debt, a payment fails, you're unable to retrieve records, maybe your server is throwing back a 500, or whatever. I'll refer to happy and unhappy paths interchangeably, and sometimes call them scenarios, just to give you a heads-up on that.

This should all be pretty easy. I'm sure internally you're all groaning, because it's not.
At a lot of places I've worked, and on a lot of projects I've looked at, it's not really that easy. If I come up to someone and ask them to bring up a customer in a particular state, maybe they're in debt, or maybe there's something weird or funky about them, usually people have to tweak things in the database, or they're pulling data from somewhere they probably shouldn't. It kind of sucks that bringing up entities in those particular states isn't a snappy thing to do.

So, I guess we can quickly explore how we typically see this being done.
You could run your application against a production service. If you do that, you'll get sent to developer jail. That's a pretty cardinal sin, I guess. Some people do it. I mean, I've done it on projects before. It's a bit naughty. But in developer jail you have to argue about tabs versus spaces, Vim versus Emacs, and what the best language is. It's obviously TypeScript.

We could run our application against a staging or UAT service. Cool, we're not touching actual live customer data now, which is pretty good. Unfortunately, you'll also go to developer jail if you do this, because typically you're accessing data on shared instances, or the schemas of whatever database you're pulling from are out of whack, or you don't have the right data in there. So that's still not really great, in my opinion.

We could start to go down the containerisation route and run everything against a local copy of our production or staging services. That's also starting to head in the right direction, but unfortunately it's still a little bit of a pain, and you might go to dev jail. Getting these things set up is typically an afterthought when you're setting up a project. You're not thinking about dockerising literally everything and getting it all nice and polished so you can work offline or work wherever. So I still shy away from that slightly when I can.

We've all heard of these things being done before. I've already admitted I'm guilty of doing them, and I'm sure 95% of you are guilty of it too. And some of you are thinking: why don't we just run a mock server?
Now we're getting into cool territory. We can definitely do this, and there are so many options out there: things like json-server and apimocker, and the number I've got on the slide is pretty close to how many alternatives there are on npm if you have a look. Alternatively, if you're not familiar with those, or you don't have too many endpoints in your application, you could roll your own API: write a small Node service to serve up some data if you have fewer than ten endpoints. None of those are bad solutions, and they get you there in most cases, especially when you're starting from scratch and building up rather than getting too tricky.
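For reference, a minimal sketch of that "roll your own" option might look like the following. This isn't code from the talk; the endpoints and data are hypothetical, and it assumes Express is available.

```ts
// Hypothetical hand-rolled mock API: a tiny Express server that serves
// canned responses for a handful of endpoints.
import express from 'express';

const app = express();

// A canned list of entities to develop against.
app.get('/widgets', (_req, res) => {
  res.json(Array.from({ length: 10 }, (_, i) => ({ id: i + 1 })));
});

// Simulate an unhappy path on another endpoint if you need one.
app.get('/payments', (_req, res) => {
  res.status(500).json({ error: 'Payment service unavailable' });
});

app.listen(3000, () =>
  console.log('Mock API listening on http://localhost:3000'),
);
```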
Here's a really quick example of what a config looks like for json-server. For those of you unfamiliar with it, json-server lets you define a JSON file keyed by the endpoints you want your application to serve up, and the values are just the responses returned when you hit those endpoints. In the example on the slide we have posts, comments and profile, and you can see the respective responses. That's pretty cool, and json-server is incredibly popular. I'm definitely not discounting it.
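As a rough illustration of the kind of file described here (the slide's exact contents aren't reproduced), a json-server db.json might look something like this:

```json
{
  "posts": [{ "id": 1, "title": "json-server" }],
  "comments": [{ "id": 1, "body": "some comment", "postId": 1 }],
  "profile": { "name": "dave" }
}
```

Running `json-server --watch db.json` then serves GET /posts, /comments and /profile with those values.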
I went to start writing an example for apimocker and got a little bit annoyed because there was so much config, so you can take a look at that yourselves. The problem I find with things that start to get a little more complex, like apimocker, is that it's pages and pages of JSON configuration, and then you start introducing things like JSONPath to define different scenarios. For those unfamiliar with JSONPath, it's like XPath, except 600 times worse.

So, what can we do? Now we get to the fun part. We have a little library that I wrote called data-mocks.
My fiancée came up with the logo. She said, "Dave, I created this logo. It's two of your favourite things, obviously, and ransom letters." Which I thought was odd. Then someone asked if it was giving or receiving, and I got even more confused. But anyway, we have data-mocks. It uses a code-driven config, which is nice: you're not writing loads and loads of JSON. It doesn't interfere with your existing codebase, which is really cool. It's similar to angular-multimocks, which Ed, in the audience there, wrote, but it's a framework-agnostic version that works with everything: vanilla JavaScript, Angular... I didn't know if I could use the R word, so I redacted a couple of things, but we have blank and blank Native, and you can probably work out what those are. It works with both XHR and fetch, so whether you're on something a bit more legacy that uses XHR under the hood, something popular like Axios that uses XHR under the hood, or the Fetch API, it works with all of those. It's super quick and easy to set up, which is what we're about to find out. And it supports scenarios, which is my favourite thing about it and the thing that I think really sets it apart.

So, let's jump into some coding, and hopefully this doesn't go too badly.
What I have is a small Angular application. It's a widget factory. It is contrived, but all it's doing is fetching a bunch of widgets from a Node service and displaying them, and here I have ten widgets. For context, a widget is just an object that contains an ID, so we've got some sequential IDs here, and we have a button at the top that lets us add a new widget. My application doesn't work quite as you'd probably expect: it doesn't retrieve something sequential, it gets something with an ID between 1 and 100.

Basically, this application was built to handle certain types of widgets, and we could pretend these things were a little bit more complex. Maybe they have different categories. Maybe they behave differently depending on what sort of widget is being rendered on the screen. Maybe there are unsupported widgets, or invalid data coming back from our server, or any number of things we can't necessarily predict. When we're developing this application, we want to be able to make sure we've actually coded for all of this correctly, and sometimes that can be a little bit difficult.
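To make the rest of the demo easier to follow, here's a rough, hypothetical sketch of the kind of Angular service such an app might use. The actual demo code isn't shown in the talk; the endpoint paths (/widgets and /new-widget) and the Widget shape are assumptions based on what's described.

```ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Assumed shape: the talk describes a widget as "just an object that contains an ID".
export interface Widget {
  id: number;
}

@Injectable({ providedIn: 'root' })
export class WidgetService {
  constructor(private http: HttpClient) {}

  // Fetches the full list of widgets from the (assumed) /widgets endpoint.
  getWidgets(): Observable<Widget[]> {
    return this.http.get<Widget[]>('/widgets');
  }

  // Fetches a single new widget from the (assumed) /new-widget endpoint.
  getNewWidget(): Observable<Widget> {
    return this.http.get<Widget>('/new-widget');
  }
}
```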
So, what we're going to do is jump into the application. Hopefully you can see this okay; just shout out if you need it to be bigger. You've probably all seen this screen before in your Angular applications: it's main.ts. I've imported a random number generator so I don't embarrass myself trying to generate random numbers live. What we're going to do first is mimic the Node service, then add scenarios and see how easy it is to toggle between them. I'm going to kill off the Node service I have running here, so that can go away, and now I don't have a working application.

First of all, I need an entry point to inject the mocks into the application. As we've seen before, main.ts has an if block: if the environment is production, do the production-mode things. Similarly, we're going to check if we're not in production and do our mocking setup there. So we need to import a couple of things; first of all, a function called injectMocks. We can add that code here, and then we need to give injectMocks some mock data and tell it how those mocks should behave.
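Roughly, the entry point being described might look like this. It's a sketch rather than the code on screen, and it assumes data-mocks exports injectMocks and a Scenarios type, plus an Angular CLI style environment file.

```ts
// main.ts: a minimal sketch of the entry point described above.
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { injectMocks, Scenarios } from 'data-mocks';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

const scenarios: Scenarios = {
  default: [
    /* mocks go here (filled in below) */
  ],
};

if (environment.production) {
  enableProdMode();
} else {
  // Only intercept HTTP calls when running outside production builds.
  injectMocks(scenarios);
}

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch(err => console.error(err));
```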
Fortunately, that's pretty easy. Let's define a variable called scenarios. This is all fully TypeScript supported, so we have typings for our scenarios. A set of scenarios is really just a mapping from a scenario name to what we want our mock data to look like for that scenario, and the minimum requirement is that we have a default scenario.

If I quickly jump over to the application itself, we can see that we're calling two endpoints: the widgets endpoint to get a list of all of our widgets, and a new-widget endpoint when we want to get a new widget. So, first of all, let's have a look at what a mock looks like.
There are only five things you need to describe a mock with data-mocks. The first is the URL, which is just a regular expression matching the endpoint we want to mock out; here that's widgets. The next is the HTTP method we want to mock for that endpoint. And the last of the required things is the response we want. I think we probably want an array of ten items, each an object with an ID; let's see if I can remember how to do that live. I think that will do it for us.

Then there are two optional things we can define for each mock. One is a delay, in milliseconds, so we can simulate the request really hitting an HTTP server, which is nice if your application has loading spinners or loading animations, because you can see them in action and get a good look and feel while you're doing this. And lastly there's the response code, which is just the HTTP status code to respond with. And that's it. So, we can do that for all of the endpoints we have here, and we should be in business.
We'll also do it for the new-widget endpoint. The response here is a little bit different to the one above: I've got that rand function I imported before, so we'll just make it return one widget with a random ID. We'll make the delay for this one 2,000 milliseconds and keep the response code at 200. And there we go. All we need to do now is pass the scenarios through to injectMocks, and that should work.
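Filled in, the scenarios object from the sketch above might look roughly like this. The property names follow the talk's description (a URL regex, the method, the response, plus an optional delay and response code); the exact field names and the rand helper are assumptions rather than the code on screen.

```ts
import { Scenarios } from 'data-mocks';

// Stand-in for the random number generator imported in the demo.
const rand = (min: number, max: number) =>
  Math.floor(Math.random() * (max - min + 1)) + min;

const scenarios: Scenarios = {
  default: [
    {
      url: /widgets$/, // regex matched against the request URL
      method: 'GET',
      response: Array.from({ length: 10 }, (_, i) => ({ id: i + 1 })),
      responseCode: 200,
      delay: 1000, // milliseconds, handy for exercising loading spinners
    },
    {
      url: /new-widget/,
      method: 'GET',
      response: { id: rand(1, 100) }, // one widget with a random ID between 1 and 100
      responseCode: 200,
      delay: 2000,
    },
  ],
};
```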
Let's take a look. The other thing we can explicitly pass through to injectMocks is the name of the scenario we want to run. We'll explicitly tell it to run the default scenario for now, even though it will run that implicitly if you only have the one scenario. With that in place, I can come over here and load the page again, and I have the exact same set of functionality, which is cool. I can add a widget, wait a couple of seconds, and now it gives me a random widget. Yay, we've got that. Cool.

[ Applause ]
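That step, under the same assumptions as the sketches above, is just:

```ts
// Explicitly run the default scenario; with a single scenario this is
// equivalent to calling injectMocks(scenarios) on its own.
injectMocks(scenarios, 'default');
```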
But that isn't particularly useful for anything other than a happy path. So, I want to disclose a little bit more about my application, which is super contrived, so that's okay. In the template, we're actually treating any widget whose ID is greater than 100 as a dangerous widget. We haven't seen this behaviour yet, but we have written the code for it, we want to see it in action, and we want to write tests around it later. So we want to simulate it.

What we can do is keep our default scenario, which is the happiest use case for our application, and define a new scenario. We can call it whatever we want, but we'll call it badWidget. We don't need to rewrite the mocks for everything; we're only concerned with what happens when we get a new widget and it's a bad one. So all I'll do is write a new mock for the new-widget endpoint, a GET again, and instead of responding with a random ID, respond with an ID of 101, and we'll see what happens. We'll keep the delay and response code the same. Cool.
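A sketch of that extra scenario, continuing the assumed shapes from above; only the overridden new-widget mock is listed, since, as described, the default mocks are reused for everything else. The scenario name badWidget is taken from the talk.

```ts
const scenarios: Scenarios = {
  default: [
    /* ...the two mocks from the default scenario above... */
  ],
  badWidget: [
    {
      url: /new-widget/,
      method: 'GET',
      response: { id: 101 }, // anything over 100 renders as a "dangerous" widget
      responseCode: 200,
      delay: 2000,
    },
  ],
};
```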
So, as I said before, we need to tell injectMocks which scenario we want to run. I can show you something that makes this a little bit cooler in a second, but for now we'll just run the badWidget scenario. Come over here, and it's going to get pretty ugly, because a bad developer wrote this code. And, oh no, we have a really dangerous widget now, and it doesn't look good: something's gone wrong and we need to fix this bug. Obviously you can imagine a more real-world case, what happens when the data is bad or when you hit a bug, but I think you can see what I'm getting at here.

One thing that is a pain, though, is that I don't want to have to go back into my code every single time I want to change scenario and tell data-mocks which one to run.
Fortunately, I've written a little helper function called extractScenarioFromLocation. What we can do is pass the window.location object into it, save that, and come back to the app. You'll notice that when I click add new widget here, this still runs as normal through our default scenario. But if, in the browser, we specify the scenario we want to run in the query string, in this case badWidget, and run that, then instead of getting our default scenario I click add new widget and get a dangerous widget. That's quite handy, because now I don't need to go back into my code and work out which scenario I want to run; I can switch it straight from my browser, which is nice and quick and easy.
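Wired into the main.ts sketch from earlier, that looks roughly like this. The exact query parameter name is an assumption; the talk only says the scenario is specified in the query string.

```ts
import { injectMocks, extractScenarioFromLocation } from 'data-mocks';

if (!environment.production) {
  // Pick the scenario from the current URL instead of hard-coding it.
  injectMocks(scenarios, extractScenarioFromLocation(window.location));
}
```

So hitting something like http://localhost:4200/?scenario=badWidget (assuming the helper reads a scenario query parameter) runs the badWidget mocks, while a plain URL falls back to the default scenario.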
I'm also just finishing off, though unfortunately I don't have it here to demo, a Chrome extension that works out which scenarios are available and lets you flick between them from a nice, pretty UI. That's quite cool.

But let's jump back. So far I've shown you something that's quite handy when we're developing features, fixing things or improving things, and maybe the endpoints for those don't even exist yet, so even if you wanted to, you couldn't hit a running endpoint; we can just mock out those responses. But I think it can do a bit more than that for us.
When we're running things like UI tests, tests that exercise the integration of certain things without requiring real network activity, or where there are third-party services that absolutely should not be called, this is a really nice solution as well, because you can run different scenarios in your integration tests without needing to hook anything else up. Obviously, tools like Cypress also have fixtures that you can run, and you can have mock groups for those, which is really cool, and if you're leveraging that there's no need to leverage this. But sometimes it is nice to have just one set of mocks. For instance, going back to my code real quickly: we've got all of our mocks in this main.ts file, but we can extract them into a separate mocks file so we're not polluting our bootstrap code, and then share that set of mocks between the app and your UI tests as well. That's obviously entirely up to the developer; a sketch of that extraction is below.
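A rough sketch of pulling the scenarios into their own module so they can be imported from both main.ts and a UI test; the file name and contents are illustrative, not the talk's code.

```ts
// mock-scenarios.ts: keeps the mocks out of main.ts so the same scenarios
// can be imported by the app bootstrap and by UI tests.
import { Scenarios } from 'data-mocks';

export const scenarios: Scenarios = {
  default: [
    { url: /widgets$/, method: 'GET', response: [{ id: 1 }, { id: 2 }], responseCode: 200 },
    { url: /new-widget/, method: 'GET', response: { id: 42 }, responseCode: 200 },
  ],
  badWidget: [
    { url: /new-widget/, method: 'GET', response: { id: 101 }, responseCode: 200 },
  ],
};

// main.ts (and, if you like, your UI test setup) then just imports it:
// import { scenarios } from './mock-scenarios';
// injectMocks(scenarios, extractScenarioFromLocation(window.location));
```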
I've been talking about the things I think are really great about this, but there are still some things it's lacking. Because we're intercepting HTTP requests before they leave your browser, nothing appears in your network tab, which is a bit of a pain if you want to debug the order in which requests were executed, and a load of other stuff like that. So I probably still need to add some logging middleware for that, or I'll accept a pull request if anyone is interested; it's probably quite a good one. Things like simulating paginated data, which I know a lot of the competing tools do, are also missing, and it doesn't have GraphQL support yet.

I would love people to test this out. Even if only one person goes and has a look at it, I think it's a great way of handling this way of developing code. It's really quick and really easy, and it stops you needing to rely on someone, or yourself, writing a working endpoint, or even just the pain of having to write pages and pages of JSON to get your mocks up and running.

Just to conclude: this isn't a silver-bullet solution, and there are trade-offs. At the moment, you can't ensure that the mocks you're providing have the right types for what your endpoint actually returns. I'm also working on something that will take in a Swagger or RAML specification, ingest it, and ensure that the mocks you're writing match that specification, which is pretty cool.
But as I said at the very beginning of this, I really do think it's a stagnant topic. Almost everyone I speak to, when I ask them what they do for working locally, either pulls data from staging or prod or uses some JSON-based mocking solution. I still don't think we're quite there yet; we haven't really progressed in the way that certain other facets of web development have been progressing this decade. So I just think it would be really cool to get more involvement in this problem space.

There are some useful links at the end of this talk. There are a few underlying libraries that I really leverage, and I wouldn't have been able to make this possible without them. I'll put the slides out on Twitter and all of that. But, yeah, that's basically it. Thank you very much. And please clap and cheer very loudly.
