NDC London is a 3-day developer conference hosted at London’s ExCeL Centre. With 145 speakers across 8 tracks over 3 days, there is a great diversity of content, covering cloud computing, machine learning, web, security, devops, agile and soft skills, along with language-specific sessions on C#, F#, JavaScript and others. With such variety it’s an opportunity for any dev to learn from others’ experience with their platform of choice, and to gain some insight into technologies they might not be using every day.

The conference took place in the ICC Capital Suite, with a central Expo area with stands for sponsors, non-stop food and drink, and plenty of space to chat with other delegates and speakers. The 3 days provided a packed schedule of 18 sessions that were variously thought-provoking, instructional, funny and, in the case of Troy Hunt, simply scary. As my esteemed colleague Marc Costello did in his post last week, I’ve picked a few sessions to look at in more detail rather than covering the conference as a whole.

Keynote: Saving the World One App at a Time – The Humanitarian Toolbox

Richard Campbell (@richcampbell)

I was looking forward to the keynote as I was aware of the Humanitarian Toolbox project through Carl Franklin and Richard Campbell’s DotNetRocks podcast. The project is an open source initiative to build software for disaster relief – for the non-profit organisations involved, the volunteers that provide their time, and the victims of disasters. It was born out of the founders’ response to the coverage of Hurricane Katrina and its aftermath, and the challenges faced in organising relief efforts.

The HTBox team work with organisations to gather requirements and plan work, while recruiting developers from around the world to donate time picking up work items to build the product. They have individual contributors, small teams that host regular HTBox hackathons, and larger hackathons at conferences and other events. The thinking behind the project is that a developer’s time and skills are a valuable donation to a product that would otherwise cost money that can be put to other uses.

It’s a challenging problem space, where the focus is on providing out-of-the-box, sustainable software tools that organisations can quickly leverage to support their operations in the field. Richard quoted examples such as shipping data in areas with damaged mobile networks by using SMS for data content, or dealing with battery life issues.

It was an inspiring talk about a project that is still in its infancy, but has the potential to help save lives.


Ops and Operability

Dan North (@tastapod)

Dan’s Jane Austen-inspired talk on the relationship between dev and ops was very timely, as it’s a high-profile topic at tombola as we grow our product base and scale out both our teams and our architecture.

Pride and Packaging

He began by outlining the typical difference between devs and ops: devs just want to deploy, whereas ops are on the hook for runtime environments, SLAs, diagnosis, recovery, restoration and general business continuity. He stressed the difference between automated deployment and release management – an area we’re currently working to improve at tombola. He also introduced the importance of logging in what we ship, as characterised by the process of Instrumentation -> Telemetry -> Monitoring.

Support and Supportability

This section looked at the process of incident management and its three key questions: ‘What happened?’, ‘Who is impacted?’ and ‘How do we fix it?’. The emphasis here was on reducing the impact of any incident, and on recovery time as a measure of resilience, rather than the time between failures. When developing systems we should plan for failure; in the ideal scenario the user should never notice that something went wrong. It’s a fair point. Who cares if your system is incident free for months if it takes you a day to recover when something does go wrong?

Use and Usability

The final section looked at how we bring this together by ensuring that what we log allows us to answer these three questions as quickly as possible. Key data points should include timestamps and traceable session IDs, but most importantly ‘the cause, the whole cause and nothing but the cause’. He suggested testing our ability to recover by forcing a failure, then having someone else diagnose the fault based on what was logged. It’s an intriguing idea, and one that may be worth pursuing to validate our own ability to recover from an issue.
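To make that concrete, a structured log entry carrying these data points might look something like the sketch below (my own illustration, not from the talk; the field names and values are hypothetical):

```javascript
// An illustrative structured log entry: a timestamp, a traceable
// session id, and 'the cause, the whole cause and nothing but the cause'.
const logEntry = {
  timestamp: new Date().toISOString(),                // when it happened
  sessionId: '7c9e6679-7425-40de-944b-e07fc1f90ae7',  // ties the event to a single user journey
  level: 'error',
  message: 'Payment provider request timed out',
  cause: {
    operation: 'POST /payments',                      // hypothetical endpoint
    timeoutMs: 5000,
    attempts: 3
  }
};

// Emit as a single JSON line so telemetry and monitoring tools can parse it.
console.log(JSON.stringify(logEntry));
```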

It was a useful session as it forces you to look more closely at what you ship, how you ship it, and how quickly you can recover when something breaks. While much of the above may seem obvious, very few systems are so perfect that there is no room for improvement.


Building JavaScript and mobile/native Clients for Token-based Architectures

Brock Allen (@BrockLAllen) & Dominick Baier (@leastprivilege)

Slides

This session covered Identity Server, an open source .NET project providing Authentication as a Service. It enables single sign-on (and out) across multiple application types, allowing for centralised login logic and workflow for all applications (web, native, mobile, services).

The focus of the session was how to leverage Identity Server to establish authentication (you are who you say you are) and authorisation (you are allowed access to a service) from JavaScript and native mobile clients. Brock and Dominick explained the flow from app to login page and back, along with the exchange of tokens that underpins the process.
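As a rough illustration, the JavaScript side can be handled with the team’s open source oidc-client library; a minimal sketch of the redirect flow might look like this (the authority URL, client id, redirect URI and scopes below are all placeholders):

```javascript
import { UserManager } from 'oidc-client';

// Placeholder settings: point these at your own Identity Server instance.
const userManager = new UserManager({
  authority: 'https://identity.example.com',    // the Identity Server host
  client_id: 'js-client',
  redirect_uri: 'https://app.example.com/callback.html',
  response_type: 'id_token token',              // identity token + access token
  scope: 'openid profile api1'
});

// Kick off the login: redirects the browser to the Identity Server login page.
userManager.signinRedirect();

// Back on callback.html, complete the flow and collect the tokens.
userManager.signinRedirectCallback().then(user => {
  console.log('access token:', user.access_token);
});
```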

Managing user login is a fundamental part of any system. Received wisdom in modern software development is that, where possible, it’s better to let someone else manage the process, not least because handling passwords is fraught with mantraps for the unwary. Authentication as a service, along with single sign-on, allows us to isolate the risk, albeit with the trade-off of a complex request/response dance to handle tokens in the client. Identity Server is one solution to this, and may be worth investigating in more detail with the potential to integrate it into tombola’s platform.

JavaScript Patterns for 2017

K Scott Allen (@odetocode)

Scott Allen has been synonymous with JavaScript and front-end frameworks for a number of years, so I was keen to hear his opinions on the current and upcoming state of the language. Scott discussed a variety of new features including arrow functions, async / await, and generators.
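By way of a quick illustration (my own sketch, not Scott’s code):

```javascript
// Arrow functions: terse syntax and a lexically bound 'this'
const square = x => x * x;

// async / await: promise-based code that reads top to bottom
async function loadUser(id) {
  const response = await fetch(`/api/users/${id}`); // hypothetical endpoint
  return response.json();
}

// Generators: functions that can pause at 'yield' and resume later
function* counter() {
  let i = 0;
  while (true) {
    yield i++;
  }
}

const c = counter();
c.next().value; // 0
c.next().value; // 1
```

For me, though, the key features he discussed were modules and classes: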

ES6 Modules

The concept of modules in JS is not new; breaking up large JS applications into manageable chunks and isolating scope is a process that has gone through several iterations over the years. We started with IIFEs (Immediately Invoked Function Expressions) that leveraged closures to isolate scope while exposing APIs through global objects. Subsequently other module systems emerged, such as CommonJS (the default module pattern for Node.js) and AMD (as popularised by RequireJS). While each of these approaches has its merits, the language really needed a standard. Enter ES6 Modules.
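As a reminder, the IIFE pattern looks something like this (a minimal sketch):

```javascript
// An IIFE: the function runs immediately, and its closure keeps
// 'count' private while exposing a small API on a global object.
var counterModule = (function () {
  var count = 0; // private: invisible outside the closure

  return {
    increment: function () { return ++count; },
    current: function () { return count; }
  };
})();

counterModule.increment(); // 1
counterModule.current();   // 1
```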

The concept is simple. A file is a module. Anything declared inside the file is private to the module, unless it is explicitly exported. A module may import one or more exported items from another module. For example:
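```javascript
// lib.js (an illustrative module; the file and function names are mine)
const secret = 42; // private: never exported, invisible to other modules

export function double(x) {
  return x * 2;
}

export const name = 'lib';
```

```javascript
// app.js - import just the items we need from lib.js
import { double, name } from './lib.js';

console.log(double(21)); // 42
console.log(name);       // 'lib'
```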

Equally, you can import the whole module and refer to its exports using named property syntax:
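```javascript
// app.js - import the whole of lib.js as a namespace object
import * as lib from './lib.js';

console.log(lib.double(21)); // 42
console.log(lib.name);       // 'lib'
```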

While the above seems obvious, it enables a number of useful features. Build systems such as webpack can trace the import declarations, starting at the application’s root module, to concatenate all of its modules for bundling in the browser. The most recent implementations take this further by analysing this dependency tree and excising code paths that are never used, a pattern known as ‘tree-shaking’. This can significantly reduce the file size of the bundled application.
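A quick sketch of the idea (again, illustrative names):

```javascript
// lib.js exports two functions...
export function used() { return 'kept in the bundle'; }
export function unused() { return 'shaken out of the bundle'; }
```

```javascript
// app.js imports only one of them; because import declarations are
// statically analysable, a bundler can see that 'unused' is never
// referenced anywhere and drop it from the output entirely.
import { used } from './lib.js';

console.log(used());
```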

There is a lot more to ES6 Modules than there is space to discuss here. For further reading I’d recommend the modules chapter of Axel Rauschmayer’s “Exploring ES6”.

Classes

The arguments over how to implement classical inheritance in JavaScript, or even whether you should, have gone on for years. Fundamentally, JavaScript is an object-based language in which almost everything is an object, and it supports inheritance through prototype chains, where an object’s prototype is simply another object. The ECMAScript standards committee have settled the argument of how to implement classical inheritance by introducing the class keyword.

So, we go from:
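```javascript
// Pre-ES6 (a representative sketch): a constructor function,
// with shared methods attached to the prototype
function Foo(name) {
  this.name = name;
}

Foo.prototype.greet = function () {
  return 'Hello, ' + this.name;
};

var foo = new Foo('NDC');
foo.greet(); // 'Hello, NDC'
```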

to this:
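```javascript
// ES6: the same Foo, written with the class keyword
class Foo {
  constructor(name) {
    this.name = name;
  }

  greet() {
    return 'Hello, ' + this.name;
  }
}

const foo = new Foo('NDC');
foo.greet(); // 'Hello, NDC'
```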

Certainly the class syntax is much terser and more readable. However, in my opinion there is a gotcha here: this is simply syntactic sugar; the JavaScript runtime implements it using the existing prototype system. Developers new to the language may see and use the class syntax believing that they are using classes in much the same way as in C# or Java – but they are not. It’s an important distinction. It could also be argued that the ‘class’ keyword implies a static type definition, which it definitely is not. The prototype created from the Foo class definition is a dynamic object, just as it is in the first example using a constructor function.

Understanding the prototype chain is fundamental to understanding how the language behaves, and is hard enough to teach at the best of times. I don’t believe that hiding it behind the new syntax is going to make that any easier.

Testing

One of Scott’s closing points particularly caught my attention. We’re all familiar with automated testing; if you’re not, then you should be! However, Scott raised the suggestion of focussing our attention on integration tests rather than unit tests. It’s a theme that was echoed in a couple of other sessions across the 3 days.

Unit testing can require a frankly excessive use of mocking, and can easily result in a test suite that is more concerned with implementation details than behaviour and is way too brittle. One of the main benefits of tests is to protect you from introducing bugs when modifying existing code, which is impossible if a test breaks because it is too finely grained.

The alternative means mocking the boundaries of your system, but no more; everything else should execute real code. Plenty of developers will insist that such a test is an integration test, not a unit test. For my part, a test is a test – I find the distinction less than useful, and would prefer simply to test the behaviour of the system as a whole rather than argue over what to call the approach.

For example (abbreviated code):
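Something like the following, assuming an AngularJS controller under test (the module name, endpoint and scope property here are illustrative):

```javascript
describe('reportListController', function () {
  var $scope, $httpBackend;

  beforeEach(module('reports')); // illustrative module name

  beforeEach(inject(function ($rootScope, $controller, _$httpBackend_) {
    // Mock only the boundary of the system: the HTTP layer.
    $httpBackend = _$httpBackend_;
    $httpBackend.whenGET('/api/reports')
      .respond(200, [{ id: 1, title: 'Q1 Sales' }]);

    // Everything else (reportService, parsers, promises) runs real code.
    $scope = $rootScope.$new();
    $controller('reportListController', { $scope: $scope });
  }));

  it('puts the reports from the server onto the scope', function () {
    $httpBackend.flush();
    expect($scope.reports.length).toBe(1);
    expect($scope.reports[0].title).toBe('Q1 Sales');
  });
});
```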

Testing in this way asserts that the object under test, in this case “reportListController”, behaves as expected. How it handles the http response is irrelevant to the test. It could be instantiating some “reportService” that reads a config, makes an httpRequest, returns a promise, gets the response, passes the response through a separate parser, then resolves the promise, which the controller then responds to by updating the scope. All of which is irrelevant. We can modify this controller in any way we choose with the confidence that we have a test that will verify the controller’s behaviour.

If we applied a fine-grained unit test to this controller, we would inject a mock “reportService” and stub its methods, which in turn means we could no longer modify the implementation of “reportListController” without breaking the test, even if the behaviour is exactly the same. So we have to modify the test too. Which raises the question: what is the point of the test?

All in all, the session was an interesting walk through the latest and greatest in the language. Many of the new features are available now in Node.js and in the latest versions of the popular browsers; however, as with all things front-end, it will be a few years before they can be used in the browser at large without the help of transpilers such as Babel.


Summary

This was my first trip to an NDC conference, and it was a great experience from start to finish. The breadth and quality of the speakers was first-rate, and I’d highly recommend it. For downtime, the organisers also laid on an evening boat cruise, which provided another chance to chat with other delegates and speakers, and the NDC party, with beer, food, entertaining tales of development disasters and a conference pub quiz!

Can I go again next year boss?