Marketing Influenced Application Development

Post by Pete Wailes

I've spent most of my professional life doing three different things:

As a result, I've got a fairly unusual perspective. So I thought, in that vein, I'd talk for a minute or two about modern application design, and why it matters to designers and marketers in particular.

Why It Matters

The first thing to address here is why the way you build your application matters to your designers and marketers. There are a few reasons, which can be broken down under a few loose headings...

Extensibility

It's almost never the case that a site or application is built, finished and then never altered again. Instead, most of the time, users are going to discover a need for various features that would make life easier, or require that certain functionality be changed or amended to match how they interact with the application on a day-to-day basis. As a result, you need to build your application in a way that's easy to maintain and to extend, so some form of architecture which lends itself well to maintenance is advisable. In 2016, I'd say that means a back end system outputting JSON via a nicely formatted API, with a front end built in JS (preferably React.js, from my experience).

That gives the ability to grow or prune the system as required in the future, as each module is its own self-contained set of exposed APIs, and the front ends are simply small modules that interact with them.
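As a minimal sketch of that modular idea: each resource is a self-contained module exposing handlers that return plain data, and a thin layer turns the result into JSON for whatever front end asks. The module and handler names here are illustrative, not a prescribed API.

```javascript
// Hypothetical "posts" module: owns everything about posts, and exposes
// a small set of handlers that return plain data structures.
const posts = {
  list() {
    // in a real system this would go through the model/data layer
    return [{ id: 1, title: 'Hello world' }];
  },
};

// The exposed-API layer: wrap any module handler in a JSON response.
// Because modules only return data, new front ends (or new modules)
// can be added or pruned without touching the others.
function jsonResponse(handler) {
  return { status: 200, body: JSON.stringify(handler()) };
}

const response = jsonResponse(posts.list);
console.log(response.body); // a JSON string any front end can consume
```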

Fragmentation

Who could have predicted, 10 years ago, the explosion of mobile? Or, 5 years ago, the explosion and then stagnation of tablet computing? Or the current obsession with VR? No-one knows what the technology we'll be using even 10 years from now will look like. This rapid rate of change means you need to build your application to accept multiple, fragmented and ever-changing front ends. It needs to work on the web for mobile, tablets, laptops and desktops, on VR/AR systems, on Android and iOS, and on anything else you can think of. That means a system designed to be easily adapted to new devices, and culled from old ones, as time passes.

Measurement

Finally, your marketers are going to want to dig into how users are interacting with the application. They're going to want to analyse usage, time spent in various areas, measurement of KPIs against benchmarks and so on. As a result, you need to have a system that's able to offer them that data. This means liaising with them about which technologies the application should log to, what specific things will need to be measured, and so on and so forth. That could be anything from GA to an ELK stack to something entirely custom. Either way, make sure you know what they need so you can build those requirements into your spec. That way, everyone ends up (mostly) happy.

The Basic Structure

Imagine this as our downstream path for our application:

HTTP request > Routing > Adapter > Model > Data Access Layer > Data

Let's make this live a little. Imagine we've got a basic CRUD app, maybe something like a simple CMS. So how does this apply to that? Well, let's break it down.

Requests & Routing

Keeping our example of a CMS, we'd use GET for requesting individual pages, lists of posts for a blog archive, information about a user, a post in the admin panel and so on; POST would handle creating new pages, posts, users and the like. You could in theory support PUT and DELETE as well, but in practice I'd tend to advise against deletes and updates. Mark items as inactive or retire them instead, as that gives the option of undoing things later. Think of everything that goes on as being like a financial audit trail or Git history: you don't want to delete stuff wholesale; instead you want to replace it with a newer version, whilst preserving the old legacy data.
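The retire-instead-of-delete idea can be sketched in a few lines. The record shape and field names here are assumptions for illustration; the point is that the "deleted" item is just a new, inactive version, and the old data survives.

```javascript
// Soft-delete sketch: never destroy a record, flag it inactive instead.
// Returning a new object (rather than mutating) preserves the old
// version, audit-trail style.
function retire(record) {
  return { ...record, active: false, retiredAt: Date.now() };
}

const post = { id: 42, title: 'Old post', active: true };
const retired = retire(post);
// `post` is untouched; `retired` is the same data, marked inactive,
// so an "undo" is just flipping the flag back
```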

Anyway, we want the top-most layer of our application to be a routing structure, designed to handle those requests, and then instantiate the various parts of the application that will be required.
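A routing layer like that can be as small as a lookup table from method and path to the component that should be instantiated. The route table and component names below are hypothetical, just to show the shape.

```javascript
// Minimal routing sketch: map "METHOD path" keys to handlers that say
// which component of the application should be set up for the request.
const routes = {
  'GET /posts': () => ({ component: 'postList' }),
  'POST /posts': () => ({ component: 'postCreate' }),
};

function route(method, path) {
  const handler = routes[`${method} ${path}`];
  if (!handler) return { status: 404 }; // no component claims this request
  return { status: 200, ...handler() };
}
```

In a real application the handlers would instantiate the relevant component scripts rather than return names, but the top layer stays this thin: match the request, hand off, done.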

Component Adapters

If we continue with our CMS example, this is where we handle specific things relating to doing something. Every component has a component script, which is a tiny file identifying which model(s) and view(s) will be used. This then calls an inclusion method that includes all the needed files for those views and models, and instantiates any required classes. That way, should we change either of these, we only have to update them in one place.

The second component file deals with setting up the input to be passed to the model. As a result, it handles primary validation, ensuring that all data is in the correct format, and creates a holder for the data that's going to come back from the model. It then calls for whatever database operation is required, and hands the data returned off to a third component file that we'll come to in a bit.
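That input-handling step might look something like the sketch below. The validation rules and the model's `create` method are assumptions for illustration; what matters is the shape — validate first, normalise the payload, then hand off to the model.

```javascript
// Input-adapter sketch: primary validation, then a shaped payload for
// the model, then a holder for whatever the model sends back.
function createPostAdapter(input, model) {
  // primary validation: the model should only ever see well-formed data
  if (typeof input.title !== 'string' || input.title.length === 0) {
    return { ok: false, error: 'title is required' };
  }
  const payload = { title: input.title.trim() };
  // result acts as the holder for the data coming back from the model
  return { ok: true, result: model.create(payload) };
}
```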

Models

At the bottom of our application we have our data logic. Each different type of thing has a separate class for this. Thus posts, pages, comments, users, site settings and so on in our CMS will have a complete model dedicated to them. It's worth noting that this is done around the concept of data encapsulation, not simply echoing your database tables.
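To make the encapsulation point concrete, here's a sketch of a post model. The store interface and the domain-object fields are hypothetical; the point is that callers get a shaped domain object, not a raw echo of a database row.

```javascript
// Model sketch: the data access layer (store) is injected, and callers
// only ever see domain objects built around what the application needs,
// not the underlying table structure.
class PostModel {
  constructor(store) {
    this.store = store; // data access layer, swappable for testing
  }

  find(id) {
    const row = this.store.fetch(id);
    // encapsulation: derive what the domain needs (e.g. an excerpt)
    // rather than exposing the raw row
    return { id: row.id, title: row.title, excerpt: row.body.slice(0, 50) };
  }
}
```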

The theory behind this is that in the normal flow of an application, your views and adapters don't have to worry about database querying – that's all handled by the model. This has obvious advantages with regards to things like React-based front ends, where the application state needs to reflect the model state. However, there's some opacity here, as we don't know why the model returned what it did. Was it recently cached? Was it a call to the database, and thus absolutely up-to-date? Who knows?

The advantage here is that we don't really care where it's coming from - all our components should be black boxes designed to respond to specifically formatted input. It's irrelevant where that data comes from, as long as it passes validation.

This isn't a problem until you start building larger, more complex applications. Because caching logic gets hard, and fast.

Caching

Most caches work by read-through. That is to say, you query your cache store, if something is found then it's returned, failing that a call is made to your data store, that output is cached, and the data returned. All well and good. There's an obvious issue here though – first time requests are slow. Traditionally, we just take this as read, say "Oh well", and move on, as for most systems, the amount of data being read is going to be fairly small, and affecting one user once for a generation of a particular view isn't the end of the world. However, there are systems where this is going to become a problem very quickly, because they have so much data, or the data is so complex that the queries start to bog down.
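The read-through pattern described above fits in a few lines. The cache here is just a `Map` and the fetch function stands in for the data store; both are assumptions for the sketch.

```javascript
// Read-through cache: check the cache first; on a miss, hit the data
// store, cache the result on the way out, and return it. The slow
// first request is visible here as the fetch call on the miss path.
function readThrough(cache, key, fetchFromStore) {
  if (cache.has(key)) return cache.get(key); // hit: fast path
  const value = fetchFromStore(key);         // miss: slow first request
  cache.set(key, value);
  return value;
}
```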

The question then becomes: how much potential time delay is there in our app? That's a function of the number of known queries likely to be generated over a period, plus the number of unknown ones. This can be computed fairly easily, and modelled, to then decide on a caching methodology.

If the number of unknown queries exceeds the number of known ones, we may want to use a system that simply caches on request, and uses garbage collection to control memory usage. We would then handle slow requests with messaging to the user, to make sure they understand the query will be slow, and update them when it's done (via a websocket or polling).

If though we have a system which is the other way around, we'll want to look into something like event sourcing, to ensure a pre-computed cache is always present for the majority of user requests.

Neither of these is superior to the other in theory, it's just a question of which is more appropriate for your use case.

There and Back Again

So that's our various steps as we go down. What about coming back up again? Well, we do something similar:

Data output > Model (& output data sanitisation & validation) > Component adapter > Output (JSON)

So here we've fetched our data (assuming it needed fetching and we're not just retrieving and displaying a cached output), and we start doing things with it. With the data retrieved, the model can ensure that it's all as it should be and that it validates successfully. Provided all's well, it can then be sent to the adapter, which can validate and format it into whatever the view needs. Finally, it gets passed to the output view, which turns it into something else, be that JSON, HTML, XML or whatever else is required.
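The return trip can be sketched as three small functions — model-side validation, adapter-side formatting, and final serialisation. The field names and validation rules are illustrative assumptions.

```javascript
// Model-side check: everything coming back up should be well-formed.
function validateOutput(rows) {
  return rows.every(
    r => typeof r.id === 'number' && typeof r.title === 'string'
  );
}

// Adapter: format the data into exactly what the view needs,
// dropping anything it shouldn't see.
function adaptForView(rows) {
  return rows.map(r => ({ id: r.id, title: r.title }));
}

// Output: serialise for the view, refusing bad data early so errors
// surface here rather than in the front end.
function output(rows) {
  if (!validateOutput(rows)) throw new Error('model returned bad data');
  return JSON.stringify(adaptForView(rows));
}
```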

The view here can take many formats – it could be a templating engine for the web, it could be the front end of a web app, or something else again. The advantage of this should be clear: by delegating the responsibilities of data modelling and handling to the model and adapter, the view stays simple. By re-introducing the adapter into the stack on the way back up, we can check that the action that's been requested has produced the kind of output that's expected, and therefore better handle any issues or errors that may occur. This means we're able to have slightly thinner models, at the expense of fatter adapters.

Note that I've tagged the data output as JSON, and mostly that's because JSON is reasonably universal nowadays. Again, we're referencing our key concepts of extensibility/modularity and fragmentation/reconfigurability.

Views

With the application exposing JSON as its only output, it's then up to the views to deal with the data returned. Most projects I build nowadays work on a Node server with React, communicating using Axios, a lovely little XHR library with Promise support. That enables the front end to fetch data and treat that as initial state, sending changes to that state back to the server as required to ensure consistency between both systems.
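A sketch of that initial-state fetch is below. The client is injected rather than hard-coded, so this works with Axios (whose `get` returns a Promise resolving to a response with a `data` property) or any stand-in with the same shape; the URL is a made-up example.

```javascript
// Fetch the server's JSON and treat it as the front end's initial
// state. With Axios you'd pass `axios` itself as the client.
async function loadInitialState(client, url) {
  const res = await client.get(url);
  return res.data; // this becomes the starting state for the UI
}
```

Injecting the client also makes the front end testable without a running server, which is part of the point of keeping it an encapsulated application.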

This has the downside that there are more technologies involved, but the upside that your front end is its own encapsulated application, rather than being welded to your middleware. Which brings me to...

Management Benefits

There's more to this though than just splitting out code – it means you can split out teams too. As a result, the team that does the front end UX/UI/CRO work on your website doesn't have to be the team doing the development work on the application side of the system. This means they're free to make better decisions about what to do, rather than meeting engineering goals which may or may not be aligned with what the user wants or needs. Work demarcation is just as important as anything else in the code world.

On that subject, just as we adhere to the principles of KISS, SoC and DRY when coding, we should follow this through in our team structures and workflows. It's just easier to ensure that work isn't needlessly replicated, and that teams aren't infringing on the responsibilities of others, if they're set up in such a way as to give clear boundaries as to what each should be doing.

The aim of this form of application structuring should be that we can have code, teams and projects that are able to interact with each other in a sensible, clear, transparent way, ensuring better communication and work rate, whilst reducing the potential surface area for issues at the same time.

If you liked this post, you should follow me on Twitter