Featured

The Royal Statistical Society Conference and Data Science

This year the UK’s Royal Statistical Society (RSS) held its annual international conference in Aberdeen between the 12th and 15th September 2022.

You may think that the society’s main conference doesn’t hold that much relevance for you as a Data Scientist. Yes, you have an interest in Data Science with a statistical flavour, but surely the main conference is all clinical trials analysis and the like, isn’t it? My job over the next 980 words is to persuade you otherwise.

Statistics is about the whole data life cycle

Go to the RSS website or look at an official email from the RSS and you’ll see that the RSS strapline is “Data | Evidence | Decisions”. This accurately reflects the breadth of topics covered at the conference – in the session talks, the posters, and the plenary lectures. Statistics is about data, and modern statistics now concerns itself with all aspects related to data – how it is collected, how it is analysed, how models are built from that data, how inferences are made from those models, and how decisions are made off the back of those inferences. A modern general statistics conference now has to reflect the full end-to-end lifecycle of data and also the computational and engineering workflows that go with it. This year’s RSS conference did just that.

A Strong Data Science focus

Over the three main days of the conference there were 7 specific sessions dedicated to Data Science, totalling 8hrs and 20mins of talks. You can see from the full list below the breadth covered in the Data Science sessions.  

  • Novel applications and Data Sets
  • Introduction to MLOps
  • The secret sauce of Open Source
  • Data Science for Health Equity
  • The UK’s future data research infrastructure
  • Epidemiological applications of Data Science
  • Algorithmic bias and ethical considerations in Data Science

On top of this there were Data Science topics in the 8 rapid fire talk sessions and in the 110 accepted posters. Example Data Science related topics included MLOps, Decentralized finance, Genetic algorithms, Kernels for optimal compression of distributions, Changepoint detection, Quantifying the Shannon entropy of a histogram, Digital Twins, Joint node degree estimation in Erdos-Renyi networks, Car club usage prediction, and Deep hierarchical classification of crop types from satellite images.

A growing Data Science presence

I’ve been involved with the conference board this year and last (Manchester 2021) and my perception is that the size of the conference is increasing, in terms of the number of submissions and attendees, the range of topics, and the amount of Data Science represented. However, I only have two datapoints here. One of those was just as the UK was coming out of its first Covid-19 lockdown, so will probably not provide a representative baseline. So I’m not going to stick my neck out too much here, but I do expect a further increase in the Data Science presence at next year’s conference.

Other relevant sessions

If like me you work primarily as a Data Scientist in a commercial environment, then there were also many talks from other Sections of the RSS that were highly relevant. The Business, Industry and Finance section had talks on Explainable AI, Novel Applications of Statistics in Business, and Democratisation of Statistics in GlaxoSmithKline, whilst the Professional Development section had talks on Linked Open Data, programming in R and Python, and the new Quarto scientific publishing system.

The Future of the Data Science Profession

Of particular relevance to Data Scientists was the Professional Development section’s talk on the new Alliance for Data Science Professionals accreditations of which the RSS is part. The session walked through the various paths to accreditation and the collaborative nature of the application process. This was backed up by a Data Science ‘Beer and Pizza’ event hosted by Brian Tarran (former Significance magazine editor and now RSS Head of Data Science Platform) and Ricky McGowan (RSS Head of Standards and Corporate Relations) who both explained some of the RSS long-term plans for Data Science.

Diversity of topics across the whole conference

Diversity of topics was a noticeable theme emerging from the conference as a whole, not just in the Data Science and commercial statistics streams. For me, this reflects the broader desire of the RSS to embrace Data Scientists and any practitioners who are involved with analysing and handling data. It reflects a healthy antidote to the ‘Two cultures of statistical modelling‘ divide identified and discussed by Leo Breiman many years ago.

For example, the range of plenary talks was as impressive as the diversity of topics in the various sessions. Like many Data Scientists, my original background was a PhD in Theoretical Physics, so a talk from Ewain Gwynne on Random Surfaces and Liouville Quantum Gravity – see picture below – took me back 30 years and also gave me an enjoyable update on what has happened in the field in the intervening years.

Ewain Gwynne talking about Random Surfaces and Liouville Quantum Gravity.

Other plenary highlights for me were Ruth King’s Barnett lecture on statistical ecology and Adrian Raftery’s talk on the challenges of forecasting world populations out to the year 2100 and as far as 2300 – see below.

Adrian Raftery talking about Bayesian Demography.

A friendly conference

The conference is not a mega-conference. We’re not talking NeurIPS or ICML. It was around 600 attendees – big enough not to be too insular and focused only on one or two topics, but still small enough to be welcoming, friendly and very sociable. There were social events on every evening of the conference. And to top it all, it was even sunny in Aberdeen for the whole week.

I also got to play pool against the person who led the UK’s COVID-19 dashboard work, reporting the UK government’s official daily COVID-19 stats to the general public. I lost 2-1. I now hold a grudge.

Next year – Harrogate 2023

Next year’s conference is in Harrogate, 4th – 7th September 2023. I will be going. Between now and then I will be practicing my pool for a revenge match. I will also be involved with the conference board again, helping to shape the Data Science content. I can promise a wide range of Data Science contributions and talks on other statistical topics Data Scientists will find interesting. I can’t promise sunshine, but that’s Yorkshire for you.

© 2022 David Hoyle. All Rights Reserved

How many iterations are needed for the bisection algorithm?

<TL;DR>

  • The bisection algorithm is a very simple algorithm for finding the root of a 1-D function.
  • The number of iterations required to locate the root to within a specified tolerance can be worked out with a very simple little hack, which I explain here.
  • Things get more interesting when we consider variants of the bisection algorithm, where we cut an interval into unequal portions.

</TL;DR>

A little while ago a colleague mentioned that they were repeatedly using an off-the-shelf bisection algorithm to find the root of a function. The algorithm required the user to specify the number of iterations to run the bisection for. Since my colleague was running the algorithm repeatedly they wanted to set the number of iterations efficiently and also to achieve a guaranteed level of accuracy, but they didn’t know how to do this.

I mentioned that it was very simple to do this and it was a couple of lines of arithmetic in a little hack that I’d used many times. Then I realised that the hack was obvious and known to me because I was old – I’d been doing this sort of thing for years. My colleague hadn’t. So I thought the hack would be a good subject for a short blog post.

The idea behind a bisection algorithm is simple and illustrated in Figure 1 below.

Figure 1: Schematic of how the bisection algorithm works

At each iteration we determine whether the root is to the right of the current mid-point, in the right-hand interval, or to the left of the current mid-point, in the left-hand interval. In either case, the range within which we locate the root halves. We have gone from knowing it was in the interval [x_{lower}, x_{upper}], which has width x_{upper}-x_{lower}, to knowing it is in an interval of width \frac{1}{2}(x_{upper}-x_{lower}). So with every iteration we reduce our uncertainty of where the root is located by half. After N iterations we have reduced our initial uncertainty by (1/2)^{N}. Given our initial uncertainty is determined by the initial bracketing of the root, i.e.  an interval of width (x_{upper}^{(initial)}-x_{lower}^{(initial)}), we can now work out that after N iterations we have narrowed down the root to an interval of width {\rm initial\;width} \times \left ( \frac{1}{2}\right ) ^{N}. Now if we want to locate the root to within a tolerance {\rm tol}, we just have to keep iterating until the uncertainty reaches {\rm tol}. That is, we run for N iterations where N satisfies,

\displaystyle N\;=\; -\frac{\ln({\rm initial\;width/tol})}{\ln\left (\frac{1}{2} \right )}

Strictly speaking we need to run for \lceil N \rceil iterations. Usually I will add on a few extra iterations, e.g. 3 to 5, as an engineering safety factor.

As a means of easily and quickly determining the number of iterations to run a bisection algorithm the calculation above is simple, easy to understand and a great little hack to remember.
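To make the hack concrete, here is a minimal Python sketch (the function name and the size of the safety margin are my own choices, not part of the original recipe):

```python
import math

def bisection_iterations(initial_width, tol, safety=3):
    """Number of bisection iterations needed to locate a root to within tol,
    starting from a bracketing interval of width initial_width.
    A few extra iterations are added as an engineering safety factor."""
    # Each iteration halves the interval, so we need
    # initial_width * (1/2)**N <= tol,  i.e.  N >= ln(initial_width/tol) / ln(2)
    n = -math.log(initial_width / tol) / math.log(0.5)
    return math.ceil(n) + safety

# Example: a bracket of width 1.0 and a tolerance of 1e-6
# gives ceil(19.93) + 3 = 23 iterations.
print(bisection_iterations(1.0, 1e-6))
```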

Is bisection optimal?

The bisection algorithm works by dividing into two our current estimate of the interval in which the root lies. Dividing the interval in two is efficient. It is like we are playing the childhood game “Guess Who”, where we ask questions about the characters’ features in order to eliminate them.

Asking about a feature that approximately half the remaining characters possess is the most efficient – it has a reasonable probability of applying to the target character and eliminates half of the remaining characters. If we have a single question with a binary outcome and a probability p of one of those outcomes, then the question with p = \frac{1}{2} maximizes the expected information (the entropy), -p\ln (p)\;-\; (1-p)\ln(1-p).

Dividing the interval unequally

When we first played “Guess Who” as kids we learnt that asking questions with a much lower probability p of being correct didn’t win the game. Is the same true for our root finding algorithm? If instead we divide each interval into unequal portions is the root finding less efficient than when we bisect the interval?

Let’s repeat the derivation but with a different cut-point, e.g. 25% along the current interval bracketing the root. In general we can test whether the root is to the left or right of a point that is a proportion \phi along the current interval, meaning the cut-point is x_{lower} + \phi (x_{upper}-x_{lower}). At each iteration we don’t know in advance which side of the cut-point the root lies until we test for it, so in trying to determine in advance the number of iterations we need to run, we have to assume the worst-case scenario, i.e. that the root is still in the larger of the two intervals. The reduction in uncertainty per iteration is then {\rm max}\{\phi, 1-\phi\}. Repeating the derivation we find that we have to run at least,

\displaystyle N_{Worst\;Case}\;=\;-\frac{\ln({\rm initial\;width/tol})}{\ln\left ({\rm max}\{\phi, 1 - \phi \}\right )}

iterations to be guaranteed that we have located the root to within tol.

Now to determine the cut-point \phi that minimizes the upper bound on number of iterations required, we simply differentiate the expression above with respect to \phi. Doing so we find,

\displaystyle \frac{\partial N_{Worst\;Case}}{\partial \phi} \;=\; -\frac{\ln({\rm initial\;width/tol})}{ (1-\phi) \left ( \ln (1 - \phi) \right )^{2}} \;\;,\;\; \phi < \frac{1}{2}

and

\displaystyle \frac{\partial N_{Worst\;Case}}{\partial \phi} \;=\; \frac{\ln({\rm initial\;width/tol})}{\phi \left ( \ln (\phi) \right)^{2}} \;\;,\;\; \phi > \frac{1}{2}

The minimum of N_{Worst\;Case} is at \phi =\frac{1}{2}, although \phi=\frac{1}{2} is not a stationary point of the upper bound N_{Worst\;Case}, as N_{Worst\;Case} has a discontinuous gradient there.

That is the behaviour of the worst-case scenario. A similar analysis can be applied to the best-case scenario – we simply replace max with min in all the formulae above. That is, in the best-case scenario the number of iterations required is given by,

\displaystyle N_{Best\;Case}\;=\;-\frac{\ln({\rm initial\;width/tol})}{\ln\left ({\rm min}\{\phi, 1 - \phi \}\right )}

Here, the maximum of the best-case number of iterations occurs when \phi = \frac{1}{2}.

That’s the worst-case and best-case scenarios, but how many iterations do we expect to use on average? Let’s look at the expected reduction in uncertainty in the root location after N iterations. In a single iteration a root that is randomly located within our interval will lie, with probability \phi, in the segment to the left of our cut-point, leading to a reduction in the uncertainty by a factor of \phi. Similarly, we get a reduction in uncertainty of 1-\phi with probability 1-\phi if our randomly located root is to the right of the cut-point. So after N iterations the expected reduction in uncertainty is,

\displaystyle {\rm Expected\;reduction}\;=\;\left ( \phi^{2}\;+\;(1-\phi)^{2}\right )^{N}

Using this as an approximation to determine the typical number of iterations, we get,

\displaystyle N_{Expected\;Reduction}\;=\;-\frac{\ln({\rm initial\;width/tol})}{\ln\left ( \phi^{2} + (1-\phi)^{2} \right )}
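To see how these three estimates compare numerically, here is a small Python sketch (the function names are mine; w denotes the ratio initial width/tol):

```python
import math

def n_iterations(w, reduction):
    """N = -ln(w) / ln(reduction), rounded up, where w = initial_width / tol
    and reduction < 1 is the assumed per-iteration reduction factor."""
    return math.ceil(-math.log(w) / math.log(reduction))

def n_worst_case(w, phi):
    return n_iterations(w, max(phi, 1 - phi))

def n_best_case(w, phi):
    return n_iterations(w, min(phi, 1 - phi))

def n_expected_reduction(w, phi):
    return n_iterations(w, phi**2 + (1 - phi)**2)

w = 100.0  # initial_width / tol, i.e. a 100-fold reduction in uncertainty
for phi in (0.5, 0.25, 0.1):
    print(phi, n_worst_case(w, phi), n_best_case(w, phi), n_expected_reduction(w, phi))
```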

This expected-reduction estimate still isn’t the expected number of iterations, but to see how it compares, Figure 2 below shows simulation estimates of \mathbb{E}\left ( N \right ) plotted against \phi when the root is random and uniformly distributed within the original interval.

Figure 2: Number of iterations required for the different root finding methods.

For Figure 2 we have set ({\rm tol/initial\;width}) = 0.01, i.e. {\rm initial\;width/tol} = 100. Also plotted in Figure 2 are our three theoretical estimates, \lceil N_{Worst\;Case}\rceil, \lceil N_{Best\;Case}\rceil, \lceil N_{Expected\;Reduction}\rceil. The stepped structure in these 3 integer quantities is clearly apparent, as is how many more iterations are required under the worst-case method when \phi \neq \frac{1}{2}.
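If you want to reproduce rough versions of the simulation estimates in Figure 2, the following minimal Monte Carlo sketch (my own construction, not the code used to produce the figure) gives the idea. It works on the unit interval with tol = 0.01, matching the 100-fold reduction above:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_mean_iterations(phi, tol=0.01, n_trials=10_000):
    """Monte Carlo estimate of E(N): the number of iterations needed to locate
    a uniformly distributed root to within tol, cutting each interval a
    proportion phi along its length."""
    counts = np.empty(n_trials)
    for t in range(n_trials):
        root = rng.uniform(0.0, 1.0)
        lower, upper = 0.0, 1.0
        n = 0
        while (upper - lower) > tol:
            cut = lower + phi * (upper - lower)
            if root < cut:
                upper = cut
            else:
                lower = cut
            n += 1
        counts[t] = n
    return counts.mean()

for phi in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(phi, simulated_mean_iterations(phi))
```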

The expected number of iterations required, \mathbb{E}( N ), actually shows a rich structure that isn’t clear unless you zoom in. Some aspects of that structure were unexpected, but require some more involved mathematics to understand. I may save that for a follow-up post at a later date.

© 2022 David Hoyle. All Rights Reserved

Part 3 – What does the future hold?: Using forecasting in a commercial environment

This is the last in my series of posts on forecasting. The posts have focused on the ‘why’ of forecasting and also some of the practicalities of forecasting. This last post is going to be shorter. It is simply some links to forecasting resources that I’ve found useful or think could be useful – books, articles, online tutorials and software, as well as my opinions on which bits you should focus on learning.

Books/Articles

  1. Forecasting: Methods and Applications by Spyros Makridakis, Steven Wheelwright and Rob Hyndman. This was the book that was recommended to me by a colleague when I started in commercial forecasting. Rob Hyndman (one of the authors) says it is out of date, and recommends his later textbook (see next), but I still find it useful.

  2. Forecasting: Principles and Practice by Rob Hyndman and George Athanasopoulos is considered one of the modern bibles on classical forecasting techniques. It is now in its 3rd edition and also available online for free.
  3. Introductory Time Series with R by Paul Cowpertwait and Andrew Metcalfe. I found this short but concise Springer book (in the Use R! series) on classic time-series analysis in R a great help. It was useful both from an R perspective and for its short practical introductions to, and explanations of, the various ARIMA concepts. Some of the links to the datasets used are apparently now broken, but I have seen comments that the resources are not hard to find with a Google search.

  4. This recent and comprehensive review article in the International Journal of Forecasting is great (arxiv version here). It has short readable paragraphs and sections on a large number of concepts and forecasting topics, so you can simply pick the topic you’re interested in and read just that. Or read the whole article end-to-end if you want.

Blogs

Rob Hyndman’s blog is the main blog I tend to routinely look at. It is always an excellent read and contains links to blogs that Hyndman recommends (although these tend to be more econometrics and statistics focused).

Software

I’m only going to give links to free open-source software. There are some other excellent commercial applications available, but not everyone will be able to get access to them, so I won’t list them.

  1. R: I have tended to do most of my classical time-series analysis in R. The in-built arima functions and also the forecast package (created by Rob Hyndman and Yeasmin Khandakar) provide a great deal of functionality and are my go-to packages/functions in R for time-series.
  2. The statsmodels package in python provides a similar model building experience to building models in R. Consequently, its time-series functionality provides similar capabilities to that found in R.
  3. Darts package in python: I have done less time-series analysis in python than I have in R. When I have done exploratory time-series analysis in python I have tended to use statsmodels. Having said that, the Darts package, and the Kats package (from Facebook) look like useful python packages from the bits I have read.
  4. Prophet package: The Prophet package, from Facebook, is open-sourced, well used, flexible and very powerful. I have used it for a couple of tasks and like it. Under the hood it is based upon the Stan probabilistic programming language (PPL), which I have used a lot (both in and outside of my main employment). Prophet is fully automated but I would still recommend you have a basic grasp of classical time-series analysis concepts before you use Prophet, to guard against those situations where a fitted model is inappropriate or clearly wrong. A minimal usage sketch is given after this list.
  5. The engineering team at Uber have also released their own forecasting package, Orbit, which performs Bayesian forecasts using various PPLs under the hood (similar to the way the Prophet package uses Stan).
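As promised above, here is a minimal sketch of the typical Prophet workflow (the toy weekly series is made up purely for illustration; in older releases the package is imported as fbprophet rather than prophet):

```python
import pandas as pd
from prophet import Prophet  # older releases: from fbprophet import Prophet

# Prophet expects a dataframe with a 'ds' (date) column and a 'y' (value) column.
df = pd.DataFrame({
    "ds": pd.date_range("2020-01-05", periods=104, freq="W"),
    "y": range(104),  # replace with your own weekly series
})

m = Prophet()  # additive trend and seasonality by default
m.fit(df)
future = m.make_future_dataframe(periods=12, freq="W")
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```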

Methods/Concepts/Techniques you should know about

  • ARIMA: You should definitely become familiar with the classical approaches to time-series analysis, namely ARIMA. The name is a combination of acronyms, and I’ve given a breakdown of the full acronym and what I think are the important aspects to know about.
    • AR: Auto-Regressive. These are the ‘lag’ terms in the time-series model equation, whereby the response variable value at timepoint t can depend on the value of the response variable at previous time-points. It is important to understand, i) how the value of the lag coefficients affect the long-run mean and variance of the response variable, including how this determines whether a process is stationary or not, ii) how to determine the order of the AR terms, e.g. by looking at a Partial Auto-Correlation Function (PACF) plot, iii) how the AR terms are infinite impulse response (IIR) terms, in contrast to the finite impulse response (FIR) moving average terms.
    • I: Integrated. This is the concept within ARIMA that most Data Scientists are least familiar with but a very important one, particularly when dealing with quantities that we naturally expect to grow over time, for example by accumulating increments that increase on average. It is important to understand, i) how to run unit-root tests to test for integrated series – beware the difference between Phillips-Perron (PP) and KPSS tests, as the null hypothesis is different between them, ii) Cointegration and spurious regression (spurious correlation) – for which Clive Granger won a Nobel memorial prize in Economics (shared with Robert Engle) in 2003.
    • MA: Moving Average. These are the ‘error’ terms in the time-series model equation, whereby the response variable value at timepoint t can be affected by the stochastic error not just at t, but at previous timepoints as well. This allows the response variable to be affected by short timescale perturbations of finite duration (hence the classification of the moving average terms as being finite impulse response terms).

    Even if you intend to use only neural network approaches to forecasting, or want to use, say, the Prophet package as an AutoML forecasting solution, it is still a good idea to get a good grasp of ARIMA models. Investing some time in getting to grips with the basics of ARIMA and doing some hands-on playing with ARIMA models will pay huge dividends – see the short statsmodels sketch after this list.

  • Error Correction Models (ECMs). You may never have need to use Error Correction Models, but they are useful for incorporating the transient departures from a long-term equilibrium. I found these University of Oxford summer school lectures notes given by Prof. Robin Best from Binghamton University (SUNY) a very good and accessible introduction to error correction models. The lecture notes also give excellent explanations of the concepts of integration and co-integration in time-series analysis. I used these lecture notes when I was having to develop an ECM for a long-range stress-testing model.
  • Neural Network and other Machine Learning techniques: Neural networks have been applied to time-series for a long time, but being blunt, until the last 7 years or so they weren’t very good and didn’t outperform the classical time series approaches (in my opinion). In part that was probably because machine learning practitioners tended to view time-series analysis and forecasting as ‘just another prediction problem’, and so the approaches didn’t really take into account the time-series nature of the data, e.g. the auto-regressive structure, the very things that make time-series what they are. Coupled with the fact that the classical time-series analysis approaches have a very solid theoretical underpinning in ARIMA (expressed in terms of the lag or backshift operator), this meant that machine learning approaches didn’t make as many inroads into time-series analysis as they did in other fields. However, with the advent of Deep Learning approaches using Recurrent Neural Networks and LSTM units, machine learning approaches to time-series analysis have really begun to make their mark. Models such as the DeepAR model from Salinas et al are now considered to outperform classical approaches for specific tasks. Hyndman’s book, “Forecasting: Principles and Practice” contains a chapter on machine learning approaches to time-series analysis, but in my opinion it is only very basic. The extensive review article by Petropoulos et al has a section on ‘Data-Driven Methods’ that includes sub-sections on Neural Networks and also Deep Probabilistic Forecasting Models. However, given the comprehensive nature of the whole review article these sub-sections are necessarily short. Other more extensive resources that I have found useful, which also cover the Deep Learning approaches include,
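To complement the ARIMA bullet above, here is a minimal statsmodels sketch covering the identification and fitting steps (the AR(1) toy series is made up; note how the ADF and KPSS tests have opposite null hypotheses, echoing the caveat in the Integrated bullet about Phillips-Perron versus KPSS):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.tsa.arima.model import ARIMA

# Toy AR(1) series purely for illustration
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()
y = pd.Series(y, index=pd.date_range("2015-01-01", periods=200, freq="MS"))

# Unit-root tests: the null hypotheses point in opposite directions.
# ADF: null = the series has a unit root (is integrated).
# KPSS: null = the series is stationary.
adf_stat, adf_p, *_ = adfuller(y)
kpss_stat, kpss_p, *_ = kpss(y, regression="c", nlags="auto")
print(f"ADF p-value: {adf_p:.3f}, KPSS p-value: {kpss_p:.3f}")

# Fit an ARIMA(p, d, q) model - here (1, 0, 0), i.e. a pure AR(1) - and forecast.
res = ARIMA(y, order=(1, 0, 0)).fit()
print(res.summary())
print(res.forecast(steps=12))
```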

That is it. I hope you have enjoyed this series of posts on forecasting. As with anything in Data Science, forecasting isn’t a spectator sport. The best way to learn is to download some datasets and start playing. You will make mistakes, but that is how you learn.

© 2022 David Hoyle. All Rights Reserved.

Part 2 – What does the future hold? : Using forecasting in a commercial environment

<TL;DR>

This is part 2 of 3 about producing forecasts in real-world situations. Part 1 was more about the ‘what’ of forecasting, and specifically about different forecast horizons and how those different horizons shape how you do the forecast and what you can do with it. Part 2 is advice to help you avoid common mistakes when producing a forecast.

There are multiple distinct stages to producing and using a forecast. At the simplest level, we can list these different stages as,

  1. Planning a forecast.
  2. Executing a forecast.
  3. Assessing a forecast.
  4. Taking actions or decisions informed by a forecast.
  5. Updating a forecast.
  6. Deploying a forecasting process.

I have found that the overwhelming majority of mistakes I’ve made, or seen made, are in the planning stage of producing a forecast. In fact, mistakes that I’ve seen in many of the later stages can ultimately be traced back to a failure to plan appropriately. That is, mistakes were made and spotted in one of the later stages, but if we’d thought about it properly, we could have anticipated that the error or issue would occur due to the way the forecasting process had been planned. This introduces our main takeaway,

Put a lot more time and effort into planning the forecasting process than you were initially going to do

</TL;DR>

Let’s get started. Since the majority of errors I’ve seen (and therefore opportunities for learning) are in the planning stage, that is where I’m going to focus most of my discussion. In fact, I am going to simplify the 6 stages outlined above to just 3 broad areas of discussion,

  1. Mistakes to avoid when planning a forecast
  2. Mistakes to avoid when executing a forecast
  3. Mistakes to avoid when assessing a forecast

In each of those broad areas I’ll introduce a couple of common issues or mistakes that tend to get made, and also provide some hints on how to solve the issues or avoid the mistakes – the issues and solutions will be underlined to highlight them. I’ll also drop in a couple of real-world examples where I’ve seen these mistakes made, or where I made them myself.

Planning

Model Scope:

  1. Issue: The initial information supplied to you is never sufficient to perform an appropriate forecast. This is an unwritten first rule of forecasting1.

    There will always be important/crucial things the person requesting the forecast has not told you – out of ignorance or absent-mindedness. This is the time to ask those extra questions, such as,

    1. Why do you need the forecast? What problem are you actually trying to solve? More importantly, what decision are you trying to make using the information the forecast will give?
    2. How are you going to consume the forecast? Is it for insight – identifying the drivers that have the biggest impact on a medium-range outcome? Or is it for strategic planning? Or is it to be incorporated into a machine learning pipeline with an action automatically determined from the result of the forecast, e.g. changing an offer to a segment of customers.
    3. What is the forecast horizon over which you need the forecast?
    4. At what level of time granularity and segmentation do you need the forecast?

    Solution: The answers to questions 1 – 4 above are inter-related, i.e. the answer to one may uniquely determine the answer to one of the others, but you should still ask each of those questions individually.

    By understanding the scope of a system and how the forecast output is actually going to be used we avoid errors such as, failing to identify when the use-case does not justify the time and effort to develop the proposed forecasting model. A good example of this I’ve seen was a model developed for a national social housing charity that needed to predict the future costs of repairs to its housing stock. Due to various operational sensitivities, actions off the back of this prediction could only be taken at the regional level – the charity only needed to predict what the next month’s total repairs costs would be for each region in the country. But …the solution that was built used xgboost to predict the likely repair costs for each house in a region, given details about each house, and then simply aggregated the total predictions to regional level. Over the time horizons being forecasted the housing stock in each region was stable, so an equally accurate forecast could be obtained by using just the actual historical total monthly repair costs. As the historical total monthly repair costs just displayed seasonality and trend, a simple piece of SQL gave a prediction as accurate on a holdout sample as the xgboost based model.

Model Inputs:

  1. Issue: Will you actually know all the input values and model parameter values at forecast time? Check that the values of all the exogenous variables will be known ahead of running the main forecasting model. If the exogenous variables you’ve used in your model include macro-economic quantities, their future values will not be known and you will either need a separate forecasting model for these, or their values will need to be part of the forecast scenario specification. This may be what you intended all along, but you’d be surprised how often someone builds a forecasting model and only afterwards realizes the challenges in specifying the input variables.

    This problem can occur even in seemingly benign situations. For example, one mistake I’ve made in the past is using a set of dummy variables to model cohort fixed effects for a model of default rates in a loans portfolio. The only problem was that the loan book was still open – new cohorts were still coming onto the loan book – so to forecast the future default rate of the portfolio required assigning fixed effects to future, as yet unobserved, cohorts. In this instance we chose to make assignment of the future cohort effects part of the scenario specification – scenarios designated future cohorts as ‘high risk’, ‘medium risk’ or ‘low risk’ with the effects values being calculated from an appropriate centile of the historic cohort effects estimates. An alternative approach might have been to treat the cohort effects as random effects and when forecasting marginalize over the random effects of future cohorts. However, the two takeaways from this are, i) when planning a forecast model, think ahead to when you’re going to use it to produce the forecast and make sure you know how you’re going to obtain the input variables, ii) be cautious about including fixed cohort effects when producing forecasts for a changing cohort mix.

    Solution: Mentally run through the forecasting process in your head, or on paper, before you start estimating your models. This will flush out issues with the model form or forecasting technique before you have committed to building them.

  2. Issue: Variables/features not included in the model in a sensible or correct form.

I’ve seen a model built by a marketing team that predicted the response to marketing activity. It was suspected that weather had an impact on how effective the marketing was. This was not an unreasonable hypothesis – when the summer weather is hot and sunny (not that common in the UK) most people want to be outside, in the park or at the beach, not paying close attention to some TV advert. The only problem was that ‘weather’ had been included in the predictive model as the average monthly temperature across the whole of England. There are so many things wrong with this,

  • The temperature in England on a single day can vary hugely from one place to another. It can be sunny and 25°C in London whilst it is raining and 15°C in Manchester and hailing and 10°C in Newcastle. The England-wide average is a meaningless feature to try and reflect how likely anybody in a specific geography is to respond to marketing. The people in London will be relaxing in the park, ignoring the TV adverts, whilst people in Newcastle will be putting an extra jumper on and hunkering down in front of soap re-runs on the TV.
  • In a similar vein, the use of a monthly average temperature is pointless. It was believed that the impact of weather on the marketing effectiveness was because of un-seasonal sunny weather over a few days. A monthly average will not reflect this. The monthly England-wide average temperature will reflect just seasonal patterns, not the particular effect the stakeholder was trying to understand.
  • Temperature is the wrong feature to use here. Hours of sunlight may be better, since the hypothesis was that it was the un-expected very sunny summer weather that was reducing the effectiveness of the marketing. Even better, a feature that captured the presence of unusually hot, dry weather on summer days would be preferable to include in the model. Note that we have now moved from discussing temperature to talking about ‘dry’ summer days, i.e., the absence of any precipitation. When including weather effects in a model it can even be crucial to think what form of precipitation is relevant here. A former colleague told me about some work he’d done for a British mobile phone operator. The mobile company was interested in the impact of weather as they’d noticed that call volumes increased sometimes when there was precipitation. The analysis revealed that, yes, precipitation had an impact, but the form of the precipitation is hugely important. If it’s raining the impact is small, but lower the temperature so that the precipitation comes in the form of snow and the call volumes spike – everybody is phoning home or phoning work to say they are delayed because of snow-blocked roads or trains not running because of ice and snow on the rails. The lesson here is that the precise way in which weather affects our outcome of interest needs to be understood.
  • Lastly, weather is a prime example of a variable that we won’t automatically know when it comes to producing the forecast. We may have a brilliant forecasting model, but we need to forecast the specific weather feature as well and we may not be able to do that accurately enough to get the benefit of our main forecasting model.

It was suspected that the person who built the model I’ve described just threw ‘weather’ into the model because they were told to. They simply got hold of the easiest or most accessible single weather variable they could find.

Solution: Spend time thinking through the form or particular variant of the feature you are putting into your forecasting model. Will it actually be capable of reflecting the actual effect you are trying to capture in your model? Is it at the right temporal and spatial granularity to be able to do that?

  3. Issue: Insufficient length of training data. Make sure the length of historical data you use to build your forecasting model is sufficiently long. How long is ‘sufficiently long’? Paul Saffo in this 2007 Harvard Business Review article on the 6 Rules For Effective Forecasting says that you should “look back twice as far as you look forward”. I would say you should look back even further. The point here, however, is not to give a precise rule for how much historical data you need for a given forecast horizon, but more to emphasize that the length of historical data you should use is always longer than you think. It should be long enough to display several examples of the phenomena you need to capture with your model. To give an example – if you are modelling the effects of macro-economic climate on a financial metric, e.g., loan default rates, then you will want to include several business cycles and, more importantly, several recessionary periods in your historical training data. How far back in history is still a matter of subjective judgement – for example, was the recession resulting from the 2008 financial crash typical or atypical of the dynamics you want to model and forecast? This highlights two points, i) how far you go back in history can require a detailed discussion and review of the historical data – it is not simply, ‘let’s just include the last two recessions’, ii) you need to have a good idea of what sort of phenomena and/or dynamics you need to model for your forecasts to be representative of the scenarios you are trying to understand. Use the wrong data and the usefulness of your forecasting model may be short-lived. For long-range forecasts you’ll never really know in advance the full range of phenomena that your model needs to capture, as unpredictable and impactful phenomena are always capable of arising within the forecast horizon of a long-range forecast – what are called ‘Black-Swan’ events in parts of the popular science literature. In Part 1 we explained that by considering a very wide range of scenarios we mitigate against this, to a degree. But it means that a model used for long-range forecasting has to have captured dynamics and behaviour appropriate to a very wide range of phenomena – and that can require an awful lot of historical data. Anecdotally, I’ve seen that the length of training data required increases super-linearly with the length of the forecast horizon.

Solution: Think about the kind of phenomena or scenarios you want your model to be capable of forecasting. Does your training data contain adequate examples of such phenomena or scenarios? If yes, then you probably have enough training data. If no, then get additional appropriate training data, or shorten your forecast horizon (see Part I for why you should do this).

Model Form:

  1. Issue: Using a complex or unusual generative modelling technique and believing you can just include the predictive features into the ‘model’ as you would when building a linear model or GLM.

    I have seen an agent-based model used to attempt to forecast and identify emergent phenomena that it was believed would occur in response to a macro-economic change or shock. The agent-based simulation was used to mimic the microscopic interactions between the agents and their external environment – the general economy. A macro-economic variable, I think it was unemployment rate, was directly coupled to each agent’s propensity to spend. Lo-and-behold, when the unemployment rate increased the forecast showed that the total expenditure in the system went down. This was hailed as a new finding, showing emergent behaviour. No! It just reflected how the macro-economic variables had been included in the modelling. At this point over 12 months (at 3 FTE, I believe) had been expended on this project.

    How the exogenous influences are coupled to a forecasting technique is critically important if we want to identify genuine emergent phenomena. Genuinely emergent phenomena are typically a global property of the system, often resulting from a global constraint. For example, in a physical system it could be a requirement to minimize the overall free energy of the system. In a financial system it could be a requirement to maximize total profit. Ideally, we should think about how the exogenous influences interact with these global constraints when including the exogenous variables in our modelling. If instead the exogenous variables are coupled directly to a metric we will later measure, we should not be surprised when that metric changes when the exogenous variables do.

    Solution: The more complex the technique used, the more you’ll need to think about how you put the predictive features into the ‘model’.

  2. Issue: Computational optimization of forecast model outputs will exploit the weaknesses in your model. Be aware of the potential future downstream uses of your forecasting process. The forecasting technique you’ve used may not be robust enough to support likely (and anticipatable) downstream uses. It is likely you’ve set up your forecasting process as an automatable, reproducible codebase. That codebase can then be included in a downstream automated process, such as finding the optimal value of one of the actionable input variables. The optimization process will optimize the output of the forecasting model and because forecasts are, almost by definition, ‘out-of-sample’, there is the potential for the optimization process to drive the model to a region of the input space where the model output is non-sensical. This is because the optimization process does not know any better – we have not ensured that the forecasting model output has sensible and credible behaviour for all scenarios or for all sets of input values. To do this we need to build sensible structural constraints or principles into our forecasting model. Such constraints or principles usually come from domain knowledge, e.g. from economic principles when building forecasting models that include macro-economic inputs. These constraints or principles represent assumptions – we are assuming that our system of interest will or should obey classical economic principles. If those assumptions are incorrect, we will be guilty of producing a biased forecast. How do we know when to include such constraints or principles? We don’t know precisely, but we can think before constructing the forecasting model whether the benefits of including them outweigh the disadvantages, and we are forewarned as to the potential bias. The main point here is, again, think and plan.

    Solution: Understand if you will always be in control of the uses of your model. If not, then think whether your model needs to be robust to use-cases you can’t control.

Model Estimation:

  1. Issue: The model estimation process is not set up to reflect what you’re actually trying to model. Use a cost-function that reflects the outcome you care about. When fitting a forecasting model, we will typically be minimizing some cost-function. Choose the cost-function appropriately. If forecast accuracy is going to be assessed using a different cost-function you may want to rethink the cost-function you use for fitting. Or in simple terms, fit your model to optimize the outcome you actually care about.

    Solution: Think through each step of the proposed estimation process. Is it ideal for the thing you’re trying to capture?

Execution

  1. Issue: How do you gauge whether your forecast is credible? Always run a baseline calculation or baseline scenario. For a short-range forecast you may have characterized very precisely the scenario you wish to predict, but your baseline scenario can still be something like a Business-As-Usual (BAU) scenario. For long-range forecasting, you can also use a BAU scenario as your baseline scenario but the definition of BAU may be more subjective and contain significant movement in exogenous influences – although it probably won’t be what you consider to be the most extreme scenario. The main benefit of running a baseline scenario is that it allows you to compute realistic ‘deltas’ even if there are far-from-perfect assumptions in your forecasting methodology. Remember the quote from George Box – ‘All models are wrong, but some are useful.’ The skill as a Data Scientist/Statistician is in knowing how to extract the useful insight and information from a ‘wrong’ model. With a baseline calculation you can compute how much worse or better the outcome is under scenario X compared to the baseline scenario. As a human you may then have a feel for what is incorrect in the baseline calculation and correct it or down-weight it. The model based estimate of the delta between scenario X and baseline scenario can then be applied to the human corrected baseline.

    Solution: Running a baseline scenario where you have a more intuitive feel for how the system responds will help you assess any other scenario.

  2. Issue: We have a forecast but not a quantitative measure of how confident we are in it. Always produce measures of uncertainty, e.g. confidence intervals, for your forecasts. There is limited value in just a point estimate. How sensitive is that estimate to the stochastic component of the response variable dynamics? Then on top of that we have uncertainty in the forecast due to parameter uncertainty and potentially also input uncertainty. Sensitivity analysis can help us quantify the impact on the forecast from both parameter uncertainty and input uncertainty, so that we can identify which we need to improve most. Don’t assume that just because values of exogenous variables have been specified for the forecast scenario that they are accurate. Forecast scenarios that are, upfront, specified very precisely can still be mis-specified or specified inappropriately, or even subject to change – it is not unusual for a company to execute a different BAU scenario to what they said they would at the time the forecast was produced.

Solution: Quantify the impact on a forecast from all the major sources of uncertainty. Doing so is essential for framing and qualifying any decisions or actions taken on the back of the forecast output. A minimal sketch of producing forecast intervals is given after this list.

  3. Issue: Distinguishing outputs from different forecasts gets messy when you have lots of them. Create a system for time-stamping forecast outputs, the associated input data and meta-data, and also the codebase used to produce the forecast. This creates a disciplined process for running, re-running, and changing forecasts whilst knowing which run produced which forecast. You’ll be surprised how often you’ll end up asking yourself questions similar to, ‘now, did I include 5 or 6 years of training data, and was it a 2-month gap between the end of the training data and the beginning of the forecast horizon?’ Set up a system to accurately capture what each forecast used, did, and what it was about.

    Solution: Set up a system from the outset for timestamping and logging your forecast outputs along with the details on the inputs.
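As mentioned in the uncertainty item above, here is a minimal sketch of producing forecast intervals rather than just point forecasts, using statsmodels (the random-walk toy series and the ARIMA order are placeholders, not a recommendation):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Toy series standing in for your historical response variable
rng = np.random.default_rng(1)
y = pd.Series(np.cumsum(rng.normal(size=120)),
              index=pd.date_range("2012-01-01", periods=120, freq="MS"))

res = ARIMA(y, order=(0, 1, 1)).fit()
fc = res.get_forecast(steps=12)
intervals = fc.conf_int(alpha=0.05)  # 95% forecast intervals

summary = pd.DataFrame({
    "point_forecast": fc.predicted_mean,
    "lower_95": intervals.iloc[:, 0],
    "upper_95": intervals.iloc[:, 1],
})
print(summary)
```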

Assessment

  1. Issue: How do you assess the accuracy of your forecasting method if the inputs may also be uncertain? If your forecasting model includes an input feature that itself is forecasted, then always perform holdout tests on your model with and without perfect hindsight.
    1. Testing with perfect hindsight is when we perform the holdout test using the actual observed values of all the input features.
    2. Testing without perfect hindsight is when we perform the holdout test using the forecasted values of any input features that need to be forecasted when actually running in production.

    The clear value of performing the two different versions of the holdout tests is that it helps identify where the biggest bottleneck in forecast accuracy is. There is no point in trying to further improve a main forecasting model that is already accurate, say over a 1 year time horizon, if it is only accurate when we know all the inputs precisely and our forecast of one of the input features is woeful – put the effort into improving the feature forecasting model.

    Solution: Assess holdout forecast accuracy with and without perfect hindsight on the input variables.

  2. Issue: How do you know whether your complex forecasting technique is adding any value? Always run a baseline technique when assessing holdout accuracy. This is true of any use of machine learning. When building predictive classification models we often build a simple classifier, such as a naïve Bayes classifier, to provide a baseline against which to judge our more complex and sophisticated models and to check that the extra complexity is warranted. Similarly, when producing a forecast for a scenario using our chosen forecasting technique, we should include the forecast from a much simpler technique, such as exponential smoothing. In fact, this commentary, in the International Journal of Forecasting, on the recent M5 forecasting competition suggests 92.5% of the time you won’t beat a simple exponential smoothing model.

    Solution: Include a simple baseline forecasting technique (see the short sketch at the end of this section).

  3. Issue: Over confidence in the forecast output.

    Another of my favourite quotes from the famous statistician George Box,

‘Statisticians, like artists, have the bad habit of falling in love with their models.’

George Box

I have been guilty of this myself. We become blind to the possibility that the output from a model can still be garbage even when we have provided high-quality input data. We believe that because we have circumvented the ‘garbage-in, garbage-out’ issue we must have a credible forecast because of the elegance and sophistication of the forecasting technique we have used. We have become seduced by the elegance of the forecasting technique. We have forgotten that ‘all models are wrong’. Well, if a model can be wrong, it can be completely and utterly wrong, and we should remember that. The first rule of forecast assessment – always doubt your forecast. Look at the forecast and see if you can explain to yourself why the forecast has that shape given the inputs.

Solution: Always be prepared to doubt and, if necessary, overrule your forecasts.
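As referenced in the baseline item above, here is a minimal sketch of a simple-exponential-smoothing baseline on a holdout period (the toy series and the MAE metric are my own choices; swap in the accuracy measure you actually care about):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Toy weekly series standing in for your data; y_train/y_test form the holdout split
rng = np.random.default_rng(2)
y = pd.Series(50 + np.cumsum(rng.normal(size=150)),
              index=pd.date_range("2018-01-07", periods=150, freq="W"))
y_train, y_test = y[:130], y[130:]

baseline = SimpleExpSmoothing(y_train).fit()
baseline_fc = baseline.forecast(len(y_test))

mae = np.mean(np.abs(y_test.values - baseline_fc.values))
print("Baseline MAE:", mae)
# Compare this against the holdout MAE of your more sophisticated model;
# if it does not clearly beat the baseline, the extra complexity is not warranted.
```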

That’s Part 2. I hope it has been helpful. In Part 3 I’ll list some forecasting learning resources and tools that I’ve found useful.

Footnote 1: I seem to recall seeing this rule written in a blogpost or paper somewhere, but I’m unable to locate it. If anybody is aware of an original source for it, please let me know.

© 2022 David Hoyle. All Rights Reserved.

What does the future hold? : Using forecasting in a commercial environment

TL;DR: Forecasting is a process, not just a forecasting model. The overwhelming majority of textbooks will teach you how to build a particular type of forecasting model, but not how, when and where to use a forecasting process. These things are often learnt only through experience. In this series of blog posts I detail what I have learnt about building forecasts and the forecasting process through 10 years of commercial Data Science roles. The main takeaways – before you build any forecasting models, think long and hard about why you need a forecast, what you are going to do with it, and at what granularity and over what time horizon the forecast is needed – long-range forecasting is very different from short-range forecasting. You’ll always need some human involvement in the forecasting process, even when using automated short-range forecasts, where it is still advisable to include a human oversight step in the decision-making process. The longer the range of the forecast, the more human involvement is advisable.

Why forecast?

Almost all my Data Science roles in the commercial sector have been focused on some form of forecasting – from my first role outside of academia, where I was building long-range ‘stress-testing’ models for the UK’s largest retail bank, through building models that predicted the website clicks for AutoTraderUK in response to a TV advertising campaign, to the demand models I build at dunnhumby to forecast demand for the world’s largest grocery retailers. The focus on forecasting is perhaps no surprise. The ultimate use of the models in business is to help optimize some aspect of the business, be it helping determine the correct Tier 1 capital required to underpin the bank’s risk-weighted-assets, or to determine the best mix of TV channels and timings given a TV marketing budget, through to determining the optimal prices for products in a supermarket category. In all these examples it is the future performance of the business that we want to optimize. The use of forecasting models for business optimization is very much at the ‘prescriptive’ end of the Gartner analytics ascendency staircase. Businesses that use Data Science and ML models in this way are attempting to influence the future towards an outcome that is beneficial for them.

Why the need for this series of posts?

Not all businesses do use Data Science and Machine Learning in this way, or are able to, and so are less in control and more subject to the random winds of chance. Businesses that use forecasting models to optimize business operations tend to be both data and analytics mature. Typically, they have been using analytics in this way for a long time. It is not a new endeavour for those businesses. For other businesses that are new to forecasting there will be a temptation to believe that learning to forecast just requires learning the various forecasting techniques. During my various commercial roles I obviously had to learn the technical details of various forecasting techniques – ARIMA, Holt-Winters, etc. BUT….this article is not another introduction to how to use those various techniques to build models – there are plenty of excellent textbooks and online educational resources that will show you how to do that better than I can1. Instead, this is a series of blog posts about what I have learnt about forecasting along the way. Things which typically aren’t explained in the technical textbooks or technical online articles. Some of these things I’ve learnt the hard way – by making mistakes. Other things I have learnt after the forecast models have been built – when the real challenges of utilizing the models for the actual use case emerge.

Overall, the focus is on how to use forecasting, not how to build specific forecasting models. It will be on understanding what forecasting can do, where you should use it, what it can’t do, and how to get the best out of the forecasting process. I’m going to break this down into a series of 3 posts,

  • Part 1: What is forecasting? What can forecasting be used for?
  • Part 2: How to organize a forecasting process. What to do and what not to do.
  • Part 3: Links to resources and further reading.

Part 1

What is forecasting?

In this era of machine learning and AI, can’t we just regard forecasting as another form of prediction, and construct and interpret forecasting models like any other machine learning model? The answer is no.

Forecasting vs Prediction

The high-level distinction between forecasting and prediction is the temporal element. When we forecast we are extrapolating into the future. When we build a predictive model we are usually interpolating within the training set from which the model has been built.

The term ‘projection’ is also used when talking about forecasting. Some organizations, such as the Intergovernmental Panel on Climate Change (IPCC), refer to a projection as a forward-looking prediction under a particular scenario, whilst a forecast is the projection (scenario) that is considered most likely.

This also highlights that forecasts acknowledge the inherent element of uncertainty within them. Nate Silver in his book, ‘The Signal and the Noise’ notes that some fields such as seismology strongly emphasize this aspect, distinguishing,

A prediction is a definitive and specific statement about when and where an earthquake will strike…whereas a forecast is a probabilistic statement, usually over a longer timescale.

Nate Silver

Nate Silver states that, ‘The United States Geological Survey’s official position is that earthquakes cannot be predicted. They can, however, be forecasted’ – more details from the USGS here.

Recognizing the importance of the probabilistic nature of forecasts, many modern forecasts are now built directly from probabilistic models, with the uncertainty communicated, for example visually through the use of fan charts.

It’s about time

The temporal element of forecasting is key. It impacts two important aspects of any forecasting model we construct – i) The nature of the variables used in the forecasting model, ii) The time-horizon over which we forecast and what we can use those forecasts for. Let’s look at those two aspects.

Endogenous vs exogenous factors

The temporal element of forecasting means it naturally involves trying to model and/or understand how a system evolves. The factors that influence that evolution can be internal to the system itself – what we call endogenous factors. These are variables that are determined or created by, or emerge from, the system itself. An endogenous variable could be as simple as the lagged response variable itself. Other factors that can influence a system’s evolution are external to the system – what we call exogenous factors – such as the broader macro-economic climate when modelling the short-term dynamics of demand for goods or services in a small geographical region.

Forecast horizon

There are multiple temporal components/dimensions/concepts we may need to consider when building a forecasting model,

  • The time-period used to train a forecasting model.
  • The time-period over which the forecasting model is tested.
  • The temporal granularity at which the forecasts are made, e.g. daily, weekly, monthly, etc.
  • The time increments we use when advancing training/testing windows during the evaluation of the forecasting model.
  • The time increments we use that set the frequency of the forecasting process when deployed.
  • The time gap between when the forecasts are made and the date of the first forecast period, i.e., the gap between when the forecasts are made to when they are used.
Figure 1: Some of the different temporal concepts involved in defining a forecast.

Some of these concepts are illustrated in Figure 1 above, but perhaps the most important temporal component, and the one I want to focus on, is the length of the forecast horizon – how far into the future are we attempting to forecast? That is, are we making forecasts for the short-term, medium-term, or long-term? The forecast horizon is strongly linked to what a forecast model can be used for (or should be used for), and how it is used. More specifically,

  • The appropriateness of different forecasting models and techniques is different over the different horizons.
  • The accuracy of a forecasting model is different over different horizons.
  • The factors or variables that influence the response variable being forecasted differ over different horizons.
  • Even how the system being forecasted responds or evolves can be different over different horizons.

The net effect of all this is that the uses of forecasting are and should be different over different forecast horizons. So how do we define the forecast horizon? What defines a short-term horizon, versus a medium-term or long-term horizon? Ultimately, those concepts should be defined in terms of the characteristics or response of the system being forecast, and not the forecasting technique used.

However, there is no universally agreed definition of short-range versus long-range, as this discussion on CrossValidated testifies to. Below I’ll give my own definition and discuss in detail what distinguishes a short-range forecast from a medium-range or long-range forecast. As well as giving a definition based upon the dynamical characteristics of the system being forecasted and the factors that influence it, I’ll also give a second practical definition based upon how we intend to use the forecasts.

Short-range forecasts:

At a trivial level a short-range or short-term forecast is a forecast of the system we are interested in, but over a short period into the future – yes, a very unhelpful definition. So, what precisely defines short-term? It is more helpful to realize that what we mean by ‘short-term’ can vary from system to system. By short-term, we ultimately imply that we expect the behaviour of the system in the immediate past to be a reasonable guide to its behaviour in the short-term future – we don’t expect to frequently see massive changes in the level of the response variable, and the recent historical values of the response variable alone can enable us to produce a decent forecasting model. We are in the realm where ARIMA models do well. Over a short-term horizon the influence of exogenous variables has not yet begun to kick in, primarily because the important exogenous variables have not changed significantly – they are effectively constant – on these short-term timescales. For a high-street retailer interested in forecasting how many items they will sell, short-term may mean a few days ahead and up to two weeks ahead, whilst for a financial trader involved in ultra-high frequency trading, short-term is measured in milliseconds and up to only a second or so. For a system whose dynamics are almost entirely endogenous, or systems whose exogenous influences evolve on timescales of years, e.g. climate systems, then short-term can be measured in multiples of years.
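For instance, a short-range forecast built from recent history alone might look like the following sketch, which fits an ARIMA model with statsmodels to a simulated daily sales series and forecasts two weeks ahead. The series and the (1, 1, 1) order are arbitrary choices for illustration, not a recommendation.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# two years of simulated daily sales: slow trend plus autocorrelated noise
noise = np.zeros(730)
for t in range(1, 730):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(scale=5.0)
sales = pd.Series(200 + 0.05 * np.arange(730) + noise,
                  index=pd.date_range("2020-01-01", periods=730, freq="D"))

# fit a purely endogenous ARIMA model and forecast 14 days ahead
model = ARIMA(sales, order=(1, 1, 1))
result = model.fit()
forecast = result.get_forecast(steps=14)
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))   # 95% prediction intervals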

The fact that over a short-term horizon any exogenous influences may not vary illustrates that over different forecast horizons the dynamics of our system can be very different. Over short timescales the dynamics are endogenously controlled; over long timescales the dynamics are exogenously controlled.

In complex systems the dynamics over short and long timescales may be different for reasons other than just how the forecast horizon timescale compares to the timescale on which the exogenous influences vary. In complex systems we are likely to have multiple endogenous timescales defined or emergent. This is particularly true for economic systems and we see it in how those systems respond to a shock or a perturbation. Economists have a rule of thumb – in the short-term people are price inelastic (price-insensitive)2, meaning that after a change to an economic system, e.g. supply chain shocks, consumers may not have had time to adapt their behaviour to the new circumstances or the new prices/information available, and so still purchase in a similar manner to before even though prices may have risen. Over longer timescales, people adapt their behaviour – they find cheaper substitutes for the now more expensive item they used to purchase, or they find cheaper suppliers – and so consumers become more price-sensitive over longer timescales.

The short-term horizon is defined by the shortest timescale process that has an appreciable/relevant influence on the response variable. In our system of interest there may be both exogenous and endogenous processes, and so timescales defined exogenously and endogenously. In a complex system, we will have multiple endogenous timescales, possibly varying by orders of magnitude.

This economic example also highlights that in complex systems and over long timescales we should probably regard everything as ultimately being endogenous due to the degree of inter-connectedness of the various sub-systems. Or in other words, no sub-component of the complex system can be considered on its own as a closed system or independent of other sub-systems, and we should always study the complex system as a whole – but very likely with a lot of simplifying assumptions.

Over the short-term we would expect a forecast to be accurate, or rather capable of being accurate. This doesn’t mean a short-term forecast can’t be massively inaccurate; we could have an extraordinary event or perturbation occur after the forecast was made, i.e., an assumption that we are forecasting a stable system (a stationary process in statistical language) may turn out to be incorrect due to circumstances that could not have possibly been foreseen – think of a retailer making supply chain forecasts four or six weeks prior to the stock-piling panics that occurred as a consequence of the first Covid-19 lockdowns. Or it may simply be the case that the short-range forecasting model has been poorly built.

Putting unforeseen circumstances and model building competence aside, we would expect a short-range forecast to be reasonably accurate. It can be used for making accurate predictions for very specific scenarios. In contrast, a long-range forecast cannot. This gives us a second, practical means of defining the forecast horizon. Practically, short-term means the time horizon over which we can use the forecasting model for making detailed specific predictions. Note the emphasis on the word ‘use’. The use-case/business model will define the level of accuracy we require and so can effectively define what is short-range and what is long-range. The recent example of Zillow, which exited from US house price forecasting, is a case in point – see here, here, and here for more detailed discussions. Zillow was using forecasting models over time horizons for which the accuracy was not sufficient to support the particular business model. Zillow was effectively relying on long-range forecasts for detailed predictions, even though the time horizon of six months ahead may have appeared to be short-term.

The Zillow example again illustrates the difficulty in forecasting complex systems such as markets, particularly if actions taken on the back of the forecast are intended to be part of the market making process. It highlights that perhaps for complex systems we should regard almost all forecasts as long-range.

Medium-range forecasts:

As you might have guessed, we can define a medium-term forecast as a forecast over a horizon in which any exogenous influences begin to show significant variation. This is also true for a long-range forecast, but for a medium-range forecast horizon we may have a reasonable idea of what the values of those exogenous influences will be, or we may even be in control of them – for example, they may correspond to actionable variables such as marketing activity variables3. For typical business use-cases medium-range can mean anything from 3–6 months out to as much as 18 months in the future.

Because exogenous influences start to show significant variation, they can’t simply be absorbed into the intercept of any model, and the typical modelling techniques used are of the form, ‘technique A + X’, meaning that we include the exogenous variables X much like we would when building a standard regression model. Over the medium-term techniques such as ARIMA+X and SARIMA+X are useful.
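A minimal sketch of the ‘technique A + X’ idea, using statsmodels’ SARIMAX with a single exogenous regressor. The marketing-spend variable, its assumed future values, and the model order are all invented for illustration; the point is that a medium-range forecast needs the exogenous inputs supplied explicitly.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n = 36   # three years of monthly observations

# hypothetical exogenous driver: monthly marketing spend
marketing = pd.Series(10 + 2 * rng.random(n),
                      index=pd.date_range("2019-01-31", periods=n, freq="M"))
demand = 50 + 3.0 * marketing + rng.normal(scale=2.0, size=n)

# ARIMA+X: ARIMA dynamics around a regression on the exogenous variable
model = SARIMAX(demand, exog=marketing, order=(1, 0, 0))
result = model.fit(disp=False)

# a medium-range forecast needs assumed future values of the exogenous input
future_marketing = pd.Series([12.0] * 6,
                             index=pd.date_range("2022-01-31", periods=6, freq="M"))
print(result.forecast(steps=6, exog=future_marketing))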

Long-range forecasts:

In contrast to the definitions of what constitutes short-range and medium-range forecasts, a long-range forecast is a forecast over a time-horizon in which exogenous factors have a significant influence and display significant variation. This could be, for example, macro-economic factors evolving through several business cycles. For the stress-testing models I had to build we were interested in forecasting bank-loan default rates with the unemployment rate and central bank base-rate as inputs into the forecasting model. The models were used to produce forecasts with a 5-year forecast horizon. Future unemployment and interest rates were obviously unknown, and so required their own additional forecasting models to predict them.

This macro-economic example highlights that the exogenous influences are themselves subject to variation that is difficult to know in advance. Long-range forecasts have an additional element of uncertainty that increases the final uncertainty of our end forecasts – namely that we probably don’t know all of the inputs to our main forecasting model to a high degree of accuracy. To a large extent this is to be expected. We are forecasting multiple years into the future. In that time many unforeseen circumstances can play-out, e.g., a referendum to leave a major trading block not going the way many people expected, or a global pandemic occurring.

Forecasting exogenous inputs, which encapsulate the influence of national and international contexts, can only be suggestive at best – a reflection of what we think might happen to those exogenous variables, all other things being stable. But major random, global events do happen. Since we cannot always confidently know what the true future values of the exogenous variables will be, a long-range forecast can only ever be viewed as a ‘what-if’ forecast – what would the loan default rate be if the macro-economic conditions were X? More importantly, a long-range forecast should only ever be used as a ‘what-if’. An individual, specific long-range forecast shouldn’t be used to plan the operation of an organization, or its tactical response to a particular situation.

Does this mean long-range forecasting is useless? No! Far from it! Long-range forecasts won’t tell us what will happen, they tell us what might happen. And so long-range forecasting can be used to help an organization plan strategically. Okay, I hear you say that all forecasts only tell us what might happen, because all forecasts have some uncertainty. What I mean here is that, because the validity of a long-range forecast is dependent on the validity of the input values, we don’t even know if we are looking at an appropriate input scenario. So instead of producing a long-range forecast for a single input scenario, we should always produce a range of long-range forecasts from a range, or ensemble, of input scenarios. The output from an ensemble of long-range forecasts might then reveal some behaviours we weren’t expecting, which the business or organization can plan an appropriate response or intervention to. Or alternatively, an ensemble of long-range forecasts may reveal that a particular output metric is largely insensitive to the input scenario, and therefore although we don’t know which scenario will ultimately play-out, we can be confident we know what the value of the metric will be. In our bank stress-testing example we may see that for a wide-ranging ensemble of input scenarios the long-range forecasts indicate that in all the scenarios considered a bank has sufficient capital to withstand the likely increased loan default rates. The bank executives may be confident that no significant capital needs to be raised to protect the bank against whatever the future holds.
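A sketch of the ensemble idea: take a fitted forecasting model and run it over a set of hypothetical ‘what-if’ exogenous scenarios, then look at the spread of the resulting forecasts rather than any single number. The toy linear default-rate model and the scenario values below are invented purely for illustration.

# a toy fitted model: default rate as a linear function of unemployment and base rate
def default_rate_forecast(unemployment, base_rate):
    return 0.02 + 0.004 * unemployment + 0.003 * base_rate

# ensemble of 'what-if' macro-economic scenarios (hypothetical values, in percent)
scenarios = {
    "benign":      {"unemployment": 4.0,  "base_rate": 1.0},
    "mild_stress": {"unemployment": 6.5,  "base_rate": 3.0},
    "severe":      {"unemployment": 10.0, "base_rate": 6.0},
}

forecasts = {name: default_rate_forecast(**inputs) for name, inputs in scenarios.items()}
for name, rate in forecasts.items():
    print(f"{name:>12}: forecast default rate = {rate:.2%}")

# the spread across scenarios, not any single number, is the useful output
print("range across scenarios:",
      f"{min(forecasts.values()):.2%} to {max(forecasts.values()):.2%}")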

You may scoff at that last example given the financial crisis of 2008, and you may question whether large banks are ever well-prepared for whatever economic future transpires. This may be true, but what it also highlights is that there should always be some discussion around whether the ensemble of input scenarios considered has been wide-ranging enough. Has a big enough stress been applied in the ‘what-if’ scenarios during the stress-testing exercise? This illustrates that long-range forecasting has a high degree of human involvement – to discuss the inputs and interpret the outputs. How successful a long-range forecasting exercise is can depend on how an organization approaches it, and how the human contributions are brought into play – the excellent non-technical book, Uncharted by Margaret Heffernan, discusses these points in depth – I discovered the book through this review by Tim Harford in the Financial Times.

Figure 2: The different forecasting time horizons

Human involvement

The need for significant human involvement in producing some forecasts may be surprising in this era of Data Science and Machine Learning, but direct human involvement in producing forecasts has a long history. Prior to the development of rigorous time series analysis techniques such as ARIMA, this was to be expected. It is interesting to go back and read old articles such as this 1971 Harvard Business Review article on forecasting in business. Putting aside the very gendered language, it is intriguing to see the emphasis upon judgmental forecasting methods and forecasting by analogy. The value of judgmental methods has been re-discovered over the last 15 years or so. Specific techniques such as the Delphi method are still widely used in fields as diverse as public transport planning and health. Techniques such as the Delphi method excel at getting a wide range of opinions and inputs to help reduce the uncertainty in understanding complex, many-layered situations. This is what humans are good at. It is also exactly the situation we often face when making long-range forecasts for complex systems. It is unsurprising then that the ability of humans to handle nuanced, ambiguous and complex scenarios is used in other techniques such as prediction markets and superforecasting.

What can we use forecasting for?

The challenges of making long-range forecasts illustrate that there can be markedly different uses of forecasting. Long-range forecasts are about reducing uncertainty through gaining qualitative and semi-quantitative understanding of what might happen. Short-range forecasts are about quantitative prediction of what we think will happen.

There are also other dimensions that differentiate what we can use forecasting for. Three of these that are worth highlighting are,

  • Insight vs Prediction: This is illustrated well already above by the contrast between short-range forecasts and long-range forecasts, but it is also applicable to short-range or medium-range forecasts on their own. We can use a medium-range forecasting model to make predictions of what will happen in the future, but also to extract insight from the values of the parameters of that model as to the relative influences of the different factors upon that future.

  • Prediction vs Prescription: In my opening paragraphs I highlighted the Gartner analytic ascendancy staircase and how some companies have the data and analytic maturity that enables them to use the computational forecasting models they’ve built in a prescriptive way – they are used to determine the optimal course of action as opposed to merely forecasting the current baseline scenario.

  • Different levels of aggregation: When we build any predictive model we have to decide upon the response variable we are going to model. This typically involves a choice about what level of aggregation we are going to use – should we build models of individual units (e.g. consumers), groups of units (e.g. a cohort of customers), or the entire population/collection of units (e.g. the entire customer base of an enterprise)? Generally speaking, we should model at the lowest level of granularity at which we first expect to see a homogenous response over the time-horizon of the forecasting exercise. Think of the example of modelling the future default rate of a loans portfolio; if the portfolio is made up of a heterogenous mix of different customer (loan) segments whose response to economic conditions differs across the segments, then as we change the segment mix of that portfolio we will get very different forecasts. Modelling the default rate of each homogenous segment separately will allow us to flex that segment mix when exploring different forecast scenarios, whilst modelling the default rate of the portfolio in a single model will not – a minimal sketch of this segment-level approach follows this list.
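Here is that sketch – the segment-level default rates and portfolio mixes are invented, and the point is only that segment-level models let us re-weight the mix in a scenario, whereas a single portfolio-level model bakes the historical mix in.

# per-segment default-rate forecasts (from hypothetical segment-level models)
segment_forecasts = {"prime": 0.01, "near_prime": 0.03, "sub_prime": 0.08}

def portfolio_default_rate(mix):
    """Aggregate segment-level forecasts under a given portfolio mix."""
    return sum(mix[s] * segment_forecasts[s] for s in segment_forecasts)

current_mix = {"prime": 0.6, "near_prime": 0.3, "sub_prime": 0.1}
shifted_mix = {"prime": 0.4, "near_prime": 0.3, "sub_prime": 0.3}

print("current mix:", portfolio_default_rate(current_mix))   # 0.023
print("shifted mix:", portfolio_default_rate(shifted_mix))   # 0.037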

Summary:

Forecasting involves extrapolation into the future. This makes it different to other predictive models you might build – these typically involve interpolation within a training dataset. The granularity at which you model is important. Even more important is the horizon over which you are forecasting. A long-term forecast should only be used for exploring behaviour under a range of hypothetical scenarios, whilst short-term and medium-term forecasts can be used to make detailed predictions about specific and highly likely scenarios. Long-range forecasts inform strategic planning; short and medium-term forecasts inform tactical responses. This means that when forecasting we should identify what kind of forecast we want and only then choose our forecasting technique appropriately.

In the next part of this series of blog posts I will cover some of the do’s and don’ts of forecasting. It won’t be about how to use a particular forecast model building technique. It will be about the common mistakes made – including ones I’ve made or I’ve seen made – so that you can avoid them (in the future).

Footnotes

  1. This very recent review by Petropoulos et al (to appear in International Journal of Forecasting) gives both a comprehensive coverage of the different forecasting techniques available and also a comprehensive set of case studies. The case studies illustrate the practice and challenges of forecasting in individual sectors and so touch in part on some of the issues I’ll be discussing. I’ll also be aiming to give broad general advice (not sector specific) on the practice of forecasting.
  2. See for example, Milgrom, Paul and Roberts, John. “The LeChatelier Principle.” American Economic Review, March 1996, 86(1):173-179.
  3. For the purposes of this blog and for simplicity I’m going to ignore the subtle distinction of whether price and marketing variables in demand models are exogenous or endogenous. I’m going to consider them here as exogenous since they are being imposed or set by the marketer or retailer. However, price drives demand and demand drives the price, so it is common to consider price to be an endogenous variable over longer timescales and within the more complex system consisting jointly of the retailer and the consumer.

Using your own algorithms with hyperparameter optimization in AWS SageMaker

TL;DR

  • AWS SageMaker provides a number of standard Machine Learning algorithms in containerized form, so you can pull those algorithms down onto a large EC2 instance and just run, with minimal effort.
  • AWS SageMaker also provides a hyperparameter optimization functionality that pretty much runs ‘out-of-the-box’ with the algorithms provided.
  • You can run your own algorithms within SageMaker if you containerize your algorithm code.
  • I wanted to find out if it was possible to easily combine the ‘run-your-own-containerized-algorithm’ functionality with the ‘out-the-box’ hyperparameter optimization functionality in SageMaker. It is. It was a straight-forward, but slightly lengthy process.

Introduction

<DISCLAIMER> This is a blog-post I started back in Autumn/Winter 2019. I knew it would be a fairly lengthy post but one I was keen to write. But then, well, a pandemic got in the way and it’s taken a while to get back to writing blog posts. I still believe there are some useful learnings here – I hope you do too </DISCLAIMER>.


Back in 2019 I was using SageMaker a lot, including running an AWS Machine Learning Immersion Day at Infinity Works. One of the things I like about SageMaker is how the resources used to do any heavy lifting in training a model are separated from the resources supporting the Jupyter notebook. The SageMaker service provides several standard Machine Learning algorithms (e.g. Random Forests, XGBoost) in containers. This means it is possible to explore a dataset and develop a modelling approach in a Jupyter notebook that runs on one EC2 instance, and then when we want to scale-up the training process to the full dataset we can pull down the relevant container from ECR and run the training process on a separate much larger instance. Provisioning of heavier infrastructure needed for training on the full large dataset is only done when it is needed and you only pay for what you use of those larger EC2 instances. A Data Scientist like me doesn’t have to worry about the provisioning of the larger EC2 instance; it is handled through simple configuration options when configuring the training job. It is also possible to configure a hyper-parameter optimization job in a similar way, so that multiple training jobs (with different hyper-parameter values) can be easily run, potentially in parallel, on large EC2 instances just by adjusting a few lines of json config.

So far, so good. As a Data Scientist the pain of getting access to or configuring compute resource has been removed, and training on really large datasets is almost as easy as exploring a smaller dataset in a Jupyter notebook running on my local machine. But are we restricted to only using the algorithms that AWS has containerized? This is where it gets more interesting and fun. You can use any algorithm that is available in a container in ECR. That means you can develop/code up your own algorithm/training process, containerize it, and then run that algorithm using multiple large EC2 instances with minimal config.

AWS have an example of how to containerize your own algorithm and deploy it to an endpoint. The git repo is here. The AWS team use the example of a scikit-learn decision tree trained on the Iris dataset (I know, why do examples not use something more original than the Iris dataset).

What I wanted to explore was,

  • How easy was it to actually containerize my own algorithm for use in SageMaker,
  • How easy was it to combine my containerized algorithm with the easy to configure hyperparameter optimization capability already present in SageMaker.

The rest of this post is about what I learnt in exploring those two questions, in particular the second of those. The first question is essentially already answered by the original AWS repo. What I wanted to learn was could I easily use my own algorithm with the out-the-box hyperparameter optimization functionality that SageMaker provided, or was the easy-to-use hyperparameter optimization functionality essentially restricted to the in-built SageMaker algorithms? What I’ll cover is,

  1. The choice of algorithm we’re going to containerize
  2. The basics of building the Docker container
  3. Pushing the container to the AWS container registry
  4. Using the containerized algorithm within a SageMaker notebook
  5. Running hyperparameter optimization jobs using the containerized algorithm.

If you want to follow the technical details, I would suggest that you first become familiar with the basics of AWS SageMaker – tutorial here. You may also want to look at the basics of hyperparameter tuning for one of the standard machine learning algorithms within SageMaker, as I’ll be assuming some of this background knowledge is known to you or at least you can pick it up quickly – to fully explain all the SageMaker background material would make this an even longer blog. You can find explanations of how to configure and run a SageMaker hyperparameter tuning job here and here.

Now, let’s start with the first of our questions.

Algorithm choice

I wanted to use an algorithm that wasn’t already available within SageMaker, otherwise what would be the point of going through this exercise? I have been doing some work recently on Gaussian Processes (GPs), in particular with kernel functions that are composite functions.

I won’t explicitly cover the basics of GPs here – the blog post is long enough already. Instead I will point you towards the excellent book by Carl Rasmussen and Chris Williams and this tutorial from Neil Lawrence. However, I will say briefly what my interest in GPs is. Gaussian Processes have an interesting connection with large (wide) Neural Networks. This connection was discovered by Chris Williams and Radford Neal. I wrote some GP code, on the basis of the Williams’ paper, that made it into commercial software (my first ever example) back in 1999 (yes – I am that old, and have been working in Machine Learning that long). More recently, the connection has been extended to link Deep Learning Neural Networks and Gaussian Processes (see for example, here and here). Cho & Saul did some nice early work in this area, using dot-product kernels that are composite functions. It is the dot-product kernels derived by Cho & Saul that I’ll use here for my example algorithm, as the kernels are of relatively simple form, and yet are specified in terms of a few simple parameters that we can regard as hyper-parameters. For the purposes of this blog on AWS SageMaker it is not important to know what the Cho & Saul kernels might represent, merely how they are defined mathematically. So let’s start there,

For this illustration we are focusing on datapoints on the surface of the unit hypersphere, i.e {\bf x} \in \mathbb{R}^{d} with ||{\bf x}||_{2}^{2}\;=\;1 . We then consider a set of kernels, K_{q,l}\left (  {\bf x}_{1}, {\bf x}_{2} \right) , defined via,

K_{q,l}\left ({\bf x}_{1}, {\bf x}_{2} \right )\;=\;  k_{q,l}\left ( {\bf x}_{1}\cdot {\bf x}_{2} \right )

The dot-product kernels k_{q,l}(t) are defined iteratively,

k_{q,l+1}(t)\;=\;k_{q,0}\left ( k_{q, l}(t)\right )

The base kernels k_{q,0}(t) are constructed from,

k_{q,0}(t)\;=\; J_{q}\left ( \arccos (t)\right ) / J_{q}\left ( 0\right )

with,

J_{q}\left ( \theta \right )\;=\;(-1)^{q}\left (\sin\theta\right)^{2q+1}\left ( \frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\right )^{q} \left ( \frac{\pi-\theta}{\sin\theta}\right )

Choosing a particular kernel then boils down to making a choice for q and l. Once we have made a choice of kernel, we can train our model. For simplicity, I have defined the model training here to be simply the process of constructing the Gram matrix from the training data, i.e. the process of calculating the matrix elements,

M_{ij} = \sigma^{2}\delta_{ij}\;+\;K_{q,l}\left ( {\bf x}_{i}, {\bf x}_{j}\right)

Here, σ2 is the variance of the additive Gaussian noise that we consider present in the response variable, and {\bf x}_{i}\;,\; i=1,2,\ldots,N , are the feature vectors for the N datapoints in the training set. Along with the training feature vectors we also have the response variable values, y_{i} .

Whilst it may not match the more traditional concept of model training – there is no iterative process to minimize some cost function – I am using the training data to construct a mathematical object required for calculating the expectation of the response variable conditional on the input features. Within a Gaussian Process it is considered usual to optimize any parameters of the covariance kernel as part of the model training. In this case, for simplicity, and for purposes of illustrating the hyperparameter tuning capabilities of SageMaker, I wanted to consider the kernel parameters q,l and σ2 as hyperparameters, essentially leaving no remaining kernel parameters to be optimized during the model training.

Once we have the matrix {\bf M} defined, we can calculate a prediction for the response variable at a new feature vector {\bf x}_{\star} via the formula,

\mathbb{E}\left ( y\left ( {\bf x}_{\star}\right )\right )\;=\;{\bf v}\left ( {\bf x}_{\star}\right )^{\top}{\bf M}^{-1}{\bf y}\,\,

where {\bf y} is the vector of response values in the training set, and the vector {\bf v}\left ( {\bf  x}_{\star} \right )\;=\; (v_{1}, v_{2}, \ldots, v_{N}), with the element v_{i}\left ( {\bf x}_{\star}\right ) given by,

v_{i}\;=\; k_{q,l}\left ( {\bf x}_{\star}\cdot {\bf x}_{i}\right )

Now we have given the mathematical definition of our algorithm, we need to focus on code. Following the example in the original AWS repo we need python code that,

  1. Defines a class for a trained GP model. I have called my class, unsurprisingly, trainedGPModel . Instantiating an instance of this class by passing the training data to the class constructor method, runs the Gram matrix calculation process mentioned earlier. Within my trainedGPModel class I also have a method predict(xstar) that returns the predicted expectation of the response variable given an input datapoint xstar. The code for the trainedGPModel class implements the linear algebra formulae given above and so is straight-forward – a minimal sketch is given after this list.
  2. We also need code that runs the training process. This code is held in a file called train. I made minimal modifications to the train module in the original AWS repo. The main change I made was including code to make predictions on a validation dataset, and from that calculating the Root-Mean-Squared-Error (RMSE) on the validation dataset. The validation RMSE is the metric I will use for hyperparameter tuning and so I have to write the validation RMSE value to stdout so that it can get picked up by the SageMaker hyperparameter tuning process. I had to write the RMSE value with a string prefix and delimiter, e.g.
print( "validation:RMSE=" + str(RMSE_validation) + ";" )

with a corresponding matching regex in the configuration of the hyperparameter tuning job – see the later section on running the containerized algorithm in a SageMaker notebook. It wasn’t obvious that I needed to write the validation metric in this way, and it took a bit of googling to work out. Most SageMaker links on hyperparameter tuning point to this page, but the detail on how the metric is passed between your algorithm code and the SageMaker hyperparameter optimization code is actually explained in this SageMaker documentation page.
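To give a flavour of item 1, here is a minimal numpy sketch of what a trainedGPModel class could look like for the q = 0 base kernel (for which J_0(θ) = π − θ, so k_{0,0}(t) = 1 − arccos(t)/π). The helper functions and the constructor signature are my own illustration, not the exact code in the container.

import numpy as np

def k0_base(t):
    # base arc-cosine kernel for q = 0:  k_{0,0}(t) = 1 - arccos(t)/pi
    return 1.0 - np.arccos(np.clip(t, -1.0, 1.0)) / np.pi

def k0l(t, l):
    # iterated kernel k_{0,l}, following k_{q,l+1}(t) = k_{q,0}( k_{q,l}(t) )
    val = k0_base(t)
    for _ in range(l):
        val = k0_base(val)
    return val

class trainedGPModel:
    def __init__(self, X_train, y_train, l=2, noise=0.1):
        # 'training' = building the Gram matrix M = sigma^2 I + K and solving against y
        self.X_train = X_train
        self.l = l
        K = k0l(X_train @ X_train.T, l)            # kernel Gram matrix (unit-norm inputs assumed)
        M = noise * np.eye(len(y_train)) + K       # 'noise' plays the role of sigma^2
        self.alpha = np.linalg.solve(M, y_train)   # M^{-1} y, pre-computed once

    def predict(self, xstar):
        # E[ y(x*) ] = v(x*)^T M^{-1} y, with v_i = k_{0,l}(x* . x_i)
        v = k0l(self.X_train @ xstar, self.l)
        return float(v @ self.alpha)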

Docker basics

Now let’s talk about putting our code in a container. We need to construct a Docker file. For a refresher on Docker I found this tutorial by Márk Takács to be really helpful. I actually use a Windows machine for my work, so I’m running Docker Desktop. However, I also use WSL (Windows Subsystem for Linux) for when I want a Linux-like environment. Although you can install a Docker client under WSL, you still have to make use of the native Docker daemon of Docker Desktop. I found this guide from Nick Janetakis on getting the WSL Docker client working with Docker Desktop invaluable, particularly the configuring of where WSL mounts the Windows file system (by editing the /etc/wsl.conf file) so that I can then easily mount any sub-directory of my Windows file system to any point I choose in the container image when testing the Docker file locally.

I won’t go through the aspects of testing the container locally – you can read the original AWS repo to see that. Instead we’ll just go through the Docker file for building the final SageMaker container. The Docker file is fairly simple and other than changing it to use a Python 3 runtime (see lines 9 & 10) we have not changed anything else from the Docker file in the original AWS repo. Line 36 of the Docker file is where we copy across our algorithm code into the pre-specified directory in the image that SageMaker will look for when running the containerized algorithm.


# Build an image that can do training and inference in SageMaker
# This is a Python 3 image that uses the nginx, gunicorn, flask stack
# for serving inferences in a stable way.

FROM ubuntu:18.04

RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
python3 \
python3-pip \
nginx \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*

# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN pip3 install numpy scipy scikit-learn pandas flask gevent gunicorn && \
(cd /usr/local/lib/python3.6/dist-packages/scipy/.libs; rm *; ln ../../numpy/.libs/* .) && \
rm -rf /root/.cache

RUN pip3 install setuptools

# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.

ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"

# Set up the program in the image
COPY gaussian_processes /opt/program
WORKDIR /opt/program

Pushing the container to AWS

We can now push our Docker container to AWS ECR (Elastic Container Registry). This is simple using the AWS CLI (command line interface) and the build_and_push.sh shell script provided in the original AWS repo. Within the shell script we have just modified, on lines 16 and 17, the name of the top-level directory in which our training and prediction code resides,

image=$1

if [ "$image" == "" ]
then
    echo "Usage: $0 "
    exit 1
fi

chmod +x gaussian_processes/train
chmod +x gaussian_processes/serve

Then we just run the shell script, passing the name of the container we have just built as a command line argument,

./build_and_push.sh gpsagemaker

After running the shell script we can see the container present in the AWS ECR,

Screenshot of our Gaussian Process SageMaker Docker container in AWS Elastic Container Registry (ECR) – ready to use within a SageMaker notebook.

Using the containerized algorithm in SageMaker

Now that we have the container with our GP code in AWS ECR, we can use it within a SageMaker notebook. Let’s do so. For this I’m just going to adapt the notebook within the original AWS repo. I go to SageMaker under ‘ML’ in the list of AWS services and from there I can start/create my SageMaker notebook instance. Once the notebook instance is ready I can open up a Jupyter notebook as usual,

The first main difference is that we’ll create some simple small-scale simulated training and validation data. Our goal here is to test how easy it is to containerize and use our own algorithm, not build a perfect model. Our generative model is a simple one – a linear model, dependent on just two features (with coefficients that we have chosen as 1.5 and 5.2 respectively). We use this simple model to create the response variable values and then add some Gaussian random noise (of unit variance).


import numpy as np
import pandas as pd

# create training and validation sets
nTrain = 100
X_train = np.random.randn( nTrain, 2 )
y_train = (1.5 * X_train[:, 0]) + (5.2*X_train[:,1]) + np.random.randn( nTrain )
y_train.shape = (nTrain, 1)
data_train = np.concatenate( (y_train, X_train), axis=1)
df_data_train = pd.DataFrame( data_train )

nValidation = 50
X_validation = np.random.randn( nValidation,2 )
y_validation = (1.5 * X_validation[:, 0]) + (5.2*X_validation[:,1]) + np.random.randn( nValidation )
y_validation.shape = ( nValidation, 1 )
data_validation = np.concatenate( (y_validation, X_validation), axis=1)
df_data_validation = pd.DataFrame( data_validation )

We then specify our account details and also the image that contains our Gaussian Process algorithm.


account = boto3.client('sts').get_caller_identity()['Account']
region = boto3.session.Session().region_name
image = '{}.dkr.ecr.{}.amazonaws.com/gpsagemaker:latest'.format(account, region)

The next cell in our notebook then uploads the training and validation data to our s3 bucket,


# write training and validation sets to s3
from io import StringIO # python3; python2: BytesIO 
import boto3

bucket = mybucket  # the name of your S3 bucket

# write training set
csv_buffer = StringIO()
df_data_train.to_csv(csv_buffer, header=False, index=False)
s3_resource = boto3.resource('s3')
s3_resource.Bucket(bucket).Object('train/train_data.csv').put(Body=csv_buffer.getvalue())
csv_buffer.close()

# write validation set
csv_buffer = StringIO()
df_data_validation.to_csv(csv_buffer, header=False, index=False)
s3_resource = boto3.resource('s3')
s3_resource.Bucket(bucket).Object('validation/validation_data.csv').put(Body=csv_buffer.getvalue())
csv_buffer.close()

Running a single training job

So first of all let’s just configure and run a single simple training job. Note the validation metric being specified along with the regex.


create_training_params = \
{
    "RoleArn": role,
    "TrainingJobName": job_name,
    "AlgorithmSpecification": {
        "TrainingImage": image,
        "TrainingInputMode": "File",
        "MetricDefinitions":[{"Name":"validation:RMSE",
                              "Regex":"validation:RMSE=(.*?);"    
        }]
    },

We also set values for the hyperparameters, which are static since we are just running a single training job and not doing any hyperparameter optimization yet.


    "HyperParameters": {
        "q":"0",
        "l":"2",
        "noise":"0.1"
    },

We can then run a training job using our containerized Gaussian Process code, just as we would any other algorithm available in SageMaker. We can see the training job running in the AWS Management console – click under “Training jobs” on the left hand side of the console. We can see the current training job ‘in progress’ and also an earlier completed training job that I ran.

Screenshot of a single SageMaker training job running using our GP algorithm code.

Running a hyperparameter tuning job

That appears to run ok. Now that we have our algorithm running in SageMaker, we can configure the SageMaker hyperparameter optimization wrapper and run one of the out-of-the-box SageMaker hyperparameter optimization algorithms over what we have specified as hyperparameters in our Gaussian Process code. The config for the hyperparameter tuning job is below – we have largely just made slight modifications to the examples in the original AWS repo and followed the guidance. You can see that we have specified the RMSE metric on the validation set as the metric to optimize with respect to the hyperparameters. For illustration purposes we have specified that we want to optimize only over the q and l hyperparameters. The σ2 hyperparameter we have kept static at σ2=0.1. You can also see that we have specified to run 10 training jobs in total, i.e. we will evaluate the validation metric at 10 different combinations of the two hyperparameters, but we only run 3 training jobs in parallel at any one time.


# Define HyperParameterTuningJob
# We tune the kernel hyperparameters q and l by minimizing the RMSE on the
# validation set. The hyperparameter search is a random one, using a sample of
# 10 training jobs - better methods for searching the hyperparameter space are
# available, but for simplicity and demonstration purposes we will use the
# random search method. Run a max of 3 training jobs in parallel
job_name = "gpsmbyo-hp-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
response = sm.create_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=job_name,
    HyperParameterTuningJobConfig={
        'Strategy': 'Random',
        'HyperParameterTuningJobObjective': {
            'Type': 'Minimize',
            'MetricName': 'validation:RMSE'
        },
        'ResourceLimits': {
            'MaxNumberOfTrainingJobs': 10,
            'MaxParallelTrainingJobs': 3
        },
        'ParameterRanges': {
            'IntegerParameterRanges': [
            {
              "Name": "q",
              "MaxValue": "4",
              "MinValue": "0",
              "ScalingType": "Auto"
            },
            {
              "Name": "l",
              "MaxValue": "4",
              "MinValue": "1",
              "ScalingType": "Auto"
            }    
        ]}
    },
    TrainingJobDefinition={
        'StaticHyperParameters': {
            "noise":"0.1"
        },
        'AlgorithmSpecification': {
        'TrainingImage': image,
        'TrainingInputMode': "File",
        'MetricDefinitions':[{"Name":"validation:RMSE",
                              "Regex":"validation:RMSE=(.*?);"
                             }]
        }

If we then look at our AWS console (screenshot below) we can see the hyperparameter tuning job running, along with previous completed tuning jobs.

Screen shot of AWS console showing current and previous hyperparameter tuning jobs.

We can also see the individual training jobs, corresponding to that tuning job, running (screenshot below). Remember that the hyperparameter tuning job is just a series of individual evaluations of the validation metric, run at combinations of (q,l) specified by the tuning algorithm. From the screenshot we can see that there are 3 training jobs running, in accordance with what we specified in the tuning job config.

Screenshot of the 3 training jobs running as part of the hyperparameter tuning job.

Once the tuning job has completed, we can retrieve the validation metric values for the 10 different hyperparameter combinations that were tried, to see which combination of q and l gave the smallest RMSE on the validation set.
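A sketch of how that retrieval might look with boto3, assuming sm is the SageMaker client and job_name the tuning job name from the cells above. describe_hyper_parameter_tuning_job and list_training_jobs_for_hyper_parameter_tuning_job are the standard client calls, though the exact response fields you pick out may need adjusting for your setup.

# retrieve the tuning results once the job has completed
# (assumes 'sm' is the boto3 SageMaker client and 'job_name' the tuning job name used above)
best = sm.describe_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=job_name
)['BestTrainingJob']
print("best hyperparameters:", best['TunedHyperParameters'])
print("best validation RMSE:", best['FinalHyperParameterTuningJobObjectiveMetric']['Value'])

# or list all 10 evaluated combinations, sorted by the validation metric
summaries = sm.list_training_jobs_for_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=job_name,
    SortBy='FinalObjectiveMetricValue',
    SortOrder='Ascending',
    MaxResults=10
)['TrainingJobSummaries']
for s in summaries:
    print(s['TunedHyperParameters'],
          s['FinalHyperParameterTuningJobObjectiveMetric']['Value'])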

Summary

The two questions I was trying to address were,

  1. How difficult is it to create your own algorithm to use in SageMaker?
  2. How easy is it to use the hyperparameter optimization algorithms available in SageMaker with your new algorithm?

The answer to both questions is, “it is a relatively easy but lengthy process”. That it is a lengthy process is understandable – SageMaker gives you a functionality to apply out-the-box hyperparameter tuning on an algorithm/code that it knows nothing about until runtime. Therefore there has to be a lot of standardized syntax in specifying how that algorithm is structured and called as a piece of code. Fortunately, all the details of how to structure your algorithm and create the Docker container are in the excellent example given in the AWS repo and the documentation. The only complaint I would have is that it would be good for the repo to have an example showing how your own algorithm can utilize the hyperparameter optimization functionality of SageMaker – hence this blog. Working out the few remaining steps to get the hyperparameter optimization working with my Gaussian Process code was not very difficult, but not easy either.

The example algorithm I have chosen is very simplistic – the training process literally only involves the calculation and inversion of a matrix. A full training process could involve optimization of, say, the log-likelihood with respect to the parameters of the kernel function, but explaining the extra details would make this blog even longer. Secondly, we only needed a simple/minimal training process to address the two questions above. Likewise, we have not illustrated our new trained algorithm being used to serve predictions – this is very well illustrated in the original repo and I would not be adding anything new with my Gaussian Process example.

The different types of data I have encountered as a Data Scientist

TL;DR

Where Data or Data Science is the business model or primary purpose of an organization you can expect data and the data eco-system to be properly invested in. In scientific research and industrial settings this will be common. In the commercial world this is not always the case. As a Data Scientist in the commercial world you should learn to ask, ‘Does this company need data? Does this company actually need Data Science?’ If the answers are not a resounding yes, then beware. Sometimes you won’t be able to answer those questions until you’re up close and inside the organization, but there are indicators that suggest upfront whether an organization is likely to have good quality data and a functioning data eco-system, or whether it will be a pile of trash. In the long read below I outline the different kinds of data I’ve encountered across the commercial and non-commercial sectors I’ve worked in, and what signs I’ve learnt to look for.

Long version

Over 20 years ago, as I was just starting research in the Bioinformatics field, a colleague explained to me the challenges of working with high-throughput biological experimental data. The data, he commented, was typically high-dimensional and very messy. “Messy” was new to me. With a PhD in Theoretical Physics and post-doctoral research in materials modelling, the naïve physicist in me thought, ‘…ah…messy…you mean Gaussian errors with large variance’. I was very wrong.

I learnt that “messy” could mean a high-frequency of missing data, missing meta-data, and mis-labelled data. Sometimes the data could be mis-labelled at source because the original biological material had been mis-labelled – I remember one interesting experience having to persuade my experimental collaborator to go back to the -80°C freezer to check the lung tissue sample because a mis-labelled sample was the only remaining explanation for the anomalous pattern we were seeing in a PCA plot, and yes something had gone wrong in the LIMS because the bar-code associated with the assay data did not match the bar-code on the sample in the freezer.

However, as I moved from the academic sphere into the commercial realm (10 years in the commercial sector now in 2021), I’ve learnt that data errors and issues can be much more varied and challenging, and that they vary from sector to sector and from domain to domain. BUT… I have seen that there are broad classes that explain the patterns of data issues I have experienced over 30yrs of mathematical and statistical modelling. In this piece I am going to outline what those broad classes and patterns are, and more importantly how to recognize when they are likely to arise. The broad classes of data are,

  1. Scientific or Experimental – here, data is collected for the purposes of being analysed, and so therefore is optimized towards those purposes.
  2. Sensor data – here, data is collected for the purposes of detecting signals, possibly diagnostic, i.e., working out what the cause of a problem is, or being monitored, i.e., detecting a problem in the first place. It is intended primarily to be automatically monitored/processed, but not necessarily analysed by a human (Data Scientist) in the loop. The data lends itself to large-scale analysis but may not be in a form that is optimal or friendly for doing data science.
  3. Commercial operational data – this is data that is stored, rather than actively collected. It is stored, usually initially on a temporary (short-term) basis, for the purposes of running the business. It could be server event data from applications supporting online platforms or retail sites, or customer transaction data, Google ad-click data, marketing spend data, or financial data.

It is this last category that is perhaps the most interesting – quite obviously if you are a Data Scientist working in a commercial sector. For commercial operational data, not only can the data contain the usual errors but there can be a range of additional challenges – columns/fields in tables containing completely different data than they are supposed to because a field has been over-loaded, or re-purposed for a different need without going through a modification/re-design of the schema. Without an accurate updated data-dictionary, this knowledge about the true content of the fields in a table resides as, ‘folk knowledge’ in the heads of the longer serving members of staff – until they leave the company, and that knowledge is lost.

I have my own favourite stories about data issues I have encountered in various organizations I have worked for – such as the time I worked with some server-side event data whose unique id for each event turned out only to be unique for a day, because the original developer didn’t think anyone would be interested in the data beyond a day.

Every commercial data scientist will be able to tell similar ‘war stories’, often exchanged over a few beers with colleagues after work. Such war stories will not go away any time soon. The more important point is how do we recognize the situations where they are more likely to occur? The contrast I have already highlighted above, between the different classes of data, gives us a clue – where data is not seen as the most valuable part of the process, nor the primary purpose of the process, it will be relatively less invested in and typically of lower quality.

For example, in the first two categories, and in the scientific realm in particular, data is generated with the idea, from the outset, that it is a valuable asset; the data is often the end itself. For example, the data collected from a scientific experiment provides the basis for scientific insight and discoveries, and potentially future IP; the data collected to monitor complex technical systems saves operating costs by minimizing downtime of those systems and potentially identifying improved efficiencies. Poor quality data, or inadequate systems for handling data has an immediate and often critical impact upon the primary goal of the organization or project, and so there is an immediate incentive to rectify errors and data quality issues and to invest in systems that enable efficient capture and processing of the data.

Some commercial organizations also effectively fall into these first two classes. For companies where the data and/or data science is the product from the outset, or the potential value of a data stream has been realized early on, then there is an incentive to invest in efficient and effective data eco-systems, with the consequent benefit in terms of data quality – if sufficient investment in data and data systems is not made, a company whose main revenue stream is the data will quickly go out of business.

In contrast, for most commercial organizations, the potential value of commercial data may be secondary to its original purpose, and so data quality from the perspective of these secondary uses may be poor, even if the value or future revenue streams attached to these secondary uses may be much greater than the original primary use of the data. For such companies, the general importance of data may be recognized, but that does not mean that data eco-systems are getting the right sort of investment. For example, for a company providing a B2C platform, data on the behaviour of consumers can provide potentially actionable and important insight, but ultimately it is the platform itself (and its continued operation 24/7) that is of prime importance. Similarly, for an online retail site, the primary concern is volume of transactions and shipping. Consequently, for these organizations the data issues that arise are richer, more colorful, and often more challenging. This is because poor quality data and systems do not immediately threaten the viability of the business, or main revenue streams. Long-term, yes, there will be an impact upon UX and overall consumer satisfaction, but many operations are happy to take that hit and counter it by attempting to increase transaction volume.

For these organizations it may be recognized that data is important enough for capital to be spent on data eco-systems, but that capital investment may be poorly thought through or not joined-up. Again, there are several symptoms I have learnt to recognize as red flags. Dividing these into symptoms related to strategy and tactical related symptoms, they are,

  • Strategy related symptoms:
    1. Lack of a Data Strategy. The value of data, even the potential of value has not been recognized or realistically thought through. Consequently, there is unlikely to be any strategy underpinning the design of the data eco-system.
    2. Any ‘strategy’ is at best aspirational, being driven top-down by an Exec.  The capital investment is there, but no joined up plan. The organization sets up a Data Science function because other organizations are – typified by hiring of a ‘trophy Data Scientist’. Dig deeper and you will find no real pull from the business areas for a Data Science function. Business areas have been sold Data Science as a panacea, so have superficially bought into the aspirational strategy – who in a business function would not agree when told that the new junior Data Science hires will analyse the poorly curated or non-existent data and will come up with new product ideas that have a perfect market fit and will be coded into a fully productionized model. I have seen product owners who have been sold a vision and believe that they can now just put their feet up because the new Data Scientist will do it all.
    3. The organization cannot really articulate why it needs a Data Science function when asked – this is one of my favourites. Asking ‘why’ here is the equivalent of applying the ‘Five Whys’ technique at the enterprise-level rather than at a project-level. Many companies may respond to the question with the cliched answer, ‘because we want to understand our customers better, which will help us be more profitable’. Ok, but where are the examples of when not understanding this particular aspect of your customers has actually hurt the business? If there is no answer given, you know that the need for a machine learning based solution to this task is perceived rather than evidenced.
    4. A mentality exists that data is solely for processing/transacting, not analysing – hence there is little investment in analytics platforms and tooling, resulting in significant friction in getting analytics tooling close to data (or vice-versa). Possibly there is not even a recognition that there is a distinction between analytics systems and reporting systems. Business areas may believe that only data querying skills and capability are needed, so the idea that you want to make the building of predictive models as frictionless as possible for the Data Science team is an alien one.
    5. Lack of balance in investments between Data Engineering and Data Science – this is not optimal whichever function gets the bigger share of the funding. This is often mirrored by a lack of linking up between the Data Science team and the Data Engineering team, sometimes resulting in open hostility between the two.
    6. Lack of Data Literacy in business area owners or product owners. Despite this, organizations often have a belief that they will be able to roll-out a ‘self-serve’ analytics solution across the enterprise that will somehow deliver magical business gains.
  • Tactical related symptoms:
    1. Users in business areas are hacking together processes, which then take on a life of their own. This leads to multiple conflicting solutions, most of which are very fragile, and some of which get re-invented multiple times.
    2. Business and product owners ask for Data Science/Machine Learning driven solutions but can’t answer how they are going to consume the outputs of the solution.
    3. The analytics function, whether ad-hoc or formal, primarily focuses on getting the numbers ready for the month-end report. These numbers are then only briefly discussed, or ignored by senior management, because they conflict with data from other ad-hoc reports built from other internal data sources, or they are too volatile to be credible, or they conflict with the prior beliefs of senior management.
    4. Business and product owners consider it acceptable/normal to use a Data Analyst or Data Scientist for, ‘can you just pull this data for me’ tasks.

As a Data Scientist, what can you do if you find yourself in a scenario where these symptoms arise? There are two potential ways to approach this,

How do we respond to it – what can we do reactively? Flag it up – complain when systems don’t work and highlight when they do. Confession here – I’ve not always been good at complaining myself, so this is definitely a case of me recognizing what I should have done, not what I did. On the less political front, you can try and always construct your analytical processes to minimize the impact of any upstream processes not under your control. By this I mean making downstream analysis pipelines more robust by building in automated checks for data quality and domain specific consistency. Where this is not possible, then stop or refuse to work on those processes that are so impacted as to be worthless. This last point is about making others share in your pain or shifting/transferring the pain elsewhere – onto business/product owners, from insight teams to operational teams. If other teams also experience the pain of poor-quality data, or poorly designed data-ecosystems and analytical platforms, then they have an incentive to help you try and fix the issues. Again, on a less political front, make sure expectations take into account the reality of what you have to deal with. Get realistic SLAs re-negotiated/agreed that reflect the reality of the data you are having to work with – this will protect you and highlight what could be done if the data were better.

How can we prevent it – what can we do proactively? The most effective way for any organization to avoid such issues is to have a clear and joined-up Data Strategy and Analytics Strategy. It is important for an organization to recognize that having a Data Strategy does not mean it has an Analytics Strategy. Better still, an organization should have a clear understanding of which informed decisions underpin the future operations of its business model. Understanding this will drive and focus the identification of the correct data needed to support those decisions, and hence will naturally lead (via pull) to investment in high-quality data capture processes and efficient data engineering and data analysis infrastructure. Formulating the Data Strategy may not formally be part of your brief, but it is likely that, as a Data Scientist and therefore an end-user, you can influence it. This is particularly important in order to make sure that the analytical capability is a first-class and integrated citizen of any Data Strategy. This can be done by giving examples of the cost and pain incurred when it is not. Very often, a CDO or CTO formulating the Data Strategy will come from an Engineering background, and so the Data Strategy will be biased towards Ops. Any awareness of a need for the Data Strategy to support analytics will be focused on near-term analytics, e.g. supporting real-time reporting capabilities. The need for the Data Strategy to support future operations by supporting insight, innovation and discovery capabilities may be omitted. If a Data Science function is considered, it will be as a bolt-on – possibly a well-funded Data Science bolt-on, but still a bolt-on.

Demand Forecasting at Amazon


This is a post that I’ve been meaning to write for a while. Having worked on demand forecasting in the past, I was intrigued when I saw this paper posted on the arXiv pre-print archive from one of the research teams at Amazon.

Although it was obvious why Amazon would be interested in forecasting demand, I was intrigued that Amazon chose to use a state space model approach. About 6 months later I attended the ISBIS2018 conference, at which Lindsay Berry from Duke University presented this paper that also used a state-space model approach to modelling the demand for supermarket goods. I also subsequently became aware of this technical report from Ivan Svetunkov at Lancaster University’s Management School.

With three pre-prints on demand forecasting that all utilised a state-space modelling approach I thought it would be interesting to do a post summarizing the work from the Amazon team. I may get round to doing a further post on the other two pre-prints at a later date.

At this point it is worth explaining a bit about demand models in general. Demand models are statistical models that are usually built from 2-5 years’ worth of historical sales data. A demand model enables us to forecast how many units of a product we will sell given the price of the product and a number of other variables. The models allow us to forecast a number of ‘what-if’ scenarios, e.g. what will happen to my sales if I reduce the product price by 20%? Ultimately, a demand model can enable us to determine the optimal price to charge for a product, depending on which business KPI we want to optimize. Some traditional approaches to demand modelling use a log-log model, with the log of the demand for an item being linear in the log of the price of the item. These models are of the Working-Leser type1,2. For goods with large demand volumes a log-log model is a reasonable assumption, as the natural quantum of demand (a single unit) is much smaller than the typical level of demand, and so we can treat the demand for such goods as effectively a continuous random variable.
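To make this concrete, here is a minimal sketch of a Working-Leser style log-log demand fit in Python. The tiny weekly data set and column names are invented purely for illustration, and a real demand model would of course include many more covariates (promotions, seasonality, competitor prices, and so on).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented weekly sales for a single product, purely for illustration.
df = pd.DataFrame({
    "units_sold": [120, 95, 140, 180, 90, 160, 210, 130],
    "price":      [4.0, 4.5, 3.8, 3.2, 4.6, 3.5, 3.0, 4.1],
})

# Log-log demand model: log(demand) is linear in log(price),
# so the fitted price coefficient is directly interpretable as a price elasticity.
X = sm.add_constant(np.log(df[["price"]]))
y = np.log(df["units_sold"])
fit = sm.OLS(y, X).fit()
elasticity = fit.params["price"]

# A 'what-if' scenario: forecast demand after a 20% price cut from a 4.00 base price.
new_price = 4.0 * 0.8
log_forecast = fit.params["const"] + elasticity * np.log(new_price)
print(f"elasticity ~ {elasticity:.2f}, forecast units ~ {np.exp(log_forecast):.0f}")
```

For slow-moving goods, discussed next, this continuous log-log approximation breaks down, which is what motivates the count-based models below.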

The actual problem the Amazon team tackled was the forecasting of intermittent demand, i.e. for what are commonly called ‘slow-moving goods’, whose pattern of sales is bursty. We might typically sell, on average, fewer than say 10 units a week of these kinds of products. There may be no sales for a week or two, followed by a burst of sales concentrated in a few days. Snyder et al give a good modern review of the problem of intermittent demand forecasting3.

For such products the traditional log-log type demand models can perform poorly, as we are dealing with products that sell only a few units per time period. There is, however, no consensus approach to modelling such products, which means it is an area ripe for novel and innovative methods. The paper by Seeger et al combines three interesting ideas,

  1. A multi-stage model – this means decomposing the modelling of demand into several models that cover different demand sizes. In this case separate models are constructed for when the expected demand is 0 units, 1 unit, and >1 unit.
  2. The combining of the multi-stage model with a state-space model. This has the effect of introducing exponential smoothing and hence some temporal continuity to the modelled demand.
  3. The use of a Kalman-filter approach to locating the mode when using a Laplace approximation to approximate a marginal posterior. This third innovation is the most technical, but, for me, also the most interesting.

The first of these innovations is not necessarily that much of a step-change. Other attempts to model slow-moving goods have also considered a mixture of distributions/processes to allow for the zero-inflation that one sees in the weekly observed sales of a slow-moving good. Seeger et al use a three-stage model, so that we have three latent functions,

y_{t}^{(0)}(x), which is used in modelling the probability of zero sales at time point t

y_{t}^{(1)}(x), which is used in modelling the probability of a single unit being sold at time point t

y_{t}^{(2)}(x), which is used in modelling the distribution of units sold at time point t, given the number of units is greater than 1.

The second innovation is an interesting one. Whilst I had come across the use of self-excitation (Hawkes processes) to model the bursty behaviour of intermittent demand, I hadn’t seen temporal continuity enforced via a latent state contribution to the linear predictors of the mixture components. For demand greater than a single unit Seeger et al model the demand {z_{t}} at time point t as following a Poisson distribution,

P\left ( z_{t}-2 | y^{(2)}_{t}\right )\;=\; \frac{1}{(z_{t}-2)!}\lambda( y^{(2)}_{t}  )^{z_{t}-2}\exp\left ( -\lambda ( y^{(2)}_{t} )\right )\;\;.

Here \lambda(\cdot) is a transfer function. The latent function y^{(2)}_{t} depends upon a latent state {\boldsymbol  l}_{t} and it is this latent state that is governed by a Kalman filter. Overall the latent process is,

y^{(2)}_{t}\;=\; {\boldsymbol a}^{\top}{\boldsymbol l}_{t-1}\;+\; b_{t}\;\;,\;\;b_{t}\;=\;{\boldsymbol \omega}^{\top}{\boldsymbol x}_{t}\;\;,\;\;{\boldsymbol l}_{t}\;=\;{\boldsymbol F}{\boldsymbol l}_{t-1}\;+\;{\boldsymbol g}_{t}\epsilon_{t}\;\;,\;\;\epsilon_{t}\sim N(0,1)
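To get a feel for what this state-space structure does, here is a minimal simulation sketch. It makes deliberately crude simplifications that are mine, not Seeger et al’s: a scalar latent state with F = 1 and g_t = 1 (a local level), the covariate term collapsed to a constant, and a softplus transfer function for λ(·).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 104  # e.g. two years of weekly demand

# Simplifying assumptions (mine, for illustration): scalar latent state with F = 1 and
# g_t = 1, covariate effect collapsed to a constant b, softplus transfer function.
a, b, innovation_sd = 1.0, -1.0, 0.3

def transfer(y):
    """Softplus transfer, guaranteeing lambda(y) > 0 (an assumed choice)."""
    return np.log1p(np.exp(y))

l = 0.0
demand = np.zeros(T, dtype=int)
for t in range(T):
    y2 = a * l + b                        # latent function y_t^(2) built from l_{t-1}
    # Stage-three model only, for illustration: z_t - 2 ~ Poisson(lambda(y_t^(2))).
    demand[t] = 2 + rng.poisson(transfer(y2))
    l = l + innovation_sd * rng.normal()  # state update l_t = F l_{t-1} + g_t * eps_t

print(demand[:20])
```

The exponential-smoothing flavour comes from the fact that y_t^(2) inherits the slowly drifting latent state, so modelled demand levels at neighbouring time points are correlated rather than independent.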

The latent variables \epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{T-1}, {\boldsymbol  l}_{0} have to be integrated out to yield a marginal posterior distribution, which can then be maximized to obtain estimates of the parameters that control the innovation vectors {\boldsymbol g}_{t}\;,t=1,\ldots,T-1.

It is the marginalization over \epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{T-1}, {\boldsymbol l}_{0} that the third interesting technical innovation of Seeger et al is concerned with. The integration over \epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{T-1}, {\boldsymbol l}_{0} is approximated using a Laplace approximation. The Laplace approximation simply replaces the exponent of the integrand by its second-order Taylor expansion, so that a complicated integral is approximated by a Gaussian one. It is the simplest of a family of saddlepoint expansion techniques for obtaining asymptotic expansions of integrals (see for example the classic book by Wong).

The main task in a Laplace approximation is locating the maximum of the exponent of the integrand. Seeger et al do this via a Newton-Raphson procedure, i.e. expand the exponent to second order around the current estimate of the mode and then find the maximum of that second order approximation.

Consider a 1-dimensional example. Let q(x) be the function whose maximum, x_{*}, we are trying to locate. If the expansion of q(x) around our current estimate {\hat x}_{*} of x_{*} is,

q(x) \;=\; q( {\hat x}_{*} )\;+\; ( x - {\hat x}_{*}) q^{(1)}({\hat x}_{*})\;+\; \frac{1}{2}(x- {\hat x}_{*} )^{2}q^{(2)}({\hat x}_{*})\;+\; O\left ( (x-{\hat x}_{*})^{3}\right )

The updated estimate of x_{*} is then determined by maximizing the second order expansion above, and is given by,

{\hat x}_{*} \rightarrow {\hat x}_{*} \;-\; \frac{q^{(1)}( {\hat x}_{*})}{q^{(2)}( {\hat x}_{*})}
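As a quick illustration, here is a minimal sketch of this one-dimensional update in Python; the toy quadratic target and the stopping tolerance are illustrative only.

```python
def newton_raphson_max(q1, q2, x0, tol=1e-8, max_iter=50):
    """Locate a maximum of q by iterating x <- x - q'(x) / q''(x),
    given the first derivative q1 and the second derivative q2."""
    x = x0
    for _ in range(max_iter):
        step = q1(x) / q2(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Toy example: q(x) = -(x - 3)^2 has its maximum at x = 3.
q1 = lambda x: -2.0 * (x - 3.0)   # q'(x)
q2 = lambda x: -2.0               # q''(x)
print(newton_raphson_max(q1, q2, x0=0.0))  # a quadratic converges in a single step
```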

The schematic below depicts how the iterative Newton-Raphson procedure locates the maximum of a one-dimensional function.

[Figure: schematic of the Newton-Raphson iterations locating the maximum of a one-dimensional function]

The multi-dimensional equivalent update rule when we are maximizing a function q({\boldsymbol x}) of a vector {\boldsymbol x} is,

\hat{\boldsymbol x}_{*} \rightarrow \hat{\boldsymbol x}_{*} \;-\; {\boldsymbol H}^{-1}( \hat{\boldsymbol x}_{*})\,\nabla q (\hat {\boldsymbol x}_{*} )\;\;,

where {\boldsymbol H}( \hat{\boldsymbol x}_{*}) is the Hessian of q({\boldsymbol x}) evaluated at \hat{\boldsymbol x}_{*}\;\;.

As Seeger et al are marginalizing the posterior over \epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{T-1}, {\boldsymbol l}_{0}, the Taylor expansion around any point is necessarily multivariate, and so, ordinarily, finding the maximum of the second-order approximation would involve inverting the Hessian of the log-posterior evaluated at the current estimate of the mode. As the latent variables we are marginalizing over in the Laplace approximation are the T-1 innovations \epsilon_{1},\;\ldots\;, \epsilon_{T-1} and {\boldsymbol l}_{0}, each Newton-Raphson step would involve the inversion of a T\times T matrix, i.e. an O\left(T^{3}\right) operation per step. However, Seeger et al point out that once we have replaced the log-posterior by a second-order approximation, finding the maximum of that approximation is equivalent to finding the posterior mean of a linear-Gaussian state-space model, and this can be done using Kalman smoothing. This means each Newton-Raphson step requires only a Kalman filtering/smoothing pass, an O\left( T \right) calculation, rather than a Hessian inversion, which would be O\left(T^{3}\right). When training on, say, two years of daily sales data with T=730, the speed-up is significant. Seeger et al do point out that this trick of reducing the computation to one that scales linearly in T is already known within the statistics literature4, but not widely known within machine learning.
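To illustrate why the state-space formulation matters computationally, here is a minimal sketch of a forward Kalman filter pass for a scalar local-level model. This is not the smoother used by Seeger et al, and the noise variances are illustrative; the point is simply that the work is one constant-cost predict/update pair per time step, i.e. O(T) overall.

```python
import numpy as np

def kalman_filter_local_level(y, q_var, r_var, m0=0.0, p0=1e6):
    """Forward Kalman filter for the scalar local-level model
         l_t = l_{t-1} + eta_t,   eta_t ~ N(0, q_var)
         y_t = l_t + eps_t,       eps_t ~ N(0, r_var)
    One predict/update pair per observation, so the total cost is O(T)."""
    m, p = m0, p0
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        # Predict step.
        m_pred, p_pred = m, p + q_var
        # Update step.
        gain = p_pred / (p_pred + r_var)
        m = m_pred + gain * (obs - m_pred)
        p = (1.0 - gain) * p_pred
        means[t] = m
    return means

# T = 730 daily observations: a single O(T) sweep, versus O(T^3) for naively
# inverting the Hessian at every Newton-Raphson step.
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.0, 0.1, 730)) + rng.normal(0.0, 0.5, 730)
print(kalman_filter_local_level(y, q_var=0.01, r_var=0.25)[-5:])
```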

Seeger et al apply their methodology to a number of real-world scale datasets, for example to a ~40K item dataset with almost a year of historical data at the day-level. Overall run-times for the parameter learning are impressive (typically a few seconds for each of the separate demand model stages), though admittedly this is when running on a 150 node Spark cluster.

References

  1. Working, H. (1943) Statistical laws of family expenditure. J. Am. Statist. Ass., 38:43–56.
  2. Leser, C. E. V. (1963) Forms of Engel functions. Econometrica, 31, 694–703.
  3. Snyder, R., Ord, J., and Beaumont, A. (2012), Forecasting the intermittent demand for slow-moving inventories: a modelling approach. International Journal of Forecasting, 28:485–496.
  4. Durbin, J. and Koopman, S. (2012), Time Series Analysis by State Space Methods. Oxford Statistical Sciences. Oxford University Press, 2nd Edition.

Defensive & Offensive Predictive Analytics


This article from the Harvard Business Review was a short but interesting read. The article talks about defensive and offensive data management strategies – defensive being about minimizing downside risk, and being more common in regulated industries, whilst offensive data management strategies focus on supporting business objectives and are more common in un-regulated or less regulated industries. The authors rightly point out that the correct mix of defensive and offensive strategies will be specific to each individual organization.

Having worked in a number of commercial sectors, my experience is that the use of predictive analytics within organizations also divides into defensive and offensive activities, irrespective of whether that predictive analytics and data science activity is enabled by a well thought out data management strategy. There are good reasons for this, and again it is largely determined by whether or not the activity of an organization carries a large downside risk.

Consider a company, such as a bank, whose activity has a large downside risk, and where losses due to bad loans can almost wipe out a balance sheet very quickly. My experience of doing analytics in a UK retail bank is that the predictive analytics focus is on modelling that downside risk with a view to understanding it, forecasting it and ultimately mitigating it. The analytics effort focuses on risk minimization (still an optimization), whilst optimization of the profit side of the P&L is less computationally based, e.g. done by committees of human subject matter experts deciding mortgage or savings rates for the coming year.

In contrast, in companies where the downside risk is lower, such as those where transactions with the organization’s customers are on much shorter timescales than at a bank, the use of predictive modelling tends to focus more on the optimization of revenue and profits than on minimization of losses from liabilities. Take grocery supermarkets, where predictive demand models are used to set product prices in order to optimize profit. Whilst getting the pricing strategy wrong will impact revenues, it does not leave the organization holding a long-term liability and is ultimately reversible. Mistakes when using predictive models in this domain are unlikely to take a company down.

From what I have seen, the use of predictive modelling within a business is typically almost binary, i.e. focused either predominantly on the downside risk or predominantly on optimizing the business objectives, even though most businesses will have both upsides and downsides to their activity. I haven’t seen many medium-scale organizations where predictive modelling is used at a mature level across the majority of business areas or tasks. Even rarer, in my experience, are situations where predictive modelling of both downside risks and business objectives is done concurrently, with the optimization taking into account both sides of the P&L. It would be interesting to find good examples outside, say, the largest 50 companies in the FTSE100, Dow Jones, Nasdaq, or S&P500, where a more joined-up approach is taken to using predictive analytics for optimizing the P&L.

Practical PCA


Principal Component Analysis (PCA) is a commonly applied algorithm in statistics and data science. Because it is so easy to understand at a high-level, and because it is so easy to apply, PCA has become ubiquitous. It is often applied without much thought and with the output rarely questioned. Typically, the questions I ask when applying PCA are,

  1. Do I need to do any transformation of the data before applying PCA?
  2. How many principal components should I select?
  3. How do I interpret the loadings (and scores)?

Unfortunately, I have seen a number of talks and presentations recently where PCA has been used and the impact on the analysis of not having thought about these questions was clear. Ok, I’m biased, as the behaviour of PCA, particularly when applied to high-dimensional data, is one of my areas of research. Whilst the research papers on PCA can be very complex, they do however provide some useful insight and guides on how to apply PCA to real data. In this post I’m going to look at those three questions in turn. In the following I’m also going to assume you are already familiar with PCA and that you are aware that the principal components are the eigenvectors of the sample covariance matrix corresponding to the largest eigenvalues. For a good introduction to PCA see the blog post by Laura Hamilton and also the classic book by Jolliffe.

TL;DR – the short answers to the questions above are: i) check for outliers and/or heavy tails in your data before applying PCA, ii) use the ‘knee’ in the eigenvalue scree plot to select the number of components, iii) make sure you look at the loadings of the selected principal components and that you can explain any major patterns in those loadings.

1. Transformation of the data

The original derivations of PCA, such as that by Hotelling (1933), are heuristic and make no explicit assumptions other than the requirement that the selected components retain as much of the variance in the original data as possible. The formal derivation of PCA as the maximum likelihood estimate of the parameters of a probabilistic model assumes additive Gaussian noise – see for example the paper by Tipping and Bishop. In practice, where the distributions of the latent factors and the additive noise are still reasonably symmetric and decay sufficiently fast, we would expect deviations from perfect Gaussians to have only a minor effect. Despite this, I have seen a number of talks with PCA plots similar to that shown on the left below.

[Figure: left – scatter plot of PC scores for the heavy-tailed example; right – estimated density of the 1st PC scores]

I have based this example on a real commercial data set I was shown, but for privacy and confidentiality reasons I have generated a simulated data set that reproduces the issue. The majority of the PC1 scores are squashed into the left-hand side of the plot, with a few scores reaching much higher values than the rest. This can be seen more clearly if we look at the estimated density of the 1st PC scores – see the right-hand plot above. A heavy tail is clearly present, meaning we are deviating significantly from the assumptions under which the eigenvectors of the sample covariance matrix are the optimal estimators of the signal directions in the raw data.

So an obvious first step would be to take a quick look at the distribution of the raw data that we are trying to decompose using PCA. If that distribution has a heavy tail or significant outliers then there is an argument for applying a transformation (e.g. logarithm) before applying PCA. If we take the log of the example data above then we obtain much more reasonable distributions for the PC scores – see below.

[Figure: PC scores for the heavy-tailed example after log-transforming the raw data]

Tip: If you see an elongated distribution of scores along the PCs you have selected, then it may be worth going back and looking at the distribution of the raw data going into the PCA – you should have already looked at the distribution of your raw data anyway as part of EDA best practice.
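A minimal sketch of this kind of check, using simulated heavy-tailed data and scikit-learn; the simulated data, the skewness check and the choice of a straight log transform are illustrative rather than prescriptive.

```python
import numpy as np
from scipy.stats import skew
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated heavy-tailed data: a one-factor structure passed through an exponential.
n, p = 500, 20
factor = rng.normal(size=(n, 1))
X = np.exp(0.5 * factor @ rng.normal(size=(1, p)) + 0.5 * rng.normal(size=(n, p)))

# EDA check: strong positive skew across the variables suggests transforming first.
print("median skewness across variables:", round(float(np.median(skew(X, axis=0))), 2))

X_logged = np.log(X)  # use np.log1p if the data contain zeros
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X_logged))
print(scores[:5])
```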

2. Selection of the number of components

There are two common methods for selecting the number of components that most people will be familiar with or encounter. Those are,

  1. Select the number of components so that a fixed proportion, e.g. 90%, of the total variance is retained.
  2. Look at a scree plot of the eigenvalues and locate the ‘elbow’ (or ‘knee’ depending on your interpretation of anatomy).

Of those two methods it is the second that I always prefer. This is because it has a sound theoretical underpinning and is more robust when applied to the kind of high-dimensional data sets that are commonplace nowadays. Let me explain why. The goal of PCA is to select a small number of directions in the data that we believe capture the signal within the data. The first approach to PCA model selection is effectively based upon the assumption that the ‘signal’ contribution to the total variance is considerably greater than the noise contribution to the total variance, and thus by selecting to retain the majority of the total variance in the original data we believe we are effectively selecting components that represent signal.

If our data points are p dimensional, then we have p sample covariance eigenvalues, \lambda_{i},\;i=1,\ldots,p. We consider the signal part of the data is represented by a small number, k, of eigenvectors/eigenvalues, and we have the p-k-1 remaining non-zero eigenvalues that correspond to pure noise in the original data.

Typically, there is no reason to believe that the noise process affects any of the original features/variables more strongly than the others – i.e. it is reasonable to consider the noise process to be isotropic. If not, then the preferred directions are essentially some form of signal and not noise. This means we expect the noise eigenvalues to be approximately the same, and let’s say they have an average value \sigma^{2}. As we look at data sets of increasing dimension p, unless the number of signal eigenvalues, k, or the eigenvalues, \lambda_{1},\ldots,\lambda_{k}, themselves grow with p, then the variance explained by the signal values, \sum_{i=1}^{k}\lambda_{i}, will remain relatively static whilst the noise eigenvalues contribute (p-k-1)\sigma^{2} to the total variance. Thus we can see that the fraction of total variance explained by the signal components is essentially,

{\rm Fraction\;of\;variance\;explained}\;=\;\frac{\sum_{i=1}^{k}\lambda_{i}}{(p-k-1)\sigma^{2}\;+\;\sum_{i=1}^{k}\lambda_{i}}\;\;,

and so decreases as the data dimension p increases. Consequently, for high-dimensional data, where p is very large (often in the thousands) the percentage of variance explained by the true signal components can be very very small. Conversely, if we select the number of components so as to retain say 90% of the total variance, we will be including a lot of noise in the retained components and not reducing the dimensionality as efficiently as we could be.
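A small numerical illustration of this point: with a fixed set of signal eigenvalues and isotropic noise of unit variance, the fraction of total variance carried by the signal shrinks rapidly as the dimension p grows (the numbers here are invented purely to show the trend).

```python
import numpy as np

signal_eigs = np.array([50.0, 20.0, 10.0])   # k = 3 fixed 'signal' eigenvalues
sigma2 = 1.0                                  # average noise eigenvalue

for p in [10, 100, 1000, 10000]:
    noise_total = (p - len(signal_eigs) - 1) * sigma2
    frac = signal_eigs.sum() / (noise_total + signal_eigs.sum())
    print(f"p = {p:>6}: signal explains {100 * frac:5.1f}% of total variance")
```

With these numbers the signal accounts for over 90% of the variance at p = 10 but less than 1% at p = 10,000, even though the signal itself has not changed.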

In contrast, if the true eigenvalues are split into a small number of ‘signal’ eigenvalues and a much larger number of  ‘noise’ eigenvalues and we expect the noise process to be isotropic (i.e. we have a single highly degenerate noise eigenvalue), then the observed (sample) eigenvalues will also consist of a bulk of eigenvalues clustered around a single value and a small number of larger eigenvalues separated from this bulk. In other words we expect to see the distribution of sample covariance eigenvalues look something like the plot on the left below.

[Figure: left – distribution of sample covariance eigenvalues for S&P500 inter-day returns; right – scree plot of the top 20 ranked eigenvalues]

The eigenvalues in this example have been obtained from the sample covariance matrix of inter-day returns of closing prices over the four-year period 2010 to 2013 for S&P500 stocks. We have actually omitted the largest eigenvalue, as this is on a different scale and represents a ‘market mode’ – see the next section.

Note that the bulk of the sample eigenvalues show some dispersion even though we had only one true, highly degenerate, noise eigenvalue. This is due to sampling variation, i.e. the fact that we have only a finite sized sample. If we plot the top 20 eigenvalues from the distribution above, ranked from largest to smallest, we get a scree plot looking like the plot on the right above.

Clearly, the sample eigenvalues corresponding to the bulk are very similar to each other, and so we see only small decreases with increasing rank in the sample eigenvalues in the bulk. In contrast, where there is an atypical jump in eigenvalue as we decrease the rank, this represents the point at which a signal eigenvalue can be detected as being separate from the bulk. This point also represents the ‘knee’ of the scree plot.

Statistical models that produce this kind of scree-plot are called ‘spiked-covariance’ models, so-called because the true population eigenvalue distribution is concentrated (or spiked) around just a small number of values. For these models we consider the data to have been produced by a small number of latent factors with isotropic noise. That is, our data point {\bf x}_{i} is given by,

{\bf x}_{i}\;=\;\sum_{j=1}^{k}z_{ij}\lambda_{j}{\bf B}_{j}\;+\; \boldsymbol{\epsilon}_{i}\;\;\;,\;\;\;\boldsymbol{\epsilon}_{i}\sim {\cal N}\left ( {\bf 0}, \sigma^{2}{\bf I} \right)\;\;\;,\;\;\;z_{ij}\sim {\cal N}\left ( 0, 1\right)\;\;\;,

with the vectors {\bf B}_{1},\ldots,{\bf B}_{k} forming an orthonormal set.
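A minimal simulation of this spiked-covariance set-up (with invented values for k, the spike sizes and the noise level) shows the pattern described above: a bulk of sample eigenvalues clustered together plus a handful of separated spikes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 200, 3
spike_coeffs = np.array([5.0, 3.0, 2.0])   # the lambda_j in the model above (illustrative)
sigma = 1.0

# Orthonormal signal directions B_1, ..., B_k via a QR decomposition.
B, _ = np.linalg.qr(rng.normal(size=(p, k)))

# x_i = sum_j z_ij * lambda_j * B_j + isotropic Gaussian noise, as in the model above,
# so the population covariance has eigenvalues lambda_j^2 + sigma^2 along each B_j
# and sigma^2 in every other direction.
Z = rng.normal(size=(n, k))
X = Z @ (spike_coeffs[:, None] * B.T) + sigma * rng.normal(size=(n, p))

sample_eigs = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
print("top 6 sample eigenvalues:", np.round(sample_eigs[:6], 2))
```

The first three sample eigenvalues sit well clear of the bulk, whose spread around sigma^2 = 1 is the finite-sample effect discussed above – exactly the shape of scree plot we are looking for.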

From the mathematical form of the spiked-covariance models we can work out the expected distribution of sample covariance eigenvalues when our data consists of N data points (each of dimension p). A large amount of research has been done in this area over the last 10-15 years. I won’t try to summarize it here, instead I’ll point you to the excellent review by Johnstone, and the original work by Paul. This research allows us to devise methods for the automatic detection of the number of principal components. Such methods have the advantage that they can be automated, i.e. programmed as part of an algorithm. On the practical side you can always “eyeball” the scree plot. For an isolated piece of analysis this is always advisable, as it takes so little time to do.

Also on the pragmatic side, I have often found that simple ‘knee’ detection algorithms work surprisingly well, particularly for the real-world data sets that I encounter as part of my day-to-day work. The simplest of such algorithms involves finding the maximum of a discrete approximation to the second derivative of the ranked eigenvalue plot. That is, we choose k as,

k \;=\; \underset{i}{\mathrm{argmax}} \left ( \lambda_{i+1} + \lambda_{i-1} - 2\lambda_{i}\right )

Improved approaches to ‘knee’ detection are based upon discrete approximations of the curvature. The paper by Satopää et al gives a good introduction to such methods. Again, these simple ‘knee’ detection approaches have the advantage that they can be coded and hence included as part of some automated process.
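A minimal sketch of the simple second-difference detector described above; the toy scree values are invented so that the knee is easy to see, and in practice I would always plot the scree alongside whatever number the detector returns.

```python
import numpy as np

def knee_from_scree(eigenvalues):
    """Pick k as the argmax over i of lambda_{i+1} + lambda_{i-1} - 2 * lambda_i,
    i.e. the maximum of a discrete second difference of the ranked eigenvalues."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]   # largest first
    second_diff = lam[2:] + lam[:-2] - 2.0 * lam[1:-1]
    # second_diff[j] corresponds to i = j + 2 in the 1-based ranking of eigenvalues.
    return int(np.argmax(second_diff)) + 2

# Toy scree: three eigenvalues standing above a slowly decaying bulk.
scree = np.concatenate([[30.0, 14.0, 2.0], np.linspace(1.5, 1.0, 17)])
print(knee_from_scree(scree))   # prints 3 for this example
```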

Note that in situations where the signal eigenvalues (those distinct from the bulk) do contribute the majority of the variance in the original data, selecting the number of PCs on the basis of detecting a ‘knee’ in a scree plot will have the same advantages as selecting on the basis of retaining a fixed percentage of the total variance. Consequently, there is very little reason not to use the scree plot for selection in all cases.

3. Interpretation of the loadings

I have also found that it is not uncommon for PCA to be applied blindly, with the scores, i.e. the values of the new lower-dimensional features, plotted or fed into some downstream process without any further curiosity being applied. Where the leading sample eigenvalue is very large – that is, on a different scale to the other retained principal components or the bulk of the sample eigenvalues – I always take a look at the loadings. This is the case for the S&P500 data discussed above. The loadings tell us the contribution each of the original variables makes to the principal component. The loadings for the 1st principal component of the S&P500 data are shown in the plot below,

[Figure: loadings of the 1st principal component of the S&P500 returns data]

Where the 1st sample eigenvalue is very large (compared to the others) it is not uncommon to see a loading pattern like that above – every loading value is of the same sign (and in this case of comparable size). The large 1st eigenvalue tells us that variation along the 1st principal component is the dominant mode of variation present in the original data. The loading pattern tells us that in this mode of variation all the original variables increase together, or decrease together. Such a pattern is often termed a ‘global mode’, i.e. it is a mode of variation that has largely the same effect globally on all the original variables. There are several scenarios and situations where the presence of a global mode can be naturally explained or is to be expected. For example,

  • Market modes in stock prices. This is where a rising or falling market causes all stock prices to rise or fall together.
  • In gene expression data obtained from model organisms exposed to a large environmental perturbation or insult. Here, for model organisms, e.g. yeast cultures, we can shock the biological system being studied without any ethical concerns, e.g. starve the organism of its primary food/fuel source or other essential nutrients. Consequently we see a system wide response to the starvation.
  • Price sensitivities of a collection of products sold by a retailer in a store. Here we expect price elasticities of products will reflect, to a large part, the economic conditions of the local geography. Consequently, a predominant part of the variation in product elasticities will be due to store-to-store variation in economic conditions and may show up as a global mode.

Where we see a global mode in the loadings, we should ask whether we can identify a credible mechanism behind the global mode. If not, then this should make us cautious about the appropriateness of the 1st principal component and hence the complete decomposition.
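As a minimal sketch of this kind of inspection, the snippet below fits PCA to simulated data containing a ‘market mode’ style common factor and checks whether the first component’s loadings all share a sign; the data, the factor strengths and the sign check itself are illustrative, not a formal test.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated 'market mode': one common factor moves all variables up or down together.
n, p = 300, 50
market = rng.normal(size=(n, 1))
X = 0.8 * market + 0.4 * rng.normal(size=(n, p))

pca = PCA(n_components=3).fit(X)
loadings_pc1 = pca.components_[0]          # loadings of the 1st principal component

all_same_sign = bool(np.all(loadings_pc1 > 0) or np.all(loadings_pc1 < 0))
print("PC1 share of variance:", round(float(pca.explained_variance_ratio_[0]), 2))
print("PC1 loadings all share a sign (global-mode pattern):", all_same_sign)
```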

Finally, it is worth mentioning that if a sparse loading pattern is more naturally expected or convenient, then sparse versions of PCA can be used. The lecture notes by Rob Tibshirani provide a good introduction to sparse PCA.