Merchandise Planning With Statistically Controlled Risk

Even in this age, when “superforecasters” sit at the heart of government, there is one thing we know about our forecasts: they will be wrong!

As I sit down to write this article the news is on the radio, and the headlines are warning that the coronavirus Covid-19 is nearing pandemic status. Meanwhile our government here in the UK, which promised only a matter of weeks ago that a No Deal Brexit was the most unlikely of events, has now adopted it as its default bargaining position. Both of these developments are extremely likely to cause serious disruption to retailers’ supply chains and to play havoc with any existing forecasts made by merchandise planners.

When we make and rely on sales forecasts for volatile merchandise, we need to be able to review them on a timely basis, ideally with system support that simplifies the task of deciding which forecasts need attention and which still look healthy given the trend of actual sales.

An automatic, statistically based analysis of the variance between actual and forecast performance can support the merchandiser in deciding where changes need to be made.

An outline of the process is shown below.

The key element here is that the system should be analysing the variances using robust statistical methods and presenting automatically calculated, revised forecasts to focus the merchandiser’s attention where it can most profitably be used.

TPF’s Fast Track model is a good example of how such a system can be implemented cost-effectively, so we will use it to illustrate the process in more detail.

1. Setting the Seasonal Forecasts by Line

This is normally done as part of the range planning process, and the line (style/colour) forecast is derived from that. To add real value, though, we also need to set up some additional information for each line.

First, we need to tell the system which PD pattern to use. PD stands for Percent Done: the seasonality profile applied to the initial forecast to calculate the expected sales for any given week.

Next we need to assign an instance factor. In simple terms this is the number of items we expect in a given transaction. In retail this is normally one, but for a wholesaler it would be the expected average number of units of the item per order. It is used to modify the significance of the variance in the statistical calculation: we can be more confident in the validity of the variance when we are looking at one instance per transaction than when we are looking at, say, 20 per transaction.

Finally we tell the system what the minimum production/order batch size is. This acts as a control on the suggested forecasts that the system will create.
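
To make these inputs concrete, here is a rough Python sketch of the information held against each line (the structure and field names are my own illustration, not Fast Track’s internals):

```python
from dataclasses import dataclass

@dataclass
class LineSetup:
    line_id: str            # style/colour identifier
    season_forecast: int    # total forecast units for the season
    pd_pattern: str         # name of the Percent Done curve to apply
    instance_factor: float  # expected items per transaction (usually 1 in retail)
    min_batch: int          # minimum production/order batch size
```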

2. Defining the Seasonality Curves

In this process we define the set of curves from which we can choose when applying seasonality to the line-level forecasts.

Here we can enter either sales data or index-type data, which is converted into cumulative Percent Done patterns. These tell us what percentage of the seasonal budget should have been achieved by the end of any given week.
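
For illustration, the conversion from raw weekly figures to a cumulative Percent Done pattern can be sketched in a few lines of Python (my own example, not the Fast Track code):

```python
def to_pd_pattern(weekly_values):
    """Convert weekly sales (or index) figures into a cumulative
    Percent Done curve: the share of the season completed by each week."""
    total = sum(weekly_values)
    pattern, cumulative = [], 0.0
    for v in weekly_values:
        cumulative += v
        pattern.append(cumulative / total)  # e.g. 0.50 means 50% done
    return pattern

# A four-week toy season: weekly sales of 10, 30, 40, 20 give
# PD values of 0.10, 0.40, 0.80, 1.00.
print(to_pd_pattern([10, 30, 40, 20]))
```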

3. Applying the Curves

This process is as simple as applying the cumulative Percent Done value for any given week to the total seasonal forecast: if the seasonal forecast were 100 units and we expected to have done 50% by week 8, we would calculate an expected cumulative sales figure of 50 units.
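
In code the whole step is a single multiplication; a minimal sketch:

```python
def expected_sales_to_date(season_forecast, percent_done):
    """Expected cumulative sales given the week's cumulative Percent Done."""
    return season_forecast * percent_done

print(expected_sales_to_date(100, 0.50))  # week 8 of the example -> 50.0
```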

4. Setting the Statistical Parameters

The statistical method applied here uses the Kolmogorov-Smirnov test to assess the significance of a variance between actual and expected sales. It is a two-tailed method, which means we can set the parameters to be more sensitive to variances above the forecast than below it, or vice versa.

We select the required confidence level; here we will use 95%, which tells us to use a Kolmogorov factor of 1.2.

This factor is then used in the calculation to assess the significance of the variance when we review actual weekly sales.
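
Fast Track’s exact formula is not reproduced here, but a Kolmogorov-style check along the following lines behaves as described: the deviation between the actual and expected cumulative proportions of the season is compared against a threshold of the Kolmogorov factor divided by the square root of the effective number of observations. A Python sketch, under my own assumptions about the details:

```python
import math

def significance_ratio(actual_std, expected_std, season_forecast,
                       instance_factor, k_factor=1.2):
    """Kolmogorov-style check on the season-to-date (STD) variance.

    Returns the deviation as a multiple of the significance threshold;
    values beyond +/-1 indicate a statistically significant variance.
    """
    # Deviation between actual and expected cumulative proportions of the season.
    d = (actual_std - expected_std) / season_forecast
    # Effective observation count: units sold scaled down by the expected
    # items per transaction, so multi-unit transactions reduce confidence.
    n = max(actual_std / instance_factor, 1.0)
    return d / (k_factor / math.sqrt(n))
```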

5. Importing The Weekly Sales Data

In the Fast Track model, the data is cut and pasted from a downloaded file into a tab, from which it is looked up and used in the control sheet.

6. Assessing The Variances

Using the Kolmogorov-Smirnov factor, the model now compares unit sales performance to date with expected performance and calculates the significance of the variance, assigning each style a value between -4 and +4 based on the number of deviations calculated. It also provides a trend for the previous three weeks. An algorithm then decides, based on preset parameters, whether the system needs to generate a new forecast or whether actual performance remains within an acceptable range of the existing one.

In the screen below, STD stands for Season to Date.
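
The decision logic might be sketched as follows; the cut-off values and scaling here are placeholders of mine, not the model’s preset parameters:

```python
def variance_score(ratio):
    """Clamp the signed significance ratio onto the -4..+4 reporting scale."""
    return max(-4.0, min(4.0, ratio))

def needs_new_forecast(score, trend_3wk, limit=1.0):
    """Flag a line for re-forecasting when the STD variance is significant
    and the last three weeks are trending in the same direction."""
    return abs(score) >= limit and score * trend_3wk > 0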

7. Amending The Forecasts

The system also checks whether the new forecast is sufficiently different from the previous one to justify ordering a new batch. If so, the “Batch Forecast” is populated.

Finally, the user decides whether or not to accept the suggested orders. If so, they enter the value in “Forecast User” and the new forecast overwrites the existing one from the following week.
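
One possible reading of the batch check, as a sketch (the rounding rule is my assumption):

```python
def batch_forecast(new_forecast, previous_forecast, min_batch):
    """Suggest a revised forecast only when the change covers at least one
    minimum batch; otherwise the existing forecast stands."""
    change = new_forecast - previous_forecast
    if abs(change) < min_batch:
        return None  # variance too small to act on
    return previous_forecast + round(change / min_batch) * min_batch
```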

The rationale behind the process

In retail we are constantly striving to avoid the twin evils of markdowns and stock-outs. One of the keys to managing this successfully is the ability to spot where actual sales are diverging from our forecasts quickly enough to take remedial action. This is particularly true when the trading environment is volatile for political or social reasons. The remedial action could be initiating a small markdown early, or re-ordering a line that is performing better than expected.

Whichever way our sales are trending, we need reliable decision-support systems to reduce the workload on planners and point them towards the products that will most benefit from their attention.

If you would like to know more about TPF’s Fast Track model, please send me an email.
