How rethinking questions can deliver fresh RM insights

Tom Bacon finds inspiration in a blog post by Levi Brooks and applies it to the field of airline revenue management

Certain business problems are recurrent – and we are geared up to analyse the data in the usual ways time after time. We are ready to ‘do it again’, likely justifying our current procedures or, perhaps, recommending a small tweak to our modelling. Since we are ready to respond and can therefore provide a quick answer, our organisation loves it! Boy, aren’t we smart! Sometimes, however, we are much too quick to jump to our usual conclusion. Sometimes we need to work harder on ‘The Question’ – is there another way to formulate it so that we learn something new this time?

In his blog post on 99u, Levi Brooks, the co-founder and creative director of Use All Five, an LA-based digital design, development and strategy agency, suggests that brainstorming the question itself can add new value.

Let’s apply this thinking to airline revenue management, where a frequent question is: Why did we miss the forecast?

Most of the time, we are ready for this – ready to analyse the unusual blip in actuals that the forecast missed. Let’s see what happens, however, if we reformulate the question. Here are some examples:

    Question the definition

  1. Where did we get ‘the forecast’? Where did we get the ‘actual’? Are both unambiguous? Are both sources reliable? Has the definition of ‘actual’ changed over time, incorporating new phenomena or behaviours? Should ‘actual’ be disaggregated, with multiple models capturing multiple underlying trends?

    Question the impact

  2. Does the ‘forecast’ actually matter in the pricing or inventory algorithm? Ideally, our models are robust and do not rely on perfect forecast accuracy; the forecast variance may not have changed the recommended price or inventory allocation much. That robustness is an underlying goal of a useful model – the recommendation should not be so sensitive to imperfect forecasts that it loses its value (the toy sketch after this group of questions illustrates the idea).

  3. Similarly, ‘forecast’ accuracy would ideally improve as we get closer to the departure date. By then, however, the aircraft could already be oversold or filled with too many low fares – missing the forecast two weeks out is likely too late to make the proper change. Models should recognise the uncertainty in close-in demand by adjusting inventory allocations well in advance.
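
To make items 2 and 3 concrete, here is a minimal sketch – a hypothetical two-fare-class example using Littlewood’s rule, not any carrier’s production model. The capacity, fares and demand parameters are all invented for illustration; the point is to see how much (or how little) the seat-protection recommendation moves when the demand forecast is off by ten per cent.

```python
# A minimal sketch, assuming a single-leg, two-fare-class cabin and normally
# distributed high-fare demand. It applies Littlewood's rule; capacity, fares
# and demand parameters are invented for illustration, not any carrier's data.
from statistics import NormalDist

CAPACITY = 180
FARE_HIGH, FARE_LOW = 400.0, 150.0   # assumed fares

def protection_level(mu: float, sigma: float) -> float:
    """Protect y seats for high-fare demand: P(D_high > y) = FARE_LOW / FARE_HIGH."""
    return NormalDist(mu, sigma).inv_cdf(1 - FARE_LOW / FARE_HIGH)

base_mu, sigma = 60.0, 15.0          # assumed high-fare demand forecast
for label, mu in [("base", base_mu), ("-10%", 0.9 * base_mu), ("+10%", 1.1 * base_mu)]:
    y = protection_level(mu, sigma)
    print(f"{label:>5}: protect {y:5.1f} seats, low-fare booking limit {CAPACITY - y:5.1f}")
```

On these assumed numbers, a ±10% miss on the mean forecast shifts the protection level by only about six seats on a 180-seat aircraft, and expected revenue is flatter still near the optimum – which is the sense in which a good model is robust to forecast error.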

    Question the statistics

  4. Also, the ‘forecast’ model has a known and measurable historical error. How far off was the forecast? Was the error in the expected range? Reviewing results over a longer time period, do the errors correspond to the expected distribution? Statistically, what is the right time period to be reviewing errors – a week? A month? (A simple error-band check is sketched after item 7.)

  5. Often, too, we overlook error that is a direct result of increased granularity. Are there offsetting errors when a highly granular forecast is viewed at a more aggregated level? Forecasting individual flight performance is tougher than forecasting the whole market, and forecasting all low-fare demand is easier than forecasting one specific low fare. (The same sketch below shows how per-flight errors can largely cancel at the market level.)

  6. All models have ‘known unknowns’ – variables that cannot be forecast but are known to be important factors. A snowstorm, for example, may drive more no-shows, or a desperate competitor may suddenly lower its fares. Rather than trying to predict these factors, the model should expressly incorporate the uncertainty and allocate inventory appropriately.

  7. Do we understand the value of analyst intervention in the forecast? Tracking model forecast accuracy versus the value-add of analyst intervention allows companies to refine forecast processes.
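
The statistical checks in items 4 and 5 can be expressed in a few lines of code. The sketch below uses invented booking numbers purely for illustration: it flags any flight whose error falls outside two standard deviations of the historical errors, then compares the per-flight error with the error of the aggregated total.

```python
# A minimal sketch of the checks in items 4 and 5, on invented numbers.
import statistics

# Hypothetical (forecast, actual) bookings for six flights in one market.
flights = [(120, 131), (95, 88), (140, 150), (110, 101), (80, 86), (100, 92)]
errors = [actual - fcst for fcst, actual in flights]

# Item 4: flag any error outside +/- two standard deviations of history.
sigma = statistics.stdev(errors)
for (fcst, actual), err in zip(flights, errors):
    flag = "REVIEW" if abs(err) > 2 * sigma else "ok"
    print(f"forecast {fcst:3d}  actual {actual:3d}  error {err:+3d}  {flag}")

# Item 5: per-flight errors are large in percentage terms but partly offset
# at the market level.
flight_mape = statistics.mean(abs(e) / f for (f, _), e in zip(flights, errors))
total_fcst = sum(f for f, _ in flights)
total_actual = sum(a for _, a in flights)
print(f"per-flight MAPE:    {flight_mape:.1%}")
print(f"market-level error: {abs(total_actual - total_fcst) / total_fcst:.1%}")
```

On these made-up numbers, the per-flight errors average around 8%, yet they largely offset, leaving a market-level error under 1% – exactly the granularity effect item 5 describes.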

    Question the process

  8. Why was the forecast error highlighted this time? Do we have a consistent approach to forecast misses? Do we monitor all large variances? Do we have a track record of improving results when we perform these post-analyses?

  9. How does our model self-correct? Will the model automatically adjust to the forecast variance and update coefficients and forecast trends? How can we improve the self-correction process? (One common mechanism is sketched below.)
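
As one hypothetical example of self-correction – simple exponential smoothing, standing in for whatever the real RM system does – each observed actual pulls the next forecast part of the way toward it:

```python
# A minimal sketch of one self-correction mechanism: simple exponential
# smoothing. The smoothing weight and demand numbers are assumptions, not
# values from any production RM system.
ALPHA = 0.2  # weight on the newest observation

def update_forecast(forecast: float, actual: float) -> float:
    """Blend the miss back into the forecast: F' = F + alpha * (A - F)."""
    return forecast + ALPHA * (actual - forecast)

forecast = 100.0
for actual in [115, 118, 120, 117]:   # a sustained upward demand shift
    forecast = update_forecast(forecast, actual)
    print(f"actual {actual:3d} -> next forecast {forecast:6.1f}")
```

The smoothing weight trades responsiveness against stability: a higher alpha adapts faster to genuine demand shifts but also chases one-off blips.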

    Reverse the question

  10. Not ‘why did we get it wrong’, but ‘why did we get it right’? Perhaps asking the question in reverse can provide new insight. Were there similar blips or turning points that we were able to capture properly? Why did we get it right in those situations?

There is a danger in responding to business questions in the same way over and over again. The goal of any such introspection should be to help improve future performance, but if we apply the same analytical techniques each time, we are unlikely to learn and improve in fundamental ways. It pays to periodically step back and “brainstorm the question”: rethinking the question can uncover insights that a more standard response often overlooks.

Tom Bacon has been in the airline business for 25 years and is now an industry consultant in revenue optimisation. Questions? Email Tom or visit his website, Make Airline Profits Soar.
