The term “Real Option” refers to an additional decision possibility that is created in any real-life situation if one extends the scope of permitted decision alternatives. For example, to the decision as to whether to book a vacation now or not to go on vacation at all, one may add the possibility of deciding later whether to go. The real option’s “value” is the additional value that may be created by allowing these additional decision possibilities.
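As a simple numerical illustration (all figures invented), here is a sketch in Python: deferring the decision until the payoff is known allows one to avoid the loss-making scenario, and the option’s value is the difference between the two expected values.

```python
# Hypothetical illustration of real-option value: deciding now vs. keeping
# the option to decide later, once uncertainty about the payoff is resolved.
# All numbers are made up for illustration only.
scenarios = [(-50, 0.5), (150, 0.5)]  # (net payoff if we proceed, probability)

# Decide now: we are committed in every scenario, so value is the plain expectation.
value_decide_now = sum(p * payoff for payoff, p in scenarios)

# Decide later: in each scenario we proceed only if the payoff is positive.
value_decide_later = sum(p * max(payoff, 0) for payoff, p in scenarios)

option_value = value_decide_later - value_decide_now
print(value_decide_now, value_decide_later, option_value)  # 50.0 75.0 25.0
```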
Another slightly geeky post from me 😊 about the PMT, PPMT and similar functions. It is very easy to use the functions in ways that can give incorrect answers. Incorrect answers are also quite easy to overlook in practice, not least as the functions are often embedded within compound calculations or within error-checking statements, such as “IFERROR(PPMT(…),0)”. This blog points out a couple of frequent errors that may arise.
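To illustrate one frequent error, here is a minimal Python sketch (not Excel itself) that reimplements the standard annuity-payment formula underlying PMT; all loan figures are assumed. A common mistake is to mix an annual rate with monthly periods:

```python
def pmt(rate, nper, pv):
    """Periodic payment for a loan (sign convention as in Excel's PMT:
    a positive present value gives a negative payment)."""
    if rate == 0:
        return -pv / nper
    return -pv * rate / (1 - (1 + rate) ** -nper)

loan = 100_000          # assumed loan amount
annual_rate, years = 0.06, 10  # assumed terms

# Correct: monthly rate with monthly periods.
correct = pmt(annual_rate / 12, years * 12, loan)

# Frequent error: annual rate combined with monthly periods,
# giving a payment several times too large.
mixed_up = pmt(annual_rate, years * 12, loan)
```

Wrapping such a formula in IFERROR, as in the blog’s example, would not catch this kind of mistake, since the result is a perfectly valid (but wrong) number.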
I couldn’t resist this slightly geeky comment about a subject that comes up occasionally and to which I have not seen an answer elsewhere, namely: if a financial model calculates interest expense (or income) by using the average of period start and end cash balances (including interest earned, thereby giving a circular reference), is this the same as using continuous compounding applied to the periodic cash flows (excluding interest) only?
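The question can at least be explored numerically. The Python sketch below (all figures assumed) solves the circular average-balance formulation algebraically and compares it with continuous compounding of the opening balance plus an evenly spread cash flow; it shows the two are close for small rates, without claiming they are identical in general.

```python
import math

# Assumed illustrative figures.
r = 0.05       # periodic interest rate
b0 = 1_000.0   # opening cash balance
cf = 100.0     # net cash flow in the period, excluding interest

# (a) Average-balance method with circularity:
#     b1 = b0 + cf + r * (b0 + b1) / 2, solved algebraically for b1.
b1_avg = (b0 * (1 + r / 2) + cf) / (1 - r / 2)

# (b) Continuous compounding: the opening balance grows by exp(r); a cash
#     flow spread evenly over the period accrues a factor (exp(r) - 1) / r.
b1_cont = b0 * math.exp(r) + cf * (math.exp(r) - 1) / r

print(round(b1_avg, 2), round(b1_cont, 2))  # 1153.85 1153.81
```

The two methods agree to second order in the rate, so the difference is small for typical periodic rates but grows as the rate does.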
Since 1989, I have been using risk modelling combined with Monte Carlo simulation to improve the robustness, transparency and accuracy of cash flow valuation models. This powerful approach is still underused for reasons that can be debated elsewhere. Here, I briefly summarise the key reasons that drive the improvement in the valuation when using such methods.
The key benefits and improvements are:
- To highlight biases in the input assumptions and in the calculated output value. Part of the risk modelling process is to determine (or estimate) the position of any specific numerical assumption (i.e. the base case) within its own possible range. For example, the base case assumption for the growth rate may be “reasonable” at first sight, but may in fact be overly optimistic or pessimistic when placed under the scrutiny required to determine the possible range of values for that input. Indeed, as mentioned in the next point, it is generally impossible to have unbiased values.
- To calculate the average output. It is worth recalling that the core concept of value in economics is based on the average. In most cases, calculated outputs will not by default show the average. For example, if a model’s inputs are set to their most likely values, then the model’s output will generally show neither the average nor even the most likely value (the “fallacy of the most likely”, and the more general fallacy of believing that the input cases and output cases of a model are aligned, are discussed extensively in my book Business Risk and Simulation Modelling). The risk modelling process, and the associated calculation of many scenarios during the simulation, will help to establish the average value. Note that this point also highlights that non-risk-based approaches to modelling are generally inherently (or structurally) biased: any quantity for which a single value must be used (as a base case) would in principle be selected at its most likely possible value, so that the resulting model will show neither the average nor even the most likely value of the output.
- To calculate the average when contingent cash flows are present. The most widely described forms of contingent cash flows are options (derivatives) and real options, but such cash flows are generally present in many contract forms, and implicitly in any model with non-linearity in its logic (e.g. IF or MAX functions). Whilst this point is in fact already covered by the previous point (about the calculation of the average), it is worth noting that the application to contingent claim contracts is particularly important, since the accurate valuation of such a claim requires that many scenarios be considered, each of which is representative of the likelihood and magnitude of what may happen and its consequences for value.
- To generate more insight by highlighting business value-drivers. The risk modelling process is a more logically robust and coherent one than is general Excel modelling, and in particular requires an understanding of risk drivers and hence of value drivers (in order to build a proper model which captures the risks). The extra insight generated is often useful to highlight or enhance thinking and establish a more robust and more accurate valuation.
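The “fallacy of the most likely” above can be demonstrated with a small simulation. The Python sketch below uses made-up numbers: a cash flow capped by capacity (a simple non-linearity, a MIN in Excel terms) and a skewed demand distribution. The model run at the most likely demand does not give the average output.

```python
import random

# Sketch with assumed numbers: a tiny Monte Carlo run showing that evaluating
# a non-linear model at the most likely inputs does not give the average output.
random.seed(1)
CAPACITY, MARGIN = 100.0, 5.0

def cash_flow(demand):
    # Sales are capped by capacity: a simple non-linearity.
    return min(demand, CAPACITY) * MARGIN

# Demand is skewed: triangular with min 50, mode 80, max 200 (all assumed).
base_case = cash_flow(80)  # the model run at the most likely input
samples = [cash_flow(random.triangular(50, 200, 80)) for _ in range(100_000)]
mean_output = sum(samples) / len(samples)
print(base_case, round(mean_output, 1))  # the average output exceeds 400
```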
The fact that risk-based approaches are not frequently used (indeed there is a high resistance to them from many valuation practitioners in my experience) is perhaps surprising but can be debated elsewhere.
It never ceases to amaze me when I open an Excel worksheet and see VLOOKUP or HLOOKUP functions. These functions should basically never be used! Admittedly, in simple models (e.g. those containing two columns of data with the lookup values placed in the left column), the use of V- or HLOOKUP would be marginally quicker to implement than the INDEX-MATCH alternative. However, it so often arises that initial models are subsequently developed or added to, so that these inappropriate function choices become embedded in larger models that are then hard to rectify without significant rework. My rule of thumb would be to use a V- or HLOOKUP only for some quick-and-dirty analysis that is so raw that the workbook file would never be saved. One should never find these functions in a saved workbook! In all other cases, the benefits of their use are outweighed by their drawbacks.
The VLOOKUP (and HLOOKUP) functions have several problems associated with them:
- A lack of flexibility (e.g. column or row numbers need to be hard-coded). The only way to avoid hard-coding the column or row number of the data to be looked up is to use a MATCH function. Doing so means that the function is then used in conjunction with MATCH, so can be no better than INDEX-MATCH, even ignoring the other issues.
- Their error-prone nature (e.g. if a row or column is inserted before one that is referred to with a hard-coded index in the formulae, the formulae will return values from an incorrect row or column).
- They are difficult to audit, since the dependency and precedent paths are two-dimensional ranges, not individual cells.
- File size is typically larger (Excel’s behind-the-scenes formula paths are longer, creating a larger electronic file).
- They are computationally inefficient when a reference item needs to be looked up several times (e.g. looking up a reference value in the first column in order to find its row number, and then returning the corresponding values from both column 4 and column 6: if two VLOOKUPs are used, the row number for the reference figure is looked up twice). By using MATCH (in a separate cell that then drives an INDEX function), this first lookup step need be conducted only once.
- They are less flexible, since the data set referred to cannot be split into separate components so that some parts could be moved to a more appropriate part of the model (e.g. when a model needs to be adapted or restructured for another purpose).
In almost all cases, an INDEX-MATCH combination will be far superior. A discussion of this with practical example models is contained in Chapter 25 of my recent book Principles of Financial Modelling.
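The efficiency point about repeated lookups can be mimicked outside Excel. In this illustrative Python sketch (made-up data), the position is found once (the MATCH step) and reused for several retrievals (the INDEX steps), rather than searching once per value as two VLOOKUPs would.

```python
# Illustrative data only: a small lookup table with a key column and two
# data columns, analogous to an Excel range used with INDEX-MATCH.
table = [
    ["apples",   1.20,  50],
    ["bananas",  0.80, 120],
    ["cherries", 3.50,  15],
]
keys = [row[0] for row in table]

row = keys.index("bananas")                   # the MATCH step, performed once
price, stock = table[row][1], table[row][2]   # two INDEX steps reusing it
print(price, stock)  # 0.8 120
```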
To simplify a recent post about the use of the FVSCHEDULE function to return the total and compound average growth rates (or asset returns), this demonstrates the point with screenshots.
In the figure below, cells E6 and E7 contain the FVSCHEDULE function that directly links to the data set (in column C) to calculate the growth rates over the 10 periods.
Without this function, one would generally need to re-establish the underlying actual values, shown through the additional calculations that need to be created in the figure below.
Thus, the use of the function avoids the need to perform the reconstruction calculations, which improves the speed and flexibility of adapting the results if additional data is added (since only the range referred to by the function needs to be extended).
Monte Carlo Simulation is simply the calculation of a model many times using an automated process that randomly samples the possible values for the model’s inputs. The output of a simulation will therefore indicate the likelihood of any possible value (or range of values). In principle (for the true likelihood of any output value to be determined), the random samples used for each input are drawn from a process (distribution) which corresponds to the true nature of the risk (or uncertainty) of that input.
The first intensive use of MCS was in the 1940s by scientists working on nuclear weapons projects at the Los Alamos National Laboratory. There it was used simply as a numerical method to evaluate complex integrals: the value of the integral of a function over an interval is equal to the average value of the function on that interval multiplied by the length of the interval. Thus, by calculating the value of the function at various random points in the interval and averaging these, the integral can be estimated.
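The integral-estimation idea can be sketched in a few lines of Python (an illustration of the principle, not the historical code):

```python
import random

# Monte Carlo integral estimation: the integral of f over [a, b] is
# approximately (b - a) times the average of f at random points in [a, b].
random.seed(0)

def f(x):
    return x * x  # the integral of x^2 over [0, 1] is exactly 1/3

a, b, n = 0.0, 1.0, 200_000
estimate = (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n
print(round(estimate, 3))  # close to 0.333
```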
In business applications, MCS can be used to assess the risk in projects or business cases, and to value or evaluate businesses or financial products and contingent contracts. The value of the information depends on the quality of the underlying risk model.
Note that one should not confuse the risk modelling process with the simulation – the latter simply calculates the many possible outputs of the risk model, whereas the risk modelling process requires risk identification, mapping, quantification and so on. When correctly used, the risk modelling/simulation combination should provide insight, as well as support a decision process (e.g. by helping to assess likelihoods of outcomes), albeit one whose ultimate use also depends on the risk tolerance of decision-makers (and is therefore in general beyond the scope of the model or of objective analysis). Of course, risk modelling/simulation can be incorrectly implemented (whether intentionally or not) and may then be used (intentionally or not) to justify incorrect decisions (e.g. those which in reality exceed the appropriate level of risk to take, even if this has been hidden or not made explicit).
My book Business Risk and Simulation Modelling covers these topics in much more detail.
As part of our occasional series on lesser-known Excel functions, this post highlights a key use of the FVSCHEDULE function in financial modelling.
The FVSCHEDULE function gives the future value of a quantity after applying a series of growth rates to it. The required inputs are an original value (or “Principal”), and a schedule of growth, return or interest rates. A key area of application is to work out the average compounded growth rate for a multi-period data set (i.e. where the growth rate in each period is different).
In the case that the periodic growth rates are based on the logarithmic method (i.e. are the logarithm of the ratio of the periodic values), the average of such growth rates can be used directly. However, where the growth rate is serial growth (not logarithmic), one would need to add 1 to each growth rate, multiply all the figures together, and then take the appropriate root for the average annual compound figure (e.g. the square root for the annualised average over a two-year period). This process can be shortened by use of the FVSCHEDULE function: by setting the original (Principal) value to 1, the function returns the product of “1 plus the growth rate” in each period, so that subtracting 1 from this product gives the total growth. This increases the flexibility of the models and reduces the size of the calculation area, and hence is a useful tool (e.g. if a new period is added to the data set, the only change required is to extend the Excel range used in the formula, rather than amending the formula itself).
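As a cross-check outside Excel, the following Python sketch (with assumed growth rates) reproduces the logic: FVSCHEDULE with a principal of 1 is just the product of “1 plus the growth rate” over the periods, from which the total growth and the compound average follow directly.

```python
# Sketch of the FVSCHEDULE logic described above, with assumed rates.
rates = [0.10, -0.05, 0.08]  # serial periodic growth rates (made up)

fv = 1.0
for r in rates:
    fv *= 1 + r              # equivalent to FVSCHEDULE(1, rates) in Excel

total_growth = fv - 1                  # growth over the whole horizon
cagr = fv ** (1 / len(rates)) - 1      # compound average growth per period
print(round(total_growth, 4), round(cagr, 4))
```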
Readers can experiment with this for themselves in Excel (the function is easy to use; it is the lack of awareness of it that we are highlighting here). Specific example spreadsheets are also available in Chapter 20 of my recent book Principles of Financial Modelling.
ValuationDecisions plans to start regular tweets on topics relating to business decision support using quantitative methods and modelling. Watch this space!
I had the pleasure of attending The Decision Quality Conference 2018 in London last week. This provided a good reminder of some of the close linkages between risk assessment and Decision Quality.