Project Finance operational models

11th December. Written by Mehdi Assadi-Moghadam, Matthew Bernath and Lance Rubin

Introduction to the co-author

Matthew Bernath’s (“Matthew”) introduction can be found in a prior article using this link.

Mehdi Assadi-Moghadam (“Mehdi”) has 12 years of experience in corporate finance transactions and an additional 10 years of data-driven application development experience.

Mehdi bridges the disciplines of corporate finance and data analytics and has held senior roles in corporate finance and business development for technology companies in the US and Canada.

He has used data and modelling to help clients understand their competitive environment, discern emerging trends, and support strategic decisions.

His technical experience includes advanced Excel, VBA, SQL, Power BI (including Power Pivot, Power Query, DAX and M Query Language), Alteryx, Tableau and Python.

His domains of experience and interest include Renewable Energy, Canadian Pharmaceuticals, Pulp & Paper, Mining, Blockchain, Cryptocurrencies, and Decarbonization of the Economy.

Background

Matthew recently wrote a great blog on Project Finance (“PF”) operational models, so rather than simply repeat what has already been written, we decided to take a slightly different angle with Mehdi and bring in the perspectives of the financier, the financial modeller and the investment funds that invest in project finance assets.

As Matthew mentions in his article, most project financiers will agree that a project finance model is one of the most complicated financial models to build. But what happens when a project reaches Financial Close? In most cases, the model then takes on a life of its own as it becomes an operational financial model!

This is when things can get somewhat out of control, especially given the common assumption that any Excel model can simply be adjusted for another purpose because it is so easy to do. This is, however, a dangerous exercise, as the Financial Close model is built for a different purpose (raising capital and understanding a deal), with the final aim being to reach financial close. Ruby Lui also discusses this in her article on deal modelling vs other modelling.

However, post ‘Financial Close’, construction and operations will commence. Debt is drawn down during construction and paid back during operations, and the project earns revenue and incurs expenses. Covenants and other reporting requirements are put in place to compare the financial close model to the actual results.

Matthew goes on to explain some steps for converting a Financial Close model into an Operational Model, despite the risks of doing so:

  • Hard-code inputs that cannot be changed, e.g. the sculpted debt repayment schedule.

  • Make sure things that can be changed can be changed easily (e.g. wind data or power generated, sales, traffic, etc.).

  • Make sure there is a proper and well-documented handover process.

  • Ensure key covenants that must be monitored are monitored.

  • Check that functionality is intact, e.g. the cash flow waterfall and ratios are correctly calculated.

  • Build dashboards for operational tracking; these are also useful for management reporting.

  • Conduct an operational Financial Model audit to gain further confidence. Whilst it might seem like overkill, an independent audit of the model can be extremely worthwhile and an easy decision to make, especially when conducted by a professional financial modeller.

  • Make the Operational Financial Model 'decision-useful'; this is the catch-all that helps the different stakeholders.

Mehdi takes a slightly different perspective where the operational model is not built from the Financial Close model but built from the ground up operationally and then the Financial Modelling aspects are grafted onto those operations.

This is due, in part, to the complexity of the projects, which involve large sums of capital, are highly structured, and have cashflows projected over many periods, e.g. a quarterly forecast over 30 years. There is, however, an overlooked reason for their complexity: PF models are rarely built by a single person or team. Very often, the financial modelling is grafted onto the operational model of the project.

A typical case is a solar energy project. The project engineer, who is knowledgeable about seasonal insolation patterns and the capacity loss of PV panels, will build a model projecting the energy produced. The project promoter will then layer onto this model the economics of feed-in tariffs and maintenance costs over 30 years.

Finally, the financial advisor to the project will add in analyses to optimize the project and make it more “sellable.” The final Excel file can be large, slow to open, slow to calculate, overwhelmingly complex, and difficult to audit for errors because of a structure reminiscent of a plate of spaghetti. In this context, using the model becomes a challenge, especially for the financial analyst considering making an investment in the project or any auditors performing an annual valuation of the assets in the project.

The result is often a complex (read: messy) co-mingling of operational and financial data that, although meant to help with decision making, actually hinders it.

So, what happens then, and how do we cope with this?

What practical steps can people take to improve the operational model?

While we cannot go back and re-create the model from scratch, there are several steps we can take to make hybrid Operational-Financial PF models easier and faster to work with.

This issue of speed is not only important because of the adage that time is money, but also because slow Excel files exacerbate the human brain’s already poor error-detection abilities. According to usability engineering expert Jakob Nielsen:

  • 0.1 second is about the limit for having the user feel that the system is reacting instantaneously.

  • 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay.

  • 10 seconds is about the limit for keeping the user's attention focused. For longer delays, users will want to perform other tasks while waiting for the computer to finish, i.e., they risk becoming distracted.

Checking the logic and mechanical accuracy of any model requires concentration and attention. This attention evaporates quickly when it takes more than one second to move between worksheets.

While our attention is fixed on the “Calculating” message in Excel’s status bar, our cognition struggles to hold onto the last reference we were linking or the previous two formulas we were stepping through.

This is where we either invest more energy to check and double-check the model or, short of time and budget, risk unpleasant surprises down the road.

There are, however, three categories of actions we can take to make life with these messy and slow models easier:

  1. Changing how we handle the Excel files we receive.

  2. Removing any deadweight from the model.

  3. Optimizing how the model calculates and recalculates.

Changing how we handle the Excel files we receive

Very often, when we receive an operational PF model or a hybrid PF model, we double click and grow alarmed as the seconds go by and Excel is still loading the model. We can avoid unnecessary time lost, as well as stress, by developing a few habits when we start working with an unfamiliar PF model.

Open the model in Safe Mode

This is done easily enough by holding Ctrl while double clicking the file. Excel starts, but in doing so, it bypasses all the files in the various startup folders (such as XLStart) and skips loading the toolbar file (Excel.xlb).

When you run Excel in Safe Mode, Excel does not open add-ins or other startup files, and it does not run any VBA macros that may be triggered on file open. It is a clean, no-frills mode of operation with many options unavailable. This is perfect for the first time you are exploring an unfamiliar PF model.
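If Ctrl-clicking is awkward, the same clean start can usually be had from the Run dialog (Win+R) using Excel’s documented /safe start-up switch, and then opening the model from File > Open inside that session:

    excel.exe /safe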

Open the model in a new instance of Excel

Very often we will be working with multiple Excel workbooks open. When we open a file that hangs, resorting to killing the culpable Excel process via Task Manager will also close all other workbooks because they occupy the same instance of Excel in the computer’s memory. This is an incredibly frustrating way to lose work done in other workbooks and projects.

Instead, open a new instance of Excel and open the new file from within this new instance. You can open a new instance by holding down Alt as you launch Excel. You can verify that there are two instances running by checking Task Manager.
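Alternatively, on a typical installation, Excel’s /x start-up switch forces a brand-new instance in its own process when launched from the Run dialog (Win+R):

    excel.exe /x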

The value of this tactic becomes obvious when you have one instance recalculating a huge workbook or calculating sensitivity tables in a complex model: you can watch that instance of Excel consuming CPU cycles at the maximum allowed rate while you continue working in other workbooks with no interruption or slowdown (depending on your PC’s power, of course).

If ever an offending Excel file needs to be shut down because it hangs, only workbooks open in that instance will be closed. Other workbooks will remain unscathed. 

Give Excel more processing bandwidth

Often, standard-issue laptops with corporate IT policies will limit the amount of CPU bandwidth available for a calculation task in Excel. You can observe this when the status bar shows “Calculating” non-stop, but Task Manager shows Excel is only using 25% of the CPU. This constraint can be changed through Task Manager by right-clicking on the Excel process and setting its priority to High or Realtime.

This maximizes the amount of processing power available for tasks in Excel such as sheet calculation and the ever-offending sensitivity tables.
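For those who prefer not to dig through Task Manager each time, the same effect can be scripted. The sketch below is only an illustration: it uses WMI from VBA to raise the priority of every running Excel process to High (128), and should be run before kicking off a long recalculation.

    Sub RaiseExcelPriority()
        ' Illustrative sketch: ask Windows (via WMI) to raise the priority of all
        ' running Excel processes to High (128). Realtime (256) is best avoided.
        Dim svc As Object, proc As Object
        Set svc = GetObject("winmgmts:\\.\root\cimv2")
        For Each proc In svc.ExecQuery("SELECT * FROM Win32_Process WHERE Name = 'EXCEL.EXE'")
            proc.SetPriority 128
        Next proc
    End Sub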

Removing dead weight from the model

Often, models we receive (or create) are templates or similar models created long ago and recycled over numerous projects. The result is a bloated model with lots of hidden junk (and attendant risks of errors) and potential performance bottlenecks.

Excess Used Ranges

Although it may appear that a worksheet’s Used Range is only a few hundred rows and perhaps 100 columns, the last used cell is not the last cell that contains data or a formula, but the last cell with any formatting, including a non-standard column width or row height.

It suffices to select an entire row and change its height to suddenly extend the Used Range to over sixteen thousand columns. It is not unusual for a single worksheet whose Used Range includes 1.8 billion empty cells to bloat a .xlsx file with an additional 15 MB of “data”.
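One way to strip this excess is with a short VBA routine. The sketch below is illustrative only and assumes it is run on a copy of the model with unprotected sheets: it finds the last cell that actually holds data or a formula on each worksheet, deletes every row and column beyond it, and saves the workbook so Excel rebuilds the Used Range.

    Sub TrimUsedRanges()
        ' Illustrative sketch: delete the formatted-but-empty rows and columns
        ' beyond the last cell that really holds data or a formula.
        Dim ws As Worksheet, lastRow As Long, lastCol As Long
        For Each ws In ActiveWorkbook.Worksheets
            lastRow = 1: lastCol = 1
            If Application.WorksheetFunction.CountA(ws.Cells) > 0 Then
                lastRow = ws.Cells.Find("*", LookIn:=xlFormulas, _
                              SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row
                lastCol = ws.Cells.Find("*", LookIn:=xlFormulas, _
                              SearchOrder:=xlByColumns, SearchDirection:=xlPrevious).Column
            End If
            If lastRow < ws.Rows.Count Then ws.Rows(lastRow + 1 & ":" & ws.Rows.Count).Delete
            If lastCol < ws.Columns.Count Then ws.Range(ws.Cells(1, lastCol + 1), _
                              ws.Cells(1, ws.Columns.Count)).EntireColumn.Delete
        Next ws
        ActiveWorkbook.Save    ' the Used Range is only reset when the file is saved
    End Sub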

Unused number formats, styles, and names

In some extreme cases, large volumes of number formats and styles can not only reduce performance but impede the normal functioning of Excel.

As an example, a .xlsx file with over fifty thousand unused number formats and styles held up a project as analysts struggled to understand why worksheets could not be copied to or from the workbook.

Other performance penalties for “hidden” junk are observed in large volumes of Names and Named Ranges. Not all names are visible in Name Manager, leading to models with thousands of names that contain errors, links to other workbooks and files, or formulas that are no longer pertinent. Not only can this junk take up disk space and memory, but Names are recalculated every time a formula that refers to them is calculated, leading to longer calculation times.

Depending on the volume of junk, the clean-up can be done manually. Sometimes, manual clean-up is not possible because user actions in Excel trigger long recalculation cycles. In these cases, the simplest recourse is to do the clean-up via VBA or an Excel Add-in that does housekeeping tasks programmatically.
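As a rough, hypothetical sketch of what such a programmatic clean-up might look like, the VBA below removes defined names whose definitions are broken (#REF!) or point at other workbooks, and deletes custom cell styles. Because deleting names can break formulas that still use them, it should only ever be run on a copy of the model, with the Immediate window output reviewed afterwards.

    Sub CleanNamesAndStyles()
        ' Rough sketch: remove broken or externally-linked defined names and
        ' custom styles. Run on a copy only; review what is deleted.
        Dim i As Long, nm As Name, st As Style
        On Error Resume Next                 ' some built-in items refuse to be deleted
        ' Loop backwards so deletions do not shift the items still to be checked
        For i = ActiveWorkbook.Names.Count To 1 Step -1
            Set nm = ActiveWorkbook.Names(i)
            If InStr(nm.RefersTo, "#REF!") > 0 Or InStr(nm.RefersTo, "[") > 0 Then
                Debug.Print "Deleting name: " & nm.Name & " -> " & nm.RefersTo
                nm.Delete
            End If
        Next i
        For i = ActiveWorkbook.Styles.Count To 1 Step -1
            Set st = ActiveWorkbook.Styles(i)
            If Not st.BuiltIn Then st.Delete
        Next i
        On Error GoTo 0
    End Sub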

Save as .XLSB

While not really dead weight, the file format itself can affect performance: save the model as an Excel binary file, or .xlsb. Data and code inside an .xlsb file are stored in binary, while the typical .xlsx format stores data as XML, which takes longer to parse on opening.

Saving a file as .xlsb can lead to load and save times which are twice as fast, and the Excel binary file uses noticeably less disk space. The impact is more dramatic where very large .xlsx files are concerned.
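The conversion is simply File > Save As and choosing “Excel Binary Workbook”, or, as a minimal VBA sketch (assuming an .xlsx source file), something like:

    Sub SaveAsBinary()
        ' Minimal sketch: save the active workbook as a binary (.xlsb) copy.
        Dim newName As String
        newName = ActiveWorkbook.Path & "\" & Replace(ActiveWorkbook.Name, ".xlsx", ".xlsb")
        Application.DisplayAlerts = False                      ' suppress the overwrite prompt
        ActiveWorkbook.SaveAs Filename:=newName, FileFormat:=xlExcel12   ' xlExcel12 = .xlsb
        Application.DisplayAlerts = True
    End Sub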

Optimizing how the model calculates and recalculates

The actions suggested above focus on changing how we handle and clean up messy PF models. But once opened and cleaned of junk, a complex PF model can still be incredibly slow.

This final performance bottleneck comes down to calculation and, to the surprise of many, it is not only the formulas that are used but also the architecture of the model that can make a dramatic difference in how responsive Excel is when working in a PF model. To understand why, we must understand how Excel optimizes the calculation tasks it must perform.

Excel tries to be efficient

Excel is designed to calculate the minimum number of cells possible. Its smart recalculation engine tracks changes and only recalculates elements (cells, formulae, names, etc.) that have been flagged as needing recalculation because they, or one of their precedents, have changed.

This process relies on Dependency Trees that Excel tracks in order to identify the optimal calculation sequence. The way this works is:

  • Each cell or name that is changed is flagged as uncalculated (i.e., to be re-evaluated).

  • The worksheet is scanned top-to-bottom and left-to-right looking for references to an uncalculated cell and, in turn, flagging these as uncalculated.

  • The scan is repeated until all references have been flagged.

Once the Dependency Trees have been determined, Excel will identify the optimal sequence for calculating the cells that have been flagged. Ideally, all the references in a cell are to cells which have not changed and do not require calculation. A cell that depends on another cell for which calculation has already been completed is called a backward reference.

However, a cell that depends on another cell yet to be calculated is called a forward reference. If that reference in turn depends on a cell which has to be recalculated, the calculation chain grows, as does the calculation time.

Excel uses an iterative process of calculating cells to discover the correct calculation sequence. Because of the complexity of PF models, as well as the very long forecasts (e.g., it is not unusual for hydro-electric project cashflows to be forecast over 100 years), the calculation chains are long and branching.

Surprisingly, there are several user actions beyond the worksheet or workbook recalculation functions that can trigger recalculation, including:

  • Deleting or inserting a row or column.

  • Saving a workbook while the Recalculate before save option is set.

  • Double-clicking a row or column divider (in Automatic calculation mode).

  • Adding, editing, or deleting a defined name.

  • Renaming a worksheet.

  • Changing the position of a worksheet in relation to other worksheets.

  • Hiding or unhiding rows, but not columns.

These innocent-seeming actions can lead to costly slowdowns as we navigate between worksheets or try to effect some basic changes to the PF model.

In building a PF model, certain choices of functions and how the workbook is structured can have important performance impacts:

Volatile Functions

The presence of these functions in a cell will always require that cell, and cells referring to it, to be recalculated.

These functions include some of the most popular functions used in PF models: INDIRECT(), OFFSET(), RAND(), NOW(), and TODAY(). Using INDIRECT() or OFFSET() as part of a lookup means every formula that depends on that lookup will be included in the calculation chain, causing enormous amounts of unnecessary recalculation.
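As an illustration (the cell references here are hypothetical), a dynamic range built with the volatile OFFSET() can usually be rewritten with the non-volatile INDEX(), which returns the same result without dragging its dependents into every recalculation:

    Volatile:       =SUM(OFFSET($B$5,0,0,$C$2,1))
    Non-volatile:   =SUM($B$5:INDEX($B$5:$B$500,$C$2))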

Conditional Formats

Any formula used in a conditional format is effectively volatile, because conditional formats need to be evaluated at each calculation. Furthermore, conditional formats appear to be hyper-volatile: they are evaluated each time the cell that contains them is repainted on the screen. If we have received a PF model, it is usually impractical to replace every volatile function throughout the model. However, performance can be improved by eliminating conditional formatting, which can be done for entire worksheets.
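A minimal VBA sketch of that clean-up is below; run it on a copy first, since whatever visual cues the rules provided will be lost:

    Sub StripConditionalFormats()
        ' Minimal sketch: remove every conditional formatting rule from every sheet.
        Dim ws As Worksheet
        For Each ws In ActiveWorkbook.Worksheets
            ws.Cells.FormatConditions.Delete
        Next ws
    End Sub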

Forward Worksheet Cross-References

Building on the idea of Forward References (references to cells which have not had their final calculation performed), references between worksheets further penalize performance.

These Forward Worksheet Cross-References can cause uncalculated cells to be evaluated multiple times and thus increase the amount of time spent calculating a workbook. This is exacerbated by the use of circular references, which are one of the more frequent strategies used in PF models.

All is not lost, however, as the inter-sheet dependencies are driven by worksheet name. Much as Excel scans from top-to-bottom and left-to-right searching for references to uncalculated cells, it scans worksheets in alphabetical order.

This does require knowing the model well enough to judge where most backward references will be found. The aim is for the bulk of the operational model to be calculated in its final form before the calculations in the financial model are finalized.

Some of the most calculation-intensive portions of more mature models (those closer to financial close) will include sensitivity tables. These are best contained on worksheets which are the most forward cross-referenced, meaning they are the absolute last sheets to be scanned and calculated; this can be enforced by renaming those worksheets with a “z” prefix.
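As a hypothetical illustration (the sheet names below are assumptions, not from any particular model), the renaming can be done in one pass with VBA; Excel updates formula references automatically when a sheet is renamed:

    Sub PushSensitivitySheetsLast()
        ' Hypothetical sketch: push the sensitivity sheets to the end of the
        ' alphabetical scan order by prefixing their names with "z_".
        Dim nm As Variant
        For Each nm In Array("Sensitivities", "Scenarios")   ' assumed sheet names
            On Error Resume Next                             ' skip sheets that do not exist
            If Left$(ActiveWorkbook.Worksheets(nm).Name, 2) <> "z_" Then
                ActiveWorkbook.Worksheets(nm).Name = "z_" & nm
            End If
            On Error GoTo 0
        Next nm
    End Sub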

How important is this skill in the context of learning FM?

If you want to cut your teeth in the financial modelling sector, then PF modelling (either the financial close model or the operational model) is arguably the best place to learn, as the models are so complex.

Until you have built a PF model you really haven’t pushed the boundaries of complex 3-way modelling. The skill of dealing with multiple parties and inputs across a wide range of stakeholders is an equally valuable skill to learn.

How does all this disruption, AI and automation talk impact this topic?

Given the complex nature of project financings, PF models were always destined to be large and complex. This complexity is often exacerbated by multiple people adding to the model in an uncoordinated manner, i.e., ad hoc modelling, or modelling without following the same best-practice guidance. Because of the high stakes involved, a responsive model is critical for eliminating errors and producing the analytical results needed. When confronted with bloated and slow-performing PF models, there are a number of straightforward steps (more even than what is offered in this article) we can take every time to reduce lost work, errors, and frustration.

But for the moment this process cannot be handed over to AI or any other disruption. There are ways to build component parts of a PF model quickly using a combination of templates (which are also dangerous) and modular components, but the complexities cannot be removed entirely.

This means that, for the foreseeable future, most PF models will continue to be built in Excel, perhaps with a few souped-up add-ins to help us do things faster.

If you want to find out more and follow the rest of the article series, be sure to download the Financial Modelling App.

If you want more information on financial modelling and related content, visit the Model Citizn website.
