Evolution of Utility Analytics

The days of spreadsheet-driven decisions are dwindling as utilities embrace machine learning and artificial intelligence supported by big data.

This article appeared in the Second Quarter 2019 Issue of The Navigator. 

Utilities have always been analytical companies. For more than a century, they have used quantitative techniques to inform operational strategies and major capital investment decisions. Those techniques have begun to evolve as the utility industry faces rapid technological advancements and increasing complexities from distributed energy resource adoption. The days of spreadsheet-driven decisions are dwindling as utilities embrace machine learning and artificial intelligence supported by big data.

Many utilities are doing all the right things to advance the sophistication of their analytical capabilities.

Smart grid and Internet of Things technologies are creating more data than ever for utilities, and there is no shortage of business cases or software solutions that aim to put these rapidly growing sources of data to use. However, it is estimated that a mere 1% of the data collected by utilities ever leads to an action, which suggests how little is yet known about the value advanced analytics could produce. This uncertainty, along with a lack of internal experience with advanced analytical techniques, is a major factor behind the relatively slow adoption of advanced analytics in the utility industry.

For utilities, turning data into action requires the application of a broader set of algorithms. Algorithms are commonly classified as supervised, semi-supervised, or unsupervised. 

Supervised algorithms predict an outcome variable (or dependent variable) from a given set of predictors (independent variables). All aspects of these algorithms are defined by the person conducting the analysis, and the steps to reach the output are transparent and easy to explain. This makes supervised algorithms a preferred approach for analysis in regulatory proceedings that require full transparency. Linear regression and logistic regression are two examples of supervised algorithms that utilities have used for decades.
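
To make the distinction concrete, here is a minimal sketch of a supervised algorithm in Python: a linear regression fit with scikit-learn. The synthetic data, variable names, and single-predictor relationship between temperature and peak load are illustrative assumptions, not a production load model.

```python
# Supervised learning sketch: predict peak load (the outcome variable)
# from temperature (the predictor). All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Predictor: 365 daily high temperatures (degrees F)
temperature = rng.uniform(60, 100, size=(365, 1))

# Outcome: peak load (MW) rising with temperature, plus random noise
peak_load = 500 + 12.5 * temperature.ravel() + rng.normal(0, 50, size=365)

# Every step of the fit is transparent: the model is just a slope and
# an intercept chosen to minimize squared error.
model = LinearRegression().fit(temperature, peak_load)

print(f"Intercept: {model.intercept_:.0f} MW")
print(f"Slope: {model.coef_[0]:.2f} MW per degree F")
print(f"Predicted peak load at 95 F: {model.predict([[95.0]])[0]:.0f} MW")
```

Because the fitted coefficients can be read directly off the model, an analyst can walk a regulator through exactly how every prediction was produced.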

Semi-supervised algorithms train on a small set of data with known outcomes (labeled data) alongside a larger pool of data with unknown outcomes, using the structure of the unlabeled data to improve the model. Going a step further, unsupervised algorithms are left on their own to figure out the structure of the data without any preliminary information. Both approaches have proven effective at forecasting and classification problems. Neural networks, anomaly detection, and clustering are examples of these advanced machine learning algorithms. For utilities, most of the data that is not currently informing actions can be leveraged with these techniques: load forecasting, predicting equipment failures, and classifying customer consumption patterns are all proven applications, and the clustering case is sketched below.
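
As an illustration of the unsupervised case, here is a minimal sketch of k-means clustering applied to synthetic daily load profiles. The two consumption archetypes, the noise level, and the choice of two clusters are illustrative assumptions.

```python
# Unsupervised learning sketch: group customers by the shape of their
# daily consumption, with no labels given to the algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
hours = np.arange(24)

# Two synthetic archetypes: morning-peaking and evening-peaking shapes
archetypes = np.vstack([
    np.exp(-((hours - 8) ** 2) / 8.0),   # peaks around 8 a.m.
    np.exp(-((hours - 19) ** 2) / 8.0),  # peaks around 7 p.m.
])

# 200 customers drawn from the two archetypes, with measurement noise
true_shape = rng.integers(0, 2, size=200)
profiles = archetypes[true_shape] + rng.normal(0, 0.05, size=(200, 24))

# K-means discovers the grouping on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = kmeans.fit_predict(profiles)

for k in range(2):
    peak_hour = kmeans.cluster_centers_[k].argmax()
    count = int(np.sum(clusters == k))
    print(f"Cluster {k}: {count} customers, consumption peaks at hour {peak_hour}")
```

An analogous approach run against smart meter interval data could segment customers for programs such as demand response or time-of-use rate design.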

Utility analytics will continue to evolve in response to new technologies and opportunities to create value.

Advanced machine learning algorithms have seen only sparse use by utilities because their output comes from a “black box” that is difficult to explain. The preference for supervised algorithms among utilities can be traced to regulators’ requirement that analysis informing major rate-making or policy decisions come from fully transparent algorithms. Fortunately, the strongest applications of advanced machine learning algorithms are not subject to extensive regulatory scrutiny. In other words, regulators aren’t to blame for utilities’ slow embrace of advanced machine learning.

Another important step in the evolution of utility analytics is demystifying how algorithms work and explaining them in ways that make them approachable to non-technical audiences. The worst offenders in this regard are neural network algorithms. The name conjures up the image of neurons in a brain mysteriously processing data and predicting an outcome, but most neural networks are simpler than they appear.

At a high level, neural networks process input data through layers of nodes that apply weights and feed the data forward to the next layer. Each layer passes its data through an activation function (e.g., a sigmoid function), and the final layer outputs a prediction. That prediction is compared with the actual outcome, and the errors are propagated back through the layers of nodes to adjust the weights in the model. Repeating this process over thousands of training iterations eventually yields a model that is good at predicting the targeted outcomes. Explaining exactly how a neural net evolves from inaccurate predictions to reliably accurate ones is complex, but that isn’t what most people need to understand. They need a high-level understanding of what an algorithm is doing and how it can help inform actions.
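
A minimal sketch of that training loop, written in NumPy so every step is visible, appears below. The toy XOR data, layer sizes, and learning rate are illustrative assumptions; the point is that the “mysterious” loop is a few lines of arithmetic.

```python
# Neural network sketch: one hidden layer, sigmoid activations, and
# errors propagated backward to adjust the weights, as described above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=1)

# Toy inputs and targets (XOR: not predictable by a straight line)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for two layers of nodes
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

lr = 0.5  # learning rate
for step in range(10_000):
    # Forward pass: apply weights, feed forward through the activation
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Compare the prediction with the actual outcome
    error = pred - y

    # Backward pass: push the errors back through the layers of nodes
    # to adjust the weighting (gradient of the squared error)
    grad_out = error * pred * (1 - pred)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * (hidden.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_hidden)
    b1 -= lr * grad_hidden.sum(axis=0, keepdims=True)

print(np.round(pred.ravel(), 2))  # converges toward [0, 1, 1, 0]
```

The model starts with random weights and inaccurate predictions; repeating the same forward-and-backward loop thousands of times is all it takes to reach reliably accurate ones.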

Many utilities are doing all the right things to advance the sophistication of their analytical capabilities. They are investing in more capable and cost-effective computing resources, including cloud computing to manage the uneven computing loads that are common with machine learning algorithms. Better data management processes are being implemented to streamline data collection and ensure there is a single version of the truth. Data lakes are being deployed to consolidate the physical storage of data and enable effective master data management. There is also a broader range of tools to help analysts and data scientists run more advanced algorithms (e.g., R, Python, SAS), which has shortened the time needed to develop algorithms and determine whether they provide the insights required to support actions. Finally, utilities are hiring employees with advanced data science skills and investing in training for current employees. This is bringing machine learning capabilities to functions throughout the organization and creating more opportunities for developing algorithms that provide valuable insights.

Utility analytics will continue to evolve in response to new technologies and opportunities to create value. The development of advanced machine learning techniques will continue to help utilities leverage their data to inform actions rather than allowing the data to languish in a data lake. The industry has evolved from handwritten ledgers and rulers to Excel spreadsheet models, but the evolution will continue to accelerate as utilities embrace machine learning.

