The Leakage Challenge: Reducing PR19 Penalties with Software Engineering
Technology is a powerful ally for water companies in the age of PR19. It can help them meet performance commitments, avoid leakage penalties, and perhaps even earn rewards for outperformance.
Last year, every water company set out a detailed business plan as part of the regulator’s 2019 price review, PR19. The Final Determinations of PR19 outline how water companies will meet the needs of their customers over the next five years (2020-2025), covering price, services, operational efficiency and more.
Ofwat uses performance commitments (PCs) to ensure that water companies meet their PR19 targets. Let’s dig into one of the most relevant PCs – leakage.
Technology to the Rescue
To meet the challenging leakage-reduction commitments they all face, water companies will need help, and the best help available comes from the technology sector. But technology needs to be introduced across many areas of the business before it produces quality results.
This is a journey of transformation, rather than a specific solution or a quick fix, and the technology applied needs to be tailored to each specific organisation since they will all be at a different stage on this journey.
Start with Data
If we are going to reduce leakage, we first need an accurate, detailed, near real-time picture of how the water network is operating. Where are our assets? How are they performing? What interventions have been made on them?
This obviously includes data (and a lot of it), but data is not a substitute for knowledge - the data needs to provide useful information and it needs to be accurate. Otherwise, why bother? This data has to come from somewhere, so there’s a need for sensors on the network’s assets, and for reliable communications to collect the data from those assets in near real time.
It's easy to say the more sensors there are in a network the better, but sometimes a few well-placed sensors can give better insights than a suite of sensors deployed in the wrong locations - so planning here is key. Smart meters are also a useful tool in the battle against leakage – if it isn’t possible to understand what is happening at the edge of the network, then the insights produced are limited.
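Accuracy matters as much as volume, so the very first software step is usually a plausibility check on incoming readings. Here is a minimal sketch of that idea; the sensor names, flow bounds and field names are all illustrative assumptions, not a real telemetry schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical plausibility bounds for a district flow meter, in litres/second.
FLOW_MIN_LPS = 0.0
FLOW_MAX_LPS = 500.0

@dataclass
class SensorReading:
    sensor_id: str
    timestamp: datetime
    flow_lps: float

def is_plausible(reading: SensorReading) -> bool:
    """Reject readings that are out of physical range or stamped in the
    future - bad data is worse than no data."""
    in_range = FLOW_MIN_LPS <= reading.flow_lps <= FLOW_MAX_LPS
    not_future = reading.timestamp <= datetime.now(timezone.utc)
    return in_range and not_future

readings = [
    SensorReading("dma-01", datetime(2020, 1, 6, 2, 0, tzinfo=timezone.utc), 12.4),
    SensorReading("dma-01", datetime(2020, 1, 6, 2, 15, tzinfo=timezone.utc), -3.0),  # sensor fault
]
clean = [r for r in readings if is_plausible(r)]
print(len(clean))  # only the valid reading survives
```

In practice the rules would be far richer (stuck-at detection, rate-of-change checks, cross-sensor validation), but the principle is the same: filter at the point of ingestion.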
Now, is it certain that all of our assets, sensors and technology have been purchased from a single vendor and all combine together beautifully into their single unified software platform... No?
Surely then, all the technology used talks the same language, and so is easy to integrate?...OK, maybe not.
The next challenge, then, is to integrate all the disparate technologies into a single platform, with the extensibility to allow new technologies to be added later. This is not a simple task, but it is critical if all of that knowledge is to be put to use. Careful analysis will be needed to understand the options for integrating each system into one larger system - some may need little or no work, while others may need custom software written to enable the best integration path.
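One common way to do this custom integration work is the adapter pattern: define one interface the platform understands, and wrap each vendor system behind it. The vendor classes, method names and units below are invented for illustration - real SCADA and telemetry APIs will differ.

```python
from abc import ABC, abstractmethod

class FlowSource(ABC):
    """Common interface the platform integrates against, whatever the vendor."""
    @abstractmethod
    def latest_flow_lps(self, asset_id: str) -> float: ...

# Imagined vendor APIs - each one speaks its own "language".
class VendorAScada:
    def read_tag(self, tag: str) -> dict:
        return {"tag": tag, "value": 14.2, "unit": "l/s"}

class VendorBTelemetry:
    def fetch(self, asset: str) -> float:
        return 51.12  # cubic metres per hour

class VendorAAdapter(FlowSource):
    def __init__(self, scada: VendorAScada):
        self.scada = scada
    def latest_flow_lps(self, asset_id: str) -> float:
        return self.scada.read_tag(asset_id)["value"]

class VendorBAdapter(FlowSource):
    def __init__(self, telemetry: VendorBTelemetry):
        self.telemetry = telemetry
    def latest_flow_lps(self, asset_id: str) -> float:
        return self.telemetry.fetch(asset_id) / 3.6  # convert m3/h to l/s

sources = [VendorAAdapter(VendorAScada()), VendorBAdapter(VendorBTelemetry())]
for src in sources:
    print(round(src.latest_flow_lps("pump-7"), 1))  # both normalised to l/s
```

The payoff is extensibility: integrating a new vendor means writing one new adapter, not reworking the platform.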
At this stage the insights from third party data need to be considered as part of the operation of the network. These could include, for example, planned construction work, a timetable of events which attract large gatherings of people, weather information, soil/ground data, disruptions to power supplies, etc.
So let's assume the best data is being collected from all the right places, and it’s all integrated into a single system (well done us!). What comes next?
The next step is to be able to visualise the data and explore it in a way that gives real insight. There is a myriad of tools out there, but it’s important to choose carefully - the tool used to explore the data will need to be easy for users with little or no technical knowledge to understand.
We also want to ensure that it supports the data being collected, and the data that is planned to be collected in the future. For instance, if we have data for the assets in three dimensions, we don't want to be limited to a 2D view of the data. Using a good visualisation tool should allow for insight to be gained into the network, combine third party datasets, easily highlight anomalies, detect errors in the data, and give insight into where adding new data can improve our knowledge.
This will allow the trained eye to pinpoint leaks manually as they happen - assuming there’s a team of experts staring at screens all day, which is not a good use of our skilled workers! To automate the detection of current leaks instead, an algorithm needs to be built from the data that captures "what a leak looks like"; it can then be applied to incoming data, triggering an alert whenever a new leak event occurs. The data also needs to highlight all the leaks currently present in the network.
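A simple version of "what a leak looks like" is a step change in minimum night flow - demand is lowest overnight, so a sustained rise there is a classic leak signature. The sketch below assumes that approach and uses made-up thresholds and readings.

```python
from statistics import mean, stdev

def night_flow_alert(history_lps: list, latest_lps: float, k: float = 3.0) -> bool:
    """Flag a possible new leak when the latest minimum night flow sits more
    than k standard deviations above its historical baseline."""
    baseline = mean(history_lps)
    spread = stdev(history_lps)
    return latest_lps > baseline + k * spread

# Illustrative minimum-night-flow readings (l/s) for one district metered area.
history = [5.1, 5.3, 4.9, 5.0, 5.2, 5.1, 4.8]

print(night_flow_alert(history, 5.2))  # a normal night -> False
print(night_flow_alert(history, 9.7))  # sustained step change -> True, raise an alert
```

Production systems layer far more sophistication on top (seasonality, pressure transients, acoustic data), but even this crude rule shows how an alert can replace a person watching a dashboard.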
Wow, that's pretty good! But wouldn't it be amazing if we could pre-empt leaks and fix them before they occur? Now we're talking!
Of course, leaks can happen for many reasons, some of which are impossible to predict, but a large percentage (given the right data and algorithms) are entirely predictable. These algorithms are dependent upon the quality and detail of the data they are supplied with - if the quality is poor, predictions will naturally be inaccurate; if there isn’t enough detail, predictions will be vague.
These algorithms will be entirely based on historical information, so the key is to capture as much accurate information on the history of previous leaks as is feasible. If the date a leak occurred is recorded inaccurately, it will skew the results coming back from the algorithm. It is also important to identify any false positives - cases where a potential leak was flagged but turned out to have been raised for another, erroneous, reason. These algorithms are only as good as the information they are supplied with.
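To make the dependence on history concrete, here is about the simplest predictive model imaginable: empirical leak rates per pipe material, learned from labelled repair records. The records and materials are fabricated for illustration; a real model would use many more features (age, pressure, soil, diameter) and a proper ML method.

```python
from collections import Counter

# Illustrative repair history: (pipe_material, leaked_this_year)
history = [
    ("cast_iron", True), ("cast_iron", True), ("cast_iron", False),
    ("pvc", False), ("pvc", False), ("pvc", True),
    ("ductile_iron", False), ("ductile_iron", False),
]

leaks, totals = Counter(), Counter()
for material, leaked in history:
    totals[material] += 1
    leaks[material] += leaked  # True counts as 1

def leak_rate(material: str) -> float:
    """Empirical leak rate for a material - only as good as the history supplied."""
    return leaks[material] / totals[material] if totals[material] else 0.0

print(round(leak_rate("cast_iron"), 2))  # 0.67 - prioritise these mains
print(round(leak_rate("pvc"), 2))        # 0.33
```

Notice how directly the output inherits the flaws of the input: mislabel one cast-iron record and the rate, and the resulting prioritisation, shifts with it.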
The next step is to not just predict the leaks, but to understand why these leaks have happened (and why others haven't), and to suggest interventions which will prevent the same issue occurring in other areas. Pressure management plays a part in this, but also understanding which assets are causing problems, for instance - an erratic pump? Repairs made during a specific time period? Pipe materials that are reacting poorly under specific ground conditions?
These are all patterns that can be discovered in a system where all the relevant data is being collected. For example, if data is included about when assets have been serviced/reconditioned, this can play a big part in understanding why issues have or have not occurred.
Closing the Loop
And so comes the final step in the journey to stop, or even prevent, leakages.
For the system to be complete, we need to close the loop, and make our system interactive. Currently we have designed a passive system that is collecting data, visualising leaks, predicting them, and suggesting interventions. Some of these interventions are manual jobs, such as replacing assets or servicing assets which cannot realistically be automated, but some assets (e.g. pumps) have properties that can be remotely operated.
By connecting our system to the operational technology that controls these assets, our system can become fully automated; sense when issues are starting to arise; and take corrective actions to mitigate these issues before they are realised (or alert users to carry out manual operations, where automation is not available/possible).
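The closed loop can be sketched as a single decision step: act automatically where the asset supports remote operation, and fall back to alerting a human where it doesn't. The pressure thresholds and action names here are illustrative assumptions, not a real control scheme.

```python
def control_step(pressure_bar: float, target_bar: float, ot_remote: bool) -> str:
    """One pass of the closed loop. Automated action is only taken for assets
    that can be operated remotely; everything else raises a manual job."""
    if abs(pressure_bar - target_bar) <= 0.5:  # within tolerance
        return "no action"
    if ot_remote:
        # e.g. nudge a pressure-reducing valve or pump setpoint via the OT link
        return "adjust setpoint"
    return "alert operator"

print(control_step(6.8, 5.0, ot_remote=True))   # adjust setpoint
print(control_step(6.8, 5.0, ot_remote=False))  # alert operator
print(control_step(5.2, 5.0, ot_remote=True))   # no action
```

In reality this step would run continuously against predicted (not just current) pressures, with safety interlocks before any command reaches the network.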
All the Pieces of the Puzzle
So far we have avoided the common buzzwords - but don’t get too comfortable… The visualisation is a Data Analytics/Business Intelligence solution, the algorithms and pattern recognition are of course Artificial Intelligence (AI) and Machine Learning (ML), and the system built is a "System of Systems" - in fact, a Digital Twin of the network.
So, a water company may be looking for a software solution that can help reduce the leakage in their network (and reduce the penalties they might face), but after close analysis you can see that this is not so simple, and for many organisations it will involve digital transformation for their systems, and a step change in their way of thinking.
Many people think that bringing in a stand-alone AI/ML software solution will mean water companies will suddenly be able to see into the future, but in reality this is just one piece of the puzzle, and can only be truly successful after many of the other pieces are put into place.