Every infrastructure project is unique. Unlike, for instance, a car manufacturer, construction companies don’t repeat the same task thousands of times in a row.
Each project has its own ground plan, its own geology, its own challenges with utilities and the surrounding built environment — and so on. At least, that’s how it might seem to those of us on the front lines of infrastructure.
But the reality — from a data-science perspective — is rather different. The great majority of techniques, materials, project milestones and other variables are actually the same from one project to the next.
Data scientists estimate that 80-90% of the data points in any project are common to almost all projects. And this has huge implications for efficiency in construction. It means that by collecting and analysing data from hundreds of projects, it’s possible to build accurate models of why some projects succeed — delivered on time and to budget — and others don’t.
And once you have that model, you can use it to predict the outcomes of future projects. Data scientists extract data from the project plans, proposals, engineering blueprints and other documents that infrastructure providers already hold. Feeding these into machine-learning (ML) algorithms, they develop a clear picture of how stakeholders intend to proceed at each stage of the project, the materials and technologies they intend to use, and so on.
They can then compare these to a model built on historical data from many different projects and spot where potential pitfalls, over-runs, conflicts and budget problems could occur. But this is not something that may happen in the future, or that requires technology yet to be invented. The machine-learning techniques required to perform this kind of predictive analysis already exist.
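As a simplified illustration of the idea, a model trained on historical projects can estimate the likely overrun of a proposed one. The sketch below is not Atkins’ actual pipeline: the features, figures and the toy nearest-neighbour approach are all invented for illustration; real systems use far richer data and properly validated models.

```python
# Illustrative sketch only: predict a project's cost overrun by averaging
# the overruns of the most similar historical projects (k-nearest neighbours).
# All figures and feature choices are hypothetical.

# Hypothetical history: (duration_months, budget_gbp_m, n_subcontractors) -> overrun %
historical = [
    ((18, 40.0, 12), 5.0),
    ((36, 120.0, 30), 22.0),
    ((12, 15.0, 6), 1.5),
    ((30, 95.0, 25), 18.0),
    ((24, 60.0, 15), 9.0),
]

def distance(a, b):
    """Euclidean distance between two feature tuples (unscaled, for simplicity)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_overrun(features, k=3):
    """Average the overruns of the k historical projects most similar to `features`."""
    nearest = sorted(historical, key=lambda item: distance(item[0], features))[:k]
    return sum(overrun for _, overrun in nearest) / k

# A proposed project: 28 months, £80m budget, 20 subcontractors
print(round(predict_overrun((28, 80.0, 20)), 1))  # prints 10.7
```

The prediction is only as good as the historical data behind it, which is why the breadth of the project archive matters more than the sophistication of the algorithm.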
Atkins, for instance, has used machine-learning algorithms to build predictive models based on many hundreds of major infrastructure projects from around the world. We have already used these to help major infrastructure projects and providers predict and avoid potential delays, problems, and budget overruns.
Preparing for change
There are many advantages to using this kind of data-driven approach to project planning and ongoing operational optimisation. The kind of detailed analysis required to spot potential problems would, done manually, typically require a whole team of specialists. Using an ML and data-driven approach, one or two people can often get the job done.
Using ML to spot conflicts and sort these out before they make it out of the project plan and onto the site can also help to save time and money. And reducing the need for rework isn’t just good for the bottom line and the project timeline. By reducing the amount of energy and materials required to finish the project, it also helps reduce that project’s carbon footprint.
But to realise these benefits, organisations often have to change the way they work. Perhaps the most important change is to engage fearlessly with what the data and the analysis are telling you. If, for instance, the data predicts a significant cost overrun, this is almost certainly highly inconvenient. But the sooner the company faces and deals with it, the less it will cost and the less disruptive it will be.
Another barrier to realising the full potential of machine learning is the existence of data silos. These can be technical, where data is distributed across different platforms, or they can be organisational and cultural. To maximise the accuracy and usefulness of the predictive models, it’s important to break these silos down and draw in data from across all the functions and experts working on a project.
Letting data take the strain
Often, the best way to do this is to work with external experts. By working with a partner that specialises in data- and ML-driven decision making for infrastructure projects, you get instant access to the technology and the expertise you need to start benefiting from these methodologies. Just as importantly, if you work with the right provider, you also get access to predictive models based on past data.
And indeed, the data has to come first, before exploring the intricacies of ML. Without a strong foundation of robust data with integrity, businesses will always struggle to make the most of the insights that aim to improve the predictability of infrastructure projects. Any relationship with an expert partner should start with interrogating the data to see how it can contribute to a reliable ML solution.
Organisations such as Atkins and Faithful + Gould work on hundreds of major projects every year. With client permission, we can gather, anonymise and process the data from these projects to build highly detailed and accurate normative and predictive models for the widest possible range of different project types. This gives us the baseline models we need to predict project performance in advance, so that you can eliminate potential problems at the planning stage.
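The pooling step described above can be sketched in miniature. In this illustration, the field names, salt and projects are all made up; it simply shows the two ingredients involved: anonymising client-identifying data, then deriving a simple normative benchmark from the pooled records.

```python
# Illustrative only: anonymise project records, then compute a basic
# normative baseline (median overrun by project type). Names are invented.
import hashlib
import statistics

projects = [
    {"client": "Client A", "type": "rail", "overrun_pct": 12.0},
    {"client": "Client B", "type": "rail", "overrun_pct": 8.0},
    {"client": "Client C", "type": "highways", "overrun_pct": 20.0},
    {"client": "Client D", "type": "rail", "overrun_pct": 15.0},
]

def anonymise(record, salt="example-salt"):
    """Replace the client name with a salted one-way hash."""
    out = dict(record)
    out["client"] = hashlib.sha256((salt + record["client"]).encode()).hexdigest()[:12]
    return out

def baseline_by_type(records):
    """Median overrun per project type: a simple normative benchmark."""
    by_type = {}
    for r in records:
        by_type.setdefault(r["type"], []).append(r["overrun_pct"])
    return {t: statistics.median(v) for t, v in by_type.items()}

anon = [anonymise(p) for p in projects]
print(baseline_by_type(anon))  # prints {'rail': 12.0, 'highways': 20.0}
```

A real pipeline would of course need far stronger anonymisation guarantees and far more than a handful of records, but the shape is the same: strip identity, pool, benchmark.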
Far too often, projects end up being more expensive and time-consuming than they need to be. Thanks to machine learning, we can understand the inner workings of a project at every stage and prepare to right the ship when needed. This can represent a change of working style for many, but when it provides the data needed to operate more efficiently, the effort pays for itself.
Anthony Reid is Associate Director at Faithful + Gould and Alejandro Lopez is Digital Solutions Director at Atkins. Both companies are members of the SNC-Lavalin Group.