Woodside Energy is using big data to give operators of its Pluto liquefied natural gas facility a rolling, 10-minute list of options they could use to increase production at the plant.
The gas giant’s chief technology officer Shaun Gregory told an investor day last week that the algorithm was able to poll Woodside’s corporate “memory” to find ways in which it had increased production in the past in similar operating conditions.
While much has been made of Pluto’s enormous internet of things (IoT) environment – consisting of some 200,000 sensors that feed data into AWS – it is the first time the company has spoken about the whole-of-plant production optimisation project.
It has previously disclosed a use case for the sensor data to predict when or if a specific problem in the liquefaction process might occur.
Gregory said the onshore Pluto plant operated in fluctuating environmental conditions, where the ambient temperature could be 25 degrees Celsius one day and 35 the next.
“You get less LNG when it’s hot,” he said.
“So how do you optimise the plant given the current conditions? There’s a lot of other ways but we’ve taught [a] machine through many years of data and experience.”
Gregory said the algorithm was able to determine the highest production the plant had ever delivered under similar conditions.
It then showed operators what they were currently producing, and provided a “waterfall” of opportunities to increase production so they might close the gap between current production volumes and the system’s “best ever” result.
“We call this maximum possible production,” Gregory said.
“It runs and recalculates every ten minutes and re-presents that data back to the decision makers in the field.
“We’ve seen a real narrowing in production variability [as a result].”
Gregory said operators were presented with “four independent options” every ten minutes for how they could raise production levels.
However, it was ultimately left to the operators to make the call.
“It’s left to the experience of the operators because there might be a reason why [the best-ever level isn’t] being chased,” he said.
“For example, there might be some maintenance activity occurring” which made it dangerous or impossible to pursue greater output at that time, he said.
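Woodside has not detailed the algorithm itself, but the description maps onto a simple pattern: filter the plant’s historical record for comparable operating conditions, take the best output ever achieved under them, and rank candidate adjustments against the gap to that benchmark. The Python sketch below is a hypothetical illustration of that pattern only; the data fields, similarity tolerances and candidate actions are illustrative assumptions, not details disclosed by the company.

```python
from dataclasses import dataclass

# Hypothetical plant snapshot: the field names and the "similar conditions"
# rule are assumptions for illustration, not Woodside's actual schema.
@dataclass
class Snapshot:
    ambient_temp_c: float   # ambient temperature at the plant
    feed_rate: float        # gas feed rate into the liquefaction train
    lng_output: float       # LNG produced in the interval

def similar(a: Snapshot, b: Snapshot, temp_tol: float = 2.0, feed_tol: float = 0.05) -> bool:
    """Crude 'similar operating conditions' test: ambient temperature within
    a few degrees and feed rate within a relative tolerance."""
    return (abs(a.ambient_temp_c - b.ambient_temp_c) <= temp_tol
            and abs(a.feed_rate - b.feed_rate) <= feed_tol * max(b.feed_rate, 1e-9))

def best_ever(history: list[Snapshot], current: Snapshot) -> float:
    """Highest output ever recorded under conditions comparable to 'current'."""
    matches = [h.lng_output for h in history if similar(h, current)]
    return max(matches, default=current.lng_output)

def waterfall(current: Snapshot, history: list[Snapshot],
              options: dict[str, float]) -> list[tuple[str, float]]:
    """Rank candidate operator actions (name -> estimated output gain) that
    would close the gap between current output and the best-ever benchmark."""
    gap = best_ever(history, current) - current.lng_output
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for name, gain in ranked:
        if cumulative >= gap:   # stop once the gap to 'best ever' is covered
            break
        selected.append((name, gain))
        cumulative += gain
    return selected

if __name__ == "__main__":
    history = [Snapshot(25.0, 100.0, 96.0),
               Snapshot(26.0, 101.0, 98.5),
               Snapshot(35.0, 100.0, 90.0)]
    now = Snapshot(25.5, 100.0, 94.0)
    # Candidate adjustments and their estimated gains are purely illustrative.
    opts = {"rebalance train loading": 2.0,
            "trim compressor anti-surge margin": 1.5,
            "adjust precooling setpoint": 1.0}
    for name, gain in waterfall(now, history, opts):
        print(f"{name}: ~{gain} units")
```

In the real system the historical record would span years of sensor data and the estimated gains would come from models rather than a hand-written table, but the gap-to-best-ever logic mirrors the behaviour Gregory describes.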
Google cloud use
Gregory also revealed that Woodside has expanded its cloud computing platforms beyond AWS to include Google Cloud Platform.
The company is using the Google cloud to process seismic image data captured by survey ships, with the aim of identifying potential resource fields that can be explored further.
The image data files are enormous. While it has traditionally taken 12-18 months to produce images for Woodside’s exploration team, cloud capacity has enabled the company to bring that time down to 4-8 weeks, while also improving image clarity.
Getting hold of the images faster could change the speed with which the company is able to identify and act on potentially untapped fields.
But Gregory said Woodside is “hitting limits” with its public cloud resources, and is turning to the cloud operators to try to resolve the bottlenecks.
“There is a limit to how much we can scale the computing in this instance and we are working with the providers to increase the amount that we can utilise,” he said.
“It’s a combination of us being able to move enough data on their cloud to the compute and of balancing memory, compute and movement of data around.
“We’re getting there. Six months ago we could spin up 10,000 nodes. Today we can spin up 100,000. There’s also data preparation we have to do.
“I know people want it [so that] from the day the boat finishes, an image is available the next day. That is our aspiration. But that is many years off.”
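To make the scaling trade-off concrete, the sketch below shows the general fan-out pattern for this kind of workload: a large seismic dataset is split into independent pieces that are distributed across workers, and throughput depends as much on how quickly data can be staged to those workers as on how many of them there are. It is a generic, hypothetical Python illustration that uses a trivial per-trace normalisation as a stand-in for a real imaging kernel; it does not reflect Woodside’s or Google’s actual pipeline.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

# Illustrative only: a "gather" here is just a 2D array of seismic traces.
# In a real cloud workflow the data would be staged from object storage to
# each node, which is the data-movement cost Gregory describes.

def process_gather(gather: np.ndarray) -> np.ndarray:
    """Stand-in for an imaging kernel: a simple per-trace normalisation,
    chosen only to keep the example self-contained."""
    peak = np.abs(gather).max(axis=1, keepdims=True)
    return gather / np.where(peak == 0, 1, peak)

def process_survey(gathers: list[np.ndarray], workers: int = 4) -> list[np.ndarray]:
    """Fan the gathers out across worker processes; scaling cloud nodes is the
    distributed analogue of raising 'workers' here."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_gather, gathers))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    survey = [rng.standard_normal((200, 500)) for _ in range(8)]  # toy data
    images = process_survey(survey, workers=4)
    print(len(images), images[0].shape)
```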