
IBP1107_19

DIGITAL TECHNOLOGY AND INDUSTRY 4.0: ACHIEVING OPERATIONAL EXCELLENCE FOR OIL AND GAS PIPELINES
Jacques van Dijk1

Copyright 2019, Brazilian Petroleum, Gas and Biofuels Institute - IBP


This Technical Paper was prepared for presentation at the Rio Pipeline Conference and Exhibition 2019, held
between 03 and 05 of September, in Rio de Janeiro. This Technical Paper was selected for presentation by the
Technical Committee of the event according to the information contained in the final paper submitted by the
author(s). The organizers are not supposed to translate or correct the submitted papers. The material, as it is
presented, does not necessarily represent the Brazilian Petroleum, Gas and Biofuels Institute’s opinion, or that of its
Members or Representatives. Authors consent to the publication of this Technical Paper in the Rio Pipeline
Conference and Exhibition 2019.

Abstract: Digital Technology and Industry 4.0: Achieving Operational Excellence for
Oil and Gas Pipelines

This paper discusses how the principles and technologies of “Industry 4.0” can be adopted by
Oil and Gas Pipeline operators, focusing on how data analytics can be applied to specific use
cases. The principles of “Industry 4.0”, which originated in the manufacturing sector to create
“Smart Factories”, are presented in the context of how they can be mapped to the Oil and Gas
Pipeline Industry to increase pipeline efficiency and safety. Some of the enabling technologies
that characterize Industry 4.0 and are at the forefront of the pipeline industry are discussed,
including Automation, Data Analytics, Cloud Computing, Artificial Intelligence (AI),
Industrial Internet of Things (IIoT), and Augmented / Virtual Reality (AR/VR). Fundamental
design elements of Industry 4.0 and the drivers for their adoption in pipeline operations are
presented. Some of these elements include the use of large numbers of IP-connected field-level
devices to collect data, information management to collect and contextualize data from
disparate sources, and data analysis capabilities implemented at different business levels to
support decentralized decision-making. Finally, specific use cases from oil and gas pipelines
are presented in the context of systems and applications that leverage the Industry 4.0 principles
and technologies.

Keywords: Pipelines, Automation, Data Analytics, Cloud Computing, Artificial Intelligence (AI), Industrial Internet of Things (IIoT), Augmented / Virtual Reality (AR/VR).

1. Introduction

The term “Industry 4.0” originated to describe the next revolution (the “4th Industrial
Revolution”) in industrial development, particularly in the manufacturing sector, building on
the changes of the “Digital Revolution” (or “Third Industrial Revolution”). This fourth
revolution is described as the dawn of “Smart Factories”. The fundamental principles or
elements of Industry 4.0 are characterized by the increased use of data as a means to make
manufacturing more “agile”, allow more automatic or autonomous control, and lessen the
process burden on the (human) labor force.

______________________________
1 Director, Industry Solution Validation, AVEVA

Key enablers in the execution of Industry 4.0 are the ability to exploit advances in
technology and the falling cost of those technologies. Specifically, the lowered cost of
sensors, data processing, and connectivity has allowed more data to be collected, processed, and
made available to more systems and stakeholders.

Many of these enabling technologies are allowing the Oil and Gas Pipeline Industry to
change operations to a “Smart Pipeline” approach. Technologies that are being increasingly
leveraged in the industry include Automation, Data Analytics, Cloud Computing, AI, IoT (or,
more precisely, IIoT) and Augmented / Virtual Reality.

2. Industry 4.0 for Pipelines

2.1. Industrial Internet of Things (IIoT)

The number of data sources and the connectivity between systems have grown
dramatically as the cost of IP-connected devices has decreased. This design principle of
Industry 4.0 has been characterized as “Interconnection” – not only the connection of sensors,
machines, and devices, but also of people.

Pipeline companies have generally already been collecting significant amounts of data,
but there is an increased focus on centralizing data sources into a single “Data Lake”, where
previously siloed data are brought together in an organized fashion to allow efficient queries
across heterogeneous sources.
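
As a toy illustration of the kind of cross-source question such a centralized store makes cheap to answer, the sketch below (Python, with invented asset names, records, and structures; not the schema of any particular Data Lake product) pulls everything known about an asset from two previously siloed sources in one query:

    from datetime import datetime

    # Two previously siloed sources, keyed consistently by asset and time so
    # they can be queried together: SCADA history and maintenance records.
    scada_history = [
        {"asset": "PUMP-12", "time": datetime(2019, 3, 1, 10), "discharge_kpa": 5100},
        {"asset": "PUMP-12", "time": datetime(2019, 3, 2, 10), "discharge_kpa": 4875},
    ]
    maintenance = [
        {"asset": "PUMP-12", "time": datetime(2019, 3, 1, 16), "work": "seal replacement"},
    ]

    def events_before_anomaly(asset, anomaly_time, sources):
        """Return every record about an asset prior to an anomaly, across all
        sources, in time order -- a simple cross-silo query."""
        return sorted(
            (rec for source in sources for rec in source
             if rec["asset"] == asset and rec["time"] <= anomaly_time),
            key=lambda rec: rec["time"])

    for rec in events_before_anomaly("PUMP-12", datetime(2019, 3, 2, 12),
                                     [scada_history, maintenance]):
        print(rec)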

2.2. Data Analytics

IIoT allows siloed data, as well as data from new sensors, to be gathered and utilized
to improve efficiency. The lower cost of gathering, communicating, and storing these data
has made it possible to provide more data to pipeline operators. Some pipeline operators have
experienced remarkable growth in multiple dimensions. For example, one former control room
manager mentioned to the author that from 2000 to 2016, customer counts increased by 240%,
daily deliveries by 420%, and SCADA system tags by 1,150%, together with other increases
in the scope of operations, all with essentially the same staff.

Many companies are realizing the value of these data – some even consider them as much
of an asset as their compressors, valves, and flow computers. Thanks to improved data storage,
analytics, and information-sharing technology, this data (whether collected via IIoT
technology or legacy data collection methods) is becoming widely shared and more usefully
analyzed by more stakeholders to provide valuable information (the Industry 4.0 design
principle of “Information Transparency”).

In addition, systems, processes, and devices are increasingly performing data analysis to
support decision making (the Industry 4.0 design principle of “Technical Assistance”). For
example, many companies are leveraging data to change the maintenance paradigm from
mostly periodic maintenance or inspection to include predictive maintenance for some
assets. Through analysis of the pipeline network’s assets, Risk Based Maintenance (RBM) is
now more prevalent – the strategy for each asset or asset class is selected based on the risk, cost,
and implications of failure, rather than a one-size-fits-all strategy.
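
A minimal sketch of the idea behind risk-based strategy selection is shown below; the thresholds, costs, and strategy names are illustrative assumptions rather than the methodology of any particular RBM standard:

    def select_maintenance_strategy(probability_of_failure, consequence_cost_usd,
                                    inspection_cost_usd):
        """Pick a maintenance strategy for an asset class from a simple
        risk = probability x consequence calculation (illustrative thresholds)."""
        risk_usd = probability_of_failure * consequence_cost_usd
        if risk_usd > 10 * inspection_cost_usd:
            return "condition monitoring + predictive maintenance"
        if risk_usd > inspection_cost_usd:
            return "periodic inspection"
        return "run to failure / corrective maintenance"

    # A mainline asset with a high consequence of failure lands in the predictive
    # bucket; a small, low-consequence utility pump does not.
    print(select_maintenance_strategy(0.02, 5_000_000, 4_000))
    print(select_maintenance_strategy(0.05, 30_000, 4_000))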

Devices and (cyber-physical) systems are being developed which perform more
sophisticated analysis of data to perform tasks more autonomously at the edge of the network,
where intelligence can be applied without human intervention (the Industry 4.0 design
principle of “Decentralized Decisions”).

An example of the increased sophistication of data analytics in devices is the ability of
valves, ultrasonic meters and other assets to provide detailed self-diagnostics and alerts
based on onboard sensors and measurements, with specific information on the issues that the
assets are experiencing.

Analysis of data in the field (“Edge Analytics”) is also allowing different approaches
and improved results for traditional challenges, as more data becomes available at higher
collection rates together with cost-effective computational power. An example is the use of
field devices to collect pressure data at extremely high rates, with onboard analysis to aid in
the location of leaks using negative pressure wave (NPW) algorithms. In the past, NPW
techniques could be applied to data collected for general control and operational purposes to
identify leakage, but the fidelity of that data generally did not lend itself to leak location.
Newer leak detection systems using NPW operate as cyber-physical systems: field devices
identify the signatures of negative pressure waves and then upload a short period of data
around the identified signature to a coordinating system. There, false alarms are identified by
algorithmically combining these data with process data to identify cases where pipeline
operations, rather than a leak, are the cause of the pressure changes.
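
As an illustration of this kind of edge analytic, the following sketch (Python, with hypothetical sampling rate, thresholds, and signal names; not taken from any specific vendor device) flags candidate NPW signatures in a high-rate pressure trace and extracts the short data windows that would be uploaded to the coordinating system for false-alarm screening:

    import numpy as np

    def detect_npw(pressure, sample_rate_hz, drop_threshold_kpa_per_s=50.0,
                   window_s=2.0):
        """Flag candidate negative-pressure-wave signatures in a high-rate
        pressure trace and return the data windows to upload for correlation.
        Illustrative only: a real device would filter the signal and apply
        device-specific thresholds before declaring a candidate event."""
        dt = 1.0 / sample_rate_hz
        # Rate of change of pressure; a sharp sustained drop is the NPW signature.
        dp_dt = np.gradient(pressure, dt)
        candidates = np.where(dp_dt < -drop_threshold_kpa_per_s)[0]

        windows = []
        half = int(window_s * sample_rate_hz / 2)
        last_end = -1
        for i in candidates:
            if i <= last_end:      # skip samples inside an already-captured window
                continue
            start, end = max(0, i - half), min(len(pressure), i + half)
            windows.append((start * dt, pressure[start:end]))
            last_end = end
        return windows

    # Example: a synthetic 100 Hz trace with a sudden 30 kPa drop at t = 5 s.
    t = np.arange(0, 10, 0.01)
    p = 5000 + np.random.normal(0, 0.5, t.size)
    p[t >= 5.0] -= 30.0
    for t0, segment in detect_npw(p, sample_rate_hz=100):
        print(f"candidate NPW at ~{t0:.2f} s, {segment.size} samples to upload")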

Another form of technical assistance enabled by data analytics is to provide Operators
with normative bounds for variables – an envelope around each variable that promotes
operational excellence by ensuring that Operators react to issues before they reach alert or
warning levels. These envelopes are learned automatically based on analysis of the variables
in context. For example, when a pump is switched on, the pressure rises and the flow increases,
and the target pressure and flow will be reached within a certain time. A difference in the
time taken to reach the target pressure and flow, or in the pressure and flow themselves, can
provide an early indication of a potential equipment issue.
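
A minimal sketch of how such a learned envelope could work is shown below; the choice of the pump start-up time as the variable, the simple mean-and-standard-deviation band, and the numbers are assumptions for illustration only:

    import statistics

    def learn_envelope(historical_times_to_target, k=3.0):
        """Learn a normative band for the time a pump takes to reach its target
        pressure/flow after a start, from historical start-up events."""
        mean = statistics.mean(historical_times_to_target)
        std = statistics.stdev(historical_times_to_target)
        return (mean - k * std, mean + k * std)

    def check_startup(time_to_target_s, envelope):
        """Flag a start-up that falls outside the learned envelope."""
        low, high = envelope
        if not (low <= time_to_target_s <= high):
            return (f"Pump start took {time_to_target_s:.1f} s "
                    f"(expected {low:.1f}-{high:.1f} s): possible equipment issue")
        return None

    # Example: historical starts took roughly 40-45 s; a 60 s start is flagged
    # before any alarm or warning limit is reached.
    envelope = learn_envelope([41.0, 43.5, 42.0, 44.1, 40.8, 42.9])
    print(check_startup(60.0, envelope))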

Analysis of measurement data can also be used to identify anomalies that may lead to
equipment issues. For example, a meter which continuously reports the same value can indicate
equipment failure and can be flagged early to minimize accounting loss. As accurate gas
quality is a vital part of gas measurement accuracy, ensuring chromatographs function
correctly is extremely important. Condition Based Maintenance (CBM) allows chromatograph
issues to be identified earlier, but analysis of the data can equally be used to identify issues;
for example, gas quality data from the chromatograph can be validated against simulation
models to identify anomalies. Reconciliation of data around a piece of equipment, or across
entire pipeline operations, can identify bad instrument readings and pinpoint mass and energy
loss locations, effectively acting as an asset validation layer.
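
The simplest of these checks, a stuck or flatlined meter, can be illustrated with a short sketch; the readings and run-length threshold below are hypothetical:

    def flag_stuck_meter(readings, max_identical=12):
        """Flag a measurement channel whose value has not changed for more than
        `max_identical` consecutive scans -- a common signature of a failed or
        frozen transmitter that would otherwise distort accounting."""
        run_value, run_length = None, 0
        for i, value in enumerate(readings):
            if value == run_value:
                run_length += 1
                if run_length >= max_identical:
                    return i  # index at which the stuck condition is confirmed
            else:
                run_value, run_length = value, 1
        return None

    # Example: a flow meter that freezes at 512.3 after the fifth scan.
    readings = [510.2, 511.0, 512.3, 511.8, 512.3] + [512.3] * 20
    print(flag_stuck_meter(readings))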

API 1167 suggests several KPIs which are helpful to garner insights into alarms and
related information, such as average and peak alarm rates per controller position, alarm
floods, frequently occurring alarms, chattering/fleeting alarms, stale alarms, alarm suppression,
and alarm attributes (configuration). These KPIs are helpful in managing alarms and reducing
the load on operators, but more useful context can be obtained through analysis of the alarms
and the alarm configuration; analyzing values for tags with frequently occurring chattering/fleeting
alarms can be used to establish alarm limit changes that reduce the number of alarms or events
without impacting the process. This allows more efficient handling of the operational process
without compromising safety.
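
As an illustration of this kind of alarm analysis, the sketch below (invented tag names and window settings) counts the densest burst of activations per tag to surface chattering/fleeting alarms that are candidates for deadband or limit review:

    from collections import defaultdict

    def find_chattering_alarms(alarm_events, window_s=60.0, min_activations=5):
        """Identify tags whose alarms repeatedly activate within a short window
        (chattering/fleeting alarms). `alarm_events` is a list of
        (timestamp_seconds, tag_name) activations."""
        by_tag = defaultdict(list)
        for ts, tag in alarm_events:
            by_tag[tag].append(ts)

        chattering = {}
        for tag, times in by_tag.items():
            times.sort()
            # Sliding window: find the densest burst of activations for this tag.
            start, worst = 0, 0
            for end in range(len(times)):
                while times[end] - times[start] > window_s:
                    start += 1
                worst = max(worst, end - start + 1)
            if worst >= min_activations:
                chattering[tag] = worst
        return chattering

    # Example: PT-101 toggles six times in under a minute, FT-205 alarms once.
    events = [(t, "PT-101") for t in (0, 8, 15, 22, 30, 41)] + [(100, "FT-205")]
    print(find_chattering_alarms(events))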

2.3. Artificial Intelligence (AI)

The use of AI (and Data Analytics in general) is gaining traction in Oil & Gas pipelines.
Specifically, one can look to the challenges (and opportunities) inherent in the conversion of
Big Data into actionable information – where Machine Learning is used to augment the human
ability to identify patterns or correlations in data (“Technical Assistance”).

Advances in AI over the last decade have resulted in algorithms, particularly in the
field of object or image identification, that can outperform even trained humans, and it is not
far-fetched to expect that, in the near future, AI-based systems will outperform humans in
other forms of analysis as well.

For example, Asset Performance Management systems may leverage, among other
techniques, predictive analytics based on machine learning and pattern recognition to provide
early identification and warning of asset issues.

Some pipeline companies have implemented Predictive Asset Analytics with rotating
equipment to predict failures weeks or even months in advance. This has allowed the pipeline
companies to schedule outages and plan maintenance in a controlled way with minimum
disruption, with personnel that have the correct expertise and safety certifications, and with
the equipment needed for the maintenance. In addition to the cost savings from avoiding
unplanned downtime, maintenance has also been less expensive because issues are addressed
before more abrupt failures occur and cause collateral damage to the asset.
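
One common form of such a predictive analytic is a model of expected behaviour trained on known-healthy operation, with an early warning raised when the measured value drifts away from the model’s prediction long before a fixed alarm limit would trip. The sketch below uses a plain least-squares fit and synthetic compressor data purely for illustration; commercial Asset Performance Management tools use more sophisticated pattern-recognition techniques:

    import numpy as np

    # Train a simple expected-behaviour model on known-healthy history:
    # discharge temperature as a linear function of compressor speed and load.
    rng = np.random.default_rng(0)
    speed = rng.uniform(3000, 3600, 500)              # rpm, healthy history
    load = rng.uniform(0.6, 1.0, 500)                 # fraction of rated load
    temp = 40 + 0.01 * speed + 25 * load + rng.normal(0, 0.5, 500)   # deg C

    X = np.column_stack([np.ones_like(speed), speed, load])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    residual_limit = 3 * np.std(temp - X @ coef)      # healthy scatter sets the band

    def check_point(speed_rpm, load_frac, temp_c):
        """Warn when the measured temperature deviates from what the
        healthy-operation model expects for the same speed and load."""
        expected = coef @ np.array([1.0, speed_rpm, load_frac])
        if abs(temp_c - expected) > residual_limit:
            return f"deviation of {temp_c - expected:+.1f} degC from expected behaviour"
        return None

    # A reading 6 degC above the modelled expectation is flagged well before a
    # fixed high-temperature alarm limit would be reached.
    print(check_point(3400, 0.85, 40 + 0.01 * 3400 + 25 * 0.85 + 6.0))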

It is difficult to quantify the value of failure avoidance, but estimates from Duke Energy
may be indicative of what can be expected. In one case, Duke’s “PRiSM” system predicted the
failure of a compressor, and inspection revealed minor damage to the LP blades. The early
intervention allowed Duke to avoid the damage to multiple stages of blades, packing, and
diaphragms which might have resulted from a severe blade liberation. The estimated avoided
cost of this “catch” was $4.1M. (Another single reported “catch” is estimated to have avoided
$34.5M in cost.)

At SGA 2018, TransCanada reported savings of more than $10M per year from its own
in-house compressor analytics program.

The use of AI to process images to identify pipeline integrity issues has been described
in the literature since at least the mid-1990s, and improvements in the field of image
processing have, as a result, improved pipeline integrity assessment using optical methods.

Analysis of data from Cathodic Protection equipment, corrosion monitoring and Inline
Inspections is also being used for early warning of potential pipeline failures such as leaks and
ruptures. Combining the data with external inspections provides additional metrics for more
accurate predictions and therefore improves the Integrity Management process.

An enabler for the development of self-driving vehicles is the ability to analyze the data
from an array of sensors to model the environment in which the vehicle is operating to create
enough “situational awareness” to allow autonomous control of the vehicle. One can expect
that leveraging the large continuous streams of data available in Oil & Gas pipelines could, in
time, create a comparable level of “situational awareness” to allow a measure of autonomous
control of pipelines.

2.4. Automation

Automation has always been prevalent in oil and gas pipelines, particularly in ensuring
personal and asset safety – examples include ESD automation, PLC based or hardwired safety
interlocks and so forth. But automation at higher levels such as process automation can be
considered.

Elements or building blocks of autonomous control – functionality to perform sequences
of actions based on triggering conditions – are emerging; for example, the ability to perform
automatic shutdowns based on unsafe conditions. Through analytics, expert systems and
artificial intelligence, system operation will become more automated, from scheduling through
to delivery of the product. Ultimately, it is a journey of continuous improvement: beginning with
better data and context for operator decision making, followed by recommendations and what-if
scenarios, and gradually progressing to automated control during normal operations and finally
in abnormal situations.

For example, pipeline control features such as “Action Sequences” allow a chain of
complex multi-station operations to be programmed, with automatic state detection and the
ability to revert to “safe” situations on failure of an operation. This enables process
automation such as performing a swing from one delivery station to another, with all
the associated changes in valves, setpoints, and so forth. Action sequences leverage line
blockage detection to identify the state of the pipeline network and whether a planned action
can be executed. SCADA systems can have reflex features to trigger sequences of actions
automatically based on a specific evaluation of criteria. Thus, many automation subsystems can
be coordinated by the SCADA system to automatically trigger and safely execute operations
(“Technical Assistance”).
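
The sketch below illustrates the general pattern of such a sequence engine – a precondition check before each step and a revert-to-safe path on failure. The step definitions and valve model are purely illustrative and do not represent any particular SCADA product’s Action Sequence feature:

    def run_action_sequence(steps, revert_to_safe):
        """Execute an ordered sequence of operations, checking each step's
        precondition first and reverting to a safe state if any step fails.
        Each step is (description, precondition_fn, action_fn)."""
        completed = []
        for description, precondition, action in steps:
            if not precondition():
                print(f"Precondition failed before '{description}'; reverting")
                revert_to_safe(completed)
                return False
            try:
                action()
                completed.append(description)
            except Exception as exc:
                print(f"Step '{description}' failed ({exc}); reverting")
                revert_to_safe(completed)
                return False
        return True

    # Example: a much-simplified delivery swing from station A to station B.
    valves = {"A": "open", "B": "closed"}

    steps = [
        ("open delivery valve B", lambda: valves["B"] == "closed",
         lambda: valves.update(B="open")),
        ("close delivery valve A", lambda: valves["A"] == "open",
         lambda: valves.update(A="closed")),
    ]

    def revert_to_safe(completed):
        # Minimal safe state for this example: restore the original line-up.
        valves.update(A="open", B="closed")
        print(f"Reverted after completing: {completed}")

    print("sequence ok:", run_action_sequence(steps, revert_to_safe), valves)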

When one adds the “situational awareness” derived from AI described earlier, one gains
the ability to automate aspects of pipeline operations at another level – up to the level
of autonomous control (“Decentralized Decisions”).

The concept of autonomous control may be somewhat daunting, given the undeniable
need for safe operations in a high-consequence industry (and one can argue this is no different
for self-driving vehicles, for example), but one can look to interim steps that leverage
the ability of a cyber-physical system to execute well-known tasks in a near-perfect way.

An example is utilizing an automated process to control the compressors and flow
control valves when setting up to deliver against a contract. As the optimal execution of this set
of operations is based on achieving an appropriate level of flow to meet the contract, the
relative experience level of different operators can result in different outcomes in terms of the
cost of execution – an inexperienced operator may operate in such a way as to cause an
under-take or over-take of gas. Automated processes can be programmed to achieve the optimal
balance of cost and reliable delivery. At the very least, problems in gas delivery volumes can
be projected and notified in a more timely way.
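
At its simplest, the early-notification part of this example can be illustrated as a linear projection of the delivered volume against the contract quantity; the function and numbers below are hypothetical, and a real scheduling system would use hydraulic modelling and nomination data:

    def project_delivery(delivered_so_far_m3, hours_elapsed, hours_in_day,
                         contract_m3, tolerance=0.02):
        """Project end-of-day delivery from the volume delivered so far and flag
        a likely under-take or over-take against the contract quantity early.
        Simple linear projection for illustration only."""
        projected = delivered_so_far_m3 * hours_in_day / hours_elapsed
        deviation = (projected - contract_m3) / contract_m3
        if deviation < -tolerance:
            return f"projected under-take: {projected:,.0f} m3 vs contract {contract_m3:,.0f} m3"
        if deviation > tolerance:
            return f"projected over-take: {projected:,.0f} m3 vs contract {contract_m3:,.0f} m3"
        return "on track"

    # Example: 8 hours into the gas day, only 280,000 m3 of a 1,000,000 m3
    # contract has been delivered, so the shortfall is flagged hours in advance.
    print(project_delivery(280_000, hours_elapsed=8, hours_in_day=24,
                           contract_m3=1_000_000))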


2.5. Cloud Computing

Cloud Computing is a key technology element of Industry 4.0, both for manufacturing
and for oil and gas pipelines. While many pipeline operators are hesitant to move SCADA-
related functions to “The Cloud”, there is increased adoption of Cloud Computing
technology in general – at least with respect to business functions, decision support, logistics,
billing, and reporting. Data is now being published in increasing amounts to either public or
private cloud infrastructures to democratize critical pipeline data in real time
(“Interconnection”). And it is not only data that is moving to the cloud, but also
applications.

The concept of a “Digital Twin” is being leveraged as a driving model for this approach
– data from cradle to optimized operations is available.

The information created during the design phase of an asset in a pipeline project is used
in construction, and the combined data are used to create a model which can be used for
operation and optimization of that asset (and even for re-engineering if required). This
data is continuously updated based on any changes to the assets, as well as real-time data from
operations, to ensure there is a realistic digital model of the physical assets.

A simple example is using the information on a pipe segment, such as the engineered MAOP,
lengths, and diameters, together with construction information such as materials and
elevations, and with PLC logic, to create a Real Time Transient Model (RTTM) which
is the “Digital Twin” of the asset in the field, including its dynamic operational characteristics.
This RTTM can be compared to data from field sensors, which allows for everything from
leak detection to survivability studies to process optimization. The Digital Twin is continually
tuned by analyzing the data from the sensors – for example, gradual sulphur or paraffin build-up
or equipment wear and tear will cause changes in flow coefficients and meter factors.
Tuning keeps the RTTM in line with the real world, allowing the Digital Twin to be more
valuable as a true reflection of the field.
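
A highly simplified sketch of this comparison-and-tuning loop is shown below; the flow variables, imbalance threshold, and relaxation factor are illustrative assumptions rather than the behaviour of any particular RTTM product:

    def compare_with_twin(simulated_flow_m3h, measured_flow_m3h, threshold_m3h=50.0):
        """Compare the RTTM (Digital Twin) prediction with the field measurement;
        a persistent imbalance may indicate a leak, a bad instrument, or a model
        that needs re-tuning."""
        imbalance = simulated_flow_m3h - measured_flow_m3h
        if abs(imbalance) > threshold_m3h:
            return f"imbalance {imbalance:+.1f} m3/h exceeds {threshold_m3h} m3/h: investigate"
        return None

    def retune_flow_coefficient(current_cv, measured_flow, simulated_flow,
                                relaxation=0.1):
        """Nudge a flow coefficient toward the field data so gradual effects such
        as paraffin build-up or equipment wear are absorbed into the twin."""
        correction = measured_flow / simulated_flow
        return current_cv * (1.0 + relaxation * (correction - 1.0))

    print(compare_with_twin(simulated_flow_m3h=1250.0, measured_flow_m3h=1180.0))
    print(round(retune_flow_coefficient(0.82, measured_flow=1180.0,
                                        simulated_flow=1250.0), 4))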

2.6. Augmented / Virtual / Mixed Reality (AR/VR/MR)

The realities of the changing demographics in oil and gas pipeline companies have
profound implications for skill development and retention. For example, where in the past
Control Room Operators would typically have 20-plus years of progressive experience, the new
reality is 5- to 10-year careers as Control Room Operators. This places a responsibility on
Control Room management to develop skills without being able to leverage years of progressive
experience. In addition, the growth of data being processed makes the work more challenging,
and Control Room management is expected to do more with less. The same applies to field
personnel: with more equipment in the field and less experienced personnel, getting field
technicians to a site with the right training and experience on the equipment is a growing challenge.

Augmented / Virtual / Mixed Reality (AR/VR/MR) is a set of technologies that is being
increasingly leveraged to address some of these challenges. By providing a very realistic Virtual
Reality (VR) based training environment, managers can provide low-consequence, high-impact
training – using real-world scenarios for not only typical operations but also abnormal or
emergency situations. Operators and technicians can be trained not only on leveraging the
technology, but also on the Standard Operating Procedures.


Using Augmented or Mixed Reality (AR/MR) systems, field technicians can be sent to
site and supported by the system in performing tasks – for example, by using cues from the
assets, the AR/MR system can provide the operating procedures for the specific model of the
asset and indicate to the technician where specific components of the equipment are located.
Remote support can also be provided through the system by SMEs who can observe and
provide inputs into the same AR/MR environment.

Naturally, with a Digital Twin approach, the cost to implement is lower, as the
information needed for the VR is already captured, and the realism is enhanced because the
underlying model is based on the continuously updated Digital Twin.

For leak detection, a combination of a high-fidelity simulation model and machine
learning can be used to improve leak detection outcomes. For example, thresholds for declaring
a leak based on a volume imbalance between simulated and real-time data can be tuned based
on historical data analytics.
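
A minimal sketch of such threshold tuning, using the statistical scatter of historical leak-free imbalances as the basis for the declaration limit, is shown below; the numbers are illustrative, and a production system would add machine-learning classification and operating-mode context:

    import statistics

    def tune_leak_threshold(historical_imbalances_m3h, sigma_multiple=4.0):
        """Derive a leak-declaration threshold from the historical scatter of the
        volume imbalance between the simulation model and real-time measurements.
        Sketch only: a deployed system would also weight line-pack transients and
        operating mode."""
        mean = statistics.mean(historical_imbalances_m3h)
        std = statistics.stdev(historical_imbalances_m3h)
        return mean + sigma_multiple * std

    # Example: leak-free history shows imbalances scattered around zero, so the
    # tuned threshold sits just above normal noise rather than at a conservative
    # fixed value.
    history = [2.1, -1.4, 0.8, 3.0, -2.2, 1.5, -0.6, 2.4, -1.9, 0.3]
    print(f"declare leak above {tune_leak_threshold(history):.1f} m3/h imbalance")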

3. Conclusions

Advances in Oil & Gas Pipeline operations can be mapped to the design principles of
“Industry 4.0”. Many of the same technologies which are enablers for the enhancement of
manufacturing to the level of “Smart Factories” are being leveraged in Oil & Gas pipelines to
achieve operational excellence, and one can argue that we are on the way to “Smart Pipelines”.

4. References

Helle, K.; Lachey, J. Pipeline management tools move online.
https://www.dnvgl.com/oilgas/perspectives/pipeline-management-tools-move-online.html.
Retrieved 2019-05-10.
