ASME Conference Presenter Attendance Policy and Archival Proceedings

This online compilation of papers from the 2018 12th International Pipeline Conference (IPC2018) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

Pipeline Safety Management Systems: Competency

The pipeline industry continues to look for ways to improve its compliance and performance. Management systems have increased prevalence in the pipeline industry, with recognition that carefully designed and well-implemented management systems are the fundamental method that should be used to keep people safe, protect the environment and align organizational activities.

Experience has shown significantly better success rates with management system implementation, both in terms of the quality and speed, when the person responsible for the design, implementation and sustainment of the management system has an integrated set of technical and enabling competencies. However, there is currently no standardized competency model that can be used to support a Management Systems Professional’s specialized knowledge and skills. The paper outlines the competencies needed by individuals to be effective in the design, implementation, measurement and evaluation of management systems.

Applying a ‘whole-person’ perspective, the model includes business, relational and technical competencies that contribute to performance excellence for management system practitioners, including outlining example behaviours at target level performance and proficiency, and supported by a defined body of knowledge.

This paper describes the Management System Competency Model, including how it can be used to create a position-specific development program for application within various organizations. This research establishes a basis for the creation of a practical, systematic and easy to use development road map for individuals and organizations who use or leverage a management system.

There is increasing pressure on the pipeline industry to be able to demonstrate that its asset management and engineering capability management are at a satisfactory level. This is needed to give policymakers, regulators and industry stakeholders confidence in the safety and environmental sustainability of petroleum pipelines.

Regulators, in particular, are seeking assurance from pipeline owner/operators that they have capable pipeline engineers designing, constructing, operating and maintaining petroleum pipelines. At present, there are no generally accepted approaches to recognising and developing pipeline engineering capability.

The paper will discuss three levels of capability recognition as: (1) registration – as pipeline engineers (not just in mechanical, civil or chemical engineers (overall standing level)) – (2) qualification (sub-discipline/job level) and (3) competency (task level). The most granular and useful of these is competency. This is because it is at the level that is most immediate: the task at hand.

Competency, the combination of knowledge and experience that leads to expertise, is increasingly seen as the best practice basis for learning, particularly for professionals. Significantly, once competencies have been defined in competency standards, they can become the building blocks used to define the requirements for both registration and qualification.

The Australian Pipelines and Gas Association (APGA) has developed a comprehensive competency system for both onshore and offshore sectors. There are 226 onshore competency standards and 57 offshore competency standards describing, in a succinct format, what is required to be competent.

The succinct format of the competency standards avoids the pitfalls of many other systems of competency description, providing enough information to be clear about what is required without unnecessary complexity. In addition to the detailed competency standards, the competency system has tools, resources and a progressive rating scale that make competency standards accessible and easily used. The competency system is characterised by such flexibility that, to date, APGA has identified 15 applications, all of which will add value to engineers and the companies that employ them.

The paper will explain, in detail, APGA’s Pipeline Engineer Competency System, how it works and how it can provide the building blocks for a wide range of tasks that support the training, development and recognition of pipeline engineers’ capabilities, including defining the requirements for registration and qualification.

The paper will provide case studies, based on the APGA Competency System, showing how it can be used to create requirements for qualifications and registration and to design in-house training and development plans.

Pipeline standards and regulations explicitly require personnel to be both competent and qualified to work on pipelines, but they neither define competent or qualified, nor provide methods or processes to demonstrate competence and qualifications.

This paper defines competence and qualification and introduces and describes “competency standards.” These standards are used to assess the competence of an individual and are an integral part of the process to qualify individuals as being competent. Individuals are proven to be qualified in a competency if they are successfully assessed against these standards.

The paper recommends the contents of a competency standard: the standard should clearly state its purpose and outcomes, and detail the knowledge, training, mentoring, and experience requirements, as well as an assessment method. Examples of these standards are presented, showing how competency standards provide a common definition of a competence and showing how competencies can be assessed against these standards. A case study of an assessment of an individual is also detailed.

The choice between a prescriptive and a performance-based competency standard is discussed, and it is shown that the choice is affected by the level of the competence, the complexity of the competence, the homogeneity of the industry, and the government regulator’s resources and capabilities to police the standard.

The paper explains that qualifications must be “portable”: as individuals move jobs, the qualifications they obtain need to be recognized by all companies. Portability is achieved by having the qualification “certified”. This certification is conducted by an independent body, which certifies that the processes followed (including any assessments) meet the requirements of the competency standard, and that the assessment and the award of the qualification have been audited and verified. Hence, a qualification is a two-step process: award and certification.

Pipeline Safety Management Systems: Industry/Pipeline Performance

Safety culture is not a new concept, with origins dating back to 1986 and the Chernobyl nuclear disaster.[1] The recognition of safety culture in organizations and its influence on incidents has been growing, with gaps in safety culture having been cited as a major contributory factor to recent failures in the oil and gas industry including the Piper Alpha event nearly a quarter of a century ago and was most recently identified as a causal factor in the 2010 Deepwater Horizon disaster.[2]

Many different approaches have been developed to measure and assess organizational attitudes and behaviours, with the goal of improving safety culture. Traditional approaches for measurement have focused on:

▪ Questionnaires or surveys.

▪ Interviews.

▪ Observations.

▪ Focus groups.

▪ Document analysis.

While these approaches have provided valuable information regarding safety culture, more progressive approaches are being considered by leading companies. The establishment of Safety Culture Indicators and Continuous Monitoring as a method for improving safety culture is becoming more prevalent. This new approach enables companies to leverage the management systems that already are in place for managing their increasingly complex operating environments. Regulators have recognized this too and are beginning to recommend that continuous monitoring be included in a company’s toolbox as an additional approach to assessing internal safety culture.

This paper describes a comprehensive safety culture maturity model created to provide organizations with a method to review their management system and assess their existing safety culture. The assessment aids the development of a suite of organization-specific indicators to facilitate application of a continuous monitoring approach for ongoing improvement of safety culture.

CEPA Integrity First® (Integrity First), led by the Canadian Energy Pipeline Association (CEPA) and a condition of membership, acts as a foundation for continual improvement, bringing our members together to share and implement leading practices in the areas of safety, environment and socio-economics. Integrity First includes three principles and ten priority areas (such as emergency management, pipeline integrity and water protection) where members collaborate, share leading practices and hold each other accountable.

Integrity First is a management systems approach designed by CEPA members for industry to achieve collaborative continual improvement. It supports the collective setting of priorities, plans, assessments and improvements.

While spreadsheets enabled the first rounds of assessments, CEPA required a solution that engaged multiple stakeholders over a complex timeline, coordinated activities clearly and precisely, while keeping the process transparent and efficient. The information generated is sensitive, so it must be kept secure while still being available for aggregation, reporting and reference. It needed to house communication tools so members could easily pull information and lastly, it needed to be easy to use.

In August of 2015, CEPA established a partnership with SPAN Consulting (SPAN) to address these challenges through its software as a service (SaaS) offering called Octane™.

This paper will review how CEPA designed and implemented a technical, web-based solution to enable an efficient, effective and transparent Integrity First with transformative impact. Specifically, through the use of this technology, there are now stronger communities of practice across industry with increased focus and effort on the opportunities to improve through real-time self-serve access to industry’s overall benchmarked performance, leadership and leading practices. CEPA’s commitment to enabling Integrity First is resulting in better adoption and improved performance.

We have all dealt with performance metrics in the pipeline industry. How do we measure operational excellence? Are we prioritizing the right corrective actions? Are our existing metrics fair and driving the right behaviors? Will they recognize success and actually show us and our clients that we are improving?

This paper describes how Enbridge Major Projects measures and knows our Quality is improving; how we prioritize, focus, and monitor Quality improvement.

Using our roadmap, your organization can transform existing data streams from anecdotal to well established facts that produce actionable results and drive business objectives.

To reach this outcome, Enbridge Major Projects quickly matured our Quality Culture by leveraging our strong Safety Culture and habits. On our journey to meaningful overall Quality metrics, Enbridge built a foundation through non-punitive incident reporting using incident resolution tools and a Cost of Quality model.

Cost of Quality models can be designed and executed in a variety of ways. This paper will focus on applying a model specifically suited for pipeline construction and operational activities. Key topics to be addressed include:

Examples will be provided for common pipeline applications, including valves, pipe, and other commodities and services. This approach has enabled Enbridge Major Projects to prioritize improvement actions and meet business objectives.

Applying a Cost of Quality model will enhance your operational excellence and greater adoption would provide the foundation for industry-wide Quality performance metrics that will recognize success and validate that Quality is improving in the pipeline industry.

In 2015 and 2017, Asset Integrity Management (AIM) System Audits were performed of a natural gas transmission and distribution company. The AIM systems were evaluated as part of a National Facilities Audit (NFA) for the Trinidad and Tobago T&T) Ministry of Energy and Energy Industries (MEEI) and for the company directly. To allow comparison of the company-sponsored audit with the NFA performed for the MEEI, the audit protocol and methodology used the same scoring system and benchmarking in both cases. This paper presents the results, divided as Notable Mentions and Opportunities for Improvement for the company. The findings are discussed in the context of the national oil & gas industry in Trinidad and Tobago, and the long-term vision and efforts of the company to improve its AIM performance.

The media and sections of the public have shown recently an acute interest in Pipeline operational performance incident statistics. Published data for North America shows that 99.999% of crude oil and petroleum products shipped by pipelines reach their destination safely. Some pipeline operators claim even better performance, 99.9996 % being one example. However, should failing to deliver 4 barrels of product for every million shipped be a legitimate cause for concern? If not how about the more general case of 1 per one hundred thousand?

Is pipeline performance being singled out unreasonably when compared to other threats to public and environmental wellbeing such as medical malpractice or industrial waste contamination? Evidence from Canada and elsewhere, indicates that, during their hospital stay, an appreciable number of patients, one in every 18, experience adverse events, such as medication error, injurious falls, infections, and other medical misadventures. Errors (mostly minor), in fulfilling pharmaceutical prescriptions show an even higher error rate — 1 in 4 in one recent study, yet the public appears to be unperturbed.

A common thread is determining what constitutes an acceptable level of risk whether individual or societal, voluntary or involuntary. Besides providing a broader context for pipeline risk, the paper explores the origin and intent of the environmental screening standard of 1 in 10−6, as well as the concept of setting risk tolerance to be as low as reasonably practicable — ALARP. The question of why there may be a reticence for many Pipeline Regulators to set, as other industries have, a prescriptive value for ALARP is considered.

Pipeline Safety Management Systems: Pipeline Safety/Integrity

This paper presents a safety case program approach used by Enbridge to assess proposed operational changes. This approach identifies the impact to threats, barriers, and consequences associated with a proposed change, and ensure the safety of the system is not compromised by following a plan-do-check-act methodology.

Enbridge integrity management uses a plan-do-check-act system. A key aspect of this is the integrity plan which is developed for each asset to balance integrity requirements (fitness for service) with business drivers (risk and asset plan). Completion of a safety case provides an independent check for each pipeline segment within the system, ensuring identified risks are managed to as low as reasonably practicable (ALARP). Additional mitigation actions are identified and implemented in the event that ALARP is not met.

Operational parameters are a key consideration in the development of the integrity plan and include variables such as flow rate, injection and delivery locations, service, maximum allowable operating pressures, temperature and pressure cycling. The safety case program acts as a check by considering the threats associated with the proposed change to operational parameters, and then identifies whether or not the current pipeline integrity barriers for that asset are sufficient for the proposed operation, and if ALARP is still achieved based on the safety case program assessment. Where current barriers are not sufficient, actions are identified and put into place as required. The program also considers alignment to set demonstrated program performance targets, including reliability targets such as probability of failure, and deterministic targets which are used to confirm the safety case status.

This paper details the operational change assessment process within the safety case program, and discusses the benefits where the process was used.

As a result of numerous stress corrosion cracking incidents in the 1980s and early 1990 the National Energy Board (NEB) held an Inquiry1 in 1995 on the SCC failure mechanism and how to prevent failures. One of the recommendations of the Inquiry was Companies were to develop a SCC management program to proactively identify and mitigate SCC. Based on the apparent success of the SCC programs in significantly reducing SCC failures, the NEB revised its Onshore Pipeline Regulations in 1999 (OPR-99)2 to require companies to develop an integrity management program (IMP) for all hazards.

This paper discusses the evolution of integrity management program (IMP) requirements and evaluates incident rates and other performance metrics to determine if there is evidence that IMPs have contributed to the improvement of safety of pipelines. The paper highlights the challenges associated with gathering incident and IMP performance metrics and evaluating the data to determine if there is a correlation between the implementation of IMP and pipeline safety. In addition, the analysis discusses the challenges associated with comparing data between different countries and regulatory jurisdictions. Suggestions for future improvement are identified.

Since the publication of API Recommended Practice (RP) 1173: Pipeline Safety Management Systems, in July 2015, the energy pipeline trade groups in North America (API, AOPL, AGA, INGAA, APGA and CEPA) have worked collaboratively to develop tools and programs to assist energy pipeline operators with the development and implementation of appropriate programs and processes. These resources include a Planning Tool, Implementation Tool and Evaluation Tool, as well as a Maturity Model that describes a continuum of implementation levels. The Planning Tool is used to compare an operator’s existing management system to the RP requirements and develop action plans and assign responsibilities to close gaps. It is intended to help operators achieve Level 1 maturity (develop a plan and begin work). The Implementation Tool is used to evaluate and summarize implementation status by question, element and overall, and helps track development of program implementation to Level 3 maturity. The Evaluation Tool plays two key roles addressing the conformity and effectiveness of the system. This tool is used to assess and report the level of conformity to the requirements, the “shall” statements, of the RP and possible Level 4 maturity. The Evaluation Tool also provides the means to appraise the effectiveness of an operator’s programs in achieving the objectives of the RP, asking the key question, “Is the system helping and driving improvement?” These resources can be supplemented by the voluntary third-party audit program developed by API and the Peer-to-Peer sharing process.

Pipeline Safety Management Systems: Regulatory

Current pipeline regulations in North America have changed significantly over the past several decades and will continue to change as public and regulatory scrutiny intensifies and new industry standards are developed (i.e. API RP 1173). As regulators assess the approach to take, they are increasingly looking at what other regulators are doing in their respective jurisdictions, including those at federal, state and provincial levels.

Despite historical commitments to conceptual models fostering cooperation between regulators and regulated entities, recent trends in the United States signify a departure from performance or outcome-based regulation toward a more prescriptive approach. Pipelines remain the safest method of transporting oil and natural gas.1 However, when pipeline incidents do occur, the consequences can be catastrophic and are often well publicized. Federal and state regulators are under increased pressure in the aftermath of high-profile incidents to assuage the concerns of legislators and the public at large.

This paper generally compares various regulatory models and the relative benefits and drawbacks of each. A more in-depth review of regulatory changes in the United States is examined, to analyze the potential intended and unintended consequences of the move towards more prescriptive pipeline safety regulations.

Gas pipelines and networks are subject to multiple regulatory governance arrangements. One regime is economic regulation which is designed to ensure fair access to gas markets and emulate the price pressures of competition in a sector dominated by a few companies. Another regime is technical regulation which is designed to ensure pipeline system integrity is sufficient for the purposes of public safety, environmental protection and physical security of supply. As was highlighted in analysis of the San Bruno pipeline failure, these two regulatory regimes have substantially different orientations towards expenditure on things such as maintenance and inspection which ultimately impact public safety.

Drawing on more than 50 interviews, document review and case studies of specific price determinations, we have investigated the extent to which these two regulatory regimes as enacted in Australia may conflict, and particularly whether economic regulation influences long-term public safety outcomes. We also draw on a comparison with how similar regulatory requirements are enacted in the United Kingdom (UK).

Analysis shows that the overall orientation towards risk varies between the two regimes. The technical regulatory regime is a typical goal-setting style of risk governance with an overarching requirement that ‘reasonably practicable’ measures are put in place to minimize risk to the public. In contrast, the incentive-based economic regulatory regime requires that expenditure should be ‘efficient’ to warrant inclusion in the determination of acceptable charges to customers. How safety is considered within this remains an open question.

Best practice in performance-based safety regimes such as those used in the UK and Australia require that regulators adopt an attitude towards companies based on the principle of ‘trust but verify’ as, generally speaking, all parties aim for the common goal of no accidents. Equally, in jurisdictions that favor prescriptive safety requirements such as the United States (US) the common goal remains. In contrast, stakeholders in the economic regulatory regime have significantly diverse interests; companies seek to maximize their individual financial returns and regulators seek to exert downward price pressures. We argue that these differences in the two regulatory regimes are significant for the management of public safety risk and conclude that minimizing risk to the public from a major pipeline failure would be better served by the economic regulatory regime’s separate consideration of safety-related from other expenditure and informed by the technical regulator’s view of safety.

Integrity Management Program (IMP) is a systematic and documented program for assuring asset integrity throughout the full life cycle of an asset. To ensure safe and reliable operation, the British Columbia Oil and Gas Commission (Commission) has been requiring its licensed pipeline operators through its regulations to develop and implement pipeline integrity management programs (IMPs) in accordance with Canadian Industry Standard CSA Z662. The auditing process, the collated results and findings from the IMP audit years (2011–15) were published in IPC 2016-64161[1].

Since 2016, the Commission has enhanced its IMP compliance assurance process, and aligned it with the management system approach using Deming’s model of plan-do-check-act (PDCA) for IMP components and incorporated a lifecycle approach that spans the entire lifecycle of a pipeline system from planning to abandonment. In addition, the Commission has adopted a multi-criteria decision-making approach when prioritizing which operators to audit. This method utilizes weighted rank approach and takes into account multiple factors, such as, previous IMP audit results, pipeline length and product, class location, incident frequency, and asset age. Through collaborative efforts with the University of British Columbia (Okanagan), an innovative risk based audit tool — Integrity Management Program Audit and Knowledge Tool (IMPAKT) has been developed to help evaluate the compliance of operators’ IMP in terms of the management system approach and its associated risk. This tool conducts three-dimensional analysis of IMP performance using the failure mode effect analysis (FMEA) technique and allows the Commission to generate a risk profile for each IMP component to determine which components are most critical, requiring immediate attention. The final audit results are presented as a Risk Priority Number (RPN), which is a product of severity, occurrence and action. An effective integrity management program requires a strong safety culture, therefore, safety culture aspects are incorporated into the risk based auditing tool, IMPAKT. This risk based evaluation process also allows the Commission to develop a compliance benchmark to make comparison between different operators’ IMP results for continuous performance improvement. This paper presents the innovative approach developed and implemented by the Commission for the IMP compliance oversight (auditing) process and implication of such changes.

The United States Department of Transportation (USDOT), Pipeline and Hazardous Materials Safety Administration (PHMSA), Office of Pipeline Safety recognizes there may be technologies and advancements not currently allowed by the federal regulations that can improve safety, and has processes to allow such technologies and advancements. These processes include Special Permits, State Waivers, and Other Technology Notifications. This paper describes observations and trends related to PHMSA’s accumulated data from the last few decades, and includes a summary of new technologies and innovative solutions that are not currently covered in codified standards or regulations.1

Over the last three decades, safety-critical industries (e.g. Nuclear, Aviation) have witnessed an evolution from risk-based to risk-informed safety management approaches, in which quantitative risk assessment is only one component of the decision making process. While the oil and gas pipeline industry has recently made several advancements towards safety management processes, their safety performance may still be seen to fall below the expected level achieved by other safety-critical industries. The intent of this paper is to focus on the safety decision making process within pipeline integrity management systems. Pipeline integrity rules, routines, and procedures are commonly based on regulatory requirements, industry best practices, and engineering experience; where they form “programmed” decisions. Non-programmed safety and business decisions are unique and “usually” unstructured, where solutions are worked out as problems arise. Non-programmed decision making requires more activities towards defining decision alternatives and mutual adjustment by stakeholders in order to reach an optimal decision. Theoretically, operators are expected to be at a maturity level where programmed decisions are ready for most, if not all, of their operational problems. However, such expectations might only cover certain types of threats and integrity situations. Herein, a formal framework for non-programmed integrity decisions is introduced. Two common decision making frameworks; namely, risk-based and risk-informed are briefly discussed. In addition, the paper reviews the recent advances in nuclear industry in terms of decision making, introduces a combined technical and management decision making process called integrity risk-informed decision making (IRIDM), and presents a guideline for making integrity decisions.

Workflows are the fundamental building blocks of business processes in any organization today. These workflows have attributes and outputs that make up various Operational, Management and Supporting processes, which in turn produce a specific outcome in the form of business value. Risk Assessment and Direct Assessment are examples of such processes; they define the individual tasks integrity engineers should carry out.

According to ISO 55000, achieving excellence in Asset Management requires clearly defined objectives, transparent and consistent decision making, as well as a long-term strategic view. Specifically, it recommends well-defined policies and procedures (processes) to bring about performance and cost improvements, improved risk management, business growth and enhanced stakeholder confidence through compliance and improved reputation. In reality, such processes are interpreted differently all over the world, and the workflows that make up these processes are often defined by individual engineers and experts. An excellent example of this is Risk Assessment, where significant local variations in data sources, threat sources and other data elements, require the business to tailor its activities and models used.

Successful risk management is about enabling transparent decision-making through clearly defined process-steps, but in practice it requires maintaining a degree of flexibility to tailor the process to the specific organizational needs. In this paper, we introduce common building blocks that have been identified to make up a Risk Assessment process and further examine how these blocks can be connected to fulfill the needs of multiple stakeholders, including data administrators, integrity engineers and regulators. Moving from a broader Business Process view to a more focused Integrity Management view, this paper will demonstrate how to formalize Risk Assessment processes by describing the activities, steps and deliverables of each using Business Process Model and Notation (BPMN) as the standard modeling technique and extending it with an integrity-specific notation we have called Integrity Modelling Language or IML.

It is shown that flexible modelling of integrity processes based on existing standards and best practices is possible within a structured approach; one which guides users and provides a transparent and auditable process inside the organization and beyond, based on commonalities defined by best practice guidelines, such as ISO 55000.

The standards for Indigenous engagement are evolving rapidly in Canada. The risks to project approvals and schedules, based on whether consultation has been complete, have been recently demonstrated by the denial of project permits and protests against projects. Indigenous rights and the duty to consult with affected Indigenous groups is based on the Constitution Act, 1982 and has been, and is being, better defined through case law. At the same time, international standards, including the International Finance Corporation Performance Standards and the United Nations Declaration on the Rights of Indigenous Peoples, are influencing government and corporate policies regarding consultation. The Government of Canada is revising policies and project application review processes, to incorporate the recommendations of the Truth and Reconciliation Commission of Canada; that Commission specifically called for industry to take an active role in reconciliation with Canada’s Indigenous peoples. Pipeline companies can manage cost, schedule and regulatory risks to their projects and enhance project and corporate social acceptance through building and maintaining respectful relationships and creating opportunities for Indigenous participation in projects.

The proposed Enbridge Line 3 Replacement Program would replace the aging pipeline from Hardisty, Alberta, Canada to Superior, Wisconsin, USA. For the Canadian route, an Ecological and Human Health Risk Assessment (EHHRA) was prepared for the National Energy Board (NEB) in Canada. In the United States, an Assessment of Accidental Releases (AAR) and the Supplemental Release Report were part of an Environmental Impact Statement (EIS) prepared for the Minnesota Public Utilities Commission (PUC) and Minnesota Department of Commerce, Energy Environmental Review and Analysis (DOC-EERA).

Computational oil spill modeling was used to assess the predicted trajectory (movement), fate (behavior and weathering), and potential effects (impacts) associated with accidental releases of crude oil along the proposed pipeline. This modeling included the 2-dimensional OILMAPLand and 3-dimensional SIMAP models. A total of 64 hypothetical release scenarios were investigated to understand the range of potential trajectories, fates, and effects that may be possible from multiple product types (Bakken, Federated Crude, and Cold Lake Winter Blend), released at any location, under varying environmental conditions.

Trajectory and fate modeling was used to predict the downstream movement and timing of oil, as well as the expected surface oil thickness, water column contamination, shoreline and sediment oiling, and proportion evaporated to the atmosphere. These results were then used to assess the potential environmental effects to demonstrate the variability of outcomes following a release under different release conditions.

There are few standards or regulations to help stakeholders consider land use and development in the vicinity of existing pipeline systems. Land use planning that considers the existence of pipeline systems can support the planning for and provision of emergency services and pipeline integrity. This approach can also promote public safety and awareness through consistent and collaborative stakeholder engagement early in the land use planning process.

In 2016, a CSA workshop was held with a variety of stakeholders impacted by land use planning around pipeline systems. The workshop identified that there was a need for consistency across the jurisdictions in the form of a national standard.

The main goal of the new CSA Z663 standard is to provide guidance and best practices for land use planning and development. It also addresses roles, responsibilities and engagement of all stakeholders to help establish a consistent approach to land use planning.

A review of CSA Z663 will illustrate how this document provides information, guidance and tools that are inclusive to all stakeholders. This paper will also highlight the history and key drivers behind the new CSA Z663 standard and provide an overview of the current scope and content. Finally, the paper will describe future considerations and additions to the standard.

The concept of the digital twin dates all the way back to the 1950s, when NASA, GE and other industrial manufacturers started creating abstract digital models of equipment to model their performance in simulations and maintain a record of the asset throughout its life span [1]. Over the years, more and more industries have adopted the digital twin paradigm to improve traceability, maintenance and analytics, allowing for improved sustainment of the asset or equipment while reducing various risks identified during life cycle management. Collectively, the digital twin concept has been found to improve the overall net present value of an asset. The oil and gas industry has slowly been adopting the digital twin paradigm of asset life cycle management over the past two decades, with the focus on facilities. Recently, field trials were completed to test and evaluate workflows and sensor platforms for the creation of a digital twin for pipelines. The trials resulted in highly accurate capture of pipeline centerlines, weld locations, Depth of Cover (DoC) and ditch geometry in digital formats. This paper describes the methodologies used and the results of an actual construction field trial, with a comparison to traditional data collection methods for these attributes. The value of creating a pipeline digital twin during pipeline construction in near-real-time is discussed, with an emphasis on the potential benefits to life cycle management and pipeline integrity.

Inline inspections conducted after new pipeline construction is complete, as a means of ensuring specific quality requirements are fulfilled, pose unique challenges compared with inline inspections of operating pipelines. Construction contractors are often responsible for conducting a post-construction inline inspection as part of construction quality verification; however, construction contractors often lack expertise in planning and conducting inline inspections. Schedule constraints for conducting inline inspections, often introduced because of prior construction delays, can contribute to poor planning and execution. The consequent undesirable outcome may be failed inspections, further delaying pipeline construction completion, turnover to the Client, and final payments.

It is in the interest of all stakeholders that inline inspections be completed in a timely manner and in a way that maximizes the likelihood that the needed pipeline data will be successfully acquired. It is crucial for post-construction inline inspection success that all stakeholders possess basic knowledge of operational requirements and inspection procedures. Additionally, adequate planning of the inline inspection can greatly mitigate the associated risks.

To ensure the necessary considerations and the division of responsibilities are clear and understood among all stakeholders, a Post-Construction ILI Execution Plan is prepared. The Inline Inspection Contractor is responsible for completing the Post-Construction ILI Execution Plan in consultation with the other stakeholders. The contents of the Post-Construction ILI Execution Plan include project information, run conditions, and stakeholder contact information. Moreover, it defines the assignment of stakeholder responsibilities and involvement for all aspects of inspection planning and execution.

The pipeline sector is facing a multi-faceted challenge regarding its workforce. Valuable knowledge is being lost as increasing numbers of technical experts and long-term employees retire from the industry. Concurrently, the public spotlight is focused on the environmental impact of the pipeline industry. Therefore, robust construction of new pipelines and effective maintenance of aging infrastructure are increasingly important. Herein lies the challenge: how does the industry transfer the knowledge required to ensure that personnel have suitable competency to maintain the integrity of the pipeline system? A scenario where new personnel efficiently gain knowledge through experience is critical.

An important aspect of achieving this is a more systematic and thoughtful approach to knowledge transfer. As part of its fundamental methodology for developing training and alternate methods for knowledge transfer, the team launched an initiative to review the literature and current industry approaches. This was done as a key input to developing a “Knowledge Taxonomy.” This tool simplifies the process for selecting the optimal method for effectively transferring key technical knowledge based on the desired level of competency (e.g., awareness building vs. mastery).

Specifically, the team identified a number of consistent themes and combined them with both sound educational theory and industry experience to develop a tool in the form of a practical framework. This Knowledge Transfer Taxonomy was then applied to a specific knowledge gap in industry as a case study. This paper will

1. Summarize, at a high level, the results of the literature review and current approaches;

2. Describe the framework (i.e., Knowledge Taxonomy) developed by the team;

3. Discuss a case study involving the application of this framework to a specific and real challenge.

Through this work, the team identified and developed specific strategies and tactics to effectively overcome some of the barriers to knowledge transfer. These experiences will be shared in the context of a specific situation that typifies the current challenges industry is facing in effective knowledge transfer.

Air drying is used after dewatering to dry a pipeline or piping facility before commissioning it with natural gas. This process typically involves blowing dehydrated air through the pipe sections until they are determined to be suitably dry. The question addressed in this paper is: how dry is dry? A common metric used to judge the pipe section’s dryness is the drying air’s outlet water dew point. Typically, air drying continues until a suitably low outlet water dew point, such as −40°C, is measured at the outlet of the pipeline or facility. However, there is currently a lack of understanding of how this final outlet water dew point relates to the remaining water and thus to the subsequent start-up of the pipeline or facility. If the outlet water dew point is higher than required, issues may arise upon start-up; e.g., hydrates could form along the pipeline or at downstream facilities. Conversely, if the outlet water dew point is lower than required, unnecessary time will have been spent on drying, at a correspondingly higher cost.

This paper advocates an approach to determining when air drying is complete that considers the start-up phase. The approach consists of two parts. In the first part, the air drying parameters (drying air flow rate, inlet water dew point, etc.) and the final outlet water dew point are used to quantify the volume and surface area of the water remaining after the drying process is completed. In the second part, the evaporation of this water into the gas flowing through the pipeline/facility after commissioning and start-up is modeled as a function of the gas flow rate, temperature, pressure and inlet water content. Then, the water content of the gas at the delivery points is calculated, and the resulting increase can be evaluated against the water content specifications at those points. The approach is exemplified by a 31 km NPS 48 pipeline over mountainous terrain.
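As a rough check on the dryness criterion, the water mass that saturated air can hold at a given dew point follows from the saturation vapour pressure over ice and the ideal gas law. A minimal sketch, using the Arden Buck correlation (an illustrative assumption; the paper's actual model relates dew point to remaining water volume and surface area in more detail):

```python
import math

def sat_vapour_pressure_ice(t_c):
    """Arden Buck correlation for saturation vapour pressure over ice (Pa),
    t_c = dew point in deg C (valid below 0 C)."""
    return 611.15 * math.exp((23.036 - t_c / 333.7) * t_c / (279.82 + t_c))

def water_content_g_per_m3(dew_point_c):
    """Water mass per m^3 of air saturated at the given dew point,
    via the ideal gas law with R_v = 461.5 J/(kg K)."""
    p_w = sat_vapour_pressure_ice(dew_point_c)
    t_k = dew_point_c + 273.15
    return p_w / (461.5 * t_k) * 1000.0  # g/m^3

# At the commonly used -40 C criterion, the drying air carries only on the
# order of 0.1 g of water per cubic metre; at -20 C it carries several
# times more, which illustrates why the chosen end point matters.
v_minus40 = water_content_g_per_m3(-40.0)
```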

Commissioning pressure tests are a critical life-of-asset record. Successfully achieving an acceptable pressure test can be challenging from both an execution and a documentation perspective. This paper aims to help streamline the approach to pipeline commissioning pressure tests between operators, to increase efficiency and drive consistency across the pipeline industry. Key lessons learned from the planning stages through to the quality control turnover are highlighted, including: a road map of Canadian regulations, tabulated equipment requirements, a suggested instrumentation setup, a template checklist for test plans, outlined company-to-contractor responsibilities, and a proposed internal process to manage and accept completed tests.

This paper describes a new design method to obtain the inelastic deformation of pipelines induced by temporary ground deformations. The proposed design method consists of an elastic solution and a strain conversion procedure which was developed to predict the inelastic strain distribution by using the elastic solution and a stress-strain curve. Roundhouse type, yield-plateau type, and trilinear stress-strain curves are considered. Validation of the proposed method is conducted by comparing the results predicted by the proposed method with the results obtained by finite element analyses.

Ovality in a pipe results in a stress concentration and may present a pipeline integrity concern. If ovalization is generated during manufacturing or transportation and found during the girth welding process before the pipe is buried, replacement of the oval segment is usually the solution. However, ovalization is sometimes also found in buried pipelines via in-line-inspection (ILI) either before the pipeline is put into service or during regular pipeline maintenance. The mitigation and/or replacement of oval pipes after burial can be expensive. It is vital to identify what constitutes an unacceptable ovality level and to remediate only those that pose a threat of failure during the service life of the pipeline. The existing assessment approaches for oval pipes, such as that in API 579, generally require knowledge of the amount of ovality at zero pressure. However, the ovality of buried pipes usually is only available from ILI runs carried out at a non-zero internal pressure. The internal pressure tends to push the pipe back toward a circular shape, a phenomenon known as re-rounding. As a result, the ILI-reported ovality is smaller than that under zero pressure. Described in this paper is a new approach which can be used to assess the integrity of oval pipe segments based on ovality either as measured with no pressure in the pipe or as reported by ILI conducted at an elevated level of pressure. The approach considers both burst failure and fatigue damage. The accuracy of the approach was verified by finite element analysis (FEA). With the help of this new approach, a general discussion about the threat of ovalization to the integrity of a pipeline is provided. The analysis indicates that ovalization in amounts usually observed does not reduce the burst pressure in most pipelines, but fatigue damage could be a concern. 
The fatigue damage due to ovalization is sensitive to pipe geometry, amplitude of pressure variation, and the minimum pressure level within the pressure cycles.
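The re-rounding effect can be illustrated with a simple linear-elastic sketch (an illustrative assumption only, not the FEA-verified approach of the paper): internal pressure p attenuates an initial ovality by roughly 1/(1 + p/p_cr), where p_cr = 2E/(1 − ν²)·(t/D)³ is the classical elastic ring-buckling pressure.

```python
def ovality(d_max, d_min, d_nom):
    """Ovality as commonly defined: (Dmax - Dmin) / Dnominal."""
    return (d_max - d_min) / d_nom

def rerounding_factor(p_mpa, d_mm, t_mm, e_mpa=207e3, nu=0.3):
    """Attenuation of initial ovality under internal pressure p:
    ovality(p) ~ ovality(0) / (1 + p / p_cr), where p_cr is the
    classical elastic ring-buckling pressure 2E/(1-nu^2)*(t/D)^3."""
    p_cr = 2.0 * e_mpa / (1.0 - nu**2) * (t_mm / d_mm) ** 3
    return 1.0 / (1.0 + p_mpa / p_cr)

def zero_pressure_ovality(ov_ili, p_ili_mpa, d_mm, t_mm):
    """Back-calculate zero-pressure ovality from an ILI measurement
    taken at internal pressure p_ili (re-rounding correction)."""
    return ov_ili / rerounding_factor(p_ili_mpa, d_mm, t_mm)

# Illustrative NPS 36 example: D = 914 mm, t = 12.7 mm, ILI run at 5 MPa.
# An ILI-reported ovality of 0.5% corresponds to a noticeably larger
# ovality at zero pressure in this linearized model.
ov0 = zero_pressure_ovality(0.005, 5.0, 914.0, 12.7)
```

This linearization overstates re-rounding at pressures well above p_cr, which is one reason the paper's approach relies on finite element verification rather than a closed-form factor.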

It has become increasingly difficult to successfully develop pipeline projects in North America. This stems from complex matters including environmental opposition, Indigenous rights, regulatory uncertainty, investor indecision and evolving policy. To manage these challenges, developers are advised to consider a route development methodology that provides both optionality and defensibility. This can be achieved through a process that characterizes the landscape based on level of constraint related to environmental and social factors, construction and operational limitations, strategic drivers and cost. Such a process must be analytically robust and able to adapt to new information and priorities emerging throughout the development phase. Particularly in the case of large-scale pipeline projects, traditional routing methods may prove too costly and time-consuming to undertake this analysis in a practical manner. Consequently, proponents may be left with fewer and less defensible route options.

Recently, the Aurora Pipeline Team sought to advance preliminary corridor routing under a paradigm of maximum optionality and defensibility in evaluating pipeline routes across northern British Columbia, inclusive of strategic interconnections. Implementing Golder Associates Ltd.’s automated routing decision support system, “GoldSET,” the team was able to rapidly perform a robust corridor options analysis covering over 400,000 km2. This systematic, data-driven process involved subject matter expert assessment of the level of constraint or opportunity associated with individual data layers in consideration of multiple thematic scenarios. Having consolidated and mapped the aggregated level of constraint across northern BC, routes were generated along paths of least constraint, with segments tested for agreement across multiple scenarios. In total, 72 routes comprising more than 50,000 km in total length were developed and evaluated for feasibility. This refinement process ultimately resulted in an interconnected network of approximately 180 pre-screened route segments totaling approximately 12,237 km of potential routes. The advantage provided in subsequent stages of the project was the ability to recognize, quantify and evaluate the tradeoffs between segments, and to adapt the route as fatal flaws were encountered. During ensuing, constructability-focused phases of the routing process, optionality had been pre-established, and route changes could be made quickly where required. The automated process, in combination with subject matter expert participation, also provided a clear and defensible rationale as to why routes were considered optimal and how potential impacts to sensitive features were addressed. The evaluation was completed in far less time and more cost-effectively than otherwise possible with traditional methods.

Horizontal directional drilling (HDD) has become the preferred method for trenchless pipeline installations. Drilling pressures must be limited and a “no-drill zone” determined to avoid exceeding the strength of surrounding soil and rock. The currently accepted industry method of calculating hydraulic fracturing limiting pressure with application of an arbitrary safety factor contains several assumptions that are often not applicable to specific ground conditions. There is also no standard procedure for safety factor determination, resulting in detrimental impacts on drilling operations. This paper provides an analysis of the standard methods and proposes two alternative analytical models to more accurately determine the hydraulic fracture point and acceptable drilling pressure. These alternative methods provide greater understanding of the interaction between the drilling pressures and the surrounding ground strength properties. This allows for more accurate determination of horizontal directional drilling limitations. A comparison is presented to determine the differences in characteristics and assumptions for each model. The impact of specific soil properties and factors is investigated by means of a sensitivity analysis to determine the most critical soil information for each model.

CSA Z662, Oil and gas pipeline systems, defines class location as “a geographical area classified according to its approximate population density and other characteristics that are considered when designing and pressure testing piping to be located in the area.” In other words, the purpose of class location designations is to identify areas where specific measures are considered necessary to enhance public safety. Designations range from Class 1 (rural) to Class 4 (urban with high-rise buildings).

The current class location framework relies mainly on a location factor (L) to represent reliability. Higher reliability is achieved by using more resistant pipe (i.e., thicker and/or stronger) to reduce the probability of failure from operational hazards, such as corrosion and mechanical damage caused by line strikes. Currently, the need for a particular level of reliability is driven principally by the number of people impacted.
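The location factor enters wall-thickness design directly through the Z662-style design formula P = 2St/D · F · L · J · T. A minimal sketch, rearranged for thickness (the factor values used below are representative assumptions; the governing values must be taken from the current edition of CSA Z662):

```python
def min_wall_thickness(p_mpa, d_mm, smys_mpa, F=0.8, L=1.0, J=1.0, T=1.0):
    """Minimum wall thickness t (mm) from P = 2*S*t/D * F*L*J*T,
    where F is the design factor, L the location factor, J the joint
    factor and T the temperature factor."""
    return p_mpa * d_mm / (2.0 * smys_mpa * F * L * J * T)

# Grade 448 (X65) gas pipeline, D = 914 mm, design pressure 8 MPa.
# Moving from a Class 1 (rural) to a Class 3 location factor (assumed
# here as 1.0 vs. 0.70) increases the required wall thickness:
t_class1 = min_wall_thickness(8.0, 914.0, 448.0, L=1.0)
t_class3 = min_wall_thickness(8.0, 914.0, 448.0, L=0.70)
```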

This paper discusses possible measures that can be implemented in the next edition of Z662 that, beyond requiring thicker pipe for certain products, will strengthen the class location designation system by considering the potential impact radius of an ignited gas pipeline rupture, as well as the occupancy and nature of buildings within assessment areas. The paper also discusses possible changes to improve environmental protection by introducing the concept of a designated geographical area (DGA) and associated requirements, enhancements to valve spacing requirements, and the handling of changes to class location designations for existing pipelines through interim measures and retroactivity.

Oil storage facilities (terminals) that receive fluids from pipelines or inject fluids into them are usually designed with a lower pressure rating than the pipeline between these facilities. This is mostly because the pressure expected in the terminal is much lower than the pressure required to transport the oil. However, these terminals are still subject to pressure surges caused by abnormal transient events during normal operations. In cases where the surge pressures exceed the allowed operating pressure of the equipment, a relief system can be installed to mitigate these surges to acceptable levels.

When constructing a new terminal or altering an existing one, the hydraulic calculations are generally based on the design values of the project, such as maximum and minimum flow rates. The hydraulic studies and simulations normally done by companies are based on steady-state conditions; however, to design intrinsically safe facilities, the system’s entire operating envelope should be considered at the design stage of the project. Once transient analysis results show the need to install a pressure relief device, the proper location of this equipment is critical for the effectiveness of the surge relief system in mitigating overpressures.

The effects of flow rate, piping configuration and initial pressure profiles were simulated and compared to determine their impact on pressure surges and on the critical devices along the flow path. Secondly, simulations were run with the relief system installed at different locations along the terminal piping, and the resulting changes in maximum pressure surges were compared.

The objective of this paper is to show the importance of a detailed transient analysis based not only on design parameters but also on operational scenarios to mitigate surge overpressures in a more cohesive manner. The secondary objective of the paper is to discuss key parameters that need to be considered for selecting the location of the surge relief valve to ensure critical devices are safe during the upset conditions. The analysis presented in this paper is applicable across a broad configuration of oil facilities.
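Before detailed transient simulation, the worst-case surge from an instantaneous flow stoppage can be bounded with the Joukowsky relation ΔP = ρ·a·ΔV, with the pressure-wave speed corrected for pipe-wall elasticity. A minimal screening sketch (the fluid and pipe properties below are illustrative assumptions, not values from the paper):

```python
import math

def wave_speed(bulk_modulus, rho, d_m, t_m, e_pipe=207e9):
    """Pressure-wave speed (m/s) in a thin-walled elastic pipe:
    a = sqrt((K/rho) / (1 + (K/E)*(D/t)))."""
    return math.sqrt((bulk_modulus / rho) /
                     (1.0 + (bulk_modulus / e_pipe) * (d_m / t_m)))

def joukowsky_dp(rho, a, dv):
    """Instantaneous pressure rise (Pa) for a sudden velocity change dv (m/s)."""
    return rho * a * dv

# Illustrative crude oil in NPS 36 pipe: K ~ 1.5 GPa, rho ~ 850 kg/m3,
# D = 0.914 m, t = 12.7 mm; a sudden 1.5 m/s flow stoppage gives a
# surge on the order of 1.4 MPa on top of the steady-state pressure.
a = wave_speed(1.5e9, 850.0, 0.914, 0.0127)
dp = joukowsky_dp(850.0, a, 1.5)
```

Such a bound only flags whether equipment ratings could be challenged; the location-dependent behavior discussed in the paper still requires full transient analysis.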

Thermal stress induced in a buried pipeline due to temperature variation is of great concern in Canada because of its extremely cold winters and warm summers. Thermal stress decreases with increasing burial depth, while the interaction forces due to ground displacement increase with burial depth. As a result, the optimum burial depth of a pipeline is of great importance to pipeline companies seeking to minimize pipeline-soil interactions in the case of temperature variations and ground displacements. Thermal stress is estimated from a heat transfer analysis considering the phase change in the soil using COMSOL. Soil-pipeline interaction based on the 1984 ASCE Guidelines [1] is used to account for the effects of ground movements. The combined stress on the pipeline is estimated as a function of burial depth and is presented in a curve for design purposes. Numerical analysis with ABAQUS shows the adequacy of the presented curve.
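The fully restrained case gives a quick upper bound on the thermal component of the combined stress (a textbook sketch, not the paper's COMSOL/ABAQUS analysis; all input values are illustrative):

```python
def restrained_longitudinal_stress(sigma_hoop, d_t, e_mpa=207e3,
                                   alpha=1.17e-5, nu=0.3):
    """Longitudinal stress (MPa) in a fully restrained buried pipeline:
    sigma_L = nu*sigma_hoop - E*alpha*dT (positive = tension), so a
    temperature rise d_t > 0 drives the pipe into compression."""
    return nu * sigma_hoop - e_mpa * alpha * d_t

# Tie-in at -20 C, operation at +25 C (dT = 45 C), hoop stress 200 MPa:
# the Poisson term (+60 MPa) is outweighed by the thermal term
# (about -109 MPa), leaving a net compressive longitudinal stress.
s_l = restrained_longitudinal_stress(200.0, 45.0)
```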

When tied to drilling results, geophysical surveys of trenchless water crossings provide important information on subsurface geotechnical conditions, including bedrock elevation and the locations of zones of granular material within overburden. Because the terrain can change quite dramatically at water crossings, it is difficult to acquire geophysical data that is continuous between the geotechnical boreholes. The resulting data gaps can decrease confidence in understanding the site geotechnical conditions, which increases uncertainties in the detailed engineering design of the trenchless water crossing (e.g., HDD, or MTBM method). We demonstrate here how some of the technical challenges associated with acquiring continuous geophysical data at water crossings can be overcome. These include the use of suspended ERT cables, and complementary waterborne ERT and seismic refraction surveys. To illustrate the efficacy of these techniques, we present case-studies from proposed HDD crossings of three different types of water bodies at sites in British Columbia and Alberta.

The traditional approach to managing project performance is Earned Value Management. There is a recent trend towards expanding traditional Earned Value Management practices to include the concept of Earned Schedule.

Whereas Earned Value provides insight as to how the project is trending in relation to the plan by assessing cost and schedule variances, Earned Schedule focuses on the time element of schedule performance throughout the project execution phase.

Earned Value, although very effective at providing visibility into cost performance, is not as transparent when it comes to schedule performance over time. Case in point: at completion, irrespective of how work progressed against the schedule (ahead of or behind plan), the schedule performance index will always be 1.0.

Earned Schedule overcomes this drawback, providing useful tools to report on schedule performance, and providing visibility to the project state from which to base informed decisions.

Earned Schedule analysis incorporates detail from the baseline and forecast schedules, as well as from the integrated project management cost report (earned versus planned). In addition to Earned Schedule metrics, other key metrics are factored into this approach to assess overall schedule performance.

Key metrics derived from the schedule and highlighted in this approach include:

• Critical Path Length Index (CPLI)

• Baseline Execution Index (BEI)

• Total Float Consumption Index (TFCI)

• To Complete Schedule Performance Index (TSPI)

• Predicted Forecast Finish Date (PFFD)

• Schedule Performance Index (time) (SPIt)

• Independent Estimate At Complete (time) (IEACt)

The intent of these metrics is to identify trends and assist in predicting project outcomes based on past performance. Since this approach is highly dependent on the schedule data, the more compliant a schedule is with industry best practices, the better the quality of the results. The metrics are negatively impacted by recent re-baselining, as re-baselining discards historical performance detail.
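The time-based metrics follow from Earned Schedule's core idea: project the earned value back onto the planned value curve to find the time at which that value was planned to be achieved. A minimal sketch of Lipke's linear-interpolation formulation (the period data below are illustrative):

```python
def earned_schedule(pv_cum, ev):
    """Earned Schedule in periods. pv_cum[i] is cumulative planned value
    at the end of period i+1; linear interpolation is used within the
    period where the planned value curve crosses ev."""
    c = 0
    while c < len(pv_cum) and pv_cum[c] <= ev:
        c += 1                      # count fully 'earned' periods
    if c == len(pv_cum):
        return float(c)             # EV has reached total planned value
    prev = pv_cum[c - 1] if c > 0 else 0.0
    return c + (ev - prev) / (pv_cum[c] - prev)

# Planned value by period-end; earned value 300 at actual time AT = 3:
pv = [100.0, 250.0, 450.0, 600.0]
es = earned_schedule(pv, 300.0)     # 2.25 periods of schedule earned
spi_t = es / 3.0                    # SPI(t) = ES / AT = 0.75
ieac_t = 4.0 / spi_t                # IEAC(t) = PD / SPI(t)
```

Unlike the cost-based SPI, SPI(t) does not converge to 1.0 at completion, which is the drawback Earned Schedule is designed to overcome.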

Frequent analysis of the schedule execution reporting metrics defined above provides transparency of project performance and brings visibility to early risk triggers in support of a proactive approach to project execution monitoring and control.

This paper will present a case study demonstrating how additional transparency through this approach highlighted a potential schedule risk. This increased visibility allowed the project team to reprioritize and implement proactive corrective actions to mitigate any potential impact to the project In Service Date (ISD).

With the current economic pressures being faced by the oil and gas sector, organizations are increasingly required to become more competitive on their capital projects. Enbridge has implemented the practice of Value Management (VM) to help achieve the needs and expectations of stakeholders with the least possible resources.

VM is a systematic approach that is used by a multidisciplinary team to improve the value of a project (or aspects of a project) through the analysis of its functions, and is most effective when applied at the planning and development stages. A value study enables the expected performance (i.e. the desired functions) of a project to be clearly identified at the onset, and assesses a range of possible solutions/alternatives against the functions required by the owner.

While VM is commonly used in the manufacturing industry, as well as on transportation and municipal projects, few examples of its application in the oil and gas sector were found. Enbridge researched a variety of VM best practices and created a framework that complements existing company practices.

This paper also highlights how the value methodology was recently applied to a capacity expansion project at the Front End Engineering and Design (FEED) stage. Our approach to the various elements of a value study will be discussed, including pre-workshop activities, the VM workshop, and post-workshop activities.

Enbridge has seen significant benefits from the VM studies completed on projects to-date. Given the broad applicability of the value methodology, it is believed that our approach can also be successfully applied in other areas (e.g. improving business processes).

Strain Based Design: Poster

This paper discusses the effects of local deformation, dent, and strain hardening properties on strain capacity in compression of a line pipe.

Compression tests were conducted using two pipes with a nominal diameter of 400 mm. These pipes had roundhouse-type stress-strain curves and correspond to Grade L290 in API (American Petroleum Institute) Spec 5L. One was a plain pipe without a dent; the other was a dented pipe. The depth of the dent was about 3% of the diameter. The test results show that the strain capacity can be reduced by 25% due to the effect of the dent.

A series of finite element analyses was conducted to investigate the compression behavior. The strain capacity in compression was defined as the longitudinal critical remote strain, where the remote strain distribution is free from the effects of the dent. First, the finite element analyses were verified to reproduce the results of the compression tests. Next, the size of the dent was varied in the finite element model, and several cases were analyzed to investigate changes in the strain capacity in compression. The strain capacity, i.e., the longitudinal critical remote strain, decreased to about half for a 3%-depth dent compared with a plain pipe.

The seismic integrity of a pipeline with a dent is discussed in accordance with the seismic design guideline issued by the Japan Gas Association. In the case of a strong earthquake, “Ground Motion Level-1”, the dented gas pipeline is safe even if the depth of the dent is 10% of the diameter. In the case of the maximum earthquake, “Ground Motion Level-2”, the gas pipeline might buckle longitudinally in soft ground.

This study explores the capability of a computational cell methodology and a stress-modified critical strain (SMCS) criterion for void coalescence, implemented in a large-scale 3-D finite element framework, to model ductile fracture behavior in tensile specimens and in damaged pipelines. In particular, the cell methodology provides a convenient approach to ductile crack extension suitable for large-scale numerical analyses, which includes a damage criterion and a microstructural length scale over which damage occurs. A series of tension tests conducted on notched tensile specimens with different notch radii for a carbon steel pipe provides the stress-strain response of the tested structural steel, from which the cell parameters and the SMCS criterion are calibrated. To investigate ductile cracking behavior in damaged pipelines, full-scale cyclic bend tests were performed on a 165 mm O.D. tubular specimen with 11 mm wall thickness, made of a pipeline steel with mechanical characteristics very similar to the structural steel employed in the tension tests. The tubular specimen was initially subjected to indentation by 3-point bend loading, followed by a compressive axial loading to generate large localized buckling in the dented region. The axial loading was then reversed to a tension loading applied until a visible ductile crack could be observed on the pipe surface. These exploratory analyses predict the tensile failure load for the pipe specimen associated with ductile crack initiation in the highly damaged area inside the denting and buckling zone, which is in good agreement with experimental measurements.

Oil and gas pipelines are commonly made of steel pipes manufactured through the UOE process. This process starts with a flat steel plate, bends it into a U shape, bends it further to form an O shape, welds the seam, and then radially expands (E) the pipe. The process induces significant residual stresses in the pipe wall. Such stresses have conventionally been ignored in past finite element analyses aimed at quantifying buckling strain thresholds. The present study develops a numerical technique to investigate the effect of the residual stresses induced in the UOE process on the local buckling strains of pipes. Two types of nonlinear 3D FEA models are developed to quantify the buckling strains of pipes under imposed bending deformation. The first model starts with a flat plate, models the UOE process to capture the residual stresses, and then subjects the pipe to imposed bending deformation; the second model assumes the pipe is free of residual stresses. Comparisons are then performed between the buckling strains predicted by the two models.

The strain capacity of pipes under combined loading is an important research topic within strain-based design. While strain capacity equations assume homogeneous pipe material properties, a realistic pipeline shows scatter in mechanical properties. Test and simulation programs have indicated that this “pipe heterogeneity” may reduce the tensile strain capacity under uniaxial loading by a factor of up to two. To date, its effect in other scenarios (compressive loading; combined internal pressure and axial plastic deformation) has received little attention. To investigate these scenarios and compare them with uniaxial tensile loading, Europipe, Salzgitter Mannesmann Forschung (SZMF) and Soete Laboratory, UGent have set up a large-scale test program on UOE pipe X70 (OD = 1219 mm, WT = 17.5 mm), comprising a full-scale pressurized bend test and curved wide plate tension tests. All tested welds joined pipes of nominally equal pipe grade from the same pipeline project, but with strongly different actual properties. Optical full-field strain measurements by means of digital image correlation reveal local effects of pipe heterogeneity on the strain distribution in the vicinity of the girth weld. All tests showed pronounced non-uniformity in strain development, in some cases even inhibiting plastic deformation in the stronger material as its weaker counterpart collapses. Implications of the observations with respect to strain-based design are discussed.

Recently, high-grade pipeline projects have been planned in hostile environments subject to geohazards such as landslides in mountainous areas, liquefaction in reclaimed land and frost heave in polar regions. Geohazards impose large-scale ground deformation on buried pipelines, causing large pipe deformation. Strain capacity is therefore important for these pipelines, and strain-based design is needed to keep gas transportation projects safe. High-grade linepipe steel tends to have a higher yield-to-tensile (Y/T) ratio, and it has been shown that a lower Y/T ratio improves strain capacity in both the buckling and tensile limit states. In onshore pipeline projects, pipes are usually transported in 12 m or 18 m lengths and joined in the field. Girth welds (GW) are therefore indispensable, and the strength matching of the girth weld to the pipe body is important.

In this study, the strain capacity of Grade X70 high-strain pipes (36″ OD, 23 mm WT) was investigated with two types of experiments: full-scale pipe bending tests and curved wide plate (CWP) tests.

The specimens for the full-scale bending tests were approximately 8 m long, with the girth weld at the middle of the joint length. A fixed internal pressure was applied during the bending test, so that both circumferential and longitudinal stresses occurred, simulating the actual in-service pipe condition. Test pipes were cut and welded, with the first two layers deposited by GTAW and the weld finished by GMAW. For one pipe, a YS-TS over-matching girth weld (OVM) joint was prepared relative to the pipe body grade; for the other pipe, an intentionally under-matching girth weld (UDM) joint was prepared. After girth welding, elliptical EDM notches were introduced in the GW HAZ as simulated weld defects. In both pipe bending tests, buckling occurred in the pipe body approximately 300 mm from the GW, after which deformation concentrated at the buckling wrinkle. The breaking locations differed between the two tests: in the OVM pipe, tensile rupture occurred in the pipe body on the back side of the buckling wrinkle, while in the UDM pipe, tensile rupture initiated from the notch in the HAZ. In the CWP test, the breaking location was the HAZ notch. There were significant differences in CTOD growth at the HAZ notch among these tests.

Pipeline construction activities and in-service interference events can frequently result in dents on the pipe. The pipelines can also experience high longitudinal strain in areas of ground movement and seismic activity. Current assessment procedures for dents were developed and validated under the assumption that the predominant loading is internal pressure and that the level of longitudinal strain is low. The behavior of dents under high longitudinal strain is not known. This paper discusses work funded by US DOT PHMSA on the assessment of dents under high longitudinal strain.

Parametric numerical analyses were conducted to identify and examine key parameters and mechanisms controlling the compressive strain capacity (CSC) of pipes with dents. Selected full-scale tests were also conducted to experimentally examine the impact of dents on CSC. The focus of this work was on CSC because tensile strain capacity is known not to be significantly affected by the presence of dents. Through the parametric analyses and full-scale validation tests, guidelines on the CSC assessment of dented pipes under high longitudinal strain were developed.

Over the past 15 years, extensive studies have been conducted on the tensile strain capacity (TSC) and compressive strain capacity (CSC) of pipelines. The existing studies were mainly targeted at the design and construction of new pipelines. However, the impact of anomalies (e.g., corrosion anomalies) on the TSC and CSC has not been explicitly and adequately considered.

This paper summarizes work performed as part of a major effort funded by the US Department of Transportation Pipeline and Hazardous Materials Safety Administration (DOT PHMSA) aimed at examining the impact of corrosion anomalies on the TSC and CSC of pipelines. In this work, the strain capacities were examined analytically, and the analytical work was compared to results from selected full-scale tests.

Based on the summarized work, guidelines were developed for assessing the TSC and the CSC of corroded pipes. The guidelines are applicable to different types of corrosion anomalies, including circumferential grooves, longitudinal grooves and general corrosion. The strain capacities can be calculated using the key material properties and dimensions of pipe and corrosion anomalies as inputs.

Existing corrosion assessment models were developed and validated under the assumption that internal pressure was the principal driver for burst failure and that longitudinal strain levels were low. The impact of moderate to high levels of longitudinal strain on burst capacity had not been explicitly considered.

This paper summarizes work performed as part of a major effort funded by the US Department of Transportation Pipeline and Hazardous Materials Safety Administration (DOT PHMSA) aimed at examining the impact of longitudinal strain on the integrity of pipelines with corrosion anomalies. This paper focuses on the burst pressure of corroded pipes under high longitudinal strains. It is known that longitudinal tensile strain does not reduce the burst pressure relative to that of pipes subjected to low longitudinal strains. Therefore, existing burst pressure models can be considered adequate when the longitudinal strain is tensile. However, longitudinal compressive strain was found to lead to a moderate reduction in burst pressure. Numerical analyses were conducted to study the effect of longitudinal compressive strain on the burst pressure of corroded pipes. A burst pressure reduction formula was developed as a function of the longitudinal compressive strain.

Full-scale tests were conducted to confirm the findings of the numerical analysis. Guidelines for assessing the burst pressure of corroded pipes under high longitudinal compressive strains were developed from the outcome of numerical analysis and experimental tests. The guidelines are applicable to different types of corrosion anomalies, including circumferential grooves, longitudinal grooves and general corrosion.
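To make the assessment idea concrete, the following sketch shows how a strain-dependent knockdown could be applied on top of a conventional flow-stress burst estimate for a long corrosion groove. The function form, parameter values and the `strain_factor` knockdown are all hypothetical illustrations; the actual reduction formula developed in the work is not reproduced here.

```python
def burst_pressure_corroded(smys_mpa, wt_mm, od_mm, d_mm, strain_factor=1.0):
    """Modified-B31G-style burst estimate for a corroded pipe (sketch only).

    Uses a flow stress S_flow = SMYS + 69 MPa and the long-defect limit of the
    remaining-strength factor, RSF = 1 - d/t. `strain_factor` (<= 1) is a
    HYPOTHETICAL knockdown standing in for the compressive-strain reduction
    formula developed in the paper.
    """
    s_flow = smys_mpa + 69.0          # flow stress (MPa)
    rsf = 1.0 - d_mm / wt_mm          # remaining-strength factor, long groove
    p_burst = 2.0 * s_flow * wt_mm / od_mm * rsf
    return strain_factor * p_burst

# Illustrative X52-like inputs: deeper metal loss and compressive strain
# both reduce the estimated burst pressure.
print(burst_pressure_corroded(359, 10, 508, 3))                     # no strain effect
print(burst_pressure_corroded(359, 10, 508, 3, strain_factor=0.9))  # with knockdown
```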

Wrinkles may form in pipelines experiencing high longitudinal strains in areas of ground movement and seismic activities. Current assessment procedures for wrinkles were developed and validated under the assumption that the predominant loading was internal pressure and that the level of longitudinal strain was low. The impact of wrinkles on the burst pressure of pipes under high longitudinal strain is not known. This paper describes work funded by US DOT PHMSA on the assessment of burst pressure of wrinkled pipes under high longitudinal strain.

Both numerical analyses and full-scale tests were conducted to examine the burst pressure of wrinkled pipes. The numerical analysis results were compared with the full-scale test data, and the effect of wrinkles on burst pressure was discussed. The biaxial loading conditions in the pipe were found to affect the burst pressure of wrinkled pipes.

This paper presents the basic concept and verification test results of a novel method designed to prevent failures of buried pipelines subjected to compressive deformations, which are usually caused by ground movements. In this method, the boundary conditions of the buried pipe are modified by installing soft elements next to the pipe before backfilling. With the new boundary conditions, the pipe response under large compressive forces takes the form of a stable global buckling mode with a predefined deformed shape. This behavior prevents the rapid increase in compressive axial force that causes local buckling, wrinkling, and subsequent softening and strain localization. Using this method, pipes can exhibit an extended compressive hardening response that absorbs large compressive displacements. This concept and its performance were evaluated through a series of lab tests on 4-1/2 inch pipe specimens under simulated field conditions. The test results confirmed the anticipated performance of this technique, which can evolve into a design method.

Strain based design concepts have been extensively used for subsea pipelines for both installation and service. However, most onshore transmission pipelines are designed assuming a maximum longitudinal stress, typically 90% SMYS. Some onshore pipelines have been designed for a limiting axial strain generated by causes such as seismic activity, frost heave, discontinuous permafrost or landslides. Models have been developed to predict the axial strain capacity in both tension (usually limited by the girth welds) and compression (where the limit is local buckling of the pipe wall).

In service monitoring of a pipeline initially designed on a stress basis may reveal that strains approaching or exceeding the design level are occurring, or are predicted to occur in the future. In these cases the pipeline operator will have to assess if the pipeline is fit for continued service. In principle strain based design approaches could be adapted for such an assessment.

Strain based design approaches place more onerous demands on the linepipe and the girth welds, but for a new pipeline these requirements can be addressed during design, material specification, procurement and weld procedure qualification. However, for an existing pipeline the data required to use strain based approaches may not be readily available. Some strain capacity models are only valid over a restricted range of inputs and so cannot be used in all cases. Hence there is a need to develop guidance for assessing the fitness for purpose of a stress based design pipeline that is found to be experiencing high axial strains.

The European Pipeline Research Group (EPRG) has initiated a program to develop such guidance. This paper presents the results of the first stage of this program. The requirements for data such as inspection records, weld metal fracture toughness and parent pipe mechanical properties are considered. A flow chart has been developed to guide operators when assessing an existing pipeline found to be subject to high strains, and a gap analysis identifies areas where additional work is required.

The Wapiti River South Slope is located 25 km southwest of Grande Prairie, AB. The slope is 500 m long and consists of a steep lower slope and a shallower upper slope, both of which are located within a landslide complex with ground movements of varying magnitudes and depths. The Alliance Pipelines Ltd. (Alliance) NPS 42 Mainline (the pipeline) was installed in the winter of 2000 using conventional trenching techniques at an angle of approximately 8° to the slope fall line. Evidence of slope instability was observed in the slope since the first ground inspection in 2007. Review of the available geotechnical data indicates two different slide mechanisms. In the lower slope, there is a shallow translational slide within a colluvium layer that is draped over a stable bedrock formation. In the upper slope, there is a deep-seated translational slide within glaciolacustrine and glacial till deposits that are underlain by pre-glacial fluvial deposits. Both the upper and lower slope landslide mechanisms have been confirmed to be active in the past decade.

Large ground displacements on the order of several meters between 2012 and 2014 in the lower slope led to a partial stress relief and subsequent slope mitigation measures in the spring and summer of 2014, which significantly reduced the rate of ground movement in the lower slope. Surveying of the pipeline before and after stress relief indicated an increase in lateral pipeline deformation (in the direction of ground movement) following the stress relief. This observation was counter-intuitive and raised questions regarding the effectiveness of partial stress relief in reducing stresses and strains associated with ground movements.

Finite element analysis (FEA) was conducted in 2017 to aid in assessing the condition of the pipeline after being subject to the aforementioned activities, and subsequent ground displacement from July 2014 to December 2016. This paper presents the assumptions and results of the FEA model and discusses the effect of large ground displacement, subsequent stress relief and continued ground displacement on pipeline behaviour. The results and findings of the FEA reasonably match the observed pipeline behaviour before and after stress relief. The FEA results showed that while the lateral displacement of the pipeline that was caused by ground movement actually increased following the removal of the soil loading, the maximum pipeline strain was reduced in the excavated portion. The results also indicated that ground displacement in the upper slope following the stress relief had minimal effect on pipe stresses and strains in the lower slope.

Pipelines in transmission pipeline networks often traverse land slopes along the right-of-way, especially near water crossings. While the vast majority of these slopes are stable, some might have a potential for instability-related movements. Accordingly, pipelines subjected to these movements are susceptible to strain overload, which may cause loss of containment through buckling and/or tensile elongation failure modes. In order to analyze the risk of failure of pipelines due to slope movement, it is beneficial to establish probabilistic approaches that can predict the likelihood of failure at each site given both aleatory and epistemic uncertainties. Estimation of such likelihood would support prioritization of integrity mitigation actions and confirm pipeline safety. There is a gap in the pipeline literature in terms of available probabilistic approaches to analyze, assess, and manage this integrity threat. Two probabilistic approaches are presented herein: a qualitative ranking analysis of slope hazards (QuRASH) and a semi-quantitative analysis of slope hazards (SQuASH). QuRASH is a qualitative approach that adopts site scores based on available slope characteristics, historical movements, expert opinion, and mitigation strategies. SQuASH is a reliability-based explicit limit state approach. Both approaches were applied to a large simulated sample of slope crossings with characteristics representative of North American transmission pipeline slope crossings. The resulting probabilities of failure were directly compared to those predicted based on expert judgement. The highest-ranked sites compared favorably with those evaluated by experts to exhibit elevated threats. This successful comparison provides a certain level of confidence in the proposed approaches.
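The explicit limit-state idea behind a reliability-based approach such as SQuASH can be illustrated with a minimal Monte Carlo sketch: failure occurs when the imposed strain demand exceeds the pipe's strain capacity. All distributions and parameter values below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # Monte Carlo trials

# Hypothetical random variables (strain, dimensionless):
# demand imposed by slope movement vs. pipe strain capacity.
demand = rng.lognormal(mean=np.log(0.002), sigma=0.5, size=n)
capacity = rng.lognormal(mean=np.log(0.010), sigma=0.3, size=n)

g = capacity - demand   # limit-state function: failure when g < 0
pof = np.mean(g < 0)    # Monte Carlo estimate of probability of failure
print(f"estimated PoF = {pof:.2e}")
```

In a real assessment the demand distribution would come from site-specific ground-movement and soil-pipe interaction models, and the capacity from tensile/compressive strain capacity equations with their model error.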

Current understanding of pipe-soil interaction during large ground movement events is insufficient due to their infrequency and the complexity of the infrastructure. Pipeline operators currently rely on a fully coupled continuum model of a landslide and pipeline interaction, or, more commonly, on a simplification of this interface using structural beam style soil-springs to transfer soil loads and displacements to the pipeline.

The basis for soil-springs is laboratory studies based largely on clean sand or pure clay, and on flat ground. Owing to the use of manufactured soils and flat ground, the soil-pipe interface modelling may not be valid for landslides.

The loading of a pipeline in a landslide, and how the soil-spring factors should change with space and time, are reviewed and may differ from commonly adopted guidelines. Physical modelling research is emerging that studies landslides and pipelines using fully instrumented scale models. In the absence of fully instrumented field pipelines, physical modelling should be used to validate continuum models.

Previous practices at Enbridge Pipelines Inc. regarding allowable deformation and/or strain in pipelines closely mirrored current North American standards and did not effectively manage the time-dependent aspect of active slope movement and similar progressive external-force phenomena applied to a pipeline. In addition, variability in the soil spring values and/or soil displacements used in FEA assessments can lead to significant discrepancies between predicted pipeline strain demand and actual conditions. Furthermore, the risk tolerance for a potential significant deformation that could impact the serviceability of one pipeline in a right-of-way (ROW) containing multiple pipelines was not well defined. Shutting down several pipelines in a ROW until they can be proven fit-for-service is time consuming. It also requires substantial engineering work to address the significant uncertainty in confirming that a pipeline has not deformed beyond a safe limit when the latest deformation in-line inspection predates significant soil movement.

A better tool was required to allow a more precise assessment. An improved fitness-for-purpose approach has been developed and used both to predict when a timely repair would be required and to conservatively set the in-line inspection (ILI) re-inspection interval for monitoring the condition of the pipe. This approach allowed Enbridge to step away from the greater uncertainty associated with understanding the impact of soil movement on pipe integrity. This paper presents the methodology used by Enbridge to redefine its fitness-for-purpose methodology using proven strain-deformation correlation models and strain rates estimated through multiple arrays of strain gauges. The discussion includes how the safety targets were re-engineered to account for multiple pipelines in a ROW.

Risk and Reliability

In the light of recent experience of wildfires in Alberta and British Columbia, Alliance Pipeline has strengthened its emergency preparedness for external fire events that have the potential to affect above-ground facilities connected to its high-pressure natural gas pipeline system. As part of this initiative, a quantitative methodology has been developed that enables the effects of a wildfire on an above-ground pipeline facility to be assessed.

The methodology consists of three linked calculations which assess:

1. the severity of the wildfire, based on information from the Canadian Wildland Fire Information System,

2. the transmission of thermal radiation from the wildfire to the facility, and,

3. the response of equipment, structures and buildings to the incident thermal radiation.
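The second step, transmission of thermal radiation, can be sketched with a simple point-source model in which the received flux falls off with the square of distance. The parameter values below (radiative fraction, atmospheric transmissivity, heat release rate) are hypothetical illustrations, not the values used in the actual methodology.

```python
import math

def incident_flux(q_release_kw, dist_m, rad_fraction=0.3, transmissivity=0.8):
    """Point-source estimate of thermal radiation received at a target.

    q_release_kw  : total heat release rate of the fire front (kW) - assumed
    dist_m        : distance from fire to target (m)
    rad_fraction  : fraction of heat radiated (hypothetical value)
    transmissivity: atmospheric transmissivity (hypothetical value)
    Returns incident flux in kW/m^2.
    """
    return transmissivity * rad_fraction * q_release_kw / (4 * math.pi * dist_m**2)

# Doubling the distance to the tree line quarters the incident flux:
print(incident_flux(50_000, 30))
print(incident_flux(50_000, 60))
```

This inverse-square behaviour is consistent with the finding below that increasing the distance to the tree line is the most effective mitigation.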

The predictions of the methodology agree well with the actual damage observed at a lateral block valve site following a wildfire in 2016. Application to example facility types (block valve sites, meter stations and compressor stations) has demonstrated that, in general, damage is only predicted for more vulnerable items such as cables.

The sensitivity of the predictions of the methodology to the input parameters and key modelling uncertainties has been examined. This demonstrates that the results are sensitive to the distance of the facility from the tree line and the assumed vegetation type. This shows the importance of verifying the location relative to the vegetation and selecting the appropriate vegetation type from the Canadian Wildland Fire Information System for site specific assessments. The predictions of the methodology are particularly sensitive to the assumed flame temperature. However, a value has been chosen that gives good agreement with measured thermal radiation values from wildfires.

Of the mitigation options considered, the most effective and practical is to increase the distance to the tree line. This measure has the advantage of reducing radiation levels for all items on the site. Even though the work shows that failure of exposed pipework due to wildfires is unlikely, maintaining the flow within pipes is recommended, as this increases the radiative flux at which failure is predicted to occur. However, since failure of cables, and hence control systems, would occur at lower flux levels, the fail-safe actions of such systems need to be confirmed. Shielding of cables or equipment in general is likely to be impractical but could be considered for particularly vulnerable equipment or locations.

In-line inspection (ILI) data is commonly used in corrosion growth models (CGMs) to predict corrosion growth in energy pipelines. This paper considers a hierarchical stochastic corrosion growth model that accounts for spatial and temporal variation in corrosion growth, the inherent measurement error of the ILI tools, and model uncertainties. These uncertainties are represented as unknown model variables and are often inferred using a Bayesian method [1], [2], with samples of the unknown parameters’ posterior probability density functions (PDFs) obtained using Markov Chain Monte Carlo (MCMC) sampling techniques [3].

ILIs can produce massive data sets. For MCMC-based inference techniques to yield reasonably accurate results, a very large number of samples is required. Combined with the massive data sets, this dramatically increases the scale of the inference problem, from an attainable solution to one potentially beyond today’s computing power. For this reason, MCMC-based inference techniques can become inefficient when ILI datasets are large. The objective here is to propose variational inference (VI) as an alternative to MCMC for determining a Bayesian solution for the unknown parameters in complex stochastic CGMs. VI approximates the posterior PDFs by treating inference as an optimization problem. Variational inference emerged from machine learning for Bayesian inference on large data sets; it is therefore an appropriate tool for analyzing mass pipeline inspection data [4]–[7].

This paper introduces VI to solve the inference problem and provides a solution for a hierarchical stochastic CGM describing the defect-specific corrosion growth experienced in pipelines, based on very large ILI datasets. To gauge the accuracy of the VI implementation in the model, the results are compared to a set of values generated using a stochastic gamma process representing the corrosion growth experienced by the pipe.
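A stochastic gamma process of the kind used as the benchmark above can be sketched as follows: defect-depth increments over disjoint time intervals are independent and Gamma-distributed, so the depth path is monotonically increasing. The parameter values here are illustrative, not calibrated to any ILI data.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_process_paths(t_years, shape_rate_per_yr, scale_mm, n_paths, steps=100):
    """Simulate defect-depth growth as a stationary gamma process.

    Each time-step increment is Gamma(shape_rate_per_yr * dt, scale_mm),
    so E[depth(t)] = shape_rate_per_yr * scale_mm * t.
    """
    dt = t_years / steps
    increments = rng.gamma(shape=shape_rate_per_yr * dt, scale=scale_mm,
                           size=(n_paths, steps))
    return increments.cumsum(axis=1)  # depth (mm) at each time step

paths = gamma_process_paths(t_years=10, shape_rate_per_yr=2.0, scale_mm=0.1,
                            n_paths=5000)
# Mean depth after 10 years ≈ shape_rate * scale * t = 2.0 * 0.1 * 10 = 2.0 mm
print(paths[:, -1].mean())
```

In a benchmark study, synthetic "true" depths from such a process would be corrupted with a tool measurement-error model to mimic ILI data before running the inference.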

Current regulations for prediction and management of potential delayed failures from existing pipeline dents rely primarily on depth and on conservative assumptions related to threat interactions, which have shown limited correlation with industry failures. This miscorrelation can make it difficult to manage the effectiveness and efficiency of pipeline integrity programs. Leading integrity techniques that entail detailed assessment of complex dent features rely on finite element analysis, which tends to be inefficient for managing large pipeline systems due to prohibitively complex modeling and analysis procedures. While efforts are underway across the industry to improve dent assessment models, these often require detailed information that may not be available to operators; moreover, they suffer from scattered model error, which makes them susceptible to unclear levels of conservatism (or non-conservatism). In addition, most techniques/models are deterministic in nature and neglect both aleatory and epistemic uncertainties. Operators typically rely on conservative assumptions based on subject matter experts’ opinions when planning mitigation programs in order to account for the different types of uncertainty associated with the problem. This leads to inefficient dig programs (with significant associated costs) while potentially leaving dents on the pipeline that cannot be quantitatively risk assessed using current approaches. To address these concerns, a dent assessment framework is needed that balances accuracy with the ability to assess dent and threat-interaction features at a system-wide level, with available information, in a practical timeframe that aligns with other integrity programs.

This paper expands upon the authors’ previously published work regarding a fully quantitative reliability-based methodology for the assessment of dents interacting with stress risers. The proposed semi-quantitative reliability model leverages a strain-based limit state for plain dents (including uncertainty) with semi-quantitative factors used to account for complex geometry, stress riser interactions, and operating conditions. These factors are calibrated to reliability results from more detailed analysis and/or field findings in order to provide a simple, conservative, analytical ranking tool which can be used to identify features that may require more detailed assessment prior to mitigation. Initial validation results are provided alongside areas for continued development. The proposed model provides sufficient flexibility to allow it to be tailored and calibrated to reflect a specific operator’s experience. The model allows for a consistent analysis of all types of dent features in a pipeline system in a short period of time, supporting prioritization of features while providing a base-level likelihood assessment to support calculation of risk. This novel development supports a dent management framework which includes multiple levels of analysis, using both deterministic and probabilistic techniques, to manage the threat of dents associated with stress risers across a pipeline system.

Much of North America, and indeed much of the global landscape, comprises locally or regionally steep slopes, river valleys, and weak or unstable geology. Landslides and ground movements continue to impact pipelines that traverse these regions. Pipeline integrity management programs (IMPs) increasingly expect quantitative estimates of ground movement or pipe failure as part of pipeline risk management systems. Quantitative analysis usually relies on one or more of statistics, physical models, and expert judgment. Statistics incorporate ground and pipe behavior (for hazard and vulnerability, respectively) over a broad area to infer local probabilities. They carry the weight of big data, but the local application is almost certainly incorrect (variability even within regions exceeds two orders of magnitude). Detailed geotechnical (hazard) and soil-pipe interaction and stress (vulnerability) models provide rigorous results, but require substantial effort and/or expert judgment to parameterize the inputs and boundary conditions. We present herein a structured tool to calculate probability of failure (PoF) using expert judgment supported by known, instrumented or observable conditions and statistics (where available). We provide a series of tables used as the basis for nodal calculations along a branch path of a decision tree, and discuss the challenges and results from application to over 100 sites in the Interior Plains. The method is intended as a practical, informative approach based on, and limited by, its data inputs. It is a flexible, fit-for-purpose assessment that takes advantage of the best available data; however, it relies on the user to articulate a level of confidence in, or the basis of, the results.
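A nodal calculation along one branch of such a decision tree typically multiplies conditional probabilities from the root event down to pipe failure. The node names and probability values below are hypothetical, included only to show the arithmetic.

```python
# Hypothetical nodal probabilities along one branch of a decision tree:
# PoF = P(slope moves) * P(movement engages pipe | move) * P(pipe fails | engaged)
nodes = {
    "ground_movement": 0.05,   # annual probability the slope moves (assumed)
    "pipe_engaged": 0.6,       # movement reaches pipe depth/alignment (assumed)
    "pipe_failure": 0.02,      # pipe fails given it is loaded (assumed)
}

pof = 1.0
for name, p in nodes.items():
    pof *= p
print(f"branch PoF = {pof:.1e} per year")  # 0.05 * 0.6 * 0.02 = 6.0e-04
```

In practice each nodal probability would be drawn from the tables, conditioned on observed or instrumented site conditions, and branch results would be aggregated across the tree.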

In Australia (and the UK), pipeline operating companies have a regulatory obligation to ensure that their assets are designed, constructed, operated and maintained so that risk to people and the environment is as low as reasonably practicable (ALARP). In many routine cases, demonstration that risk is ALARP is a matter of compliance with relevant technical standards. There are some cases, however, that are more complex.

If a pipeline has been subject to significant urban encroachment and does not conform to current design standards for this service, how does a pipeline operator decide whether risk controls are sufficient? In Australia, rather than either ‘grandfathering’ requirements or mandating retrospective compliance with new standards, operators are required to ensure pipelines are safe and that risk levels are acceptable. The answer in cases such as this is a matter of judgment and we have legal, moral and reputational responsibilities to get decisions such as this right. There is currently no formal requirement in the US for pipeline risks to be ALARP, although the concept is gradually being introduced to US industry safety law. Examples include US offshore well control rules, California refinery safety regulations and the nuclear sector concept of ‘as low as reasonably achievable’.

In this paper, we demonstrate application of the ALARP process to a case study pipeline built in the 1960s that has been heavily encroached by urban development. The Australian risk-based approach required formal ALARP assessment including consideration of options to reduce pressure, relocate or replace the pipeline, or increase the level of physical or procedural protection.

Current and predicted operating conditions on this existing pipeline allowed reduction in operating pressure in some of the encroached segments, sufficient to achieve the equivalent of current Australian requirements for ‘No Rupture’ in high consequence areas for new pipelines. In other areas this was not achievable and a lesser degree of pressure reduction was instigated, in combination with physical barrier protection. The physical barrier slabbing comprised over 7 km of 20 mm thick high-density polyethylene (HDPE) slabs, buried above the pipeline. This approach was new in Australia and required field trials to confirm effectiveness against tiger tooth excavators and rotary augers.

These upgrades to the case study pipeline have significantly decreased the risk of pipeline failure, by reducing both likelihood and consequences of accidental impact. In combination with rigorous procedural controls such as patrol surveillance and community liaison, real risk reduction has been achieved and ALARP has been demonstrated.

A natural gas pipeline network system is a critical infrastructure connecting gas resources to markets; it is composed of the transmission pipeline system, underground gas storage (UGS) and liquefied natural gas (LNG) terminals. A methodology to assess the gas supply capacity and gas supply reliability of a natural gas pipeline network system is developed in this paper. Due to random failures and maintenance actions of the components in the pipeline network system, the system can occupy a number of operating states. The methodology simulates the state transition process and the duration of each operating state using a Monte Carlo approach. When the system transits to another state, the actual flow rate changes accordingly. Hydraulic analysis, comprising thermal-hydraulic simulation and a maximum-flow algorithm, is applied to determine how the actual flow rate changes. By combining the hydraulic analysis with the simulation of the state transition process, the gas supply capacity of the pipeline network system is quantified. Furthermore, to account for the uncertainty of market demand, the load duration curve (LDC) method is employed to predict the demand at each consumer node. The gas supply reliability is then calculated by comparing the gas supply capacity with the market demand. Finally, a detailed procedure for gas supply capacity and gas supply reliability assessment of a natural gas pipeline network system is presented, and its feasibility is confirmed with a case study, in which the impact of market demand uncertainty on gas supply reliability is investigated in detail.
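The state-transition part of such a Monte Carlo methodology can be illustrated for a single component as an alternating-renewal (up/down) simulation. Exponential failure and repair times and the MTBF/MTTR values are assumptions for illustration; a network study would run many components jointly and feed each state into the hydraulic analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_availability(mtbf_yr, mttr_yr, horizon_yr, n_runs=2000):
    """Alternating-renewal simulation of one component's up/down states.

    Exponential up-times (mean mtbf_yr) and down-times (mean mttr_yr);
    returns the mean fraction of the horizon the component is available.
    """
    avail = np.empty(n_runs)
    for i in range(n_runs):
        t, up_time, state_up = 0.0, 0.0, True
        while t < horizon_yr:
            dur = rng.exponential(mtbf_yr if state_up else mttr_yr)
            dur = min(dur, horizon_yr - t)   # truncate at the horizon
            if state_up:
                up_time += dur
            t += dur
            state_up = not state_up
        avail[i] = up_time / horizon_yr
    return avail.mean()

# Steady-state availability ≈ MTBF / (MTBF + MTTR) = 2.0 / 2.1 ≈ 0.952
print(simulate_availability(mtbf_yr=2.0, mttr_yr=0.1, horizon_yr=50))
```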

Risk assessment is an effective and commonly practiced process in industry, including the oil and gas sector, as a basis for designing new pipeline terminals and stations and managing the integrity of existing facilities. A holistic risk assessment method, which may be qualitative or quantitative, includes both likelihood and consequence assessments of an undesired event. Prior to 2015, Enbridge Pipelines employed a qualitative risk assessment algorithm to assess the likelihood and consequence of a failure of liquids pipeline facilities.

Over the past decade Enbridge has identified a number of shortcomings with the qualitative approach, necessitating the development and use of Quantitative Risk Assessment (QRA) to support consistency and defensibility in risk-informed decision making. A QRA requires rigorous quantitative algorithms to measure public and environmental safety, and potential business consequences of an undesired event at a facility. While significant literature has been produced, and considerable effort has been expended to quantify the potential impacts of a flammable product release on public safety, very limited work has been done on the quantitative measurement of environment related impacts. In particular, limited research has been successful in aggregating environmental consequences, public safety and business consequences to estimate the total consequence of a liquid hydrocarbon release within a pipeline facility.

The consequence assessment of an unwanted event conducted through QRA can be combined with the associated likelihood to provide a quantitative measure of risk. This risk level may be used to support organizations in making risk informed decisions and in analyzing and treating facility risks, specifically in the:

• Identification of top risk facilities and high consequence functional areas;

• Identification of assets posing the most risk and worst case consequences;

• Understanding of system reliability risk and opportunities to optimize facility operation;

• Prioritization of facility maintenance projects in the capital and operating budget processes;

• Supporting regulatory requirements and expectations;

• Presentation of risk down to the equipment or component level; and

• Understanding of residual risk and achieved risk reduction.

This paper describes the development of a consequence model that monetizes the quantitative measure of public and environment safety, and potential business losses for a liquid product release at pipeline facilities. The proposed model characterizes the severity of impact of released product, expressed in dollars per event, as a function of system volume, proximity and category of receptors, asset location, and available controls.

The Enbridge Liquids Pipeline system comprises a large number of facilities including storage terminals, pump stations, injection sites, and delivery sites. Given the vast amount of small diameter piping (SDP) within company pipeline facilities, SDP represents a significant portion of total facility integrity risk. An event such as an equipment failure or product release can cause significant business impacts, and adverse consequences to the environment and/or the safety of operations personnel. A quantitative risk based approach is required in order to establish robust, risk-based plans and programs to maintain the integrity of these SDP sections.

Small diameter piping lengths are relatively short. Consequently, it is impractical to use SDP length as a unit of likelihood and risk measure. Instead, the preferred methodology is to determine the total number of assemblies for each type of SDP. In support of this approach, an inventory of SDP sections throughout the system has been gathered. For illustrative purposes, an example of a small diameter section would be a pressure transmitter branch connection. The isolatable section that would be risk assessed would start from the surface of the main station piping connection and continue up to the transmitter.

This paper presents the framework for likelihood and consequence assessment of SDP based on the system description above. This framework quantitatively estimates the risk of SDP failure and risk-ranks SDP sections in support of implementing and establishing a system wide Risk Based Inspection and Maintenance program for SDP.

Properly characterizing the consequences of pipeline incidents is a critical component of assessing pipeline risk. Previous research has shown that these consequences follow a Pareto type distribution for gas distribution, gas transmission and hazardous liquid pipelines where low probability – high consequence (LPHC) events dominate the risk picture. This behavior is driven by a combination of deterministic (e.g. pipe diameter, pressure, location factors, etc.) and random factors (e.g. receptor density at specific time of release, variable environmental factors at time of release, etc.). This paper examines how the Pareto type behavior of the consequences of pipeline incidents arises and demonstrates how this behavior can be modeled through the use of a quantitative pipeline risk model. The result is a more complete picture of pipeline risk, including insight into LPHC events. Use of the modelling approach for integrity management is discussed.
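The Pareto-type dominance of low-probability, high-consequence events described above can be illustrated with a small sampling experiment. This is purely illustrative: the tail index and sample size are arbitrary choices, not values taken from the paper.

```python
import random

def sample_consequences(n=100_000, alpha=1.5, xmin=1.0, seed=7):
    """Draw Pareto(alpha)-distributed incident consequences by
    inverse-CDF sampling: X = xmin / U**(1/alpha)."""
    rng = random.Random(seed)
    return [xmin / rng.random() ** (1.0 / alpha) for _ in range(n)]

cons = sorted(sample_consequences(), reverse=True)
total = sum(cons)
# fraction of the total consequence contributed by the worst 1% of events
share = sum(cons[: len(cons) // 100]) / total
```

For a tail index below 2 the distribution has infinite variance, so a small fraction of events carries a disproportionate share of the aggregate consequence; this is the LPHC behavior a quantitative risk model needs to reproduce.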

The application of reliability-based structural integrity enables the process of quantitative risk assessment as part of pipelines’ integrity management program (IMP). This paper explores two topics that present challenges in terms of the practical adoption of a reliability-based IMP. The first challenge is the balance between perceived and true risk when implementing a quantitative reliability-based integrity model. This is a cornerstone for building stakeholder confidence in the calculated probability of failure (PoF) which is applied to safety and economically driven integrity decisions. The second challenge is the assurance that all relevant sources of uncertainty have been incorporated, which is essential for ensuring an accurate representation of the risk of failure of the pipeline. The level of conservatism (i.e. sufficient margin of error to maintain safety) incorporated when addressing these challenges may create a situation where calculated PoFs become inflated; becoming disproportionate to the failure history and contradictory to the current safe operation of pipelines being modeled. Two different PoF calibration approaches are proposed as practical options to address these challenges. The first method calibrates model error using an operator’s in-service failure history (i.e. failures that occurred under normal operation). The second method uses a set of failure data (including hydrostatic test failures and in-service failures) as selected by the operator considering key factors to ensure adequate representation of their specific pipeline system. These options will be demonstrated by assessing the integrity reliability of a hypothetical pipeline system. This work is expected to help evaluate the feasibility of challenging current practices regarding practical inclusion of epistemic uncertainty in integrity reliability analysis of pipelines.

Pipeline stations, as an important part of long-distance pipeline systems, include many facilities that are highly concentrated and operate continuously. Risk assessment is an important foundation for the risk management of these stations. Since various uncertainties exist in quantitative risk assessment (QRA), this paper explores the theories and approaches of QRA for station accidents, and introduces specific mathematical theories for quantifying and dealing with uncertainties. The paper combines uncertainty theory with the QRA of gas distribution stations, analyzes the uncertain factors in the QRA of a gas distribution station, and establishes a Bayesian update model for estimating basic events’ failure rates and probabilities of failure on demand based on generic failure data and plant-specific data. It also offers a conversion method among conjugate prior distributions of different types. In addition, a probabilistic estimation model is set up by combining fuzzy set theory, expert judgments and fuzzy group decision making. The paper builds a Fuzzy Bow-Tie quantitative model for distribution stations under dependency relationships, and proposes a sensitivity analysis method for the accident model based on a fuzzy importance index, a fuzzy uncertainty index and a minimal cut sets importance index.
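The Bayesian update of a failure rate from generic and plant-specific data, as described above, has a closed form when a conjugate gamma prior is placed on a Poisson failure rate. A minimal sketch follows; the prior parameters and observation counts are hypothetical, not values from the paper.

```python
def gamma_posterior(prior_shape, prior_rate, failures, exposure_years):
    """Conjugate Bayesian update for a Poisson failure rate.

    Generic-data prior: rate ~ Gamma(shape, rate). Observing `failures`
    events over `exposure_years` of plant-specific operation gives the
    posterior Gamma(shape + failures, rate + exposure_years)."""
    post_shape = prior_shape + failures
    post_rate = prior_rate + exposure_years
    return post_shape, post_rate, post_shape / post_rate  # posterior mean

# generic prior: mean 0.02 failures/yr with broad uncertainty (hypothetical)
shape, rate, mean = gamma_posterior(2.0, 100.0, failures=1, exposure_years=400.0)
```

One observed failure over 400 component-years pulls the posterior mean rate well below the generic prior mean, which is the mechanism by which plant-specific evidence refines generic failure data.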

Saudi Aramco gas pipeline location classifications are designed with an approach similar to that of the American Society of Mechanical Engineers (ASME) B31.8, which segments the pipeline length and counts the population for each segment. For the segment width, ASME utilizes a fixed distance, i.e., 400 m, while Saudi Aramco uses the pipeline Rupture Exposure Radius (RER), a consequence-modeling-driven distance similar to ASME’s Potential Impact Radius (PIR). The design factors (i.e., wall thickness requirements) are selected based on the population density within the defined segments, which also affects the number of segments and emergency isolation valves required along the pipeline.

Previously, Saudi Aramco pipeline safety standards set two default RER values to be used in pipeline design based on conservative estimates: the RER was set at 1,000 m for pipelines less than 24″ in diameter and 2,000 m for pipelines 24″ and larger.

The Saudi Aramco standard defines the RER by modelling the downwind dispersion distance at ground level, in the case of a pipeline full bore rupture, to the limit of ½ the lower flammable limit (LFL) of the released vapor cloud; this modelled distance was shown to be smaller than the standardized default values.

As sweet gas pipeline systems are expanding rapidly to accommodate the increase in domestic demand in the Kingdom of Saudi Arabia, an efficient method for calculating the RER was developed and introduced into the standard. For future pipelines, lower RER distances result in more flexibility in route selection and lower pipeline location classes, and hence thinner wall thicknesses, fewer emergency isolation valves, and longer spans between sectionalizing valves, all of which translate to cost savings. Existing pipelines now require fewer upgrades when encountering urban development along their routes, have fewer High Consequence Areas (HCAs), and benefit from better repair prioritization.

By statistically analyzing and modeling the Saudi Aramco gas pipeline network, this paper discusses the development of an empirical formula that is representative and less conservative for estimating the ½ LFL flammable gas cloud dispersion distance of pipelines. The resulting calculation method was developed utilizing consequence modeling software and is expressed as a simple formula as a function of the pipeline pressure and diameter. The established method is currently adopted by Saudi Aramco pipeline safety standards, and resulted in a reduction of 74% in the average pipeline RER, with a standard deviation of 4 meters from the consequence modeling results and minor deviation in consequence distances when compared to international standards calculation methods such as the ASME PIR.
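For comparison, the ASME B31.8S potential impact radius for natural gas has the published form r = 0.69·d·√p (r in feet, d in inches, p in psig), and a correlation of the same algebraic shape, a function of pressure and diameter, can stand in for an RER-style formula. The coefficients in `rer_m` below are placeholders for illustration only, not Saudi Aramco's fitted values.

```python
import math

def asme_pir_m(diameter_in, pressure_psi):
    """ASME B31.8S potential impact radius for natural gas:
    r = 0.69 * d * sqrt(p), with r in feet, d in inches, p in psig."""
    r_ft = 0.69 * diameter_in * math.sqrt(pressure_psi)
    return r_ft * 0.3048  # convert feet to metres

def rer_m(diameter_in, pressure_psi, a=0.45, b=0.5, c=1.0):
    """Hypothetical RER correlation of the same form, r = a * d**c * p**b.
    The coefficients a, b, c are placeholders, NOT the fitted values
    adopted in the Saudi Aramco standard."""
    return 0.3048 * a * diameter_in ** c * pressure_psi ** b

pir = asme_pir_m(24.0, 1000.0)  # roughly 160 m for a 24-inch, 1000 psig line
```

Both distances grow linearly with diameter and with the square root of pressure, which is why a simple two-parameter formula can track full consequence-modeling results closely.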

In South America, there is no single standard that regulates the design, operation, maintenance and integrity management of pipelines. Most countries have developed their own regulations and standards based mainly on the ASME standards. These standards (such as ASME B31.8 and ASME B31.8S) are developed and updated considering the experience of different operators, but the results do not always consider the social and cultural difficulties of constructing and operating pipelines in South America. Expansion of existing residential and commercial areas, or the construction of new developments near these pipelines, can change a Class 1 location into a Class 2 or Class 3 location. This development is not always predictable; despite the efforts of South American pipeline operators to coordinate this expansion with the local authorities, growth in these countries is not well planned, and operators are forced to face the situation without anticipation and without the backing of the regulations. The operators are then unexpectedly left with a pipeline that no longer meets the requirements of its design code.

ASME B31.8 establishes alternatives to accommodate these changes within the design code: reducing the maximum allowable operating pressure of the pipeline, replacing the pipeline with increased wall thickness, or re-routing it away from the population. These alternatives have high costs and significant operational difficulties, especially when social conditions are not favorable. Additionally, some of these options do not even effectively solve the problem. Lowering operating stress levels does not always address the higher risk levels or safety concerns caused by the change in class. Increasing wall thickness can lower the probability of failure for a pipeline, but not for all combinations of threats, which depend on site-specific conditions.

The Pipeline Integrity Management System shall address all threats as specified in ASME B31.8S, ensuring human safety as its primary objective. Third Party Damage is an important threat which, for most pipelines around the world, has caused the largest number of incidents. To manage this threat, risk assessments have been employed successfully to determine risk based on land use zones, proximity to utilities, alignment markers, one-call and dig notifications, and surveillance intervals, among other variables.

By calculating the risk to a specific pipeline near a population after mitigation activities are implemented, it may be shown that the pipeline poses no more risk than other pipelines operating entirely in accordance with the design codes. Risks must be maintained “as low as reasonably practicable”, using cost-benefit analysis to achieve this criterion.

The reduction of risk is accomplished by implementing additional mitigation plans, allowing maintenance resources to be used effectively in areas where they will have the highest impact on risk. This paper shows how risk and engineering assessments, and their consequent mitigation plans, may be used to justify the safe operation of a pipeline without changing its original operating pressure following a change of class designation, exemplified with a case study from South America.

As the primary means of refined products transportation, multi-product pipelines play a vital role in connecting refineries to local markets. Once disruptions occur, they can jeopardize the security of oil supply to downstream markets, and even the economy and stability of society. Based on conventional reliability theory and a detailed scheduling method for multi-product pipelines considering hydraulic constraints, this paper proposes a multi-module systemic approach for the supply reliability analysis of a multi-product pipeline under pump unit failure conditions. Pump units are important auxiliary equipment in a multi-product pipeline, and their failure can greatly affect normal pipeline operation and downstream oil supply. The approach includes three modules: a pump units analysis module, a pipeline system analysis module and a reliability evaluation module. In the pump units analysis module, the Failure Mode and Effects Analysis (FMEA) method is adopted to analyse the correlations between pump unit failure modes and causes. The Monte Carlo simulation method is employed to generate different failure scenarios based on the estimated failure rate of the pump units. In the pipeline system analysis module, the detailed scheduling method for the multi-product pipeline is adopted to calculate the maximum supply capacity for all delivery stations under a specific scenario. Due to the difficulty of solving the detailed scheduling problem with hydraulic constraints directly, two mixed integer linear programming (MILP) models are established. In the reliability evaluation module, indexes of shortage, probability and adequacy are calculated to analyse the supply reliability quantitatively from both global and individual perspectives. Finally, the proposed approach is applied to a real-world multi-product pipeline in Zhejiang, China, demonstrating that it can provide significant guidance for the supply reliability analysis of multi-product pipelines.

The use of integrity reliability science is becoming a prevalent element in the pipeline integrity management process. One of the key elements in this process is defining what integrity reliability targets to achieve in order to maintain the safety of the system. IPC2016-64425 presented different industry approaches around the area of defining reliability target levels for pipelines. It discussed the importance of setting operators’ specific integrity target reliability levels, how to choose such targets, and how to determine the safety of a pipeline asset by comparing the probability of failure (PoF) against an integrity permissible probability of failure (PoFp) while keeping an eye on the estimated expected number of failures. Building upon the previous discussion, this paper reviews a risk-based approach for estimating integrity reliability targets that account for the consequence of a potential release. Given available technical publications, the as low as reasonably practicable (ALARP) concept, and operators’ specific risk tolerances, there is room for improving the communication of integrity reliability along with selected targets. The paper describes how codes, standards, and operators set reliability targets, how operator specific targets can be chosen, and how industry currently recommends liquid pipelines reliability targets. Moreover, the paper proposes different approaches to define practical reliability targets coupled with an integrity risk-informed decision making framework.

Internal corrosion modeling of oil and gas pipelines requires the consideration of interactions between various parameters (e.g. brine chemistry, flow conditions or scale deposition). Moreover, the number of interactions increases when we consider that there are multiple types of internal corrosion mechanisms (i.e. uniform corrosion, localized corrosion, erosion-corrosion and microbiologically influenced corrosion). To better describe pipeline internal corrosion threats, a Bayesian network model was created by identifying and quantifying causal relationships between parameters influencing internal corrosion. One of the strengths of the Bayesian network methodology is its capability to handle uncertain and missing data. The model had previously proven its accuracy in predicting the internal condition of existing pipelines. However, the model had never been tested on a pipeline in the design stage, where future operating conditions are uncertain and data uncertainty is high. In this study, an offshore pipeline was selected for an internal corrosion threat assessment. All available information related to the pipeline was collected, and uncertainties in some parameters were estimated based on subject matter expertise. The results showed that the Bayesian network model can be used to quantify the value of each piece of information (i.e. which parameters have the most effect now and in the future), and to predict the range of possible corrosion rates and pipeline failure probability within a given confidence level.

Third-party damage (TPD) is any damage to underground infrastructure that occurs during work unrelated to the asset. In 2015, there were 10,107 TPD incidents in Canada, causing over a billion dollars in estimated damage. TPD is the leading cause of failure for gas distribution pipelines; since distribution pipelines are generally located in areas with high population densities, TPD has significant safety and economic implications. In this study, a probabilistic model is developed to quantify the probability of failure of distribution pipelines due to TPD. The model consists of a fault tree model to quantify the probability of a hit given the occurrence of third-party excavation activities, and a methodology to evaluate the probability of failure given a hit. Fault tree analysis (FTA) is a top-down, deductive failure analysis method which uses Boolean logic to combine a series of basic events to analyze the state of a system. Prior research demonstrated the ability of FTA to quantify the probability of TPD occurring on natural gas transmission pipeline systems. These models allow for a quantitative analysis of preventative measures and, in conjunction with current practices, facilitate a predictive method to plan and optimize resource allocation for damage mitigation and emergency preparedness. The developed TPD model is validated using data provided from a region in Southwest Ontario. The model will provide distribution companies with a practical tool to identify third-party damage hot spots, develop proactive third-party damage prevention measures, and prioritize damage repair activities using a risk-based approach.
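The fault-tree combination of basic events into a top-event probability can be sketched with simple AND/OR gates under an independence assumption. The event structure and the basic-event probabilities below are hypothetical illustrations, not the model or data from the paper.

```python
def or_gate(probs):
    """P(at least one basic event occurs), assuming independent events."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """P(all basic events occur), assuming independent events."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# hypothetical basic-event probabilities per excavation activity
p_no_locate = 0.05   # excavator fails to request a utility locate
p_bad_locate = 0.02  # locate performed but inaccurate
p_no_marks = 0.10    # marks ignored or degraded before digging starts
p_unaware = or_gate([p_no_locate, p_bad_locate])            # digger unaware of pipe
p_hit_given_excavation = and_gate([p_unaware, p_no_marks])  # top event
```

Multiplying the top-event probability by the expected number of excavation activities near an asset yields the annual hit frequency, which is then combined with the probability of failure given a hit.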

Pipeline engineers routinely perform risk assessments using a linear approach that begins with data collection, progresses through threat identification, and concludes with risk assessment. This linear process leads to some inefficiencies. For example, since all data is gathered in the first step, inconsequential data might be collected that diverts resources from other pipelines. This paper presents a different approach, in which data is gathered iteratively based on its risk reduction value, derived from a sensitivity analysis, and its collection cost. Each time data is gathered, future risk predictions become more certain. The process stops when the cost of data gathering activities outweighs the benefit to risk predictions.
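The iterative stopping rule described above can be sketched as a greedy value-of-information loop. This is a simplification under stated assumptions: the candidate activities, their benefit scores (a stand-in for the sensitivity-analysis output), and their costs are all hypothetical.

```python
def iterative_data_collection(candidates, risk_value_per_unit=1.0):
    """Greedy sketch of risk-informed data gathering.

    Each candidate carries an expected risk-uncertainty reduction
    ('benefit') and a collection cost; data is gathered in order of
    benefit/cost ratio until no remaining candidate pays for itself."""
    plan, total_benefit, total_cost = [], 0.0, 0.0
    ranked = sorted(candidates, key=lambda c: c["benefit"] / c["cost"], reverse=True)
    for c in ranked:
        if c["benefit"] * risk_value_per_unit <= c["cost"]:
            break  # stop: gathering cost now outweighs the value to risk predictions
        plan.append(c["name"])
        total_benefit += c["benefit"]
        total_cost += c["cost"]
    return plan, total_benefit, total_cost

candidates = [
    {"name": "ILI run",        "benefit": 50.0, "cost": 20.0},
    {"name": "depth-of-cover", "benefit": 10.0, "cost": 4.0},
    {"name": "soil survey",    "benefit": 3.0,  "cost": 5.0},  # not worth collecting
]
plan, benefit, cost = iterative_data_collection(candidates)
```

In practice the benefit scores would be recomputed after each gathering step, since new data changes the sensitivity analysis; the sketch ranks them once only for brevity.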

To improve the safety of pipeline systems, engineers use different methods to diagnose hazardous pipeline accidents. However, most methods ignore the time dependence of pipeline failures. The aim of this paper is to provide a novel approach to analyzing the temporal structure of hazardous liquid pipeline incidents. The database of hazardous liquid spillages in the US between 2002 and 2018 is collected by the Pipeline and Hazardous Materials Safety Administration of the US Department of Transportation. The results suggest that the whole oil pipeline incident sequence cannot be modeled as a Poisson (random and independent) process, which means that a hazardous liquid pipeline incident is not statistically independent of the time elapsed since the previous event. Serious pipeline failures, however, are random and unpredictable. The analysis also indicates that equipment failure, corrosion, material failure and incorrect operation are the four leading failure causes, responsible for most of the total incidents. The study provides insights into the current state of hazardous liquid pipelines in the US and baseline failure statistics for quantitative risk assessments of such pipelines.
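A simple first diagnostic for whether an incident sequence is Poisson-like is the coefficient of variation (CV) of the inter-event times: exponential gaps give a CV near 1, clustering pushes it above 1, and regularity pushes it below. The toy event sequences below are invented for illustration, not PHMSA data.

```python
import statistics

def interarrival_cv(event_times):
    """Coefficient of variation of inter-event times.

    For a homogeneous Poisson process the gaps are exponentially
    distributed and the CV is approximately 1; a CV well above 1
    indicates clustering, and well below 1 indicates regularity."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# clustered toy sequence (bursts of incidents) vs a perfectly regular one
clustered = [0, 1, 2, 3, 50, 51, 52, 100, 101, 102]
regular = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
cv_clustered = interarrival_cv(clustered)  # well above 1
cv_regular = interarrival_cv(regular)      # zero
```

A full analysis of the kind the paper performs would also test the exponential fit of the gaps directly (e.g. with a Kolmogorov-Smirnov test) rather than relying on the CV alone.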

The ‘CO2SafeArrest’ Joint Industry Project (JIP) was set up with the twin aims of: (1) investigating the fracture propagation and arrest characteristics of steel pipelines carrying anthropogenic carbon dioxide (CO2), and (2) investigating the dispersion of CO2 following its release into the atmosphere. The project involves two full-scale burst tests of 24-inch, X65 buried line pipes filled with a mixture of CO2 and nitrogen (N2). An overview of the CO2SafeArrest JIP and details of the fracture propagation and arrest investigation appear elsewhere in two companion papers.

This paper presents the experimental investigation and computational fluid dynamics (CFD) simulations of the dispersion of CO2 following its explosive release into the atmosphere over the terrain at the test site in the first test.

The set-up of the experiment and the CFD model is described in detail, including the representation of terrain topography and weather (wind) conditions, and the condition at the ‘inlet to the dispersion domain’. The modelling was carried out prior to the actual event, and simulated the dispersion of the CO2 cloud for different wind speeds and directions. This analysis confirmed that the sensor layout, set up to obtain spot measurements of CO2 concentration over the terrain at the site, was adequate.

The predicted and experimental values of CO2 concentration at the nominated locations over the duration of the dispersion were found to be in good agreement. Results of this study are expected to be used in developing a generalized model for the dispersion of CO2 and for estimating the ‘consequence distance’ for such events. It is noted that this distance is necessarily a function of time due to the highly transient nature of the event.

A limit states design approach has been developed for geotechnical loads. The approach uses a strain based design format and requires the user to develop probability distributions for the maximum strain demand and minimum strain capacity. Checks are provided for both local buckling and tensile rupture, which are calibrated to meet specified risk-consistent reliability targets. The safety factor and the criteria used to define the characteristic strain demand and capacity are defined as functions of the reliability target and the coefficients of variation of the strain demand and capacity. The checks are calibrated for a wide range of target reliability levels and distributions to cover most cases related to slope creep, landslides, frost heave and thaw settlement. They can also be applied to seismic deformations, subject to confirmation that the strain demand and capacity distributions fall within the range of calibrated cases. The design checks provide guidance on how to account for the spatial and temporal characteristics of different geotechnical loading processes, including distinction between sudden and gradual load application, and between known and randomly located loading sites. The limit states checks can be used to design new pipelines and assess the safety of existing ones. Application to slope movements is demonstrated by a set of examples.
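The strain-based limit state check above compares a probability distribution of strain demand against one of strain capacity. A minimal Monte Carlo sketch of the underlying reliability quantity, P(demand > capacity), is shown below; the lognormal distribution choice, means, and coefficients of variation are illustrative assumptions, not the calibrated values from the paper.

```python
import math
import random

def strain_exceedance_probability(demand_mean, demand_cov,
                                  capacity_mean, capacity_cov,
                                  n=200_000, seed=1):
    """Monte Carlo estimate of P(strain demand > strain capacity),
    with lognormal demand and capacity (illustrative distributions)."""
    rng = random.Random(seed)

    def lognorm(mean, cov):
        # convert mean/COV of the lognormal to parameters of ln(X)
        sigma = math.sqrt(math.log(1.0 + cov * cov))
        mu = math.log(mean) - 0.5 * sigma * sigma
        return math.exp(rng.gauss(mu, sigma))

    failures = sum(
        1 for _ in range(n)
        if lognorm(demand_mean, demand_cov) > lognorm(capacity_mean, capacity_cov)
    )
    return failures / n

# compressive strain demand 0.3% (COV 0.3) vs buckling capacity 1.0% (COV 0.2)
pof = strain_exceedance_probability(0.003, 0.3, 0.010, 0.2)
```

The calibrated design checks in the paper replace this direct simulation with a safety factor and characteristic values chosen so that the check implicitly meets the target reliability for the stated ranges of COVs.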

Statistical data available from several international sources, such as the reports provided by the European institutions EGIG, CONCAWE and UKOPA, as well as by the American Department of Transportation – DOT, indicate that pipelines represent the safest mode of transportation for hydrocarbons and other dangerous products when compared to other alternatives, such as road, rail, waterway, etc. Operators ensure a high level of safety of their pipelines by investing large amounts of effort and resources in accident prevention, efficient contingency procedures, environmental protection and reliability along the life cycle of their assets.

However, the pipeline industry, both in Brazil and abroad, is frequently asked to demonstrate its safety performance by environmental and regulatory agencies, as well as by society, considering both the assets already in operation and those still to be built (new pipelines). Such requests call for more open and detailed communication between pipeline operators and the other stakeholders involved.

In this context, the organized and standardized collection of data related to pipeline failure events, such as failure mechanisms and their consequences, along with relevant and specific data regarding the assets and their operations, is essential to foster the process of knowledge construction on this topic. It allows generating consistent information both to meet the stakeholders’ requests and to improve risk management of pipelines by the operators, ultimately supporting decision-making.

Therefore, this work aims to create a Brazilian Pipeline Incident Database, considering first the gas and oil pipelines operated by TRANSPETRO, a PETROBRAS subsidiary. This research studies the characteristics, architectures, assumptions and principles adopted by the international pipeline failure databases currently available, considered here as benchmarks, in order to propose an analogous structure specific to the reality of the Brazilian pipeline system.

Reliability-based corrosion assessment criteria were developed for onshore natural gas and low vapor pressure (LVP) pipelines as part of a joint industry project. The criteria are based on the limit states design (LSD) approach and are designed to achieve consistent safety levels for a broad range of pipeline designs and corrosion conditions.

The assessment criteria were developed for two corrosion limit states categories: ultimate limit state, representing large leaks and ruptures; and leakage limit state, representing small leaks. For the ultimate limit state, a safety class system is used to characterize pipelines based on the anticipated severity of failure consequences as determined by pressure, diameter, product, population density and environmental sensitivity. Since the leakage limit state does not result in significant safety or environmental consequences, a single reliability target, applicable for all pipelines at all locations is used.

The assessment criteria formulations are characterized by three elements: the equations used to calculate the characteristic demand (i.e. operating pressure) and capacity (i.e. burst pressure resistance at a corrosion feature); the characteristic values of the key input parameters for these formulas (such as diameter, pressure and feature depth); and the safety factors defining the characteristic demand as a ratio of characteristic capacity. The process used to calibrate safety factors and characteristic input parameter values that meet the desired reliability levels is described, and an assessment of the accuracy and consistency of the resulting checks in meeting the reliability targets is included.

The assessment criteria include two methods of application: feature-based and section-based. The feature-based method divides the allowable failure probability equally between all features. It is simple to use, but conservative in nature. It is suitable for pipelines with a small number of corrosion features. The section-based method considers the failure probability of the corrosion features in a pipeline section as a group, and ensures that the total group failure probability is below the allowable threshold for the section. This method produces less conservative results than the feature-based method, but it requires more detailed calculations. It is suitable for all pipelines, and is particularly useful for those with a large number of features. The practical implications of the application of these criteria are described in the companion paper IPC2018-78608 Implementation of Reliability-based Criteria for Corrosion Assessment.
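The difference between the feature-based and section-based methods comes down to how the allowable failure probability is allocated, and can be sketched directly. The feature count, per-feature PoF, and section target below are hypothetical numbers chosen for illustration.

```python
def feature_based_allowable(total_allowable_pof, n_features):
    """Feature-based check: the allowable failure probability for the
    section is divided equally among all corrosion features."""
    return total_allowable_pof / n_features

def section_based_pof(feature_pofs):
    """Section-based check: probability that at least one feature in the
    section fails, assuming independent features."""
    survive = 1.0
    for p in feature_pofs:
        survive *= (1.0 - p)
    return 1.0 - survive

# 200 features, each with PoF 1e-5; hypothetical section target of 1e-2
pofs = [1.0e-5] * 200
section_pof = section_based_pof(pofs)                      # about 2e-3, passes
per_feature_limit = feature_based_allowable(1.0e-2, 200)   # 5e-5 per feature
```

The section-based aggregate is what actually matters for safety, which is why the feature-based division (each feature held to the full target divided by the feature count) is simple but conservative for sections with many shallow features.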

The US regulations and Canadian standards require that a System Wide Risk Assessment (SWRA) be performed for all pipelines. Typically, an annual SWRA is performed by operators and used to identify high risk sections. Appropriate identification of these high risk sections is expected to avoid significant failures, particularly in higher consequence locations. With current heightened public awareness levels and related regulatory oversight even a failure, such as a rupture, with relatively low safety and environmental consequences is considered undesirable. Post failure analysis often examines SWRA results to investigate if SWRA is identifying such locations appropriately. Are SWRAs developed with the intention of avoiding these failures? How can we ensure SWRA achieves these expectations?

This paper examines the purpose of SWRA and takes a data driven approach to critically assess its effectiveness. In the 21st century, where vast amounts of data are being generated through inspections, patrolling, monitoring, and management systems, TransCanada’s approach seeks to leverage all the evidence or leading indicators of high risk and imminent failures. However, data and subject matter expert opinions are not perfect and complete. Understanding these limitations and inadequacies, yet optimizing in the face of them, requires an honest representation of reality with considerations to limits of applicability and probable blind spots, together with clear decision-making to achieve a well-defined purpose.

This paper will describe the six-year evolution of a quantitative SWRA approach with a built-in continuous improvement cycle. Examples of learnings from failures, assessments, and analytical studies, and how they were incorporated into the SWRA, are demonstrated. The development of meaningful risk targets and their application is also explained. The particular details of scenarios where risk criteria have been exceeded in both high consequence and low consequence locations are examined and interpreted such that maintenance teams can address issues appropriately. The value of bringing all relevant data to a common risk platform is also demonstrated. In the 21st century, where data availability will only increase, appropriate holistic incorporation of these multiple data sets is critical to identifying where multiple threats interact. Depending on how the likelihood of failure and the consequences of failure are combined, different risk measures result, and the resultant risk under any one of them could be high. Therefore, it is important to cover all relevant risk measures and to develop criteria that govern them.

The implementation of a holistic SWRA to make the best possible decisions is demonstrated in practical situations where inputs are imperfect and vast data sets must be combed for meaningful indicators.
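One concrete reading of the point about multiple risk measures can be sketched as below. The names, consequence dimensions, and thresholds are entirely illustrative and are not TransCanada's actual model: the idea is simply that a segment may be acceptable under one risk measure yet exceed the criterion of another, so every relevant measure needs its own governing criterion.

```python
def risk_measures(pof_per_kmyr, safety_consequence, env_consequence):
    """Combine one likelihood of failure with separate (hypothetical)
    consequence dimensions to produce separate risk measures."""
    return {
        "safety": pof_per_kmyr * safety_consequence,
        "environmental": pof_per_kmyr * env_consequence,
    }

def exceedances(measures, criteria):
    """Flag each risk measure that exceeds its governing criterion."""
    return {name: measures[name] > limit for name, limit in criteria.items()}
```

For example, a remote segment with a high likelihood of failure may pass the safety criterion while exceeding the environmental one, and it would be missed by a model that evaluated only a single combined measure.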

A safety case is utilized within the Enbridge Pipeline Integrity Management Program as a means to provide evidence that the risks affecting the system have been effectively mitigated (LeBlanc, et al. 2016). The safety case is an independent, evidence-based assessment based on the system integrity management processes applied across all pipelines. This paper describes the process by which the safety case methodology was implemented to manage geohazard threats. The benefits of assessing geohazard and other integrity threats will also be discussed. The safety case report documents the opportunities to address the identified problems, in addition to the relationship between hazards, implemented controls, and associated susceptibility.

To demonstrate that adequate safety controls for geohazard threats have been incorporated into the operational and maintenance phase of the pipeline system, the geohazard management component of the safety case was assessed using a bowtie diagram. The results gave visibility to the geohazard program and its effectiveness. Predefined safety performance metrics with probabilistic and deterministic criteria are evaluated to confirm the geohazard program’s continued effectiveness.

Results from the safety case assessment identify opportunities for improvement and provide a basis for revision of maintenance, assurance and verification programs. Ultimately, the assessment demonstrates that geohazard threats in the pipeline system are being recognized and assessed. The assessment provides evidence that adequate resources and efforts are allocated to mitigate the risk, and identifies continuous improvement activities where needed. The safety case report, generated as the final portion of an integrity management framework, demonstrates that risk is as low as reasonably practicable (ALARP).

Stress Corrosion Cracking (SCC) is a time dependent mechanism. Three conditions are required at the same location for SCC to form: a susceptible material, a susceptible environment, and sufficient stress. Pipe age, operating stress level and coating type are significant parameters in determining susceptibility to near-neutral pH SCC, whereas additional parameters such as operating temperature and distance from the compressor station are considered for high pH SCC. Environmental conditions such as soil type, topography and drainage have also shown correlation to SCC susceptibility. Several integrity assessment methods can be used to identify SCC on a pipeline, including hydrostatic testing, in-line inspection (ILI), and direct assessment (DA). Because the occurrence of SCC is a complex phenomenon that depends on many parameters, it is important to develop a risk assessment model that can systematically incorporate all relevant evidence of SCC in a sensible way. This paper presents a robust risk assessment model for SCC, which uses evidence from failure histories, observations from assessments (i.e., digs, pressure tests, and ILIs), and mechanistic understanding of SCC (i.e., susceptible coating, pipe material, stress level, soil properties, etc.). This risk model is transparent and updateable, allowing incorporation of new scientific learnings and findings on SCC.
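The abstract does not give the model's mathematical form; a minimal sketch of one common evidence-combining scheme in this spirit is a Bayesian odds update, where each line of evidence (failure history, dig findings, susceptible coating, and so on) carries a likelihood ratio. All names and numbers below are illustrative, not the paper's formulation.

```python
def update_probability(prior, likelihood_ratios):
    """Bayesian odds update: posterior odds equal the prior odds
    multiplied by the likelihood ratio of each piece of evidence."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

Evidence favouring SCC (likelihood ratio above 1, e.g. a susceptible coating) raises the probability; clean inspection results (ratio below 1) lower it. New findings can be folded in at any time, which is the sense in which such a model is transparent and updateable.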

Pipeline design and integrity management programs are employed to ensure reliable and efficient transportation of energy products and prevent pipeline failures. One of the failure modes that has received attention recently is pipeline fatigue due to pressure cycling in liquid pipelines, promoting through wall cracking and the release of product. Being able to estimate the leakage rate and/or total release volume is important in evaluating the consequence of developing a through wall crack, operational responses when incidents occur, and remedial action strategies and timelines. Estimates of leak rates can be used in pipeline system threat and risk assessment, evaluation of leak detection system sensitivity, development of Emergency Response Plans and strategies, and post-event evaluation.

Fracture mechanics techniques consider the response of crack-like features to applied loading such as internal pressure, including estimation of crack mouth opening. Considering the differential pressure across the pipe wall and the crack opening area, estimated from the crack mouth opening, the flow of fluid through the crack can be conservatively estimated. To understand the conservatism of this analytical estimate of leakage rate, full-scale testing has been completed to evaluate the leakage rate through dent fatigue cracks of differing lengths under a range of internal pressures, and compare the empirical measured results to the analytical/theoretical estimates. The test procedure employed cyclic internal pressure loading on an end-capped pipe with a dent to grow fatigue cracks through the pipe wall thickness. Once a through wall crack was established, the internal pressure was held constant and the leakage rate was measured. After measuring the leakage rate, cyclic loading was employed to grow the crack further and repeat the leakage rate measurement with the increased crack length.
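The conservative analytical estimate described above can be sketched with a simple orifice-flow model. The elliptical crack-opening-area approximation and the discharge coefficient value below are common assumptions for illustration, not the paper's exact formulation.

```python
import math

def crack_opening_area(crack_length_m, cmod_m):
    """Approximate the crack opening as an ellipse:
    area = pi/4 * crack length * crack mouth opening displacement."""
    return math.pi / 4.0 * crack_length_m * cmod_m

def leak_rate(area_m2, dp_pa, density_kg_m3, cd=0.61):
    """Volumetric leak rate (m^3/s) from orifice flow:
    Q = Cd * A * sqrt(2 * dP / rho), with the differential pressure
    across the pipe wall driving the flow."""
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / density_kg_m3)
```

Because the crack mouth opening itself grows with internal pressure, the analytical rate rises steeply with differential pressure, which is qualitatively consistent with tight fatigue cracks leaking little at low pressure.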

The results of this experimental trial illustrate that the tight fatigue crack resulted in a discontinuous relationship between leakage rate and pipe internal pressure. Measurable leakage did not occur at low pipe internal pressures and then increased in a nonlinear trend with pressure. These results illustrate that a liquid pipeline with a through wall fatigue crack operating at a low internal pressure, or one having taken a pressure reduction, can have low leakage rates. The data and results presented in this paper provide a basis for an improved understanding and description of leakage rate estimates at pipeline fatigue cracks, and provide insights into leakage rates and how to conservatively estimate them for fatigue crack consequence evaluation.

Pipeline risk models are used to prioritize integrity assessments and mitigative actions to achieve acceptable levels of risk. Some of these models rely on scores associated with parameters known or thought to contribute to a particular threat. For pipelines without in-line inspection (ILI) or direct assessment data, scores are often estimated by subject matter experts and as a result, are highly subjective. This paper describes a methodology for reducing the subjectivity of risk model scores by quantitatively deriving the scores based on ILI and failure data.

This method is applied to determine pipeline coating and soil interaction scores in an external corrosion likelihood model for uninspected pipelines. Insights are drawn from the new scores as well as from a comparison with scores developed by subject matter experts.
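A hypothetical sketch of the score-derivation idea: compute observed corrosion feature densities per category from ILI data and normalize them onto the risk model's score scale. The categories, counts and scale below are invented for illustration and do not come from the paper.

```python
def derive_scores(feature_counts, lengths_km, score_max=10):
    """Map each category (e.g., coating type or soil type) to a score
    proportional to its observed feature density from ILI data."""
    densities = {k: feature_counts[k] / lengths_km[k] for k in feature_counts}
    d_max = max(densities.values())
    return {k: round(score_max * d / d_max, 1) for k, d in densities.items()}
```

Normalizing against the worst observed category anchors the score scale in data rather than in subject matter expert judgment, which is the subjectivity reduction the paper targets.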

The United Kingdom Onshore Pipeline Operators Association (UKOPA) was formed by UK pipeline operators to provide a common forum for representing operators’ interests in the safe management of pipelines. This includes providing historical failure statistics for use in pipeline quantitative risk assessment, and UKOPA maintains a database to record these data.

The UKOPA database holds data on product loss failures of UK major accident hazard pipelines from 1962 onwards and currently has a total length of 21,845 km of pipelines reporting. Overall exposure from 1952 to 2016 is 927,351 km years of operating experience with a total of 197 product loss incidents since 1962. The low number of failures means that the historical failure rate for pipelines of some specific diameters, wall thicknesses and material grades is zero or statistically insignificant. It is unreasonable to assume that the failure rate for these pipelines is actually zero.

In addition to product loss incidents, the UKOPA database contains extensive data on measured part-wall damage that did not cause product loss, unlike the European Gas pipeline Incident data Group (EGIG) database, which also includes the UK gas transmission pipeline product loss data. The data on damage to pipelines caused by external interference can be assessed to derive statistical distribution parameters describing the expected gouge and dent dimensions resulting from an incident. Overall external interference incident rates for different class locations can also be determined. These distributions and incident rates can be used in structural reliability based techniques to predict the failure frequency due to external interference for a given set of pipeline parameters.

The current distributions of external interference damage were derived from data up to 2009 and presented as Weibull distributions for gouge depth, gouge length and dent depth. Analysis undertaken for the COOLTRANS CO2 pipeline project, undertaken by National Grid in the UK, has identified several improvements to the recommended UKOPA approach to external interference failure frequency prediction. This paper summarises those improvements and presents updated damage distribution parameters from data up to 2016.
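A structural-reliability use of such Weibull damage distributions can be sketched by Monte Carlo: sample a gouge depth per incident, apply a limit state, and scale by the external interference incident rate. The shape, scale, and simple depth-based limit state below are illustrative only, not the updated UKOPA parameters.

```python
import random

def failure_frequency(incident_rate_per_kmyr, shape, scale_mm,
                      wall_mm, critical_fraction=0.8, n=100_000, seed=1):
    """Failures per km-yr = incident rate * P(gouge depth exceeds a
    critical fraction of the wall thickness), with gouge depth sampled
    from a Weibull(scale, shape) distribution."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if rng.weibullvariate(scale_mm, shape) > critical_fraction * wall_mm
    )
    return incident_rate_per_kmyr * fails / n
```

For a Weibull distribution with scale a and shape b, P(depth > d) = exp(-(d/a)**b), so the simulated conditional probability can be checked against the closed form; real limit states for gouges and dents are of course more elaborate than a single depth threshold.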

A critical review of quantitative risk analysis (QRA) models used in the pipeline industry was conducted as part of a project titled “Critical Review of Candidate Pipeline Risk Models”, which was carried out for the U.S. Department of Transportation Pipeline and Hazardous Materials Safety Administration (PHMSA). Guidelines for the development and application of pipeline QRA models were developed as a part of this project, following an extensive literature review and an industry survey.

The guidelines provide a framework for performing QRA for natural gas and hazardous liquids transmission pipelines, and address risk estimation, which involves estimating the failure frequency and failure consequences. They are intended to assist operators in developing new QRA models, and in identifying and addressing gaps in their existing models. They are also intended to help regulators evaluate the accuracy, completeness, and effectiveness of the QRA models developed by operators.

A limit states design approach for onshore pipelines has been developed as part of a multi-year joint industry project (JIP). As part of this project, reliability-based design rules were developed for geotechnical loads, including landslides, slope creep, seismic loads, frost heave and thaw settlement. In consideration of the modelling complexity of the soil movement mechanisms and pipe-soil interaction, and to allow for flexibility to incorporate future model developments, the design rule formulation is directly based on the distribution parameters of the strain demand and capacity of the pipeline.

This paper describes the approach used to develop the strain demand and capacity distributions that are required to apply the design rules, as well as the applicable range of distribution parameters. Slope creep was selected as a basis for demonstrating the proposed process, as this loading mechanism occurs more frequently and the data to characterize the necessary uncertainties is available. General guidance related to the development of the strain demand distribution parameters for other geotechnical loads is also provided.
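The kind of reliability check such design rules are calibrated against can be sketched directly from the distribution parameters: the probability that strain demand exceeds strain capacity. The lognormal assumption and the parameter values below are illustrative, not the JIP's calibrated rules.

```python
import random

def prob_exceedance(mu_d, sigma_d, mu_c, sigma_c, n=200_000, seed=7):
    """Monte Carlo estimate of P(strain demand > strain capacity) with
    both quantities modelled as lognormal (mu, sigma in log space)."""
    rng = random.Random(seed)
    fails = sum(
        1 for _ in range(n)
        if rng.lognormvariate(mu_d, sigma_d) > rng.lognormvariate(mu_c, sigma_c)
    )
    return fails / n
```

For lognormal demand and capacity this probability also has the closed form Phi((mu_D - mu_C) / sqrt(sigma_D**2 + sigma_C**2)), which is useful for checking the simulation and explains why the design rule can be formulated directly in terms of the distribution parameters.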

Northern Offshore and Production Pipelines: Northern and Production Pipelines

Low temperature and high pressure conditions in deep water wells and subsea pipelines favour the formation of gas clathrate hydrates, which is highly undesirable in oil and gas operations. The management of hydrate formation and plugging risk is essential for flow assurance in oil and gas production. This study aims to show how hydrate management in deepwater gas well testing operations in the South China Sea can be optimized. As a result of the low temperature and high pressure in the vertical 3860 m tubing, hydrate would form in the tubing during well testing operations. To prevent hydrate formation or plugging, three hydrate management strategies are investigated: thermodynamic inhibitor injection, hydrate slurry flow technology, and thermodynamic inhibitor combined with a kinetic hydrate inhibitor. The first method, injecting a considerable amount of thermodynamic inhibitor (Mono Ethylene Glycol, MEG), is also the most commonly used method to prevent hydrate formation. Thermodynamic hydrate inhibitor tracking is utilized to obtain the distribution of MEG along the pipeline, and the optimal dosage of MEG is calculated through further analysis. In the second method, hydrate slurry flow technology is applied to the gas well: a low-dosage anti-agglomerant hydrate inhibitor is added into the flow system to prevent the aggregation of hydrate particles after hydrate formation, and a Pressure Drop Ratio (PDR) is defined to denote the hydrate blockage risk margin. The third method is a recently proposed hydrate risk management strategy that prevents hydrate formation by the addition of Poly-N-VinylCaprolactam (PVCap) as a kinetic hydrate inhibitor (KHI). The delayed effect of PVCap on the hydrate formation induction time ensures that hydrates do not form in the pipe; this method is effective in reducing the injection amount of inhibitor.
The practical issues of the three hydrate management strategies that require attention in industrial application are analyzed. This work promotes the understanding of hydrate management strategies and provides guidance for hydrate management optimization in the oil and gas industry.
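The abstract defines the Pressure Drop Ratio (PDR) as the hydrate blockage risk margin but does not give its formula; one plausible formalization, used here purely for illustration, compares the pressure drop with hydrate slurry present against a hydrate-free baseline and flags a hypothetical threshold.

```python
def pressure_drop_ratio(dp_with_hydrate_pa, dp_baseline_pa):
    """Hypothetical PDR: pressure drop with hydrate particles present
    divided by the hydrate-free baseline pressure drop."""
    return dp_with_hydrate_pa / dp_baseline_pa

def blockage_risk_high(pdr, threshold=2.0):
    """Flag a flow path when the (assumed) PDR threshold is exceeded."""
    return pdr > threshold
```

A PDR near 1 would indicate that hydrate particles are being transported with little extra friction, while a rising PDR would signal agglomeration and narrowing of the flow path.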

In previous studies, the atmospheric temperature was generally assumed to be constant over a period (commonly a month) for numerical simulation of buried hot oil pipelines. The rationality of this assumption is controversial due to the absence of quantitative results, and it therefore needs further verification to make the atmospheric temperature approximation more convincing. In this study, based on the changing trend of actual atmospheric temperature, three mathematical models are established and their expressions are presented according to different approximations. The relationships among these three expressions are then obtained by mathematical derivation. On the basis of the three atmospheric temperature models, weakly unsteady single-oil transportation and strongly unsteady batch transportation are numerically simulated, respectively. The numerical results for the oil temperature at the pipeline outlet and for the soil temperature field are compared across the three models. To make the comparisons more convincing, the influences of the physical properties of the crude oil, operating parameters, pipeline parameters and pipeline environments on the deviations of the numerical results are compared and analyzed. Finally, based on all comparisons of these deviations, conclusions are drawn that can provide a beneficial reference for the choice of atmospheric temperature model in future numerical simulation studies of buried hot oil pipelines.
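The three model expressions are not reproduced in the abstract; the forms below are common atmospheric temperature approximations assumed for illustration, a monthly-constant model, an annual sinusoid, and an annual sinusoid with a superimposed daily term.

```python
import math

def monthly_constant(t_days, monthly_means):
    """Piecewise-constant model: hold each month's mean temperature."""
    month = int(t_days // 30.4) % 12  # approximate month length
    return monthly_means[month]

def annual_sinusoid(t_days, mean, amplitude, phase_days=0.0):
    """Smooth annual cycle: T = mean + A * sin(2*pi*(t - phase)/365)."""
    return mean + amplitude * math.sin(2.0 * math.pi * (t_days - phase_days) / 365.0)

def annual_plus_daily(t_days, mean, amp_year, amp_day, phase_days=0.0):
    """Annual cycle plus a one-cycle-per-day term for diurnal variation."""
    return (annual_sinusoid(t_days, mean, amp_year, phase_days)
            + amp_day * math.sin(2.0 * math.pi * t_days))
```

Comparing simulations driven by these progressively finer boundary conditions is exactly the kind of quantitative check the study argues has been missing.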

The temperature drop of waxy crude oil after a shutdown is the basic premise for restart mechanical calculations. However, the techniques proposed in previous research have focused excessively on computational accuracy while ignoring the practicability of the calculation results. In this paper a new mathematical model is established for a buried hot crude oil pipeline during shutdown, with the complex physical process of oil cooling reasonably simplified: the heat transfer mode of the crude oil is divided into pure convection heat transfer and pure heat conduction with a stagnation point temperature, neglecting the difference in radial temperature. The quasi-periodic property of the soil temperature field is used as the boundary condition for the thermally influenced region. A numerical solution with a structured grid and an analytical solution in polar coordinates are applied for the soil region and for the other regions (pipe wall, wax layer and insulation layer), respectively. The finite volume method is adopted to discretize the heat transfer governing equation, while the boundary conditions are treated by the additional source term method. The simulation results of the new model are verified against a temperature field experiment, with particular analysis of the temperature deviation between the simulation and the equivalent mean value of the actual oil temperature. Finally, the effect of pipeline burial depth on the temperature profiles during normal operation and on the temperature drop process of the crude oil is investigated based on the simplified model.
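The paper's finite-volume model is far more detailed than can be reproduced from the abstract; as a hedged illustration of the underlying physics only, a lumped-capacitance approximation gives the characteristic exponential temperature drop of the stagnant oil toward the surrounding soil temperature. All parameter names and values are assumptions.

```python
import math

def oil_temperature(t_s, t0_c, t_soil_c, ua_w_per_k, mass_kg, cp_j_per_kgk):
    """Lumped cooling after shutdown:
    T(t) = T_soil + (T0 - T_soil) * exp(-t / tau), tau = m*cp / (U*A)."""
    tau = mass_kg * cp_j_per_kgk / ua_w_per_k
    return t_soil_c + (t0_c - t_soil_c) * math.exp(-t_s / tau)
```

The time constant tau = m*cp/(U*A) shows directly why a greater burial depth (a lower effective overall heat transfer U*A) slows the temperature drop, one of the effects the paper investigates with its full model.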

Northern Offshore and Production Pipelines: Offshore Pipelines and Risers

Effective pipeline design and regular maintenance can assist in prolonging the lifespan of subsea pipelines; however, the presence of marine vessels can significantly increase the risk of pipeline damage from anchor hazards. As noted in the Health and Safety Executive Guideline for Pipeline Operators on Pipeline Anchor Hazards (2009): "Anchor hazards can pose a significant threat to pipeline integrity. The consequences of damage to a pipeline could include loss of life, injury, fire, explosion, loss of buoyancy around a vessel and major pollution".

This paper will describe state of the art pipeline isolation tooling that enables safe modification of pressurised subsea pipelines. Double Block and Bleed (DBB) isolation tools have been utilised to greatly reduce downtime and increase safety during unplanned maintenance, providing cost-effective solutions to the end user. High integrity isolation methods, in compliance with international subsea system intervention and isolation guidelines (IMCA D 044 / IMCA D 006), that enable piggable and unpiggable pipeline systems to be isolated before any breaking of containment, will also be explained.

This paper will discuss subsea pipeline damage scenarios and the repair options available to ensure a safe isolation of the pipeline and its contents in the event of an incident. DNV GL type-approved isolation technology enables the installation of a fail-safe DBB isolation in the event of a midline defect.

The paper will conclude with case studies highlighting challenging subsea pipeline repair scenarios successfully executed, without depressurising the entire pipeline system, and in some cases without shutting down or interrupting production.

Natural gas exploitation has been increasing progressively, and the pipeline community faces more challenging demands to ensure safe and reliable operations. In that direction, gas fields in very harsh environments demand material and welding procedure selections that comply with a combination of important requirements, such as toughness at low temperature, sour environment, very low hardness, manual ultrasonic inspection (for UOE longitudinal weld soundness assurance) and others. Facing these challenges, Tenaris Confab has been working successfully to improve its know-how regarding plate-to-pipe mechanical property behavior, through steel selection using TMCP plates, welding consumable definition, and process control to assure material performance. In this scenario, the main challenge is to comply with a combination of toughness and hardness requirements while assuring material soundness through manual ultrasonic testing after 48 h. This combination leads to a careful selection of the welding consumable to add the right content of alloying elements to the weld pool, aiming at a specific weld metal chemical composition after dilution. The alloying element selection must consider the desired final weld metal microstructure, i.e., increased acicular ferrite, in order to achieve the required toughness, hardness and manual ultrasonic performance with respect to delayed hydrogen cracking (DHC); it is important to avoid grain boundary ferrite (GBF) nucleation. High wall thickness and high heat input increase the residual stress after pipe welding, and high residual stress combined with a poor microstructure and hydrogen is a perfect scenario for DHC. To avoid hydrogen cracks, a robust pipe forming and welding concept is needed to give enough energy to diffuse hydrogen out of the weld metal.
Strict quality controls were applied to limit hydrogen content, including welding consumable specifications, evaluation of the correlation between flux moisture and diffusible hydrogen, flux temperature control and others. As a result of these actions, good mechanical properties were achieved, and the hydrogen cracking performance demonstrated during automatic and manual ultrasonic testing confirms a robust pipe forming and welding procedure for demanding projects.

Technology plays a critical role in the oil and gas sector, and the pipeline industry is no exception. Maintaining the integrity of high pressure oil and gas pipelines requires the use of advanced technologies. A challenge that confronts every pipeline operator is the risk posed by the deployment of unproven technologies, especially those associated with the inspection, assessment, monitoring, and rehabilitation of their systems.

The concept of Technology Readiness Levels (TRLs), commonly used in the aerospace and defense industries, provides the pipeline industry with a proven means for evaluating and assessing technologies used to enhance integrity management efforts. This paper presents details on technology readiness levels ranging from Proof of Concept to System Operation. The adoption and implementation of the TRL approach will minimize operator risk and foster the deployment of advanced technologies, thus enhancing the safe operation of high pressure pipelines. Three TRL-oriented case studies will be included evaluating the monitoring of pipelines using fiber optics, inspection using three-dimensional imaging, and reinforcement using optimized composite technologies.
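As a sketch of how a TRL gate might be encoded in an operator's technology-qualification workflow: the nine-level NASA-style ladder is assumed here, and the paper's exact level definitions, spanning Proof of Concept to System Operation, may differ. The minimum-level gate is hypothetical.

```python
# Assumed nine-level TRL ladder (NASA-style); labels are illustrative.
TRL = {
    1: "Basic principles observed",
    2: "Technology concept formulated",
    3: "Proof of concept",
    4: "Component validation in lab",
    5: "Component validation in relevant environment",
    6: "Prototype demonstration in relevant environment",
    7: "Prototype demonstration in operational environment",
    8: "System complete and qualified",
    9: "System proven in operation",
}

def deployment_ready(trl_level, minimum=7):
    """Hypothetical gate: deploy only technologies at or above a
    minimum TRL, shifting the qualification burden off the operator."""
    return trl_level >= minimum
```

Fiber optic monitoring, three-dimensional imaging inspection, and optimized composite reinforcement, the three case studies named above, would each be assessed against such a ladder before field deployment.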
