Posted on January 7, 2022 by Lisa Ward

Hello and welcome to Rose & Associates’ blog on Assurance!

By Marc Bond

Collectively, Rose & Associates (R&A) has 100 years of experience leading and serving on E&P assurance teams. Combined with our many years of consulting for and supporting industry assurance teams, this puts R&A in a unique position to share our learnings and observations on the subject. Our aim is to help improve the effectiveness of assurance and subsequent decision-making, leading to more predictive portfolios.

Throughout this article series, we aim to discuss the concept of exploration assurance, exploring many facets such as the case for assurance, the role of assurance in decision-making, recommended assurance best practices, assurance team behaviors and biases, personal experiences, and pitfalls and challenges. Contributors will include R&A professionals as well as industry leaders experienced with assurance.

We welcome feedback and personal experiences dealing with assurance and encourage you to post your comments.

ASSURANCE: WHAT IS IT AND WHY IS IT SO IMPORTANT?

Assurance describes the process of providing objective, independent, and consistent reality and perspective checks on exploration project characterization. When performed well, it can provide justified confidence to decision-makers in investment decisions and enhance predictive accuracy.

The assurance process begins with a review of a team’s assessment; the assurance team then offers recommendations for further technical work that can clarify the uncertainty; and lastly, all parties reach a collaborative prediction. Given its independence from the work and its wider perspective, the assurance team is not as easily swayed by the biases associated with pride of prospect ownership or by local management influence.

I recently finished reading Kahneman, Sibony, and Sunstein’s Noise: A Flaw in Human Judgment (2021). Since the 1970s, Professor Kahneman has established himself as the thought leader on decision theory (earning a Nobel Prize in Economics along the way), so any book he writes is a must-read for those in our line of work. Whilst the authors discussed techniques akin to assurance, there was no mention of assurance itself. This one practice would have gone a long way toward alleviating many of the inconsistent decisions the authors addressed. Frustratingly, there are few peer-reviewed publications covering assurance.

In all instances where judgments and decisions are needed, consistent and predictable outcomes depend upon the accuracy of predictions. Accurate predictions lead to a consistently reliable portfolio, worthy of repeat funding. Left to the individual assessor or team, predicted outcomes are often inconsistent and inaccurate. This has many causes, with bias being a key component (see the linked blog series on bias). The widespread overestimation of resources is a common problem for the oil and gas industry, resulting in loss of value.

Validation of technical assessments by assurance has been shown to contribute to consistent and predictive portfolio management, and thus improved business performance. When done well, assurance will provide additional and diverse perspectives on prospect assessment, share better practices, identify weaknesses in evaluations and assumptions, provide alternatives, and foster consistency. The assurance team should be seen as the ally of the technical team, with a common goal of delivering maximum success from the company’s exploration portfolio.

The assurance process should utilize an integrated approach working with the technical teams to ensure best practices and consistent evaluations. In addition to assistance from staff on the characterization of opportunities, the assurance team will also interact with management to aid in opportunity comparison and support their decision-making.

The following figure outlines the assurance process.

The process starts with the framing of the exploration project to determine the appropriate work program for evaluation. As input into assurance, there should be methods to detect and rectify flaws in analysis and ensure all products (e.g., reservoir models, seismic interpretation and mapping, etc.) adhere to the appropriate standards.

One of the cornerstones of the assurance process is early engagement. The assurance team will consider the key subsurface risks and uncertainties. They will then provide guidance and support to the technical team for best practice pre-drill resource and chance characterization. During the assurance review, the assurance team aims to validate the evaluation (e.g., resource distribution and chance of success), providing confidence to the decision-makers.

After consistency is introduced, calibration comes from performance tracking, which is designed to analyze outcomes relative to predictions and capture learnings that feed back into future assessments.

The assurance process should be fit for purpose. Organizations should be clear on what the business requires from assurance, and the process should be designed to deliver these objectives. For all of the stages, there may be multiple cycles depending on the scope and complexity of the project. For example, large complex opportunities typically require several reviews, whereas a small, simple, or inexpensive opportunity may only need a single review.

Criticisms of Assurance Exist, Such as:
  • Assurance is a distraction and can create delays in project completion.
  • Assurance requires dedicated, experienced personnel who could be used elsewhere in the business.
  • Assurance team members are not as familiar with the basin/play/prospect as the technical team is.
  • Assurance teams may focus only on the ‘numbers’, ignoring the geology and potential of an opportunity.
  • Assurance team members may be biased.
  • Feedback from the assurance team may be dogmatic and stifle creativity.
  • The influence of assurance experts on assessment detracts from the role of the supervisor in managing the technical evaluation.
  • The divergence between the assurance and technical teams’ views can create additional stress and delay in the system.

Whilst these are indeed important considerations, they are not problems with assurance itself. Any flaws in the design or implementation of the assurance process can be addressed and alleviated.

Marc was the Subsurface Assurance Manager with BG Group for 6 years, responsible for the company-wide subsurface assurance for all projects, including Exploration, Appraisal and Development ventures and conventional and unconventional resources. He helped create the Risk Coordinators Network in 2008, which remains active. Following the assurance role, he was the Chief Geophysicist at BG.

REFERENCE CITED
Kahneman, Daniel, Sibony, Olivier, and Sunstein, Cass, 2021, Noise: A Flaw in Human Judgment, William Collins, 454p.

This Assurance Blog series is coordinated and edited by Marc Bond, Gary Citron, and Doug Weaver.

Posted on November 10, 2021 by Lisa Ward

By Jim Gouveia, Marc Bond, Jeff Brown, Mark Golborne, Bob Otis, Henry S. Pettingill, and Doug Weaver


Consider the figure to the right under the lens of predicted net feet of pay. How have you been trained to handle plots such as this one? Here at R&A, we recommend a seven-step approach:

01. Plot the P10 and P90 predictions.

02. Draw a line between the predictions and extrapolate it to the resulting P1 and P99 values.

03. ‘Reality check’ these end members.

On the low end, would the predicted thickness contribute to meaningful sustainable flow? Assuming the productive area is unknown and highly correlated with our concept of sustainable flow, we get pragmatic and think about the ability to effectively complete the zone without extraordinary measures. For example, would we be able to effectively complete a 1-foot interval in a homogenous sand which is underlain by 100 ft of water? There is no silver bullet for P99; it will be driven by user experience and play-specific knowledge, with consideration for variables such as permeability, Kv/Kh ratio, depth, infrastructure, viscosity, etc.

One of our biggest challenges when we consider net pay is defining how we differentiate geological successes from commercial successes. (We are talking about the average Net Pay across the productive trap area and not at the planned well location).

On the high end, could one make an optimistic yet realistic prospect map that could house such a thick Average Net Pay? In our collective experience at R&A, the projected P1 value can be unrealistically high—it’s not about the maximum thickness somewhere within the trap, it’s the maximum average thickness across the productive trap area.

04. Adjust the ‘reality check’ high and low members and pragmatically assume these are your new P1 and P99 end members.

05. Plot the ‘reality checked’ P1 and P99 values and redraw the line. The original P10 and P90 values are frequently not preserved; think of them as simply serving to initiate the process. Determine your resulting ‘reality checked’ P10, P50, and P90 values (see the worked sketch after step 07).

06. Inspect your measures of central tendency: the P50 and Mean values. For the P50, ask yourself, “When we think of this prospect, does it feel reasonable that half the time we expect to get a result larger than this value, and half the time less?” In a lognormal distribution, which best characterizes net pay, the P50 is halfway through the cumulative frequency, not halfway through the range of the distribution’s values. The Median, as derived from sampled data, is not synonymous with the P50 of the fitted distribution; the reader is advised to always use the P50, which is based on the fitted data. When we are dealing with limited data sets, there can be a significant difference between the Median and the P50.

07. Compare the derived prospect Mean to the distribution of geologically analogous discovered prospects. Predicting a Mean outcome that lies above the upper ten percent of your analogous discoveries demands a technically unbiased explanation of why the prospect will have an Average Net Pay exceeding 90% of the values previously encountered in the play.
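For readers who like to see the arithmetic, the following is a minimal Python sketch of steps 02 through 07, using numpy/scipy and hypothetical input and analog values (they are not from any R&A dataset or software). It fits a lognormal line through the initial P90 and P10, extrapolates to P99 and P1, applies reality-checked bounds, redraws the line, and reports the resulting P90, P50, P10, and Mean for comparison against analogs.

import numpy as np
from scipy.stats import norm

# Hypothetical pre-drill Average Net Pay predictions (feet), 'greater-than' convention (step 01).
p90, p10 = 20.0, 120.0

# Step 02: fit a lognormal line through P90/P10 and extrapolate to P99/P1.
z90, z10, z99, z1 = norm.ppf([0.10, 0.90, 0.01, 0.99])   # P90 is the 10th percentile of the CDF
mu = 0.5 * (np.log(p90) + np.log(p10))                    # ln of the median
sigma = (np.log(p10) - np.log(p90)) / (z10 - z90)
p99_extrap, p1_extrap = np.exp(mu + sigma * z99), np.exp(mu + sigma * z1)

# Steps 03-04: 'reality check' the end members (hypothetical judgments for this sketch).
p99_checked = max(p99_extrap, 12.0)   # smallest average thickness supporting meaningful, sustainable flow
p1_checked = min(p1_extrap, 200.0)    # largest defensible average net pay over the trap area

# Step 05: redraw the line through the reality-checked P99 and P1.
mu2 = 0.5 * (np.log(p99_checked) + np.log(p1_checked))
sigma2 = (np.log(p1_checked) - np.log(p99_checked)) / (z1 - z99)
p90_new, p50_new, p10_new = np.exp(mu2 + sigma2 * np.array([z90, 0.0, z10]))

# Step 06: measures of central tendency of the fitted lognormal.
mean_new = np.exp(mu2 + 0.5 * sigma2**2)

# Step 07: compare the derived Mean to analogous discoveries (hypothetical analog P10).
analog_p10 = 140.0
print(f"Reality-checked P90/P50/P10: {p90_new:.0f}/{p50_new:.0f}/{p10_new:.0f} ft; Mean {mean_new:.0f} ft")
if mean_new > analog_p10:
    print("Mean exceeds the analog P10: a technically unbiased explanation is required.")
else:
    print("Mean lies within analog experience.")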

Ultimately, exploration organizations need to deliver what they predict, and numerous industry look-back studies have demonstrated that the approach outlined in this example is highly effective in achieving that goal.

The values within our ‘reality checked’ P10 and P90 outcomes represent an “80% confidence interval.” In the E&P industry, we advocate setting the goal for our predictions based upon that range, particularly for the performance of our portfolios (more on that in a later R&A blog).

In a future blog, we will address measures of uncertainty and discuss reality checks based on the ratio of the P10 to the P90.

Posted on September 23, 2021 by Lisa Ward

By Jim Gouveia, Marc Bond, Jeff Brown, Mark Golborne, Bob Otis, Henry S. Pettingill, and Doug Weaver

STATEMENT OF THE PROBLEM

Few industries are fraught with more uncertainty than prospect exploration in E&P. Our formal education instilled in us the notion that unless we provide a precise answer, we have ‘failed’ to meet expectations. This is exacerbated by investors’ and leadership’s need for certainty in their investment decisions. When we face an uncertain prediction, we need methods that decouple our minds from trying to jump to ‘the answer’ and instead capture a pragmatic range of possible outcomes.

The present value of our drilling prospects is primarily driven by their probability of realizing commercial success, our corporate discount rate, commodity prices, capital expenses, operating expenses, and our share of the commodity’s cash flow after taxes and royalties. Each of these key parameters is riddled with uncertainty throughout a project’s lifetime.

Modeled ranges better inform decision-makers by fairly representing the spectrum of possible outcomes. Many experts argue that decision-makers simply require confidence in the mean outcome. Portfolio theory advises that, given a great number of repeated trials and unbiased estimation, our firms will deliver the aggregated mean outcome. Whilst portfolio theory is sound, it presupposes two realities that do not exist in the world of exploration. First, that our predictions are free of bias; without a probabilistic basis grounded by ‘reality checks,’ our forecasts have repeatedly proven to be optimistically biased. Second, that there are enough repeated trials to make the aggregate prediction valid over time; no one (especially currently) is drilling enough exploration wells to support high statistical confidence in a program’s ability to deliver a mean outcome.
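To make the second point concrete, here is a small Monte Carlo sketch in Python with hypothetical numbers (not from any actual portfolio): even a perfectly unbiased 15-well exploration program shows a wide spread of realized outcomes around its predicted aggregate mean, simply because 15 trials are too few for the statistics to converge.

import numpy as np

rng = np.random.default_rng(7)

n_prospects = 15                  # wells in the program (hypothetical)
pg = 0.25                         # unbiased chance of geologic success per prospect
mu, sigma = np.log(50.0), 0.8     # lognormal success-case volume, median 50 mmboe (hypothetical)

expected_per_prospect = pg * np.exp(mu + 0.5 * sigma**2)   # risked mean per prospect
predicted_program_mean = n_prospects * expected_per_prospect

# Simulate many possible realizations of the same 15-well program.
trials = 10_000
successes = rng.random((trials, n_prospects)) < pg
volumes = rng.lognormal(mu, sigma, size=(trials, n_prospects))
program_totals = (successes * volumes).sum(axis=1)

low, high = np.percentile(program_totals, [10, 90])   # 80% confidence interval of realized totals
print(f"Predicted program mean: {predicted_program_mean:.0f} mmboe")
print(f"80% of simulated programs deliver between {low:.0f} and {high:.0f} mmboe")
# With only 15 wells, the realized total routinely lands far from the unbiased prediction.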

As decision-makers, we need standardized evaluation techniques upon which we can confidently make our best business investment decisions. That requires that subjective words and phrases such as ‘good chance,’ ‘most likely,’ ‘excellent,’ ‘low risk’ and ‘high confidence’ be eradicated from our presentation of E&P opportunities and replaced with probabilities that have a common definition across all disciplines and projects. The traditional industry consensus is the use of P10 and P90, which, in the predominantly used ‘greater than’ convention, represent our optimistic but reasonable high-side and pessimistic but reasonable low-side values, respectively.

In a prior blog, we introduced our industry-standard method of providing ‘P90’ and ‘P10’ values to bracket the ranges of all possible prediction outcomes. Studies have consistently shown that we are not particularly good at making such predictions and tend to underestimate the uncertainty in what we are assessing. For most E&P parameters, this presents itself as P10 to P90 ranges that are too narrow and optimistically high. Until the Petroleum Resources Management System (PRMS) update in 2010, industry guidance for validation of a probabilistic distribution was for the user to compare their probabilistically derived P50 to their deterministic (based upon their best guess) P50. It should not come as a surprise to learn that early probabilistic methods were flawed, as they were based on the belief that, in the face of all the inherent subsurface uncertainty, we as subsurface professionals (even those of us who were newly graduated) had an innate ability to directly estimate a P50. Unfortunately, this antiquated belief persists to this day.

So how do we better derive our probabilistic ranges? Let us first bear in mind that we are trying to pragmatically capture the full range of possible outcomes in our predictions. Many of our subsurface parameters are best represented by normal or lognormal distributions, yet an infinite reserve or rate is neither possible nor pragmatic: on the low end, lognormal distributions approach zero and normal distributions go to negative infinity, while at the high end, both go to positive infinity. As we are building a distribution to characterize a geological success, we can eliminate the unrealistic low and high ends of both distributions either by truncation of outcomes above and below certain thresholds or, our preference at R&A, by spike bounding. In spike-bounded distributions, randomly sampled values beyond the selected high-end or low-end limit are set equal to the selected high- and low-end boundary values, respectively. As such, the values are ‘bounded’ at the low and high ends of the distribution.

In the exploration realm, we are dealing with a relative dearth of data. Industry experience shows that in the exploration space we can use the P1 and P99 values of our distributions as the pragmatic end members. In practice, we estimate our P90 and P10 values and then extrapolate them to P99 and P1 values. We try to ensure that the P99 value represents the smallest meaningful result (the minimum geological success value) and the P1 value represents the largest geologically defensible result. Our P1 and P99 values are intended to represent a blending of geologic pragmatism and conceptualization. It is a trivial academic debate as to whether these high-end members are or should be our P2, P3 or P0.05 outcomes; the same logic applies to the low-end P99. The use of P1 and P99 should be thought of as a pragmatic spike bounding of the end members (‘reality checks’) of our input distributions.
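As an illustration of these two ideas, here is a short Python sketch with a hypothetical lognormal input (not tied to any particular parameter): it extrapolates estimated P90/P10 values to P99/P1 end members and then spike-bounds randomly sampled values, setting anything beyond the limits equal to the boundary values rather than discarding it.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical input parameter, estimated in the 'greater-than' convention.
p90, p10 = 10.0, 60.0

# Extrapolate the fitted lognormal to its P99 and P1 values.
z90, z10, z99, z1 = norm.ppf([0.10, 0.90, 0.01, 0.99])
mu = 0.5 * (np.log(p90) + np.log(p10))
sigma = (np.log(p10) - np.log(p90)) / (z10 - z90)
p99, p1 = np.exp(mu + sigma * z99), np.exp(mu + sigma * z1)

# Spike bounding: sample the unbounded lognormal, then set any value beyond the
# selected low/high limits equal to that limit, creating probability 'spikes'
# at P99 and P1 instead of infinite tails.
samples = rng.lognormal(mu, sigma, size=100_000)
bounded = np.clip(samples, p99, p1)

share_bounded = np.mean((samples <= p99) | (samples >= p1))
print(f"P99 / P1 end members: {p99:.1f} / {p1:.1f}")
print(f"Share of samples set to the boundary values: {share_bounded:.1%}")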

In summary, our input ranges must capture the entire range of possible outcomes, and industry experience has taught us that to effectively capture that range, we should consistently employ specifically defined low-side and high-side inputs (e.g., P99, P90, P1, and P10).

In our next article in this series, we will work through an example which addresses Average Net Pay.

Posted on June 11, 2021 by Lisa Ward

by Gary Citron, Senior Associate

“It is amazing what you can accomplish if you do not care who gets the credit.” –Harry Truman

In any successful business, some individuals make significant contributions but remain out of the spotlight. As R&A’s risk analysis software company transitions to a new generation of products, we cannot move forward without recognizing the immense impact R&A Senior Associate Roger Holeywell had as our chief programmer during our MS Excel-based era (1997 to 2020).

HOW IT BEGAN

Looking for topical yet practical training around 1994, Marathon’s Roger Holeywell attended the AAPG prospect risk analysis class that Pete Rose, Ed Capen and Bob Clapp taught. After completing this classic course, Roger had an epiphany. Converting the concepts and formulae to MS Excel over the next few weeks, Roger created what became Marathon’s first standardized prospect characterization software. Roger convinced Marathon to widely distribute the software he built. By 1996, Marathon had a standard, consistent package for their prospects.

CREATING A SOFTWARE BUSINESS

By 1995, risk analysis concepts had become rooted in many larger companies, which similarly built their own software packages. The Amerada-Hess Exploration VP phoned Pete to ask if anyone had built software to apply the concepts taught. Pete approached Mobil, Conoco, and Marathon with that request. In short order, Mobil and Conoco said no, but Roger, after checking with Marathon management, replied “Sure, Marathon will license theirs.” Within a couple of weeks, Marathon received $10,000 from Hess, and Hess was a happy client with a new software tool. A couple of months later, Roger called Pete to inform him that Marathon no longer wanted to directly sell software but was willing to partner with Pete because of his extensive contacts. A 1997 contract between Marathon and Pete’s LLC, Telegraph Exploration, provided for a 50-50 revenue split, with Marathon retaining ownership of the code and Telegraph handling all business matters. To help manage the growth, Pete approached Roger to run the new software company, Lognormal Solutions (LSi), owned by the newly formed Rose & Associates (R&A) in 2000. Roger declined that offer, as he wanted to focus on progressing his career at Marathon.

However, in what became a win-win situation, Roger expressed interest in continuing to raise the profile of his progeny. By 2001 Roger received written approval from Marathon to work for LSi, further progressing the software, but on his own time (nights, weekends, and selected vacation days). By January 2005, Marathon executed an addendum to the 1997 agreement relinquishing all ownership rights of the software to LSi. Roger’s retirement from Marathon in 2015 allowed him to become a full-time Senior Associate and programmer at R&A.

In addition to the prospect software, Roger also coded the original versions of multiple-zone aggregation software. These products evolved into Multi-Zone Master (MZM) and Multi-Method Risk Analysis (MMRA). MMRA and Multi-Zone Master would be bundled with a versatile utility program Roger created (Toolbox) to gather, condition, and analyze data to fashion inputs into the MMRA software. Toolbox features myriad curve-fitting capabilities and calculations of hydrocarbon fluid expansion and shrinkage attributes.

R&A’s software was particularly attractive to companies of mid-size market capitalization. But to serve smaller companies that wanted a consistent characterization platform to demonstrate their savvy, Roger built the Essentials Suite in 2004, which offered prospect software with two-zone aggregation capability, limited data plotting, and a portfolio aggregator. Its much lower price point with basic capabilities admirably serves these smaller companies.

How did all this work get done essentially through one person who was also a full-time Marathon employee? The answer is PWP (Pajama Weekends Programming). While at home over the weekends, Roger’s attire was strictly unadulterated pajamas-only. The design and programming sessions were hardly a picnic, as Roger recalled when sharing some of the major challenges he tackled: for example, how to keep the software working optimally after Microsoft released versions of Excel that conflicted with the code, creating pervasive security or performance issues? Roger experienced one of his most gratifying programming moments in 2005, harnessing Microsoft’s VBA to provide an internal Monte Carlo simulator.

PLAY IT AGAIN, SAAM

Incredibly, the energy described above that Roger infused into R&A software constituted about 50% of his moonlighting time. The remaining 50%, Roger spent coding SAAM (Seismic Amplitude Analysis Module), the software product generated by R&A’s DHI Consortium. In mid-2000, when many of our clients wanted to see a consistent set of industry-derived best practices around amplitude characterization for chance of success determination (commonly part of ‘risking’), Pete Rose turned to Mike Forrest to geologically weave seismic amplitude anomalies into the fabric of prospect chance characterization. We planned from the start to have Consortium best practices coded into software, so we reached out to Roger to serve as a programmer.

For SAAM, Roger programmed an innovative interrogation process that facilitates a systematic and uniform grading of the amplitude anomaly, beginning with the geologic setting, a preliminary chance assessment solely from geology (Pg), and the salient amplitude attributes the seismic survey is designed to extract. SAAM asks the exploration team to answer questions about AVO classification, seismic data quality, rock property analysis, analogs, and potential pitfalls. Thus, SAAM institutionalizes a thorough process that teams might otherwise avoid or forget. Through the Consortium-derived parameter weighting system, SAAM registers the impact of data quality and seismic observations, as well as any rock properties modeling, to determine a modifier to the initial Pg. This modification, or ‘Pg uplift’, is now calibrated by over 350 drilled wells. Success rates recorded in SAAM’s database can be critically compared to the forecast success rates to further calibrate the weighting system employed. The Consortium remains a vital industry gathering. Roger attributes its longevity to the breakthrough idea that meetings should be member-driven, designed around presentations about a prospect by a member company. During Consortium meetings, Roger populates a SAAM file for each prospect in real time during the presentation; the members then discuss whether they would drill the prospect, guided by the SAAM inputs and outputs; finally, the company reveals the drilling results. All inputs, outputs, and results are added to the database. SAAM’s architecture and workflow were based on a collaborative framework from the very beginning.

THANK YOU, ROGER

It’s hard to fathom the magnitude of such varied software-related contributions built solely in his ‘spare time,’ so for all he has done, here’s a toast to Roger Holeywell, an unsung hero working behind the scenes creating value from risk analysis.

Posted on May 4, 2021 by Lisa Ward

Abridged from a presentation by David Cook and Mark Schneider.
View the original poster
Read the full paper

Rose & Associates introduced the Pwell concept at the AAPG 2017 convention in Houston and released the methodology in our Multi-Method Resource Assessment (MMRA) program in early 2018. This method is now available in RoseRA, our current prospect risk analysis software. The Pwell function in RoseRA provides insights into the balance between the chance of success and the potential resources when a well is drilled at a location downdip from the crest of the structure.

Pwell helps users address key issues related to:

  • Choosing the best downdip location, giving your discoveries a higher probability that the Estimated Ultimate Recovery (EUR) exceeds the Minimum Commercial Field Size (MCFS)
  • Understanding the impact on the chance of geologic and commercial success
  • Ensuring that, in the event of a dry hole, there will be “no regrets” about potential up-dip volumes tempting a decision-maker to drill a sidetrack or new well up-dip.

The Pwell function in RoseRA requires that an Area distribution be modeled using either the Area-vs-Depth or Area x Net Pay rock volume estimating methods. Users can input the closing contour area at the proposed well location and run the simulation to calculate new metrics for the well location, including:
  • The Pwell chances of geologic and commercial success
  • Given a discovery, the range of resources up-dip and downdip from the well location
  • Given a dry hole, the range of “attic” resources up-dip from the well location and the up-dip chance of commercial success

In the following example, the team has proposed a downdip well location at 2,000 acres. The total prospect has a Pg = 50% and a Geologic Mean of 56 mmbo based on an area distribution with P90 of 1,000 acres and P10 of 6,000 acres. The commercial MCFS is 25 mmbo which results in the probability of commercial success being 37.1% and a commercial mean of 70 mmbo. Input the downdip closing contour area of 2,000 acres into RoseRA and run the simulation to observe the results.

Figure 1: Input Area of Closing Contour at Proposed Well Location
Figure 2: Downdip Geologic and Commercial Chance of Success are reported.

The Pwell chance of success decreases the further downdip the well is drilled. In this example, the Pwell geologic chance of success at the 2,000-acre downdip location is 34.0%, much lower than the crestal Pg of 50%. The tradeoff is that the proportion of the commercial downdip resources is high (95.5%), making the Pwell commercial chance (32.5%) only slightly below the Pwell geologic chance. Given success, you are almost guaranteed a commercial accumulation by drilling at the 2,000-acre location.
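The essence of that calculation can be sketched in a few lines of Python using the example’s inputs. This is only a conceptual approximation: it treats the downdip geologic chance as the crestal Pg multiplied by the probability that the trap’s areal closure reaches the proposed well location, assuming a simple lognormal area distribution. RoseRA’s simulation models considerably more (bounded distributions, the rock-volume method, and resource correlations), so the 34.0% it reports differs somewhat from this back-of-the-envelope value.

import numpy as np
from scipy.stats import norm

# Inputs from the example (areas in acres, 'greater-than' convention).
pg_crest = 0.50
area_p90, area_p10 = 1_000.0, 6_000.0
well_area = 2_000.0            # closing-contour area at the proposed downdip location

# Fit a lognormal area distribution through P90 and P10.
z90, z10 = norm.ppf([0.10, 0.90])
mu = 0.5 * (np.log(area_p90) + np.log(area_p10))
sigma = (np.log(area_p10) - np.log(area_p90)) / (z10 - z90)

# Probability that, given geologic success, the trap extends at least to the well location.
p_reach_well = 1.0 - norm.cdf((np.log(well_area) - mu) / sigma)

pwell_geologic = pg_crest * p_reach_well
print(f"Approximate Pwell geologic chance: {pwell_geologic:.1%}")
# A simplified approximation only; the full RoseRA simulation reports 34.0% for this example.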

There is still the possibility of resources being present up-dip if the well location is dry. What is the commercial chance of success for the potential up-dip resources given a dry hole?

Figure 3: Updip Geologic and Commercial Chance of Success are reported.

Pg for the up-dip resources remains the same as the original crestal Pg of 50%. About 41% of the up-dip resource distribution exceeds the MCFS so the Pwell commercial chance is 20.6%. Would that Pwell commercial chance tempt management to drill an expensive sidetrack or a second well to determine whether commercial resources are present? Management would need to know the up-dip resource distribution to make an informed decision.
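The arithmetic behind that number is simply the crestal Pg multiplied by the proportion of the up-dip distribution that exceeds the MCFS; a small sketch using the values quoted above:

# Up-dip commercial chance given a dry hole at the downdip location,
# using the values quoted in the paragraph above.
pg_updip = 0.50          # crestal Pg still applies to the up-dip closure
p_exceeds_mcfs = 0.411   # share of the up-dip EUR distribution above the 25 mmbo MCFS

pc_updip = pg_updip * p_exceeds_mcfs
print(f"Up-dip Pwell commercial chance: {pc_updip:.1%}")   # compare with the 20.6% quoted above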

Use Pwell to investigate the up-dip and downdip geologic and commercial resources as shown in the Pwell log probit below.

Figure 4: The Pwell Log Probit chart shows the Updip (black) and Downdip (blue) resources and Updip Commercial (green) resources.

Updip geologic resources (black) range from a P99 of 5 mmbo to a P01 of 66 mmbo with a Mean of 25 mmbo. Downdip geologic resources (blue) range from a P99 of 18 mmbo to a P01 of 282 mmbo with a Mean of 79 mmbo. The red shading indicates the overlap between the up-dip EUR distribution from P65 to P01 and the downdip EUR distribution from P99 to P45. Resources in excess of about 71 mmbo are not achievable in the up-dip EUR distribution. What is the consequence of the distribution overlap region on decision-making?

With the commercial MCFS of 25 mmbo, the up-dip commercial resources (green) show the up-dip Pmcfs is 41.1% and the up-dip Pc is 20.6%. This commerciality possibility might tempt management to spend additional capital to test for an up-dip commercial accumulation. If this is the case, perhaps the well should be drilled further up-dip to minimize total drilling capital.

Another metric that has been used in the industry to gauge whether or not to spend additional capital is the “No Regrets” resource, which is calculated as the up-dip productive area above the drilled location multiplied by the up-dip mean net pay and the up-dip mean oil yield. Figure 5 shows the “No Regrets” resource is 38 mmbo. Notice that this method provides only a single deterministic value to aid in decision-making, rather than the full up-dip distribution and additional insight that RoseRA provides.
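For illustration, the deterministic calculation looks like the following sketch; the input values here are hypothetical placeholders, not the ones behind the 38 mmbo shown in Figure 5.

# 'No Regrets' resource: a single deterministic estimate of what could be left up-dip.
# Hypothetical illustrative inputs; not the values used to produce Figure 5.
updip_area_acres = 800.0                  # productive area up-dip of the drilled location
updip_mean_net_pay_ft = 60.0              # up-dip mean net pay
updip_mean_yield_bo_per_acre_ft = 550.0   # up-dip mean oil yield (barrels per acre-foot)

no_regrets_mmbo = (updip_area_acres * updip_mean_net_pay_ft
                   * updip_mean_yield_bo_per_acre_ft) / 1e6
print(f"'No Regrets' resource: {no_regrets_mmbo:.0f} mmbo")

A single number like this hides the spread that the full up-dip distribution in RoseRA reveals.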

RoseRA provides the ability to model multiple zones, multiple wells, and multiple downdip Pwell areas, shown as vertically stacked zones in a single file. It simulates and reports all entities within the same simulation. Figure 5 shows the up-dip and downdip chances and EURs as increments from the crest to 2,500 acres. Determine which downdip location balances risk, volume, and value for your company.

Figure 5: Sensitivity Analysis of Multiple Well Locations Can Be Done within a Single Simulation

The RoseRA Pwell function reports clear tabular and graphical outputs so that the rationale for drilling the exploration well downdip can be discussed and communicated with team members and management. Pwell helps improve the chance of making a commercial discovery while minimizing the need to spend additional exploration capital, whether on a downdip appraisal well to confirm commerciality or, in the event of a dry hole, on an up-dip sidetrack or additional well to determine whether there is a “left-behind attic” resource. A dry hole with the potential for up-dip commercial EUR creates a real conundrum; the Pwell analysis, with its up-dip and downdip distributions, can help clarify the better decision.

This methodology can also help users select locations for appraisal wells. The only difference from the exploration example above is that the crestal exploration discovery results in a prospect Pg = 100%, and the Pwell analysis is performed using the updated resource distribution following the discovery well.

Contact Phil Conway and David Cook at Rose & Associates today to get a demo or more information.

The implementation of Pwell in RoseRA is built on the methodology described in the following reference:

Schneider, F. M., and Cook, D. M., Jr. (2017, April 2-5), Drilling a Downdip Location: Effect on Updip and Downdip Resource Estimates and Commercial Chance [Poster session], AAPG 2017 Annual Convention, Houston, TX, United States.

http://www.searchanddiscovery.com/documents/2017/42102schneider/ndx_schneider.pdf

Posted on April 26, 2021 by Lisa Ward

by Creties Jenkins, Partner at Rose & Associates

Take 30 seconds to memorize these 10 items from a shopping list, and then write down how many you can recall: milk, yogurt, croissants, bananas, muffins, coffee, ham, jelly, cheese, and eggs. Most people will be able to remember about 7 items. Now make a note to construct this list again from memory in 24 hours. How many do you think you’ll recall? I’ll be impressed if you can list 5 or more!

The problem, of course, is that this list is being held in your short-term memory (STM) and doesn’t get transferred to long-term memory (LTM). As you can see, STM has a severe capacity limitation. However, this can be overcome, in part, by informational grouping. So if you noticed that the list above consists of breakfast items, you can create this organizational structure in your LTM to help you more easily recall the items next time.

This interconnectedness of information is a cornerstone of memory. Think of memory as a massive, multi-dimensional web in which data is retrieved by tracing through the network. Retrievability is influenced by the number of storage locations as well as the number and strength of the pathways. The more frequently a path is followed, the stronger the path becomes.

This can be illustrated by comparing the abilities of chess masters and ordinary players. If you randomly place 20-25 chess pieces on a board for 5-10 seconds, each of these groups will only be able to recall the positions of about six pieces. However, if the positions are taken from an actual game, the masters will be able to reproduce nearly all of the positions whereas the ordinary players will still only be able to recall the positions of six pieces.

The masters are using their LTM to connect individual positions into recognizable patterns that ordinary players do not see. This ability to recall patterns that relate facts to each other and broader concepts (such as strategy) is critical for success in chess. But what about the business world?

Decision makers and subject matter experts often see themselves as the equivalent of chess masters. They believe the data and experiences contained in their LTM allow them to uniquely perceive patterns and draw inferences that give them a competitive advantage. What’s often forgotten is that, unlike chess, the permissible moves in the oil and gas business are constantly changing.

Once you start thinking about a given challenge, the same pathways that led you to a successful outcome in similar challenges will get activated and strengthened. This can create mental ruts that make it difficult to process different perspectives, including those that could lead to a better outcome. Our memories are seldom reassessed or reorganized in response to new information, making it difficult to modify these existing patterns.

To overcome this, you need a wider variety of patterns to reference and greater processing of new information to fully understand its impact. This means reaching out to a wider network of experts and devoting more time and effort to developing a deeper understanding, as well as implementing procedures that facilitate this, including framing sessions, peer reviews, and performance lookbacks.

These procedures, as well as others, are discussed and practiced in our Mitigating Bias, Blindness and Illusion in E&P Decision Making course. Please consider joining us for a virtual or in-person offering, either as an open enrollment or internal session at your company.

Reference excerpted for this blog: Heuer, Richard J., Jr., 1999, Psychology of Intelligence Analysis, Center for the Study of Intelligence.

Posted on March 18, 2021 by Lisa Ward

by Marc Bond, Senior Associate

Today I would like to talk with Creties Jenkins, co-creator of our Mitigating Bias, Blindness and Illusion in E&P Decision Making course, to gain another perspective on biases and how they impact our interpretations and decisions. Creties is a Partner at Rose & Associates with 35 years of diverse industry experience. As a geological engineer, he complements my geoscience background.

Marc: Creties, welcome to my Understanding and Overcoming Bias blog. I appreciate you taking the time to give our readers some of your insights on our course.

Marc: I’d like to ask you what inspired you to put together the Mitigating Bias course.

Creties: First off Marc, thank you for the opportunity to provide some commentary for the bias blog. My primary inspiration for the Mitigating Bias course was Pete Rose’s AAPG Distinguished Lecture called “Cognitive Bias, the Elephant in the Living Room of Science and Professionalism”, which can be viewed on YouTube. He made the point that our lack of objectivity, due to errors in thinking, contributes to underperforming projects and portfolios. He also noted that the biggest challenge is convincing technical and management professionals that they are subject to bias, and concluded his talk by calling for a renewed commitment to the ‘rigor of the scientific method’. This is where our course picks up to provide some practical guidance.

Marc: In the course, we talk about Illusions. Can you give us some more insights?

Creties: We define an ‘Illusion’ as a misleading belief based on a false impression of reality. We focus on the Illusions of Potential, Knowledge, and Objectivity. Illusions are fueled by biases—we anchor on supporting data, we ignore disconfirming information, and we become overconfident in the expected result. My grandson, who’s a big superhero fan, was crushed when the Superman cape he ordered didn’t give him the ability to fly around the house. It never occurred to him that if this was real, friends and family members would already be using them. He was blinded by his own reality, which can happen to us as well.

Marc: Can you give an example?

Creties: All of us have seen Executive and Technical presentations touting the game-changing advantages of a given project, transaction or technology in our industry. We’ve come to expect that companies will overstate their knowledge and potential of these opportunities in order to generate investor buzz. But more importantly, we see companies believing their own press and not thinking critically enough about their proposed investments or having processes in-place to rigorously assess them and apply the lessons learned to new projects. The “Shale Revolution” in North America is a good example of companies repeatedly overpromising and underdelivering.

Marc: Do you see a relationship between Illusions and Cognitive Biases?

Creties: I do think that cognitive biases fuel illusions. We focus on small bits of data and analogs (information bias) that favor our intent (anchoring bias), ignore conflicting information (confirmation bias), and convince ourselves that our strategic plan is correct (framing bias) and that fame and glory will follow (motivational bias). So we think opportunities are better than they are (Illusion of Potential), that we understand them more deeply than we do (Illusion of Knowledge), and that we’re being honest and impartial in our resulting decisions (Illusion of Objectivity). Without a constant awareness of this state and the application of the mitigation techniques we teach in our course, this sequence is all but certain to repeat itself. Just about every person reading this can recall at least one project in their company that followed this pattern with a disastrous result. And yet the cause and cure still receive scant attention.

Marc: What is one of your most surprising observations when teaching the course?

Creties: What’s most surprising to me is how few companies are interested in assembling case studies of their project failures and understanding the role that cognitive errors like ‘Illusions’ played. These case studies are really powerful because you have to admit that if a failure happened once in your company, it could happen again without some changes. I saw this first-hand at ARCO where the Illusion of Knowledge (mistaking familiarity for real understanding) led to a failed waterflood project because of unrecognized connected natural fractures. The inability to learn from this led a decade later to a billion-dollar failure of a miscible gas injection project for the same reason.

Marc: What is your biggest learning from teaching the course?

Creties: How prominent and impactful these cognitive errors are. We’ve presented this course nearly 100 times to everyone from field personnel to executives and nearly every attendee (based on course reviews) sees this problem within their company. Yet most companies are not addressing it or think it’s sufficient for personnel to simply have awareness. I did a half-day leadership version for one company and was told afterward that the attending geoscience managers favored a 2-day mitigation course for their reports, while the engineering managers favored a 1-day awareness course for their people. This led one of the geoscience managers to remark that geoscientists were interested in addressing the problem while the engineers were only interested in identifying it in others!

Marc: And could you leave us with a final message for our readers?

Creties: We provide our course attendees with an understanding of the different types of cognitive errors along with examples and steps to mitigate them in their daily work. But to create change, everyone in the organization needs to have a common vocabulary and processes (e.g., framing sessions, peer assists, performance lookbacks) that will expose and lessen the impact of cognitive errors. HR departments understand how these errors affect hiring, performance reviews, promotions, and employee interactions. We need the same recognition and desire for change on the technical side.

Check out more of Marc’s articles on bias and illusion on his LinkedIn profile.

Posted on February 24, 2021 by Lisa Ward

by Creties Jenkins, Partner

Quickly say the words in these three triangles.

If you didn’t pronounce the repeated words, you’re not alone. Nearly everyone fails to do so. But why? Our familiarity with these phrases causes us to predict the fourth word from the first three and ignore what’s in between. It demonstrates that events consistent with our expectations are processed easily while those that contradict them are ignored or distorted.

Our expectations have many diverse sources, including past experience, professional training, cultural norms, and organizational frameworks. These predispose us to pay particular attention to certain kinds of information and to organize and interpret it in certain ways. We are also influenced by the context in which information arises. Hearing footsteps behind you in the office hallway is very different from hearing them behind you in a dark alley!

These patterns of perception tell us subconsciously what to look for and how to interpret it. Without these patterns, it would be impossible to process the volume and complexity of data we receive every day. But we need to be aware of the downsides associated with these patterns:

01. Your view will be quick to form but resistant to change. Once you form an expectation for your project, this conditions your future perceptions.

02. New information will simply be assimilated into your existing view. Gradual, evolutionary change associated with new data often goes unnoticed, which is why a fresh set of eyes can reveal insights overlooked by someone working on the same project for many years.

03. Initial exposure to ambiguous information interferes with accurate perception. The greater the initial ambiguity, and/or the longer you’re exposed to it, the clearer the succeeding information must be before you’re willing to make or change an interpretation.

04. We tend to perceive what we expect to perceive. It takes more unambiguous information to recognize an unexpected outcome than an expected one.

05. Organizational pressures resist changing your view. Management values consistent interpretations, particularly those that promise added value to investors!

Fortunately, some techniques can help us overcome these downsides. We need to list our assumptions and chains of inference. We have to specify sources of uncertainty and quantify risk. Key problems should be examined periodically from the ground up. Alternative points of view should be encouraged and expounded.

These techniques, as well as others, are discussed and practiced in our Mitigating Bias, Blindness, and Illusion in E&P Decision-Making course. Please consider joining us for a virtual or in-person offering.

Reference excerpted for this blog: Heuer, Richard J., Jr., 1999, Psychology of Intelligence Analysis, Center for the Study of Intelligence.

Posted on February 17, 2021 by Lisa Ward

by Henry S. Pettingill, Marc Bond, Jeff Brown, Peter Carragher, Mark Golborne, Jim Gouveia, and Bob Otis

One of the common questions that teams ask us when reviewing subsurface projects is, “How should we set our input ranges for volumetrics?” This article introduces a new series that will address that question.

STATEMENT OF THE PROBLEM

Many published works over the years have documented the importance of accurately predicting what we find for our oil and gas portfolios, in both the exploration and development phases of the project life cycle. While it is impossible to repeatedly find exactly ‘the number’ for every prospect, well, or development, it is possible to get close to the prediction on a portfolio basis. The fundamental concept is that for both prospects and portfolios, we can state our prediction in ranges, and additionally in terms of measures of central tendency (our ‘expectation’, commonly the arithmetic mean) and dispersion around that central tendency. The consequence of this latter statement is that we are able to give leadership ‘one number’ for an expectation, which is usually what both the executive suite and the investment community want from us.

So why do we use ranges? Simply put, it has been repeatedly shown that our predictions are better in the long run if, instead of using single values for each input parameter, we employ ranges to ultimately derive confidence intervals and a single value of our ‘expectation’ for that parameter (e.g. Pettingill, 2005). It also makes the elimination of systematic estimating bias more effective, as statistically rigorous post-mortem analyses become possible.

HISTORY OF EMPLOYING PROBABILISTIC RANGES FOR PREDICTIONS

As far as we can tell, the pharmaceutical industry led the way in the 1920s by employing ranges in probabilistic predictions. The first employment of these methods in oil and gas volume prediction was documented in the late 1960s (for instance, Newendorp, 1968).

While most of the major upstream petroleum companies were already employing these methods, the methods advanced and gained wider usage following seminal publications in the 1970s and 1980s by Ed Capen, Pete Rose, Bob Megill, Paul Newendorp, and others (see references below). These references have withstood the test of time and remain relevant today, so we highly recommend reading them.

Later works validated the concepts by demonstrating that pre-drill predictions as a whole were improved with the implementation of probabilistic ranges. These are well documented by Otis and Schneidermann (1997), Johns (1998), Ofstad et al. (2000), McMaster and Carragher (2003) and Pettingill (2005).

Fast forwarding to 2021, what have we observed over the years with respect to the application of these concepts? First, there are still many questions that arise on the topic from the upstream community. Second, some of the digital-era staff have not received an in-depth education on the fundamental concepts that drive input ranges. And finally, it seems like many of us who learned these methods have forgotten some of the fundamentals (or at least become rusty!).

HOW DO WE ADDRESS THE ISSUE OF SETTING RANGES FOR VOLUMETRICS?

Three fundamental concepts define the modern approach to pre-drill volumetric assessment: 1) the jump from single deterministic input parameters to probabilistic inputs, 2) the use of continuous probability distributions (e.g., normal, lognormal), and 3) the employment of confidence intervals associated with these input distributions and, as a consequence, with the final output distribution of recoverable resources.

These probabilistic ranges can be characterized by parameters such as probability percentiles (P10, P90, etc.), mean, variance or standard deviation, and P10:P90 ratios (Figure 1). We will expound on these in a future blog. [Note: in this blog, we define P10 as high and P90 as low.] The biggest benefit of using these probabilistic ranges is the ability to state our predictions in terms of confidence, e.g. “I have a 90% chance of finding __ mmboe or greater, a 50% chance of getting __ mmboe or greater, and a 10% chance of getting __ mmboe or greater”. Another great advantage is the ability to characterize the entire output distribution with a single value of expectation: the mean, which is the average outcome. This allows us not only to understand the anticipated value of our portfolio as if we drilled our prospect thousands of times, but also to objectively compare and rank projects within a portfolio.


Figure 1. Probability graphs (lognormal distribution shown): a) probability density graph, b) cumulative probability graph. Note that we are using the “greater than or equal to” convention, with P90 as the low-end percentile. The concept applies to both the input parameter as well as the output distribution.
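As a concrete illustration of these concepts, the short Python sketch below (hypothetical prospect numbers) fits a lognormal recoverable-resource distribution through P90 and P10 estimates and reads off the confidence statements, mean, and P10:P90 ratio described above.

import numpy as np
from scipy.stats import norm

# Hypothetical prospect recoverable resources (mmboe), 'greater-than' convention.
p90, p10 = 20.0, 180.0

# Fit a lognormal distribution through the two percentiles.
z90, z10 = norm.ppf([0.10, 0.90])
mu = 0.5 * (np.log(p90) + np.log(p10))
sigma = (np.log(p10) - np.log(p90)) / (z10 - z90)

p50 = np.exp(mu)                        # median of the fitted lognormal
mean = np.exp(mu + 0.5 * sigma**2)      # the expectation, the 'one number' for leadership
ratio = p10 / p90                       # P10:P90 ratio, a simple measure of dispersion

print(f"90% chance of >= {p90:.0f}, 50% chance of >= {p50:.0f}, 10% chance of >= {p10:.0f} mmboe")
print(f"Mean (expectation): {mean:.0f} mmboe; P10:P90 ratio: {ratio:.0f}")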

FURTHER TOPICS IN THIS SERIES

This series will evolve as we go along, so a precise schedule of topics is not set (of course, that will be driven largely by the comments from our readers!). Future topics being contemplated are:

— What should our modeled ranges represent?

— Different approaches to assessing uncertainty in Net Rock Volume (NRV)

— How to handle input parameters that use Averages (net pay, Phi, Sw, etc.)

— How to select ranges in the prospect area

— Spatial Concepts: how to jump from a map to a volumetric input distribution

— Direct Hydrocarbon Indicators (DHIs): specific considerations when choosing volumetric inputs

— Important statistical concepts: Mean, Variance, Standard Deviation, and P10/P90 ratios

ACKNOWLEDGEMENTS

We wish to thank all the authors of the works cited here, as well as the countless others not cited, for their contributions to this topic and our industry. They are the true heroes of this journey of learning. We also extend our gratitude to our colleagues at Rose and Associates.

FURTHER READING

— Capen, E. C. (1976), The difficulty of assessing uncertainty, Journal of Petroleum Technology, August 1976, pp. 843-850.

— Carragher, P. D. (1995), Exploration Decisions and the Learning Organization, Society of Exploration Geophysicists, August 1995, Rio de Janeiro.

— Johns, D. R., Squire, S. G., and Ryan, M. G. (1998), Measuring exploration performance and improving exploration predictions—with examples from Santos’ exploration program 1993-96, APPEA Journal, 1998, pp. 559-569.

— Megill, R. E. (1984), An Introduction to Risk Analysis, 2nd Edition. PennWell Publishing Co. Tulsa.

— Megill, R. E. (1992), Estimating prospect sizes, Chapter 6 in: R. Steinmetz, ed., The Business of Petroleum Exploration: AAPG Treatise of Petroleum Geology, Handbook of Petroleum Geology, pp. 63-69.

— McMaster, G. E. and Carragher, P. D. (1996), Risk Assessment and Portfolio Analysis: the Key to Exploration Success. 13th Petroleum Conference, Cairo Egypt, 1996.

— McMaster, G. E. and Carragher, P. D. (2003), Fourteen Years of Risk Assessment at Amoco and BP: A Retrospective Look at the Processes and Impact, Canadian Society of Petroleum Geologists / Canadian Society of Exploration Geophysicists 2003 Convention, Calgary Alberta, June 2-6.

— Newendorp, P. (1968). Risk analysis in drilling investment decisions. J. Petroleum Technology, Jun. pp. 579-85.

— Ofstad, K., Kittilsen, J.E., and Alexander-Marrack, P., eds. (2000), Improving the Exploration Process by Learning from the Past, Norwegian Petroleum Society (NPS) Special Publication no. 9. Elsevier, Amsterdam, 279 p.

— Otis, R. M., and Schneidermann, N. (1997), A Process for Evaluating Exploration Prospects, AAPG Bulletin v. 81, No. 7, pp 1087-1109.

— Pettingill, H.S. (2005) Delivering on Exploration through Integrated Portfolio Management: the Whole is not just the Sum of the Holes. SPE AAPG Forum, Delivering E&P Performance in the Face of Risk and Uncertainty: Best Practices and Barriers to Progress. Galveston, Texas, Feb. 20-24, 2005.

— Rose, P. R. (1987), Dealing with risk and uncertainty in exploration: how can we improve?, AAPG Bulletin, vol. 71, no. 1, pp. 1-16.

— Rose, P. R. (2000), Risk Analysis in Petroleum Exploration. American Association of Petroleum Geologists.

Posted on January 21, 2021 by Lisa Ward

by The Deep Coal Consortium (Deep Coal Technologies Pty Ltd, Rose & Associates, and Cutlass Exploration)

The Deep Coal Consortium is pleased to announce the completion of the first systematic review of the volumetric potential of the Gidgealpa Group Coal Measures in the deep portions (>3,000 feet) of the Cooper Basin based upon detailed analysis of well data (n=1,300). The Gidgealpa Group was evaluated in ten assessment intervals based on regional well correlation.

The study provides both deterministic and probabilistic estimates of in-place and technically recoverable gas and liquids, and (for the first time) Prospective Resources. Results are presented in tabular form and as a set of maps for each assessment interval and for the Group.

Low (P90), Median (P50), and High (P10) map realizations were generated for each assessment interval and volumes were aggregated to estimate the Group resource potential. An example of one interval is shown below.

The innovative methods used to calculate the map-based uncertainty in volumetric potential are discussed in URTEC 198304-MS.

The Deep Coal Play contains world-class volumes of potentially commercial gas and condensate. This volumetric study provides a strategic spatial context for decision-making as efforts to commercialize this play accelerate in the months ahead.

In addition, the data can be used to more fully characterize specific portions of the play, such as company acreage. The Consortium can provide technical support for such customized analysis.

For details contact David Warner of DCT.

Download this article as a PDF.