#### 2019-SEP-18 Event-Study Method, Similarity, Susceptibility Ranking
Talk 2: Using unsupervised machine learning to tag oil and gas pressure drop methods used in commercial flow simulators, by Pablo Adames
Oil and gas engineers rely on flow simulators to design and troubleshoot pipelines, wells, and the facilities needed to achieve production and operation targets within safety and economic constraints. Commercial simulators offer a wide choice of calculation methods for the pressure drop and liquid holdup at every point of the system; for historical reasons, these are usually referred to as flow correlations. The numerical results of a simulation can vary significantly with the flow correlation selected, everything else held constant. Unsupervised classification methods can be used to discover similarities in the results of the available flow correlations. Once the methods are tagged as belonging to a class based on the similarity of their results,
a priori knowledge can be used to assign meaning and to recommend the more consistent classes of methods for a particular production scenario. For this study, Schlumberger’s PIPESIM was used to assess 35 different methods on a model built from field data in the public domain; a metric was defined to assess similarity, the machine learning results were compared with empirical knowledge, and consistent results were identified for this specific production scenario. The processing of the text files from the simulator, and the subsequent statistical analysis and visualizations, were done in R, with the code presented in reproducible research format using
RStudio. The data and files are also available in
Talk 3: (Cancelled)
Community Development - Calgary Artificial Intelligence Meetup by Drew Gillson
Drew Gillson is a technologist, entrepreneur, and community leader. In addition to organizing the Calgary Artificial Intelligence Meetup, Drew works for Looker, a data analytics software company that is being acquired by Google. Drew has been an active member of the Calgary innovation ecosystem since the dark ages of 2001. He’ll share some highlights and learnings from his remarkable journey, in the hope that it will inspire you to also take the road less traveled.
Talk 1: Rig State Detection by David Shakleton
Over the past two or three years, Independent Data Services (IDS) have shifted from proof-of-concept projects to global live rollouts of their “in-time” (near real-time) lean automated reporting (LAR) and drilling performance monitoring (DPM) services. Some of our projects begin their lives in R, as RStudio and Shiny apps lend themselves beautifully to the rapid prototyping of ideas: they can ingest large amounts of data and perform analytics through an easily adjustable user interface (UI). The first step in realizing these automated reporting and analytics services in the upstream oil and gas industry is rig state detection (RSD), a process in which data from key sensors on the rig is run through logic-based and/or machine learning algorithms to determine what the rig, drill string, etc. are doing. Rig states are then used to build activity descriptions for daily reporting and to generate charts and dashboards for key drilling parameters, with web-based charts and dashboards available on any device, anywhere, within moments of the event. IDS have made further strides in automating daily operational reporting by ingesting PDF/Excel/WITSML/etc. data to auto-populate much of the daily report.
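To make the RSD idea concrete, here is a toy logic-based detector in Python (the talk's own work is described as R-based). Every sensor name, threshold, and state label below is an illustrative assumption, not IDS's actual logic.

```python
def detect_rig_state(bit_depth, hole_depth, block_speed, rpm, flow_rate):
    """Classify a single snapshot of rig sensor data into a rig state.

    bit_depth, hole_depth in metres; block_speed in m/s; rpm is rotary
    speed; flow_rate in L/min. All thresholds are made-up examples.
    """
    on_bottom = abs(hole_depth - bit_depth) < 0.5   # bit near hole bottom
    circulating = flow_rate > 100                   # mud pumps running
    rotating = rpm > 5                              # drill string turning

    if on_bottom and rotating and circulating:
        return "Rotary Drilling"
    if on_bottom and circulating and not rotating:
        return "Slide Drilling"
    if circulating and not on_bottom:
        return "Circulating"
    if abs(block_speed) > 0.1:                      # block moving off bottom
        return "Tripping"
    return "In Slips"

print(detect_rig_state(3000.0, 3000.2, 0.0, 120, 2000))  # → Rotary Drilling
print(detect_rig_state(1500.0, 3000.0, 0.8, 0, 0))       # → Tripping
```

A real RSD pipeline would run rules like these (or a trained classifier) over streaming sensor channels and smooth the resulting state sequence before building activity descriptions.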
Talk 2: Model Agnostic Approach by Alastair Muir
The real value of machine learning to businesses comes when it is used to build a deep understanding of the problem. The predictive power of modern machine learning algorithms comes at the cost of decreased transparency, which is why black-box solutions, however accurate, are not always accepted on their own. You should come away from this session with a toolkit you can use to probe and understand your models, whether for regulatory requirements, change-management resistance, or general acceptance. We will use a typical example of a non-linear process modeled with several traditional statistical and deep learning models, taking a “model agnostic” approach throughout.
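One staple of the model-agnostic toolkit is permutation importance: shuffle one input at a time and watch how much the model's score degrades. The session's actual tooling is not specified, so this pure-Python sketch is only an illustration of the technique.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic probe: shuffle one feature column at a time and
    measure the average drop in the model's score versus baseline."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [model(r) for r in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black box": depends only on the first feature.
model = lambda r: 3 * r[0]
X = [[1, 5], [2, 1], [3, 9], [4, 2]]
y = [3, 6, 9, 12]
neg_mse = lambda yt, yp: -sum((a - b) ** 2 for a, b in zip(yt, yp)) / len(yt)

imp = permutation_importance(model, X, y, neg_mse)
# imp[0] is large and imp[1] is 0: the model ignores the second feature.
```

Because it needs only predictions, the same probe works unchanged on a regression, a gradient-boosted ensemble, or a deep network.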
Talk 3: Asset Failure Susceptibility Ranking, using LambdaMART, by Busayo Akinloye
The electric distribution system is one of the most diverse systems in the electrical grid, consisting of both overhead and underground assets. Growing power quality and reliability expectations from regulatory authorities and customers demand minimal equipment downtime. Metrics such as the System Average Interruption Frequency Index (SAIFI) and the System Average Interruption Duration Index (SAIDI) are being closely monitored by electric utilities and form a major part of the business’s performance indices. These growing expectations, coupled with aging assets and budget constraints, require innovative and cost-effective ways to realize actionable intelligence in order to optimize spending while improving or maintaining the quality and reliability of the electric grid. Data analysis offers a solution that is reproducible across all asset infrastructure of an electric grid: it employs machine learning and statistical algorithms to extract actionable insights from historical data. These insights help utilities allocate both financial and human resources to the most failure-susceptible assets, truly making data-driven decisions. I will discuss the development of an asset failure susceptibility ranking system for the Calgary-area Underground Residential Distribution (URD) system, which employs the kind of supervised ranking used by information retrieval systems. The framework of this ranking system can be applied to all distribution system assets (equipment), thanks to the reproducible nature of the statistical algorithms it employs.
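The talk uses LambdaMART; as a far simpler stand-in that conveys the pairwise learning-to-rank idea underneath it, here is a RankNet-style logistic pairwise ranker in Python. The features, labels, weights, and hyperparameters are all fabricated for illustration and have nothing to do with the URD system's actual data.

```python
import math

# Toy asset records: (age_years, past_outages, load_factor), with label
# 1 = failed within the study window, 0 = did not. Purely illustrative.
assets = [
    ([25, 3, 0.9], 1),
    ([30, 5, 0.8], 1),
    ([5,  0, 0.4], 0),
    ([10, 1, 0.5], 0),
    ([20, 2, 0.7], 1),
    ([8,  0, 0.6], 0),
]

def score(w, x):
    """Linear susceptibility score: higher means more failure-prone."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(assets, lr=0.01, epochs=200):
    """Logistic pairwise ranking: for every (failed, non-failed) pair,
    push the failed asset's score above the non-failed asset's score."""
    w = [0.0, 0.0, 0.0]
    pos = [x for x, y in assets if y == 1]
    neg = [x for x, y in assets if y == 0]
    for _ in range(epochs):
        for xp in pos:
            for xn in neg:
                diff = score(w, xp) - score(w, xn)
                g = -1.0 / (1.0 + math.exp(diff))  # grad of -log sigmoid(diff)
                for k in range(len(w)):
                    w[k] -= lr * g * (xp[k] - xn[k])
    return w

w = train_pairwise(assets)
# Rank all assets from most to least failure-susceptible.
ranking = sorted(assets, key=lambda a: score(w, a[0]), reverse=True)
```

A production system like the one described would use gradient-boosted trees with NDCG-weighted pairwise gradients (LambdaMART) rather than this linear model, but the ranking objective is the same.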