Categories
General Statistical Applications, Podcasts (audio)

Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy (Podcast)

Accurate prediction of surgery-duration is key to optimal utilization of operating theatres. Yet current predictions, based on the best available statistical and AI techniques, are highly inaccurate. This causes operating rooms worldwide to operate in a sub-optimal mode. Based on personal experience, supported by three recently published peer-reviewed articles, we believe that the poor state-of-the-art of current predictive methods for surgery-duration is traceable to a single cause. What is it? What is the remedy?

Literature

[1] Shore, H (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23 (3): 157–181. Available on Shore’s ResearchGate page.

[2] Shore, H (2020). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6:2, 142–166. DOI: 10.1080/23737484.2020.1740066.

[3] Shore, H (2021a). SPC scheme to monitor surgery-duration. Quality and Reliability Engineering International, 37: 1561–1577. DOI: 10.1002/qre.2813.

[4] Shore, H (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141.

[5] Shore, H (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.

Categories
General Statistical Applications

Why Surgery-Duration Predictions are So Poor, and a Possible Remedy

(Related podcast: Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy (Podcast)).

Operating theatres are the most expensive resource at the disposal of hospitals. This makes optimal scheduling of surgeries to operating rooms a top priority. A pre-condition for optimal scheduling is that accurate predictions of surgery-duration be available. Much research effort has been invested in recent years to develop methods that improve the accuracy of surgery-duration predictions. This ongoing effort includes both traditional statistical methods and newer Artificial Intelligence (AI) methods. The state-of-the-art of these methods, with the relevant peer-reviewed literature, has recently been summarized by us in a new entry on Wikipedia, titled “Predictive Methods for Surgery Duration”.

Personally, I was first exposed to the problem of predicting surgery-duration over thirty years ago, when I was involved in a large-scale project encompassing all governmental hospitals in Israel (at the time). Partial results of this effort were reported in my 1986 paper, and further details can be found in my more recent paper of 2020. Both articles are listed in the literature section at the end of this post (for podcast listeners, this list may be found on haimshore.blog).

My second involvement in developing predictive methods for surgery-duration came in more recent years, culminating in three peer-reviewed published papers (Shore 2020, 2021a, 2021b; see references below).

Surgery-duration is known to be highly volatile. The larger the variability between surgeries, the less accurate the prediction may be expected to be. To reduce this variability, newly devised predictive methods for surgery-duration tend to concentrate on subsets of surgeries, classified according to some classification system. It is assumed that via this classification, prediction accuracy may be enhanced. A common method to classify surgeries, implemented worldwide, is Current Procedural Terminology (CPT®). This coding system assigns, in a hierarchical fashion, particular codes to subsets of surgeries. In doing so, variability between surgeries sharing the same CPT code is expected to be reduced, allowing for better prediction accuracy.

A second effort to increase accuracy is to include in the predictive method certain factors, known prior to surgery, that contribute variability to surgery-duration. It is hoped that by accounting for these factors, unexplained variability in surgery-duration will be reduced, thereby enhancing prediction accuracy (examples will be given shortly).

A third factor that influences accuracy is the amount of reliable data used to generate predictions. Given recent developments in our ability to process large amounts of data, commonly known as Big Data, Artificial Intelligence (AI) methods have been summoned to assist in predicting surgery times.

These new methods and others are surveyed more thoroughly in the aforementioned entry on Wikipedia.

The new methods notwithstanding, current predictive methods for surgery-duration still deliver unsatisfactory accuracy.

Why is that so?

We believe that a major factor in the poor performance of current predictive methods is a lack of essential understanding of what constitutes the major sources of variability in surgery-duration. Based on our own personal experience, as alluded to earlier, and on our professional background as industrial engineers specializing in the analysis of work processes (of which surgeries are an example), we believe that there are two sets of factors that generate variability in surgery-duration: a set of major factors and a set of secondary factors. We denote these Set 1 and Set 2 (henceforth, we refer only to variability between surgeries within a subset of the same code):

Set 1 — Two Major Factors:

  • Factor I. Work-content instability (possibly affected by variability in patient condition);
  • Factor II. Error variability.

Set 2 — Multiple Secondary Factors, such as: patient age, professional experience and size of the medical team, number of surgeries a surgeon has to perform in a shift, type of anaesthetic administered.

Let us explain why, in contrast to current practices, we believe that work-content instability has a critical effect on prediction accuracy, and why accounting for it in the predictive method is crucial to improving the accuracy currently obtained via traditional methods.

To prepare predictions for any random phenomenon assumed to be in steady-state, the best approach is to identify its statistical distribution and estimate its parameters based on real data. Once the distribution is completely defined, various statements about the behavior of the random phenomenon (like surgery-duration) can be made.

For example:

  • What is the most likely realization? (given by the distribution’s mode);
  • What is the middle value, for which any realization is equally likely to be larger or smaller? (expressed by the distribution’s median);
  • What is the probability that a realization of the random phenomenon exceeds a specified value? (calculated from the cumulative distribution function, CDF).
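To make these three statements concrete, here is a minimal sketch that computes all three quantities for a hypothetical lognormal surgery-duration. The parameter values are invented purely for illustration; as argued below, the actual distribution of surgery-duration varies.

```python
import math

# Hypothetical lognormal surgery-duration (minutes), for illustration only:
# log-duration ~ Normal(mu, sigma), with invented parameters.
mu, sigma = math.log(90), 0.5

mode = math.exp(mu - sigma ** 2)   # most likely realization
median = math.exp(mu)              # equal probability of being above or below

def lognorm_sf(x):
    """P(X > x): one minus the lognormal CDF, via the normal tail."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

p_exceed = lognorm_sf(120)         # probability the surgery exceeds 120 minutes
print(round(mode, 1), round(median, 1), round(p_exceed, 3))
```

Note that for a right-skewed distribution such as the lognormal, the mode lies below the median, so the "most likely" and "middle" answers differ.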

Understanding that complete definition of the distribution is the best approach to predicting surgery-duration, let us next explain what type of distribution one can expect in the two extreme states defined by the two major factors of Set 1:

State 1. There is no variability in work-content (there is only error variability);

State 2. There is no error (error variability is zero; there is only work-content variability).

The two states define two different distributions for surgery-duration.

The first state, State 1, implies that the only source of variability is error. This yields the normal distribution, for an additive error, or the log-normal distribution, for a multiplicative error (namely, error expressed as a percentage).

State 2, lack of error variability, can by definition only materialize when there is no typical value (like the mode) relative to which error can be defined. Since no definition of error is feasible, error variability becomes zero. For work processes, like surgery, this can happen only when there is no typical work-content. In statistical terms, this is a state of lack-of-memory. An example is the duration of repair jobs at a car garage, across all types of repair. The distribution typical of such situations is the memoryless exponential.
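The lack-of-memory property can be verified numerically: for an exponential duration, the probability of lasting another t minutes does not depend on how long the job has already run. A small sketch (the rate below is invented for illustration):

```python
import math

# Exponential duration with an invented mean of 60 minutes.
rate = 1 / 60.0

def sf(t):
    """Survival function P(X > t) of the exponential distribution."""
    return math.exp(-rate * t)

s, t = 45.0, 30.0
conditional = sf(s + t) / sf(s)   # P(X > s+t | X > s)
print(conditional, sf(t))         # equal up to rounding: elapsed time carries no information
```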

We learn from this discussion that any statistical model of surgery-duration, from which its distribution may be derived, needs to include, as extreme cases, both the normal/lognormal distributions and the exponential distribution.

This is a major constraint on any model for the distribution of surgery-duration, and it has so far eluded those engaged in developing predictive methods. Lack of familiarity with basic principles of industrial engineering, together with ignorance of how instability in the work-content of a work process (like surgery) shapes the form of the distribution, probably constitutes the major culprit for the poor current state-of-the-art of predicting surgery-duration.

In Shore (2020), we have developed a bi-variate model for surgery-duration, which delivers not only the distributions of surgery-duration in the extreme states (State 1 and State 2), but also the distributions of intermediate states, residing between the two extreme states. The two components of the bi-variate model represent work-content and error as two multiplicative random variables, with relative variabilities (standard deviations) that gradually change as surgery-duration moves from State 1 (normal/lognormal case) to State 2 (exponential case).
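The qualitative behavior of such a model can be illustrated by simulation. The sketch below is our own simplified parameterization, not the exact model of Shore (2020): work-content is drawn from a gamma distribution whose shape parameter controls its instability, and error is a multiplicative lognormal factor. With near-constant work-content the product behaves lognormal-like (State 1); with memoryless work-content and vanishing error it behaves exponential-like (State 2), visible in the coefficient of variation approaching 1:

```python
import random
import statistics

random.seed(1)

def simulate(k, error_sd, n=20000):
    """Duration = work-content * multiplicative error (illustrative only).

    Work-content ~ Gamma(shape=k, mean=60): large k means stable
    work-content; k=1 means memoryless (exponential) work-content.
    Error ~ lognormal with log-scale standard deviation error_sd.
    """
    mean_w = 60.0
    return [
        random.gammavariate(k, mean_w / k) * random.lognormvariate(0.0, error_sd)
        for _ in range(n)
    ]

state1 = simulate(k=200.0, error_sd=0.3)   # stable work-content, error dominates
state2 = simulate(k=1.0, error_sd=0.01)    # unstable work-content, error negligible

for name, x in [("State 1", state1), ("State 2", state2)]:
    cv = statistics.stdev(x) / statistics.mean(x)  # exponential data have cv near 1
    print(name, round(cv, 2))
```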

What do we hope to achieve by publishing this post (and the accompanying podcast)?

We hope that individuals engaged in developing predictive methods for surgery-duration internalize the grim reality that unless the following two conditions are met, the likelihood that the accuracy of such methods will improve anytime soon remains, as it is today, extremely slim:

  1. The predictive method allows the normal/lognormal and the exponential to serve as exact distributions of surgery-duration at the extreme states;
  2. The predictive method allows intermediate states, spanned on a continuous spectrum between the two extreme states, to converge smoothly to these states (as in Shore, 2020).

Literature

[1] Shore, H (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23 (3): 157–181. Available on Shore’s ResearchGate page.

[2] Shore, H (2020). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6:2, 142–166. DOI: 10.1080/23737484.2020.1740066.

[3] Shore, H (2021a). SPC scheme to monitor surgery-duration. Quality and Reliability Engineering International, 37: 1561–1577. DOI: 10.1002/qre.2813.

[4] Shore, H (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141.

[5] Shore, H (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.

Categories
Forecasting and Monitoring of Surgery Times, General Statistical Applications

My Trilogy of Articles on Surgery Times – Now Complete (Published)

The third of three papers on modeling, monitoring and control of surgery times has just been published. Links to all three papers are given below.

The most recent paper, published online December 13, 2021, introduces a new methodology to estimate operating-room utilization-rate for differently distributed surgery times: https://doi.org/10.1080/00207543.2021.2009141

The second paper, published online December 3, 2020, introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC): https://doi.org/10.1002/qre.2813

The first paper, published online May 7, 2020, develops a new statistical model for surgery time: https://doi.org/10.1080/23737484.2020.1740066

 

Categories
Forecasting and Monitoring of Surgery Times, General Statistical Applications

Predictive Methods for Surgery Duration (new entry in Wikipedia)

Below is a link to a new entry in Wikipedia, written by me some time ago (now accepted for publication):

https://en.wikipedia.org/wiki/Predictive_methods_for_surgery_duration

Categories
Forecasting and Monitoring of Surgery Times, General Statistical Applications

SPC Scheme to Monitor Surgery Duration (new article)

My new paper was published online on December 3, 2020. The paper introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC). It may be found at: https://doi.org/10.1002/qre.2813.

This is the second in a series of three papers addressing surgery duration. The first paper, published online on May 7, 2020:

An explanatory bi-variate model for surgery-duration and its empirical validation

It may be found at: https://doi.org/10.1080/23737484.2020.1740066.

A third paper, addressing estimation of operating-room utilization-rate for differently-distributed surgery times, is under review.

Categories
General Statistical Applications

An explanatory model for surgery-duration

Forecasting surgery-duration (SD) accurately is a pre-condition for efficient utilization of operating theatres. An explanatory model may provide a good tool to produce such forecasts.

In this post, I deliver essential details of a new article, published recently in a peer-reviewed journal (Shore, 2020; see details below). A new explanatory model for SD is developed and empirically validated, using a database of ten thousand surgeries performed in an Israeli hospital.

The new publication complements a previous article on the same subject, published by me over thirty years ago (Shore, 1986; see details below).

In practice, this article presents a general model for the performance-time of any of the three possible categories of work-processes: repetitive, semi-repetitive, and non-repetitive/memoryless. However, applying the model does not require specifying in advance which category the work-process belongs to; this becomes apparent as a result of the data analysis.

Part of the Abstract and a link to the new article are provided below (please share).

Enjoy it!


Article title:  An explanatory bi-variate model for surgery-duration and its empirical validation

Journal: COMMUNICATIONS IN STATISTICS: CASE STUDIES, DATA ANALYSIS AND APPLICATIONS

DOI (click to read full Abstract and References):

https://doi.org/10.1080/23737484.2020.1740066

Limited-number free downloads (please download only if seriously interested):

https://www.tandfonline.com/eprint/WRXV8ECTHJJTPNYTM8UE/full?target=10.1080/23737484.2020.1740066

Other statistical applications on this blog (sample):

How to Use Standard Deviations in Weighting Averages?

Response Modeling Methodology Explained by Developer

Response Modeling Methodology — Now on Wikipedia

SPC-based monitoring of ecological processes (Presentation, Hebrew)

SPC-based Monitoring of Fetal Growth (Presentations)


ABSTRACT (partial)

Modelling the distribution of surgery-duration has been the subject of much research effort. A common assumption of these endeavours is that a single distribution is shared by all (or most) subcategories of surgeries, though parameters’ values may vary. Various distributions have been suggested to empirically model surgery-duration distribution, among them the normal and the exponential. In this paper, we abandon the assumption of a single distribution, and the practice of selecting it based on goodness-of-fit criteria. Introducing an innovative new concept, work-content instability (within surgery subcategory), we show that the normal and the exponential are just two end-points on a continuous spectrum of possible scenarios, between which surgery-duration distribution fluctuates (according to subcategory work-content instability). A new explanatory bi-variate stochastic model for surgery-duration is developed, which reflects the two sources affecting variability: work-content instability and error…

Reference:

Shore, H. 1986. “An Approximation for the Inverse Distribution Function of a Combination of Random Variables, with an Application to Operating Theatres.” Journal of Statistical Computation and Simulation 23 (3):157–181.

Categories
General Statistical Applications, My Research in Statistics

How to Use Standard Deviations in Weighting Averages?

We wish to calculate a weighted average of a set of sample averages, given their standard deviations. How do we do that?

The objective is to find a weighting factor, alpha, that minimizes the variance of the weighted average, namely (for two averages):

Minimum { Variance[ (α)Average1 + (1-α)Average2 ] }

We first calculate the variance to obtain (Var is short for Variance; the samples underlying the averages are assumed independent):

Variance[ (α)Average1 + (1-α)Average2 ] =

= α² Var(Average1) + (1-α)² Var(Average2).

Differentiating with respect to alpha and equating to zero, we obtain:

2α Var(Average1) − 2(1-α) Var(Average2) = 0, and the optimal alpha is:

α* = Var(Average2) / [ Var(Average1) + Var(Average2) ],

where Var(Average) = (sample variance)/n, with n the sample size.
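A quick numeric check of this result, with invented variances for illustration: the noisier average receives the smaller weight, and the optimal alpha beats nearby choices.

```python
# Invented variances of the two averages, for illustration.
v1, v2 = 4.0, 1.0

alpha = v2 / (v1 + v2)   # optimal weight of Average1, here 0.2

def var_weighted(a):
    """Variance of a*Average1 + (1-a)*Average2 (independent averages)."""
    return a ** 2 * v1 + (1 - a) ** 2 * v2

print(alpha, round(var_weighted(alpha), 6))   # 0.2 0.8
# The optimum is smaller than the variance at nearby alternative weights:
print(var_weighted(0.1), var_weighted(0.5))
```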

We may wish to adapt this result to specific needs. For example, for three averages we have:

Variance[ (α1)Average1 + (α2)Average2 + (1-α1-α2)Average3 ] =

= α1² Var(Average1) + α2² Var(Average2) + (1-α1-α2)² Var(Average3)

To minimize this expression, we differentiate with respect to α1 and to α2 in turn. Equating each derivative to zero, we obtain two linear equations in two unknowns:

2α1 Var(Average1) − 2(1-α1-α2) Var(Average3) = 0,

2α2 Var(Average2) − 2(1-α1-α2) Var(Average3) = 0,

or:

α1 = v3 / [ v1 + v3 + (v1·v3)/v2 ]

α2 = v3 / [ v2 + v3 + (v2·v3)/v1 ]

where vi is Var(Average i) (i=1,2,3).

Since “in general, a system with the same number of equations and unknowns has a single unique solution” (Wikipedia, “System of linear equations”), extension to a higher number of averages (m>3) is straightforward, requiring the solution of a system of m-1 linear equations in m-1 unknowns.
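For larger m there is in fact no need to solve the linear system by hand: the optimum reduces to the classical inverse-variance weights, αi = (1/vi) / Σj (1/vj), which reproduce the m=3 formulas above. A short check, with invented variances:

```python
# Invented variances of three averages, for illustration.
v = [4.0, 2.0, 1.0]

# Inverse-variance weights: alpha_i proportional to 1/v_i, summing to 1.
inv = [1.0 / vi for vi in v]
alphas = [w / sum(inv) for w in inv]

# Agreement with the closed form derived above:
# alpha1 = v3 / [ v1 + v3 + (v1*v3)/v2 ]
a1_closed = v[2] / (v[0] + v[2] + v[0] * v[2] / v[1])
print(alphas)                       # three weights, summing to 1
print(abs(alphas[0] - a1_closed))   # near zero: the two formulas agree
```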

 (This post appears also on my personal page at ResearchGate)

Categories
General Statistical Applications

Fibonacci series, Pi, Golden Ratio — Simple Relationships

Fibonacci numbers, the associated Golden Ratio, and Pi appear abundantly in natural phenomena, from the very small to the very large. In this post, we present simple relationships between these three that allow their easy calculation, either exactly (Golden Ratio and Fibonacci terms) or to high accuracy (Pi).

The start of the Fibonacci series (first seventeen terms) is:

{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, …}.

A Fibonacci number is obtained by adding the two terms preceding it in the series; for example, 55 is the sum of 21 and 34.

As the Fibonacci series grows longer, the ratio between two consecutive Fibonacci numbers converges to the Golden Ratio. Simple exact expressions for calculating the Golden Ratio (denoted here Φ) and its reciprocal (denoted φ) are given in Eq. [1] and Eq. [2] (refer to the downloadable PDF file below).

Employing Φ and φ, a simple formula for the k-th term in a Fibonacci series is given in Eq. [3]. Note that F(0)=0.
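The post’s own equations are in its PDF, but a closed form of this kind is the well-known Binet formula, which expresses F(k) through Φ and φ; the sketch below assumes Eq. [3] is of this type:

```python
import math

# Golden Ratio and its reciprocal (exact closed forms).
PHI = (1 + math.sqrt(5)) / 2   # 1.618...
phi = PHI - 1                  # 0.618... = 1/PHI

def fib(k):
    """k-th Fibonacci number via the classical Binet formula."""
    return round((PHI ** k - (-phi) ** k) / math.sqrt(5))

print([fib(k) for k in range(17)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
```

Note that F(0)=0, matching the series listed above.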

A formula that combines Fibonacci numbers and the Golden-Ratio (Phi = Φ =1.618…) delivers a compact expression for π (Eq. [4]).

For example, for n=3: F(2n+1)=F(7)=13.

Inserting into this equation the formula for a Fibonacci number in terms of the Golden Ratio, as given earlier, we finally obtain a formula to calculate Pi in terms of the Golden Ratio (Φ) and its reciprocal (φ) (Eq. [5]).

This formula delivers highly accurate values of π even for a relatively small upper summation limit n.

Below are values of π obtained for different upper summation values:

“Exact” Pi value (π): 3.141592654…

{Upper summation limit, calculated π}:

{{5, 3.141148432}, {6, 3.141739012}, {7, 3.141543509}, {8, 3.141609399}, {9, 3.141586881}, {10, 3.141594663}, {11, 3.141591949}, {12, 3.141592902}, {13, 3.141592565}, {14, 3.141592685}, {15, 3.141592642}}.

We see that already for an upper summation limit of 14, the value of Pi is obtained accurately to seven decimal places!

Playing Pi (π) and Phi (Φ, the Golden Ratio) on the piano:

  • Song from π! (reproduced 2015; with Sheet Music/HQ Download):
  • Song from π! (original, 2011):
  • What Phi (Golden Ratio) Sounds Like (reproduced 2012):
  • What Phi (Golden Ratio) Sounds Like (original, 2011):

Partial source for this post: Castellanos, D. (1986). Rapidly Converging Expansions with Fibonacci Coefficients. Fibonacci Quarterly 24: 70–82.

Categories
General Statistical Applications, My Research in Statistics

Response Modeling Methodology Explained by Developer

Professor Haim Shore’s lecture on RMM (Response Modeling Methodology), delivered at the Department of Industrial and Systems Engineering, Samuel Ginn College of Engineering, Auburn University, USA; March 6, 2006.

A comprehensive literature review may be found on Wikipedia:

Wikipedia: Response Modeling Methodology

Links to published articles about RMM on ResearchGate:

Haim Shore_ResearchGate Page_Response Modeling Methodology (RMM)_

PowerPoint Presentation:Shore_Seminar_Auburn-Univ_March 2006

PowerPoint Presentation:Shore_Seminar_Auburn-Univ_March 2006_2

Categories
General Statistical Applications, My Research in Statistics

Response Modeling Methodology — Now on Wikipedia

Response Modeling Methodology (RMM) is now on Wikipedia! RMM is a general platform for modeling monotone convex relationships, which I have been developing over the last fifteen years (as of May 2017), applying it to various scientific and engineering disciplines.

A new entry about Response Modeling Methodology (RMM) has now been added to Wikipedia, with a comprehensive literature review:

Response Modeling Methodology – Haim Shore (Wikipedia).