The answer to this question is detailed in a new paper of mine, just published, available to all (Open Access):
https://onlinelibrary.wiley.com/doi/10.1002/qre.3386
Enjoy and please share!!
My paper of 2020:
An Explanatory Bi-Variate Model for Surgery-Duration and Its Empirical Validation,
which outlines a novel approach to modeling and forecasting surgery-duration, has now become Free Access (namely, open for all to read).
The paper has become a cornerstone for a series of related papers that followed.
If you feel qualified (in terms of basic knowledge of Statistics),
read and enjoy!!
I have now uploaded the complete series of thirteen lectures (Hebrew) on “Quality by Design”, delivered by me to graduate students (engineers from Israeli industry) in the summer of 2014.
Recent AI techniques for improving audio have allowed me to upload this series to YouTube, for the benefit of Hebrew-speaking quality professionals.
Enjoy, and please share:
Accurate prediction of surgery-duration is key to optimal utilization of operating theatres. Yet, current predictions, based on the best available statistical and AI techniques, are highly inaccurate. This causes operating rooms worldwide to operate in a sub-optimal mode. Based on personal experience, supported by three recently published peer-reviewed articles, we believe that the poor state-of-the-art of current predictive methods for surgery-duration is traceable to a single cause. What is it? What is the remedy?
(Related podcast: Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy).
Operating theatres are the most expensive resource at the disposal of hospitals. This renders optimal scheduling of surgeries to operating rooms a top priority. A pre-condition to optimal scheduling is that accurate predictions of surgery-duration be available. Much research effort has in recent years been invested in developing methods that improve the accuracy of surgery-duration predictions. This ongoing effort includes both traditional statistical methods and newer Artificial Intelligence (AI) methods. The state-of-the-art of these methods, with relevant peer-reviewed literature, has recently been summarized by us in a new entry on Wikipedia, titled “Predictive Methods for Surgery Duration”.
Personally, I was first exposed to the problem of predicting surgery-duration over thirty years ago, when I was involved in a large-scale project encompassing all governmental hospitals in Israel (at the time). Partial results of this effort were reported in my published paper of 1986, and further details can be found in my more recent paper of 2020. Both articles are listed in the literature section at the end of this post (for podcast listeners, this list may be found on haimshore.blog).
My second involvement in developing predictive methods for surgery-duration was in more recent years, culminating in three peer-reviewed published papers (Shore 2020, 2021a, 2021b; see references below).
Surgery-duration is known to be highly volatile. The larger the variability between surgeries, the less accurate the prediction may be expected to be. To reduce this variability, newly devised predictive methods for surgery-duration tend to concentrate on subsets of surgeries, classified according to some classification system, on the assumption that such classification enhances prediction accuracy. A common method to classify surgeries, implemented worldwide, is Current Procedural Terminology (CPT®). This coding system assigns, in a hierarchical fashion, particular codes to subsets of surgeries. Variability between surgeries sharing the same CPT code is thereby expected to be reduced, allowing for better prediction accuracy.
A second effort to increase accuracy is to include in the predictive method certain factors, known prior to surgery, which contribute variability to surgery-duration. It is hoped that by accounting for these factors, unexplained variability in surgery-duration will be reduced, thereby enhancing prediction accuracy (examples will soon be given).
A third factor that influences accuracy is the amount of reliable data used to generate predictions. Given recent developments in our ability to process large amounts of data, commonly known as Big Data, Artificial Intelligence (AI) methods have been summoned to assist in predicting surgery times.
These new methods and others are surveyed more thoroughly in the aforementioned entry on Wikipedia.
The new methods notwithstanding, current predictive methods for surgery-duration still deliver unsatisfactory accuracy.
Why is that so?
We believe that a major factor behind the poor performance of current predictive methods is lack of essential understanding of what constitutes the major sources of variability in surgery-duration. Based on our own personal experience, as alluded to earlier, and on our professional background as industrial engineers, specializing in the analysis of work processes (of which surgeries are an example), we believe that two sets of factors generate variability in surgery-duration: a set of major factors and a set of secondary factors. We denote these Set 1 and Set 2 (henceforth, we refer only to variability between surgeries within a subset sharing the same code):
Set 1 — Two Major Factors: work-content instability and error (as elaborated below).
Set 2 — Multiple Secondary Factors, such as: patient age, professional experience and size of the medical team, number of surgeries a surgeon has to perform in a shift, and type of anaesthetic administered.
Let us explain why, in contrast to current practices, we believe that work-content instability has a critical effect on prediction accuracy, and why accounting for it in the predictive method is crucial to improving the accuracy currently obtained via traditional methods.
To prepare predictions for any random phenomenon assumed to be in steady-state, the best approach is to define its statistical distribution and estimate its parameters based on real data. Once the distribution is completely defined, various statements about the behaviour of the random phenomenon (like surgery-duration) can be made.
For example:
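A minimal numerical sketch of such statements, assuming (purely for illustration; the parameter values are hypothetical, not from any paper) that a lognormal distribution has been fitted to one subcategory of surgeries:

```python
# Hypothetical illustration: a lognormal distribution fitted to one surgery
# subcategory, with mu and sigma on the log scale (values are made up).
import math
from statistics import NormalDist

mu, sigma = math.log(60), 0.5  # assumed fit: median of 60 minutes

# Statement 1: the median surgery-duration is exp(mu) = 60 minutes.
median = math.exp(mu)

# Statement 2: 90% of surgeries of this type end within t90 minutes.
t90 = math.exp(mu + sigma * NormalDist().inv_cdf(0.90))

# Statement 3: the probability that a surgery exceeds two hours.
p_over_120 = 1.0 - NormalDist().cdf((math.log(120) - mu) / sigma)
```

Percentiles and exceedance probabilities of this kind are exactly the statements a scheduler of operating rooms needs.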
Understanding that complete definition of the distribution is the best approach to predicting surgery-duration, let us next explain what types of distribution one can expect in the two extreme states regarding the two major factors of Set 1:
State 1. There is no variability in work-content (there is only error variability);
State 2. There is no error (error variability is zero; there is only work-content variability).
The two states define two different distributions for surgery-duration.
The first state, State 1, implies that the only source of variability is error. This leads to the normal distribution, for an additive error, or the log-normal distribution, for a multiplicative error (namely, error expressed as a percentage).
State 2, lack of error variability, by definition can only materialize when there is no typical value (like the mode), on which error can be defined. Since no definition of error is feasible, error variability becomes zero. For work-processes, like surgery, this can happen only when there is no typical work-content. In statistical terms, this is a state of lack-of-memory. An example is the duration of repair jobs at a car garage, relating to all types of repair. The distribution typical to such situations is the memoryless exponential.
We learn from this discussion that any statistical model of surgery-duration, from which its distribution may be derived, needs to include, as extreme cases, both the normal/lognormal distributions and the exponential distribution.
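The two extreme states can be visualized with a small simulation (a sketch under assumed parameter values, not the model of Shore 2020):

```python
# Simulating the two extreme states (illustrative parameters only).
import random
import statistics

random.seed(42)
N = 50_000
TYPICAL = 60.0  # assumed typical work-content, in minutes

# State 1: stable work-content, only a multiplicative (lognormal) error.
state1 = [TYPICAL * random.lognormvariate(0.0, 0.3) for _ in range(N)]

# State 2: no error, memoryless work-content (exponential with mean TYPICAL).
state2 = [random.expovariate(1.0 / TYPICAL) for _ in range(N)]

# State 1 clusters around the typical value; State 2 has no typical value
# (its mode is at zero) and exhibits the lack-of-memory property.
```

In State 1 the median sits near 60 minutes; in State 2 short durations are the most frequent, even though the mean is still 60 minutes.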
This is a major constraint on any model for the distribution of surgery-duration, and it has so far eluded those engaged in developing predictive methods for surgery-duration. Lack of knowledge of basic principles of industrial engineering, combined with ignorance of how instability in the work-content of a work process (like surgery) shapes the form of the distribution, probably constitutes the major culprit for the poor current state-of-the-art of predicting surgery-duration.
In Shore (2020), we developed a bi-variate model for surgery-duration, which delivers not only the distributions of surgery-duration in the extreme states (State 1 and State 2), but also the distributions of intermediate states residing between them. The two components of the bi-variate model represent work-content and error as two multiplicative random variables, with relative variabilities (standard deviations) that gradually change as surgery-duration moves from State 1 (the normal/lognormal case) to State 2 (the exponential case).
What do we hope to achieve by publishing this post (and the accompanying podcast)?
We hope that individuals engaged in developing predictive methods for surgery-duration internalize the grim reality that:
unless these two conditions are met, the likelihood that the accuracy of predictive methods for surgery-duration will improve anytime soon will remain, as it is today, extremely slim.
Literature
[1] Shore, H (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. J. Statist. Com. Simul. 1986; 23:157-81. Available on Shore’s ResearchGate page.
[2] Shore, H (2020). An explanatory bi-variate model for surgery-duration and its empirical validation, Communications in Statistics: Case Studies, Data Analysis and Applications, 6:2, 142-166, DOI: 10.1080/23737484.2020.1740066 .
[3] Shore, H (2021a). SPC scheme to monitor surgery-duration. Qual Reliab Eng Int. 37: 1561– 1577. DOI: 10.1002/qre.2813 .
[4] Shore, H (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141
[5] Shore, H (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.
The third of three papers on modeling, monitoring and control of surgery times has just been published. Links to all three papers are given below.
The most recent paper, published online December 13, 2021, introduces a new methodology to estimate operating-room utilization-rate for differently distributed surgery-times: https://doi.org/10.1080/00207543.2021.2009141
The second paper, published online December 3, 2020, introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC): https://doi.org/10.1002/qre.2813
The first paper, published online May 7, 2020, develops a new statistical model for surgery time: https://doi.org/10.1080/23737484.2020.1740066
Below you may find a link to a new entry in Wikipedia, written by me some time ago (now accepted for publication):
https://en.wikipedia.org/wiki/Predictive_methods_for_surgery_duration
My new paper was published online on December 3, 2020. The paper introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC). It may be found at: https://doi.org/10.1002/qre.2813.
This is the second in a series of three papers addressing surgery duration. The first paper, published online on May 7, 2020, may be found at: https://doi.org/10.1080/23737484.2020.1740066.
A third paper, addressing estimation of operating-room utilization-rate for differently-distributed surgery times, is under review.
Forecasting surgery-duration (SD) accurately is a pre-condition for efficient utilization of operating theatres. An explanatory model may provide a good tool to produce such forecasts.
In this post, I deliver essential details of a new article, published recently in a peer-reviewed journal (Shore, 2020; see details below). A new explanatory model for SD is developed, and empirically validated, using a database of ten thousand surgeries performed in an Israeli hospital.
The new publication complements a previous article on the same subject, published by me over thirty years ago (Shore, 1986; see details below).
One may realize that this article in effect presents a general model for the performance-time of any of the three possible categories of work-processes: repetitive, semi-repetitive, and non-repetitive/memoryless. However, applying the model does not require specifying in advance to which category the work-process belongs; this becomes apparent from the data analysis.
Part of the Abstract and a link to the new article are provided below (please share).
Enjoy it!
Article title: An explanatory bi-variate model for surgery-duration and its empirical validation
Journal: COMMUNICATIONS IN STATISTICS: CASE STUDIES, DATA ANALYSIS AND APPLICATIONS
DOI (click to read the full Abstract and References):
https://doi.org/10.1080/23737484.2020.1740066
Limited-number free downloads (please download only if seriously interested):
https://www.tandfonline.com/eprint/WRXV8ECTHJJTPNYTM8UE/full?target=10.1080/23737484.2020.1740066
Other statistical applications on this blog (sample):
How to Use Standard Deviations in Weighting Averages?
Response Modeling Methodology Explained by Developer
Response Modeling Methodology — Now on Wikipedia
SPC-based monitoring of ecological processes (Presentation, Hebrew)
SPC-based Monitoring of Fetal Growth (Presentations)
ABSTRACT (partial)
Modelling the distribution of surgery-duration has been the subject of much research effort. A common assumption of these endeavours is that a single distribution is shared by all (or most) subcategories of surgeries, though parameters’ values may vary. Various distributions have been suggested to empirically model surgery-duration distribution, among them the normal and the exponential. In this paper, we abandon the assumption of a single distribution, and the practice of selecting it based on goodness-of-fit criteria. Introducing an innovative new concept, work-content instability (within surgery subcategory), we show that the normal and the exponential are just two end-points on a continuous spectrum of possible scenarios, between which surgery-duration distribution fluctuates (according to subcategory work-content instability). A new explanatory bi-variate stochastic model for surgery-duration is developed, which reflects the two sources affecting variability: work-content instability and error…
Reference:
Shore, H. 1986. “An Approximation for the Inverse Distribution Function of a Combination of Random Variables, with an Application to Operating Theatres.” Journal of Statistical Computation and Simulation 23 (3):157–181.
We wish to calculate a weighted average of a set of sample averages, given their standard deviations. How do we do that?
The objective is to find a weighting factor, alpha, that minimizes the variance of the weighted average, namely (for two averages):
Minimum { Variance[ (α)Average1 + (1-α)Average2 ] }
We first calculate the variance (Var is short for Variance; the samples underlying the averages are assumed independent):
Variance[ (α)Average1 + (1−α)Average2 ] =
= α²Var(Average1) + (1−α)²Var(Average2) .
Differentiating with respect to α and equating to zero, we obtain:
(2α)Var(Average1) − 2(1−α)Var(Average2) = 0, and the optimal α is:
α* = Var(Average2) / [ Var(Average1) + Var(Average2) ] ,
where Var(Average) = variance/n, with n the sample size.
We may wish to adapt this reply to specific needs. For example, for three averages we have:
Variance[ (α₁)Average1 + (α₂)Average2 + (1−α₁−α₂)Average3 ] =
= α₁²Var(Average1) + α₂²Var(Average2) + (1−α₁−α₂)²Var(Average3)
To minimize this expression, we differentiate with respect to α₁ and to α₂. Equating each derivative to zero, we obtain two linear equations in two unknowns that may be easily solved:
(2α₁)Var(Average1) − 2(1−α₁−α₂)Var(Average3) = 0,
(2α₂)Var(Average2) − 2(1−α₁−α₂)Var(Average3) = 0,
or:
α₁ = v₃ / [ v₁ + v₃ + (v₁v₃)/v₂ ]
α₂ = v₃ / [ v₂ + v₃ + (v₂v₃)/v₁ ]
where vᵢ is Var(Average i) (i = 1, 2, 3).
Since “in general, a system with the same number of equations and unknowns has a single unique solution” (Wikipedia, “System of linear equations”), extension to a higher number of averages (m>3), is straightforward, requiring solving a system of m-1 linear equations with m-1 unknowns.
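The solution above is, in fact, the well-known inverse-variance weighting: each optimal weight is proportional to the reciprocal of the corresponding variance. A short sketch (the function name is mine, for illustration), checked against the closed-form results derived above:

```python
# Inverse-variance weighting: alpha_i is proportional to 1/Var(Average_i).
def optimal_weights(variances):
    """Weights minimizing the variance of the weighted average."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [w / total for w in inv]

# Two averages: reproduces alpha* = v2 / (v1 + v2).
v1, v2 = 4.0, 1.0
assert abs(optimal_weights([v1, v2])[0] - v2 / (v1 + v2)) < 1e-9

# Three averages: reproduces alpha1 = v3 / [ v1 + v3 + (v1 v3)/v2 ].
v1, v2, v3 = 2.0, 3.0, 6.0
assert abs(optimal_weights([v1, v2, v3])[0]
           - v3 / (v1 + v3 + v1 * v3 / v2)) < 1e-9
```

This formulation extends to any number of averages without solving a linear system explicitly.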
(This post appears also on my personal page at ResearchGate)