Categories
General Statistical Applications My Research in Statistics

My Four-Part Mini-Series Now on Wiley StatsRef Online

My four-part mini-series on Statistics is now published by Wiley:

Shore’s Four-Part Mini-Series in “Wiley StatsRef: Statistics Reference Online”

Here are links to all four parts (stat08456 to stat08459):

Parametric and Parameter-Free Shape Moments (stat08459)

Asymptotic Normality and the Coefficient of Variation (stat08458)

The Mean, Mode, Standard Deviation and Their Mutual Relationships (stat08457)

The Effects of the Box–Cox Transformation (stat08456)

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications My Research in Statistics

Why Does the Mode Occasionally Depart from the Mean?

The answer to this question is detailed in a new paper, just published (Shore, 2024a; Open Access):

A novel approach to modeling steady-state process-time with smooth transition from repetitive to semi-repetitive to non-repetitive (memoryless) processes

A related post, referring to a more recent paper (Shore, 2024b; Open Access):

Why the Mode Departs from the Mean (Published, Open Access)

A Layman’s Abstract, published by Wiley, may be found here:

Enjoy and please share!!

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications

Modeling and Forecasting Surgery-Time (published article, now Free Access)

My paper of 2020:

An Explanatory Bi-Variate Model for Surgery-Duration and Its Empirical Validation,

which outlines a novel approach to modeling and forecasting surgery-duration, has now become Free Access (namely, open for all to read).

The paper has become a cornerstone of a series of related papers that followed.

If you feel qualified (in terms of basic knowledge of statistics), read and enjoy!!

Comment:

More recent related articles:

[1] A new paper that generalizes the results of the first paper to any process (Open Access):

https://onlinelibrary.wiley.com/doi/10.1002/qre.3386

[2] A Layman’s Abstract, published by Wiley, may be found here:

Categories
General General Statistical Applications

“Quality by Design” – Lectures (Hebrew) Delivered to Engineers from Israeli Industry

I have now uploaded the complete series of thirteen lectures (Hebrew) on “Quality by Design”, delivered by me to graduate students (engineers from Israeli industry) in the summer of 2014.

Recent AI techniques for audio enhancement have allowed me to upload this series to YouTube, for the benefit of Hebrew-speaking quality professionals.

Enjoy, and please share:

Categories
General Statistical Applications Podcasts (audio)

Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy (Podcast)

Accurate prediction of surgery-duration is key to optimal utilization of operating theatres. Yet current predictions, based on the best available statistical and AI techniques, are highly inaccurate. This causes operating rooms worldwide to operate in a sub-optimal mode. Based on personal experience, supported by three recently published peer-reviewed articles, we believe that the poor state-of-the-art of current predictive methods for surgery-duration is traceable to a single cause. What is it? What is the remedy?

Literature

[1] Shore, H. (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23(3): 157-181. Available on Shore’s ResearchGate page.

[2] Shore, H. (2020). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6(2): 142-166. DOI: 10.1080/23737484.2020.1740066

[3] Shore, H. (2021a). SPC scheme to monitor surgery-duration. Quality and Reliability Engineering International, 37: 1561-1577. DOI: 10.1002/qre.2813

[4] Shore, H. (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141

[5] Shore, H. (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.

Categories
General Statistical Applications

Why Surgery-Duration Predictions are So Poor, and a Possible Remedy

(Related podcast: Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy (Podcast)).

Operating theatres are the most expensive resource at the disposal of hospitals. This renders optimal scheduling of surgeries to operating rooms a top priority. A pre-condition for optimal scheduling is that accurate predictions of surgery-duration be available. Much research effort has been invested in recent years in developing methods that improve the accuracy of surgery-duration predictions. This ongoing effort includes both traditional statistical methods and newer Artificial Intelligence (AI) methods. The state-of-the-art of these methods, with relevant peer-reviewed literature, has recently been summarized by us in a new Wikipedia entry, titled “Predictive Methods for Surgery Duration”.

Personally, I was first exposed to the problem of predicting surgery-duration over thirty years ago, when I was involved in a large-scale project encompassing all governmental hospitals in Israel (at the time). Partial results of this effort were reported in my published paper of 1986, and further details can be found in my more recent paper of 2020. Both articles are listed in the literature section at the end of this post (for podcast listeners, this list may be found on haimshore.blog).

My second involvement in developing predictive methods for surgery-duration came in more recent years, culminating in three peer-reviewed published papers (Shore 2020, 2021a, 2021b; see references below).

Surgery-duration is known to be highly volatile. The larger the variability between surgeries, the less accurate the prediction may be expected to be. To reduce this variability, newly devised predictive methods for surgery-duration tend to concentrate on subsets of surgeries, classified according to some classification system, in the expectation that prediction accuracy may thereby be enhanced. A common method to classify surgeries, implemented worldwide, is Current Procedural Terminology (CPT®). This coding system assigns, in a hierarchical fashion, specific codes to subsets of surgeries. In doing so, variability between surgeries sharing the same CPT code is expected to be reduced, allowing for better prediction accuracy.
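To illustrate this rationale, here is a minimal sketch (the codes, durations and column names are all made up for illustration) showing that the coefficient of variation within a CPT code is typically much smaller than the overall coefficient of variation across pooled surgeries:

```python
# Illustration only: within-code variability vs. overall variability.
import pandas as pd

df = pd.DataFrame({
    "cpt_code":     ["44950", "44950", "44950", "47562", "47562", "47562"],
    "duration_min": [48, 55, 61, 95, 110, 102],
})

# Coefficient of variation (std/mean) over all surgeries pooled together
overall_cv = df["duration_min"].std() / df["duration_min"].mean()

# Coefficient of variation within each CPT code
within_cv = df.groupby("cpt_code")["duration_min"].agg(lambda x: x.std() / x.mean())

print(f"overall cv: {overall_cv:.2f}")   # pooled surgeries: large cv
print(within_cv.round(2))                # per-code cv: much smaller
```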

A second effort to increase accuracy is to include in the predictive method certain factors, known prior to surgery, which contribute variability to surgery-duration. It is hoped that by taking account of these factors, unexplained variability in surgery-duration will be reduced, thereby enhancing prediction accuracy (examples are given shortly).

A third factor that influences accuracy is the amount of reliable data used to generate predictions. Given recent developments in our ability to process large amounts of data (commonly known as Big Data), Artificial Intelligence (AI) methods have been summoned to assist in predicting surgery times.

These new methods and others are surveyed more thoroughly in the aforementioned entry on Wikipedia.

The new methods notwithstanding, current predictive methods for surgery-duration still deliver unsatisfactory accuracy.

Why is that so?

We believe that a major factor in the poor performance of current predictive methods is a lack of essential understanding of what constitutes the major sources of variability in surgery-duration. Based on our own personal experience, as alluded to earlier, and on our professional background as industrial engineers specializing in the analysis of work processes (of which surgeries are an example), we believe that two sets of factors generate variability in surgery-duration: a set of major factors and a set of secondary factors. We denote these Set 1 and Set 2 (henceforth, we refer only to variability between surgeries within a subset sharing the same code):

Set 1 — Two Major Factors:

  • Factor I. Work-content instability (possibly affected by variability in patient condition);
  • Factor II. Error variability.

Set 2 — Multiple Secondary Factors, such as: patient age, professional experience and size of the medical team, number of surgeries a surgeon has to perform in a shift, type of anaesthetic administered.

Let us explain why, in contrast to current practices, we believe that work-content instability has a critical effect on prediction accuracy, and why accounting for it in the predictive method is crucial to improving the accuracy currently obtained via traditional methods.

To prepare predictions for any random phenomenon assumed to be in steady-state, the best approach is to identify its statistical distribution and estimate the distribution’s parameters from real data. Once the distribution is completely defined, various statements about the behavior of the random phenomenon (like surgery-duration) can be made.

For example:

  • What is the most likely realization (given by the distribution’s mode);
  • What is the middle value, for which any realization is equally likely to be larger or smaller (given by the distribution’s median);
  • What is the probability that any realization of the random phenomenon exceeds a specified value (calculated from the cumulative distribution function, CDF)?
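As an illustration of this approach, here is a minimal sketch (assuming SciPy; the data are simulated, and the lognormal choice and all parameter values are hypothetical) that fits a distribution to observed durations and reads off the three quantities just listed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations = rng.lognormal(mean=4.5, sigma=0.4, size=500)  # simulated minutes

# Fit a lognormal distribution (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(durations, floc=0)
dist = stats.lognorm(shape, loc, scale)

mode = scale * np.exp(-shape**2)   # lognormal mode: exp(mu - sigma^2)
median = dist.median()             # lognormal median: exp(mu)
p_exceed = dist.sf(120)            # P(duration > 120 min) = 1 - CDF(120)

print(f"mode ~ {mode:.1f} min, median ~ {median:.1f} min, P(>120 min) ~ {p_exceed:.3f}")
```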

Understanding that complete definition of the distribution is the best approach to predicting surgery-duration, let us next explain what type of distribution one can expect in the two extreme states defined by the two major factors of Set 1:

State 1. There is no variability in work-content (there is only error variability);

State 2. There is no error (error variability is zero; there is only work-content variability).

The two states define two different distributions for surgery-duration.

The first state, State 1, implies that the only source of variability is error. This leads to the normal distribution for an additive error, or to the log-normal distribution for a multiplicative error (namely, error expressed as a percentage).

State 2, lack of error variability, can by definition materialize only when there is no typical value (like the mode) relative to which error can be defined. Since no definition of error is feasible, error variability becomes zero. For work processes like surgery, this can happen only when there is no typical work-content. In statistical terms, this is a state of lack-of-memory. An example is the duration of repair jobs at a car garage, across all types of repair. The distribution typical of such situations is the memoryless exponential.
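A small numerical check of this lack-of-memory property (my own illustration, not taken from the cited papers): for an exponentially distributed duration T, P(T > s + t | T > s) = P(T > t), so knowing that a job has already run for s minutes tells us nothing about its remaining duration:

```python
import numpy as np
from scipy import stats

T = stats.expon(scale=60.0)   # exponential duration, mean 60 minutes (hypothetical)
s, t = 30.0, 45.0

p_conditional = T.sf(s + t) / T.sf(s)   # P(T > s + t | T > s)
p_unconditional = T.sf(t)               # P(T > t)

print(np.isclose(p_conditional, p_unconditional))  # True: the exponential is memoryless
```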

We learn from this discussion that any statistical model of surgery-duration, from which its distribution may be derived, needs to include, as extreme cases, both the normal/lognormal distributions and the exponential distribution.

This is a major constraint on any model for the distribution of surgery-duration, and it has so far eluded those engaged in developing predictive methods for surgery-duration. Lack of knowledge of basic principles of industrial engineering, together with ignorance of how instability in the work-content of a work process (like surgery) influences the form of the distribution, probably constitutes the major culprit for the poor current state-of-the-art of predicting surgery-duration.

In Shore (2020), we developed a bi-variate model for surgery-duration, which delivers not only the distributions of surgery-duration at the extreme states (State 1 and State 2), but also the distributions of intermediate states residing between them. The two components of the bi-variate model represent work-content and error as two multiplicative random variables, with relative variabilities (standard deviations) that gradually change as surgery-duration moves from State 1 (the normal/lognormal case) to State 2 (the exponential case).
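The following toy simulation conveys the general idea (it is my own sketch with arbitrary distributional choices, not the exact specification of the model in Shore, 2020): duration is generated as the product of a work-content component and a multiplicative error component, and shifting the variability from the error to the work-content moves the resulting distribution from the (log)normal-like State 1 toward the exponential-like State 2:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

def simulate(work_cv, error_sigma):
    """Duration = work-content * multiplicative error (illustrative parameters)."""
    # Work-content: gamma with coefficient of variation work_cv;
    # cv -> 0 gives a near-constant work-content, cv = 1 gives an exponential.
    k = 1.0 / work_cv**2
    work = rng.gamma(shape=k, scale=1.0 / k, size=n)
    error = rng.lognormal(mean=0.0, sigma=error_sigma, size=n)
    return work * error

state1 = simulate(work_cv=0.05, error_sigma=0.30)  # error dominates: ~lognormal
state2 = simulate(work_cv=1.00, error_sigma=0.01)  # work-content dominates: ~exponential

for name, x in (("State 1", state1), ("State 2", state2)):
    print(name, "coefficient of variation:", round(x.std() / x.mean(), 2))
```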

What do we hope to achieve with the publication of this post (and the accompanying podcast)?

We hope that individuals engaged in developing predictive methods for surgery-duration internalize the grim reality that, unless two conditions are met, the accuracy of these methods is unlikely to improve anytime soon:

  1. The predictive method must allow the normal/lognormal and the exponential to serve as exact distributions of surgery-duration at the extreme states;
  2. The predictive method must allow intermediate states, spanned on a continuous spectrum between the two extreme states, to converge smoothly to these states (as in Shore, 2020).

Literature

[1] Shore, H. (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23(3): 157-181. Available on Shore’s ResearchGate page.

[2] Shore, H. (2020). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6(2): 142-166. DOI: 10.1080/23737484.2020.1740066

[3] Shore, H. (2021a). SPC scheme to monitor surgery-duration. Quality and Reliability Engineering International, 37: 1561-1577. DOI: 10.1002/qre.2813

[4] Shore, H. (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141

[5] Shore, H. (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications

My Trilogy of Articles on Surgery Times – Now Complete (Published)

The third of three papers on modeling, monitoring and control of surgery times has just been published. Links to all three papers are given below.

The most recent paper, published online December 13, 2021, introduces a new methodology to estimate operating-room utilization-rate for differently distributed surgery-times: https://doi.org/10.1080/00207543.2021.2009141

The second paper, published online December 3, 2020, introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC): https://doi.org/10.1002/qre.2813.

The first paper, published online May 7, 2020, develops a new statistical model for surgery time: https://doi.org/10.1080/23737484.2020.1740066.

 

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications

Predictive Methods for Surgery Duration (new entry in Wikipedia)

Below is a link to a new entry in Wikipedia, written by me some time ago (now accepted for publication):

https://en.wikipedia.org/wiki/Predictive_methods_for_surgery_duration

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications

SPC Scheme to Monitor Surgery Duration (new article)

My new paper was published online on 3 December 2020. The paper introduces a new methodology to monitor surgery duration, using Statistical Process Control (SPC). It may be found at: https://doi.org/10.1002/qre.2813.

This is the second in a series of three papers addressing surgery duration. The first paper, published online 7 May 2020:

An explanatory bi-variate model for surgery-duration and its empirical validation

It may be found at: https://doi.org/10.1080/23737484.2020.1740066.

A third paper, addressing estimation of operating-room utilization-rate for differently-distributed surgery times, is under review.

Categories
General Statistical Applications

An explanatory model for surgery-duration

Forecasting surgery-duration (SD) accurately is a pre-condition for efficient utilization of operating theatres. An explanatory model may provide a good tool to produce such forecasts.

In this post, I deliver essential details of a new article, recently published in a peer-reviewed journal (Shore, 2020; see details below). A new explanatory model for SD is developed and empirically validated, using a database of ten thousand surgeries performed in an Israeli hospital.

The new publication complements a previous article on the same subject, which I published over thirty years ago (Shore, 1986; see details below).

In practice, this article presents a general model for the performance-time of any of the three possible categories of work-processes: repetitive, semi-repetitive and non-repetitive (memoryless). However, applying the model does not require specifying in advance which category the work-process belongs to; the category emerges from the data analysis.

Part of the Abstract and a link to the new article are provided below (please share).

Enjoy it!


Article title: An explanatory bi-variate model for surgery-duration and its empirical validation

Journal: Communications in Statistics: Case Studies, Data Analysis and Applications

DOI (click to read the full Abstract and References):

https://doi.org/10.1080/23737484.2020.1740066

A limited number of free downloads is available (please download only if seriously interested):

https://www.tandfonline.com/eprint/WRXV8ECTHJJTPNYTM8UE/full?target=10.1080/23737484.2020.1740066

Other statistical applications on this blog (a sample):

How to Use Standard Deviations in Weighting Averages?

Response Modeling Methodology Explained by Developer

Response Modeling Methodology — Now on Wikipedia

SPC-based monitoring of ecological processes (Presentation, Hebrew)

SPC-based Monitoring of Fetal Growth (Presentations)


ABSTRACT (partial)

Modelling the distribution of surgery-duration has been the subject of much research effort. A common assumption of these endeavours is that a single distribution is shared by all (or most) subcategories of surgeries, though parameters’ values may vary. Various distributions have been suggested to empirically model surgery-duration distribution, among them the normal and the exponential. In this paper, we abandon the assumption of a single distribution, and the practice of selecting it based on goodness-of-fit criteria. Introducing an innovative new concept, work-content instability (within surgery subcategory), we show that the normal and the exponential are just two end-points on a continuous spectrum of possible scenarios, between which surgery-duration distribution fluctuates (according to subcategory work-content instability). A new explanatory bi-variate stochastic model for surgery-duration is developed, which reflects the two sources affecting variability — work-content instability and error…

Reference:

Shore, H. 1986. “An Approximation for the Inverse Distribution Function of a Combination of Random Variables, with an Application to Operating Theatres.” Journal of Statistical Computation and Simulation 23 (3):157–181.

Categories
General Statistical Applications My Research in Statistics

How to Use Standard Deviations in Weighting Averages?

We wish to calculate a weighted average of a set of sample averages, given their standard deviations. How do we do that?

The objective is to find a weighting factor, α, that minimizes the variance of the weighted average, namely (for two averages):

Minimum { Variance[ α·Average1 + (1−α)·Average2 ] }

We first calculate the variance to obtain (Var is short for Variance; the samples behind the averages are assumed independent):

Variance[ α·Average1 + (1−α)·Average2 ] =

= α² Var(Average1) + (1−α)² Var(Average2).

Differentiating with respect to α and equating to zero, we obtain:

2α·Var(Average1) − 2(1−α)·Var(Average2) = 0, and the optimal α is:

α* = Var(Average2) / [ Var(Average1) + Var(Average2) ],

where Var(Average) = variance/n, with n the sample size.

We may wish to adapt this result to specific needs. For example, for three averages we have:

Variance[ α₁·Average1 + α₂·Average2 + (1−α₁−α₂)·Average3 ] =

= α₁² Var(Average1) + α₂² Var(Average2) + (1−α₁−α₂)² Var(Average3)

To minimize this expression, we take partial derivatives with respect to α₁ and α₂. Equating each to zero, we obtain two linear equations in two unknowns:

2α₁·Var(Average1) − 2(1−α₁−α₂)·Var(Average3) = 0,

2α₂·Var(Average2) − 2(1−α₁−α₂)·Var(Average3) = 0,

or:

α₁ = v₃ / [ v₁ + v₃ + (v₁v₃)/v₂ ]

α₂ = v₃ / [ v₂ + v₃ + (v₂v₃)/v₁ ]

where vᵢ = Var(Averagei), i = 1, 2, 3.

Since “in general, a system with the same number of equations and unknowns has a single unique solution” (Wikipedia, “System of linear equations”), extension to a higher number of averages (m > 3) is straightforward, requiring the solution of a system of m−1 linear equations in m−1 unknowns.
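As a quick numerical check (the variances below are hypothetical), the sketch verifies that the closed-form weights derived above coincide with the familiar inverse-variance weights, wᵢ proportional to 1/vᵢ, which extend directly to any number m of averages:

```python
import numpy as np

v = np.array([0.5, 1.2, 0.8])   # v_i = Var(Average_i), illustrative values

# Closed-form solution for three averages, as derived above
a1 = v[2] / (v[0] + v[2] + v[0] * v[2] / v[1])
a2 = v[2] / (v[1] + v[2] + v[1] * v[2] / v[0])
a3 = 1.0 - a1 - a2

# General inverse-variance weights: w_i proportional to 1/v_i
w = (1.0 / v) / np.sum(1.0 / v)

print(np.allclose([a1, a2, a3], w))                       # True
print("variance of weighted average:", np.sum(w**2 * v))  # the minimized variance
```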

 (This post appears also on my personal page at ResearchGate)

Categories
General Statistical Applications

Fibonacci series, Pi, Golden Ratio — Simple Relationships

Fibonacci numbers, the associated Golden Ratio and π appear abundantly in natural phenomena, from the very small to the very large. In this post, we present simple relationships among the three that allow their calculation either exactly (Golden Ratio and Fibonacci terms) or to high accuracy (π).

The start of the Fibonacci series (first seventeen terms) is:

{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, …}.

A Fibonacci number is obtained by adding the two terms immediately preceding it in the series; for example, 55 is the sum of 21 and 34.

As the Fibonacci series extends, the ratio between consecutive Fibonacci numbers converges to the Golden Ratio. Simple exact expressions for calculating the Golden Ratio (denoted here Φ, capital phi) and its reciprocal (denoted here φ, lowercase phi) are given in Eq. [1] and Eq. [2] (refer to the downloadable PDF file below).

Employing Φ and φ, a simple formula for the k-th term of the Fibonacci series is given in Eq. [3]. Note that F(0) = 0.
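For readers who wish to compute the terms directly, here is a sketch of the standard closed-form (Binet) formula, presumably what Eq. [3] refers to: F(k) = (Φ^k − (−φ)^k)/√5:

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # Golden Ratio, 1.618...
phi = 1 / PHI                  # its reciprocal, 0.618...

def fib(k: int) -> int:
    """k-th Fibonacci number via the closed-form (Binet) formula, F(0) = 0."""
    return round((PHI**k - (-phi)**k) / math.sqrt(5))

print([fib(k) for k in range(17)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
```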

A formula that combines Fibonacci numbers and the Golden Ratio (Φ = 1.618…) delivers a compact expression for π (Eq. [4]).

For example, for n=3: F(2n+1)=F(7)=13.

Inserting into this equation the formula for a Fibonacci number in terms of the Golden Ratio, as given earlier, we finally obtain a formula for calculating π in terms of the Golden Ratio (Φ) and its reciprocal (φ) (Eq. [5]).

This formula delivers highly accurate values of π even for a relatively small upper summation limit n.

Below are values of π obtained for different upper summation values:

“Exact” Pi value (π): 3.141592654…

{Upper summation limit, calculated π}:

{{5, 3.141148432}, {6, 3.141739012}, {7, 3.141543509}, {8, 3.141609399}, {9, 3.141586881}, {10, 3.141594663}, {11, 3.141591949}, {12, 3.141592902}, {13, 3.141592565}, {14, 3.141592685}, {15, 3.141592642}}.

We realize that already for an upper summation limit of 14, π is obtained accurately to seven decimal digits!!

Playing Pi (π) and Phi (Φ, Golden Ratio) on the piano:

  • Song from π! (reproduced 2015; with Sheet Music/HQ Download):
  • Song from π! (original, 2011):
  • What Phi (Golden Ratio) Sounds Like (reproduced 2012):
  • What Phi (Golden Ratio) Sounds Like (original, 2011):

Partial source for this post: Castellanos, D. (1986). Rapidly Converging Expansions with Fibonacci Coefficients. Fibonacci Quarterly, 24: 70-82.

Categories
General Statistical Applications My Research in Statistics

Response Modeling Methodology Explained by Developer

A lecture by Professor Haim Shore on RMM (Response Modeling Methodology), delivered at the Department of Industrial and Systems Engineering, Samuel Ginn College of Engineering, Auburn University, USA, on March 6, 2006.

A comprehensive literature review may be found on Wikipedia:

Wikipedia: Response Modeling Methodology

Links to published articles about RMM on ResearchGate:

Haim Shore_ResearchGate Page_Response Modeling Methodology (RMM)_

PowerPoint Presentation: Shore_Seminar_Auburn-Univ_March 2006

PowerPoint Presentation: Shore_Seminar_Auburn-Univ_March 2006_2

https://www.youtube.com/watch?v=oLeG4ZIUY5s&t=655s

Categories
General Statistical Applications My Research in Statistics

Response Modeling Methodology — Now on Wikipedia

Response Modeling Methodology (RMM) is now on Wikipedia!! RMM is a general platform for modeling monotone convex relationships, which I have been developing over the last fifteen years (as of May 2017), applying it to various scientific and engineering disciplines.

A new entry about Response Modeling Methodology (RMM) has now been added to Wikipedia, with a comprehensive literature review:

Response Modeling Methodology – Haim Shore (Wikipedia).

 

Categories
Forecasting and Monitoring of Surgery Times General Statistical Applications My Research in Statistics

The Universal Distribution

Since studying as an undergraduate student at the Technion (Israel Institute of Technology) and learning, for the first time in my life, that randomness too has its own laws (in the form of statistical distributions, amongst others), I have become extremely appreciative of the ingenuity of the concept of statistical distribution. The sheer combining of randomness with laws, formulated in the language of mathematics not unlike any other branch of the exact sciences, fascinated me considerably, young man that I was at the time.

That admiration has since evaporated, as I have become increasingly aware of the gigantic number of statistical distributions defined and used within the science of statistics to describe random behavior, whether of real-world phenomena or of sample statistics embedded in statistical-analysis procedures (like hypothesis testing). I realized that, unlike modern-day physics, engaged to this day in unifying the basic forces of nature, the science of statistics has made no similar attempt at unification. What the latter implies for me is the derivation of a single universal distribution, relative to which all current distributions might be regarded as statistically insignificant random deviations (just as a sample average is a random deviation from the population mean). Such unification has never materialized, or even been attempted or debated, within the science of statistics.

Personally, I attribute this failure at unification to the fact that the current foundations of statistics, with basic concepts like the probability function, the probability density function (pdf) and the cumulative distribution function (CDF), were established back in the eighteenth and early nineteenth centuries to derive various early-day distributions, and have not been challenged since. Some well-known mathematicians of the time, like Jacob and Daniel Bernoulli, Abraham de Moivre, Carl Friedrich Gauss, Pierre-Simon Laplace and Joseph-Louis Lagrange, all used those basic terms of statistics to derive specific distributions. However, the basic tenets underlying the formation of those mathematical models of random variation have remained intact to this day. Central amongst these tenets is the belief that each random phenomenon, with its associated properly-defined random variable, has its own specific distribution. Consequently, no serious attempt at unification has ever become a core objective of the science of statistics, nor has any discussion been conducted of how to proceed in pursuit of the “universal distribution”.

My sentiment about the feasibility of revolutionizing the concept of statistical distribution, and of deriving a universal distribution relative to which all current distributions may be regarded as random deviations, changed dramatically with the introduction of a new non-linear modeling approach, denoted Response Modeling Methodology (RMM). I developed RMM back in the closing years of the previous century (Shore, 2005, and references therein), and only some years later did I realize that the Continuous Monotone Convexity (CMC) property, part and parcel of RMM, could serve to derive the universal distribution in the sense described in the previous paragraph. (Read about the CMC property in another post on this blog.)

The results of this new realization are two articles (Shore 2015, 2017), one of which has already been published, and the second of which is currently under review (see references here).

More recently, I have reached new insights regarding the “Universal Distribution”, the result of ongoing research on prediction and statistical control of surgery time. This research effort has produced the new “Random Identity Paradigm”, described and explained in various published resources, some of which are detailed below (for others, refer to the references therein):

Novel approach to model process time_Haim Shore (January 2024, Free Access)

Why the mode departs from the mean — a short communication (CIS, Free Access).

Why the Mode Departs from the Mean (Post on this blog)

My Four-Part Mini-Series Now on Wiley StatsRef Online

Modeling and Forecasting Surgery-Time (Post on this blog)

Categories
General Statistical Applications

Why use an average??

A letter to Significance (July 2014), addressing an often-asked fundamental question: why use an average? The letter was written by me in response to a letter published in Significance by Tom King (February issue, p. 46), in which the writer conveys his (bad) experience when asking undergraduate students and colleagues: “Why do we calculate averages?”.

Why use an average_A letter to Significance Magazine_Haim Shore_March 2014

The letter, as published in Significance, is linked below (titled “Average Differences”):

Haim Shore_Why Use an Average_Significance Magazine_V 11(2)_p 45-46_July 2014

Categories
General Statistical Applications

SPC-based monitoring of ecological processes (Presentation, Hebrew)

In a workshop on recent advances in the application of statistical methods to quality engineering and management, conducted in March 2013 by the Open University of Israel, I delivered a presentation (Hebrew) about SPC-based modeling and monitoring of ecological processes. The lecture was based on my recently published article:

Shore, H. (2013). Modeling and Monitoring Ecological Systems—A Statistical Process Control Approach. Quality and Reliability Engineering International. DOI: 10.1002/qre.1544

A link to the presentation is given below:

Haim Shore_SPC monitoring of ecological processes_Open University_March 2013

Categories
General Statistical Applications

Total Quality, Quality Control and Quality by Design (Book, in Hebrew)

This book was self-published back in 1992 (2nd edition in 1995). A unique feature of the book is that each page is structured as a separate slide, which may be integrated into a presentation. Related theoretical material is deferred to the appendices.

The book gained popularity in Israeli institutions, academic and otherwise, where courses or workshops in quality engineering were taught.

It may now be downloaded free here (with bookmarks that allow easy access to each chapter):

Shore_Total Quality, Quality Control and Q by Design_1995

Categories
General Statistical Applications

Determining measurement-error requirements to satisfy statistical-process-control performance requirements (Presentation, English)

On January 6, 2014, I delivered a talk with the title displayed above.

The talk was given in the framework of a workshop organized by the Open University of Israel (see details at the bottom of the opening screen of the presentation). It was based on my article of 2004:

Shore, H. (2004). Determining measurement error requirements to satisfy statistical process control performance requirements. IIE Transactions, 36(9): 881-890.

A link to this presentation, in PDF format, is given below:

Open University_Measurement Error and SPC_Haim Shore Presentation_Jan 2014

The lecture (in English) may be viewed at: