
What is a Semi-Repetitive Process? AI-Generated Summary

In this post, we present a Google Chrome AI-generated summary of what a semi-repetitive process is, together with major related references (some minor editing has been applied, as deemed necessary):

What is a semi-repetitive process?

A semi-repetitive process is a production or operational cycle that lies between repetitive (constant work content) and non-repetitive (memoryless) processes. It has a characteristic, recognizable structure (mode) but includes random, partial variations in work content between cycles. Such processes are partially predictable, with variations often caused by changes in “process identity”. 

Key Usage Examples

  • Medical Procedures: Open-heart surgeries are semi-repetitive; while they have a common core, the number of bypasses varies between patients.
  • Manufacturing: Assembly lines that are easily stopped, started, or adjusted for small batches, often termed semi-continuous.
  • Business Operations: Administrative tasks like order entry, bank reconciliation, and handling customer calls in a call center.
  • Industrial Processes: Semi-continuous pyrolysis used for bio-oil production, which improves upon batch inconsistencies.
  • Robotics: Robot pick-and-place tasks where the robot repeats a cycle (collecting a payload, transferring it, returning) until a shutdown command. 

Characteristics

  • Variable Duration: While a “typical” time exists (mode), it is not necessarily the average time.
  • Predictability: Unlike fully random processes, semi-repetitive processes allow for estimations of time and resources, though these become less reliable as variability increases.
  • Process Repetitiveness Measure (PRM): A metric quantifying the degree to which a process is repetitive, based on the standardized distance between the mode and the mean; it typically correlates negatively with the Coefficient of Variation (CV).
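As a rough numeric illustration of the mode-mean logic behind PRM, the sketch below simulates Gamma-distributed process times (a common duration model, per the text) and estimates the standardized mode-mean distance from a sample. The histogram-based mode estimator and all parameter values are illustrative assumptions, not part of Shore's definition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative Gamma-distributed process times (shape/scale chosen arbitrarily)
times = rng.gamma(shape=9.0, scale=2.0, size=100_000)

mean = times.mean()
sd = times.std(ddof=1)
cv = sd / mean

# Crude histogram-based mode estimate for continuous data (illustrative only)
counts, edges = np.histogram(times, bins=60)
i = counts.argmax()
mode = 0.5 * (edges[i] + edges[i + 1])

# Standardized mode-mean departure -- the quantity PRM builds on
departure = (mean - mode) / sd
print(f"CV={cv:.3f}  standardized mode-mean departure={departure:.3f}")
```

With low variability the departure is small (mode close to mean); as variability grows, both the CV and the departure increase together.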

Major References

Haim Shore, a prominent researcher in quality and reliability engineering, is the primary source for the modern academic definition and modeling of semi-repetitive processes. His work establishes a framework for analyzing processes that fall between strictly repetitive and completely random (memoryless) states. A key academic reference to his work is the four-part series (published February 2026):

Part 1: Engineering Implications of Semi-Repetitive Processes

Part 2: The Dual-Component Variation of Semi-Repetitive Processes

Part 3: Measuring Repetitiveness of Semi-Repetitive Processes

Part 4: Reliability of Process-Time Prediction for Semi-Repetitive Processes

Other Key Academic References

The following works also provide core definitions, mathematical modeling, and engineering implications:

Foundational & Supporting Works

While Shore formalized the “semi-repetitive” term, these references provide the broader context of process variability and manufacturing:

  • Manufacturing Classification: The distinction between repetitive and intermittent (job-shop) systems is traditionally referenced back to De Toni & Panizzolo (1995) in their comparative study of manufacturing characteristics.
  • Statistical Theory: For the underlying distribution theory (such as why the mode departs from the mean in these processes), Shore cites Stuart & Ord (1987), Kendall’s Advanced Theory of Statistics.
  • Automation Context: Modern references like Hosseini et al. (2015) discuss surgical duration estimation using data mining, which aligns with the need to predict semi-repetitive process outcomes.

Quantifying how “repetitive” a process is – the Process Repetitiveness Measure (PRM)

PRM is a statistical metric developed by Haim Shore to quantify how closely a process adheres to a “typical” repeatable cycle. It serves as a bridge between strictly repetitive systems (where work content is constant) and non-repetitive ones (where there is no discernible typical state).

Conceptual Basis: Mode-Mean Departure

The fundamental logic of PRM is that in a repetitive process, the distribution of completion times is highly concentrated around a single value (the mode). As a process becomes semi-repetitive or non-repetitive, the “typical” time (mode) drifts further away from the average time (mean) due to increasing random variations in work content. Refer to Why the mode departs from the mean ‒ A short communication.
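A minimal numeric sketch of this drift, using a lognormal duration model (an assumption chosen for illustration because its mean and mode have closed forms): as the dispersion parameter grows, the mode falls increasingly below the mean while the CV rises.

```python
import math

# Lognormal(mu, sigma): mean = exp(mu + sigma^2/2), mode = exp(mu - sigma^2),
# CV = sqrt(exp(sigma^2) - 1). Larger sigma -> the mode drifts below the mean.
mu = math.log(30.0)  # arbitrary baseline scale
rows = []
for sigma in (0.05, 0.2, 0.5, 1.0):
    mean = math.exp(mu + sigma**2 / 2)
    mode = math.exp(mu - sigma**2)
    cv = math.sqrt(math.exp(sigma**2) - 1)
    rows.append((sigma, mean, mode, cv))
    print(f"sigma={sigma:.2f}  mean={mean:7.2f}  mode={mode:6.2f}  CV={cv:.2f}")
```

The run shows the mode-mean gap and the CV growing together, which is the intuition the PRM formalizes.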

Metric Definition

PRM is defined as the standardized distance between the mode and the mean of the process-time distribution.

Mathematical Components

It is typically expressed in terms of the first four statistical moments (mean, variance, skewness, and kurtosis). Shore's research demonstrates that for many practical applications the Coefficient of Variation (CV), the ratio of the standard deviation to the mean, is a reliable and simpler proxy: PRM and 1-CV are often positively and linearly correlated, and for specific distributions, like the Gamma distribution (commonly used for surgery times), they are mathematically equivalent. Using 1-CV is often preferred because estimating the third and fourth moments, required for a full PRM calculation, involves larger standard errors and requires significantly more data.
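For the Gamma case mentioned above, the equivalence can be checked directly. In a brief sketch (the function name and parameter values are my own), the closed-form Gamma moments are mean = k·theta, mode = (k−1)·theta for k ≥ 1, and sd = sqrt(k)·theta, so the standardized mode-mean distance reduces to 1/sqrt(k), which is exactly the CV:

```python
import math

def standardized_mode_mean_distance(k: float, theta: float) -> float:
    """(mean - mode) / sd for a Gamma(shape=k, scale=theta), k >= 1."""
    mean = k * theta
    mode = (k - 1.0) * theta
    sd = math.sqrt(k) * theta
    return (mean - mode) / sd

# For the Gamma, this distance equals CV = 1/sqrt(k), independent of theta
for k in (1.0, 4.0, 16.0, 100.0):
    d = standardized_mode_mean_distance(k, theta=2.5)
    cv = 1.0 / math.sqrt(k)
    print(f"k={k:5.1f}  distance={d:.4f}  CV={cv:.4f}")
```

Note that theta cancels out, so the result depends only on the shape parameter k, i.e., on the relative (not absolute) variability of the process.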

Application in Predictability

The PRM is used to determine the predictability threshold of a process.

Predictable: High PRM (low CV) indicates a strong “process identity”, where completion times can be accurately forecasted (taking into account random error variation).

Unacceptable: When PRM drops below a certain statistical criterion, the process ceases to be predictable, meaning management can no longer rely on time estimates for scheduling (e.g., in operating room utilization).
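A hedged sketch of how such a screening step might look in practice follows; the threshold value, function name, and duration samples below are illustrative assumptions, not the statistical criterion derived in the series:

```python
import statistics

CV_THRESHOLD = 0.5  # assumed cutoff, for illustration only

def is_predictable(durations: list[float], threshold: float = CV_THRESHOLD) -> bool:
    """Flag a process as predictable when its sample CV is at or below the threshold."""
    mean = statistics.fmean(durations)
    sd = statistics.stdev(durations)
    return (sd / mean) <= threshold

# Hypothetical duration samples (minutes)
surgeries = [92.0, 105.0, 98.0, 110.0, 101.0, 95.0]   # tight around a typical time
ad_hoc_jobs = [12.0, 95.0, 40.0, 310.0, 7.0, 150.0]   # no discernible typical time
print(is_predictable(surgeries), is_predictable(ad_hoc_jobs))  # True False
```

The first sample would pass the screen (low CV, strong process identity); the second would fail it, meaning time estimates could not be relied upon for scheduling.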

Summary Table of Process Types

Process Type | Work Content | PRM / CV Level | Predictability
Repetitive | Constant | Very high PRM / low CV | Certain (apart from error)
Semi-Repetitive | Partially variable | Moderate PRM / moderate CV | Statistical / probabilistic
Non-Repetitive | Memoryless (no typical work content) | Low PRM / high CV | Impossible

Engineering Implications of Semi-Repetitive Processes (4-part Series on “Wiley StatsRef: Statistics Reference Online”; Now Published)

I am pleased to share that my new 4-part series on semi-repetitive processes is now published (February 18, 2026).

Below, please find abstracts and links for all four parts.

Part 1: Engineering Implications of Semi-Repetitive Processes

Abstract: Process predictability may be impaired in two ways — by lack of process information and by lack of process repetitiveness (process is partially repetitive (semi-repetitive) or not repetitive at all). In this four-part series, we address statistical engineering implications of the latter, namely, how lack of complete repetitiveness affects engineering and managerial decisions, required in the analysis and design of semi-repetitive processes and in their management. In this first part, we deliver an overview of the other three parts of the series, addressing statistical engineering questions and problems this series is intended to respond to, and the adaptations needed (relative to repetitive or non-repetitive processes). In particular, we address the dual-component variation of semi-repetitive processes (second part), measuring process repetitiveness (third part) and assessing reliability of process-time predictions, as we move from repetitive to semi-repetitive to non-repetitive processes (fourth part).

Part 2: The Dual-Component Variation of Semi-Repetitive Processes

Abstract: This is the second of a four-part series on engineering implications of semi-repetitive (SR) processes. In this part, we briefly summarize the “Random Identity Paradigm”, and in compliance with this paradigm make a distinction between two sources of variation affecting SR processes, identity/work-content instability and error. This dual-component variation appreciably affects distributions associated with SR processes. We formulate requirements for models of the dual-component variation and review examples of published models that fulfill these requirements. Adding a new requirement relating to error variation, a new model is partially developed that fulfills this requirement. The link between process repetitiveness and process predictability is addressed as preparation for the third part of this series.

Part 3: Measuring Repetitiveness of Semi-Repetitive Processes

Abstract: This is the third of a four-part series on engineering implications of semi-repetitive processes. In the fourth part, we address how a process's degree of repetitiveness affects its predictability. Here we explore the measurement of process repetitiveness. A measure of the latter has been published, denoted the Process Repetitiveness Measure (PRM). It is based on the standardized departure of the mode from the mean and is expressed in terms of the first four moments of the process distribution. Any measure that can be shown to be linearly related to PRM may obviously also serve to measure process repetitiveness. In this article, we explore two additional measures – a probability measure and one based on the coefficient of variation (CV). We show that CV is qualified for this role, having the added benefit of sparing the need to estimate third and fourth moments (known for their large standard errors). CV is evaluated both theoretically, by examining a small sample of arbitrarily selected statistical distributions, and empirically, using a database of surgery durations.

Part 4: Reliability of Process-Time Prediction for Semi-Repetitive Processes

Abstract: This is the fourth of a four-part series on “Engineering Implications of Semi-repetitive Processes”. In Part 3, we have examined and compared several candidate measures to evaluate process repetitiveness, the basis for evaluating predictability of a semi-repetitive process. In particular, we have evaluated the coefficient of variation (CV) and found it to be statistically linearly related to process repetitiveness measure (PRM), which measures process repetitiveness based on the standardized distance of the mode from the mean. In this entry, we employ CV to address how process degree of repetitiveness affects its predictability. More specifically, we formulate for semi-repetitive processes a statistical criterion by which to determine when process-time predictions cease to be acceptable due to insufficient process repetitiveness.