Categories
General, Historical Coincidences, My Research on the Bible and Biblical Hebrew

What is “The Chosen People”? Do the Jews Benefit from It? Do They Have a Choice? — Bible Answers

The term “Chosen People” is assumed to be taken from the Bible. It therefore seems appropriate to refer to the Bible for an accurate description of what it means to be God’s “Chosen People” and, no less important, to learn how the Jews “benefit” from being the “Chosen People”, and whether they have any choice to cease being so.

The Bible, in its totality, is incredibly specific and accurate in answering all these questions.

We start with a single verse from the Bible that, we believe, best answers the first two questions (What is the “Chosen People”? What is the “benefit”?). It is taken from the prophet Amos (3:2):

“רַק אֶתְכֶ֣ם יָדַ֔עְתִּי מִכֹּ֖ל מִשְׁפְּח֣וֹת הָֽאֲדָמָ֑ה עַל־כֵּן֙ אֶפְקֹ֣ד עֲלֵיכֶ֔ם אֵ֖ת כָּל־עֲו‍ֹנֹֽתֵיכֶֽם:”

“Only you did I know of all the nations of the earth; therefore, I will visit upon you all of your iniquities.”

In a single verse, in just thirteen Hebrew words, the prophet asserts explicitly why the Jewish nation is “Chosen”, and what that entails.

  • The Jewish nation is “Chosen” because: “Only you did I know of all the nations of the earth;”
  • The “benefit” to the Jewish people: “Therefore, I will visit upon you all of your iniquities.”

In other words, you, the Jewish people, keep the covenant with God, or else…

What covenant?

The Jewish nation has a covenant with Jehovah God:

  • Isaiah (43:10, 12):

“You are My witnesses,” says Jehovah, “and My servant whom I chose,” in order that you know and believe Me, and understand that I am He; before Me no god was formed and after Me none shall be.”

“I told and I saved, and I made heard and there was no stranger among you, and you are My witnesses,” says Jehovah, “and I am God”.

  • Leviticus (19:1-2):

“And Jehovah spoke to Moses, saying, “Speak to the entire congregation of the Children of Israel, and say to them, holy shall you be, for I, Jehovah, your God, am holy.””

Question: What does “being holy” require?

Answer: Keeping the highest moral standards, as specified in the Ten Commandments and their derivatives, and as specified in some detail in Leviticus, Chapter 19 (quoted above), and elsewhere in the Torah (see also “Becoming Holy” — The Bible Prescription).

Question: What would occur if the Jewish people decided to cease serving as “The Chosen People”, as “Witnesses”, and violate the covenant with Jehovah?

Answer: The prophet Ezekiel does not mince words in delivering his stern historic warning to the Jewish people (Ezekiel 20:32-33):

“But that which comes unto your mind shall be not, that you are saying, ‘Let us be like the nations, like the families of the lands, to serve wood and stone.’  As I live, says Jehovah God, if not with a strong hand and with an outstretched arm and with poured out fury will I be king over you.” (see also Four Major Bible Messages).

Categories
My Research on the Bible and Biblical Hebrew

The True Meaning of the First Commandment

The first of the Ten Commandments is bizarre:

“I am Jehovah, your God, Who took you out of the land of Egypt, out of a house of slaves.” (Exodus 20:2).

This “Commandment” is bizarre on two counts.

First, the Bible starts with God as Creator:

“In the beginning God created the heavens and the earth.” (Genesis 1:1).

Why, then, does God introduce Himself as God of History (“Who took you”), and not as God of Creation (“In the beginning God created”)?

Secondly, unlike the rest of the Ten Commandments, which are articulated as commands, this commandment is articulated differently: not as a command but as a statement of fact (“I am Jehovah…”, God of history). Why this bizarre way of “commanding”?

To answer these two puzzles, it is instructive to learn that in the original Hebrew there are no “Commandments”, only Devarim (from the Hebrew verb “to speak”, namely, Divine communication in the form of a dialogue). In Genesis 1 God does not “speak”; God “says”! And this implies a monologue, namely, a Divine command. God’s command is obligatory. It is always fulfilled, to the letter (Genesis 1). Conversely, the Ten Commandments are Divine utterances, part of a dialogue (“speaking”). And in this dialogue, human beings preserve their most basic condition of existence: Free Will. We, human beings, are free to decide whether we wish to pursue the Ten Commandments, or otherwise. The Ten Commandments are for us to decide, out of the precious free will bestowed on us by our Creator.

Once the dependence on free-will in pursuing the “Ten Commandments” is properly grasped, the First Commandment is bizarre no more.

Regarding the first puzzlement (“Why God of History and not God of Creation?”), the First Commandment requires of us to accept, out of free will, that God rules history. Therefore, there is purpose to life on Earth, both for the collective (in the form of nations and other forms of society) and for our own personal existence on planet Earth (“Divine Providence”). God of Creation is easy to adopt as fact. It seems logical (to many, not all…). Not so with God of History. This requires a high level of faith. It is not nearly as self-evident (as God of Creation is). The God of history hides Himself. The prophet Isaiah recognizes the difficulty, and states it in unambiguous terms:

“Indeed, You are a God Who conceals Himself, the God of Israel, Savior.” (Isaiah 45:15).

Therefore, addressing the second puzzlement (“Why is the First Commandment articulated as a statement of fact?”), this commandment is indeed a command. It requires of us to accept as fact an invisible, unprovable reality: that Jehovah is indeed also God of History, watching over what transpires in His world and leading it towards its Ultimate Goal:

“And I will reveal Myself in My greatness and in My holiness and will be recognized in the eyes of many nations, and they will know that I am Jehovah.” (Ezekiel 38:23).

“For then I will convert the peoples to a pure language that all of them call in the name of Jehovah, to worship Him of one accord.” (Zephaniah 3:9).

Categories
General

King, Judge and… Quality Inspector (on Judicial Reform)

A new post by Professor Haim Shore on The Blogs of The Times of Israel:

Haim Shore_ King, Judge and… Quality Inspector (about the Judicial Reform)_September 12 2025

Categories
Podcasts (audio)

Reading the Bible Prophets (Isaiah; Chps. 61, 62; Hebrew; Hebrew/English text; Post/Podcast)

In this post/podcast, we read end-time scenarios, as prophesied by the prophet Isaiah (verses from Chps. 61, 62).

Following the audio is a YouTube podcast, and then the text (Hebrew/English), available for download as a PDF file:

Categories
Podcasts (audio)

Reading the Bible Prophets (Malachi; Hebrew; Hebrew/English text; Post/Podcast)

In this post we read end-time scenarios, as prophesied by Malachi (Chapter 3). Enhanced audio is linked, as of March 2025.

Following the audio are two YouTube podcasts (the first a newer one, with enhanced audio), and then the text (Hebrew/English), available for download as a PDF file:

Categories
My Research on the Bible and Biblical Hebrew, Podcasts (audio)

Hebrew Bible Mathematical Precision (Podcast)

For millennia, Jewish rabbis, and other scholars of monotheistic faith, have related to the Hebrew-Bible text as mathematically precise (even when this term was obviously not used).

The objective of this podcast is to demonstrate this precision with three examples (of many) where translation causes the original Hebrew meaning to be lost:

Categories
Historical Coincidences, Podcasts (audio)

Why Did a Jewish Rabbi Wonder that the Sun in Hebrew Is Not Named Eretz? (Podcast; Spanish)

Living in the period of the geocentric worldview, a Jewish rabbi wondered why (indeed, claimed that) it is the sun that should be named Eretz (Hebrew for Earth). With the later science-based shift to the heliocentric worldview (the Sun is “still”; the Earth is “running” around it), biblical Hebrew once again proved to describe physical reality accurately:

Categories
My Research in Statistics

Where Statistics Went Wrong Modeling Random Variation

Update: A new free-access article, published in 2024 (“Why the Mode Departs from the Mean – A Short Communication”) adds a new dimension to the contents of the post below.

(Related podcast: Where Statistics Went Wrong Modeling Random Variation (Podcast))

A model of random variation, generated by a “random variable”, is presented in Statistics in the form of a statistical distribution (like the normal or the exponential).

For example, the weight of people at a certain age is a random variable, and its observed variation may be modeled by the normal distribution; surgery duration is a random variable, and its observed variation may, under specified circumstances, be modeled by the exponential distribution.
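As a minimal illustration of this modeling step (with simulated stand-in data, not data from any actual study; all parameter values below are arbitrary assumptions), one may fit the two candidate distributions with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated stand-ins for the two examples above (all parameters arbitrary)
weights = rng.normal(loc=70.0, scale=8.0, size=500)    # body weight, kg
durations = rng.exponential(scale=45.0, size=500)      # surgery duration, min

# Fit each candidate model and inspect the estimated parameters
mu, sigma = stats.norm.fit(weights)
_, mean_dur = stats.expon.fit(durations, floc=0)       # location fixed at zero

print(f"weight   ~ Normal(mu={mu:.1f}, sigma={sigma:.1f})")
print(f"duration ~ Exponential(mean={mean_dur:.1f} min)")
```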

In the Statistics literature, one may find statistical distributions modeling random variation directly observed in nature (as the above two examples), or random variation associated with a function of random variables (like a sample average calculated from a sample of n observations).

To date, within the Statistics literature, one may literally find thousands of statistical distributions.

Is this acceptable?

Or perhaps we are wrong in how we model random variation?

Pursuant to a large-scale project in which I modeled surgery times (a research effort reported in three recent publications: Shore 2020ab, 2021), I have reached certain conclusions about how random variation should be modeled so as to be more truthful to reality. The new approach seems to reduce the problem of the insanely gigantic number of distributions currently appearing in the Statistics literature.

I have summarized these new insights in a new paper, carrying the title of the post.

The Introduction section of this paper is posted below. Underneath it, one may find a link to the entire article.

Where Statistics Went Wrong Modeling Random Variation

  1. Introduction

The development of thousands of statistical distributions to date is puzzling, if not bizarre. An innocent observer may wonder: while in most other branches of science the historical development shows a clear trend towards unifying the “objects of enquiry” (forces in physics; properties of materials in chemistry; human characteristics in biology), why has this not taken place within the mathematical modelling of random variation? Why, in Statistics, the branch of science engaged in modeling random variation observed in nature, does the number of “objects of enquiry” (statistical distributions) keep growing?

In other words: Where has Statistics gone wrong modeling observed random variation?

Based on new insights gained from recent personal experience with data-based modeling of surgery time (resulting in a trilogy of published papers: Shore 2020ab, 2021), we present in this paper a new paradigm for modeling observed random variation. A fundamental insight is a new perception of how observed random variation is generated, and how it affects the form of the observed distribution. The latter is perceived to be generated not by a single source of variation (as the common concept of a “random variable”, r.v., implies), but by two interacting sources of variation. One source is “identity”, formed by “identity factors”. This source is represented in the distribution by the mode (if one exists), and it may generate identity variation. A detailed example of this source, regarding the modeling of surgery times, is presented in Shore (2020a). The other source is an interacting error, formed by “non-identity/error factors”. This source generates error variation (separate from identity variation). Combined, the two interacting sources generate the observed random variation. The random phenomenon generating the latter may be in one of two extreme states: an identity-full state (there is only error variation) and an identity-less state (identity factors become so unstable as to be indistinguishable from error factors; identity vanishes; no error can be defined). Scenarios residing between these two extreme states reflect a source of variation with partial lack of identity (LoI).
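A minimal numeric sketch of the two extreme states just described (the forms chosen here, a constant mode with an additive normal error versus a plain exponential, are illustrative assumptions; the paper's own formulation appears in Section 5):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000

# Identity-full state: identity is a constant (the mode); only error varies.
# An additive normal error is assumed here purely for illustration.
identity_full = 1.0 + rng.normal(0.0, 0.1, size=n)

# Identity-less state: identity factors are indistinguishable from error
# factors, identity vanishes, and the observed variation is memoryless.
identity_less = rng.exponential(scale=1.0, size=n)

for name, x in [("identity-full", identity_full), ("identity-less", identity_less)]:
    print(f"{name:14s} skewness={stats.skew(x):5.2f}  excess kurtosis={stats.kurtosis(x):5.2f}")
# Shape moments travel from the normal's (0, 0) to the exponential's (2, 6).
```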

The new “Random Identity Paradigm”, attributing two contributing sources to observed random variation (rather than a single one, as assumed to date), has far-reaching implications for the true relationships between location, scale and shape moments. These are probed and demonstrated extensively in this paper, with numerous examples from the current Statistics literature (see, in particular, Section 3).

In this paper, we first introduce, in Section 2, basic terms and definitions that form the skeleton of the new random-identity paradigm. Section 3 addresses implications of the new paradigm in the form of six propositions (Subsection 3.1) and five predictions (presented as conjectures, Subsection 3.2). The latter are empirically supported, in Section 4, with examples from the published Statistics literature. A general model for observed random variation (Shore, 2020a), bridging the gap between current models for the two extreme states (the normal, for the identity-full state; the exponential, for the other), is reviewed in Section 5, and its properties and implications are probed. Section 6 delivers some concluding comments.

A link to the complete article:

Categories
My Research on the Bible and Biblical Hebrew, Shorties

Shorty*: How Do the Ten Commandments Comport with Free-Will?

A Divine Commandment is always fulfilled, to the letter.

An example:

“And Elohim said: “Let there be light”, and there was light” (Genesis 1:3).

If that is so.

If a Divine command, by definition, is always fulfilled:

  • How is it that the same has not materialized with regard to another set of Divine Commandments, the Ten Commandments?
  • How come, since their inception at Mount Sinai, about three thousand three hundred years ago, we have been witnessing violations of the Ten Commandments by the human species throughout history, abundantly, continuously, right, left and center?

And more generally:

How do the Ten Commandments comport with free-will, endowed by The Creator onto humankind, the created?

Free-will is emphasized in the Bible, again and again:

  • “See, I set before you today life, and that which is good; and death, and that which is bad” (Deuteronomy 30:15);
  • “I call Heaven and earth to witness this day against you that I have set before you life and death, blessing and cursing; Therefore, choose life that both you and your seed may live” (Deuteronomy 30:19).

Hebrew prophets, likewise, do not cease to insist (emphasis mine):

  • “He has told thee, O man, what is good and what does Jehovah require of you, but to do justice and love mercy, and to walk humbly with your God” (Micah 6:8).

If emphasis on free-will is so prevalent throughout the Bible, and given the widespread ignoring of the Ten Commandments throughout history, how should we account for this seeming inconsistency in the Bible?

The answer to this intriguing question is simple and straightforward:

In its original biblical Hebrew, the Bible does not have a concept of “Ten Commandments”.

Instead, biblical Hebrew for the Ten Commandments is “Devarim”.

The root of this word, in its verbal form, means to speak. “Devarim”, literally, implies divine utterances.

A thorough discussion of this concept, with biblical quotes, is delivered in:

“Diber” or “Dever” – Two Modes of Divine Dialogue with Humankind in a World of Free-Will .

* Shorty is a short post

Categories
General, Statistical Applications

Why Surgery-Duration Predictions are So Poor, and a Possible Remedy

(Related podcast: Why Predictions of Surgery-Duration are So Poor, and a Possible Remedy (Podcast)).

Operating theatres are the most expensive resource at the disposal of hospitals. This renders optimal scheduling of surgeries to operating rooms a top priority. A pre-condition to optimal scheduling is that accurate predictions of surgery-duration be available. Much research effort has been invested in recent years in developing methods that improve the accuracy of surgery-duration predictions. This ongoing effort includes both traditional statistical methods and newer Artificial Intelligence (AI) methods. The state of the art of these methods, with relevant peer-reviewed literature, has recently been summarized by us in a new entry on Wikipedia, titled “Predictive Methods for Surgery Duration”.

Personally, I was first exposed to the problem of predicting surgery-duration over thirty years ago, when I was involved in a large-scale project encompassing all governmental hospitals in Israel (at the time). Partial results of this effort were reported in my published paper of 1986, and further details can be found in my more recent paper of 2020. Both articles are listed in the Literature section at the end of this post (for podcast listeners, this list may be found on haimshore.blog).

My second involvement in developing predictive methods for surgery-duration came in more recent years, culminating in three peer-reviewed published papers (Shore 2020, 2021ab; see references below).

Surgery-duration is known to be highly volatile. The larger the variability between surgeries, the less accurate the prediction may be expected to be. To reduce this variability, newly devised predictive methods for surgery-duration tend to concentrate on subsets of surgeries, classified according to some classification system. It is assumed that via this classification, prediction accuracy may be enhanced. A common method to classify surgeries, implemented worldwide, is Current Procedural Terminology (CPT®). This coding system assigns, in a hierarchical fashion, particular codes to subsets of surgeries. In doing so, variability between surgeries sharing the same CPT code is expected to be reduced, allowing for better prediction accuracy.
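A toy sketch of why classification helps (synthetic numbers with illustrative code labels; nothing here is real hospital data): grouping by code strips out the between-code component of variability.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Three illustrative code groups, each with its own typical duration (minutes)
frames = [
    pd.DataFrame({"cpt": code,
                  "duration_min": rng.normal(mean, 0.15 * mean, size=300)})
    for code, mean in [("A", 40.0), ("B", 70.0), ("C", 240.0)]
]
df = pd.concat(frames, ignore_index=True)

print("pooled std (min):", round(float(df["duration_min"].std()), 1))
print("within-code std (min):")
print(df.groupby("cpt")["duration_min"].std().round(1))
# The pooled std is dominated by differences BETWEEN codes; within each
# code the remaining variability is far smaller -- the basis for
# classification-based prediction.
```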

A second effort to increase accuracy is to include, in the predictive method, certain factors, known prior to surgery, that contribute variability to surgery-duration. It is hoped that by taking account of these factors in the predictive method, unexplained variability in surgery-duration will be reduced, thereby enhancing prediction accuracy (examples will soon be given).

A third factor that influences accuracy is the amount of reliable data used to generate predictions. Given recent developments in our ability to process large amounts of data, commonly known as Big Data, Artificial Intelligence (AI) methods have been summoned to assist in predicting surgery times.

These new methods and others are surveyed more thoroughly in the aforementioned entry on Wikipedia.

The new methods notwithstanding, current predictive methods for surgery-duration still deliver unsatisfactory accuracy.

Why is that so?

We believe that a major factor in the poor performance of current predictive methods is a lack of essential understanding of what constitutes the major sources of variability in surgery-duration. Based on our own personal experience, as alluded to earlier, and also on our professional background as industrial engineers, specializing in the analysis of work processes (of which surgeries are an example), we believe that there are two sets of factors that generate variability in surgery-duration: a set of major factors and a set of secondary factors. We denote these Set 1 and Set 2 (henceforth, we refer only to variability between surgeries within a subset of the same code):

Set 1 — Two Major Factors:

  • Factor I. Work-content instability (possibly affected by variability in patient condition);
  • Factor II. Error variability.

Set 2 — Multiple Secondary Factors, such as: patient age, professional experience and size of the medical team, number of surgeries a surgeon has to perform in a shift, type of anaesthetic administered.

Let us explain why, in contrast to current practices, we believe that work-content instability has a critical effect on prediction accuracy, and why accounting for it in the predictive method is crucial to improving the accuracy currently obtained via traditional methods.

To prepare predictions for any random phenomenon assumed to be in steady state, the best approach is to define its statistical distribution and estimate its parameters based on real data. Once the distribution is completely defined, various statements about the conduct of the random phenomenon (like surgery-duration) can be made, as in the examples below (followed by a short sketch).

For example:

  • What is the most likely realization (given by the distribution’s mode)?
  • What is the middle value, which delivers equal probabilities for any realization to be larger or smaller than that value (expressed by the distribution’s median)?
  • What is the probability that any realization of the random phenomenon exceeds a specified value (calculated from the cumulative distribution function, CDF)?
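A short sketch of these three statements, assuming (purely for illustration) that surgery-duration follows a lognormal distribution with arbitrarily chosen parameters:

```python
import numpy as np
from scipy import stats

# Assumed lognormal model for surgery-duration, in hours (illustrative only)
s, scale = 0.5, 1.2                    # scale = exp(mu)
sd = stats.lognorm(s=s, scale=scale)

mode = scale * np.exp(-s**2)           # lognormal mode = exp(mu - sigma^2)
median = sd.median()                   # lognormal median = exp(mu)
p_exceed = sd.sf(2.0)                  # Pr(duration > 2 h) = 1 - CDF(2)

print(f"most likely duration: {mode:.2f} h")
print(f"median duration     : {median:.2f} h")
print(f"Pr(duration > 2 h)  : {p_exceed:.3f}")
```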

Understanding that complete definition of the distribution is the best approach to predicting surgery-duration, let us next explain what type of distribution one can expect in the two extreme states regarding the two major factors of Set 1:

State 1. There is no variability in work-content (there is only error variability);

State 2. There is no error (error variability is zero; there is only work-content variability).

The two states define two different distributions for surgery-duration.

The first state, State 1, implies that the only source of variability is error. This implies the normal distribution, for an additive error, or the log-normal distribution, for a multiplicative error (namely, an error expressed as a percentage).

State 2, lack of error variability, by definition can only materialize when there is no typical value (like the mode) relative to which error can be defined. Since no definition of error is feasible, error variability becomes zero. For work processes like surgery, this can happen only when there is no typical work-content. In statistical terms, this is a state of lack-of-memory. An example is the duration of repair jobs at a car garage, relating to all types of repair. The distribution typical of such situations is the memoryless exponential.
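The lack-of-memory property is easy to verify numerically: for an exponentially distributed duration, having already lasted s units of time says nothing about how much longer the job will run (a small check, with arbitrary s and t):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=1_000_000)

# Memorylessness: Pr(X > s + t | X > s) = Pr(X > t)
s, t = 0.7, 1.0
lhs = (x > s + t).sum() / (x > s).sum()
rhs = (x > t).mean()
print(f"Pr(X > s+t | X > s) = {lhs:.3f},  Pr(X > t) = {rhs:.3f}")
# Both are ~exp(-1) = 0.368: no typical work-content, no positive mode.
```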

We learn from this discussion that any statistical model of surgery-duration, from which its distribution may be derived, needs to include, as extreme cases, both the normal/lognormal distributions and the exponential distribution.

This is a major constraint on any model for the distribution of surgery-duration, and it has so far eluded those engaged in developing predictive methods for surgery-duration. Lack of knowledge of basic principles of industrial engineering, together with total ignorance of how instability in the work-content of a work process (like surgery) influences the form of the distribution, probably constitutes the major culprit for the poor current state of the art of predicting surgery-duration.

In Shore (2020), we have developed a bi-variate model for surgery-duration, which delivers not only the distributions of surgery-duration in the extreme states (State 1 and State 2), but also the distributions of intermediate states residing between the two. The two components of the bi-variate model represent work-content and error as two multiplicative random variables, with relative variabilities (standard deviations) that gradually change as surgery-duration moves from State 1 (the normal/lognormal case) to State 2 (the exponential case).
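The exact parameterization of the bi-variate model is developed in Shore (2020); the sketch below only mimics its qualitative behavior under stated assumptions (a gamma-shaped work-content whose instability grows with a knob lam, times a lognormal error whose variability shrinks), to show the shape traveling from the lognormal case to the exponential case:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def surgery_duration(lam, n=200_000):
    """Qualitative stand-in for the two-component multiplicative model.
    lam ~ 0: State 1 (stable work-content, only multiplicative error);
    lam = 1: State 2 (memoryless work-content, no definable error).
    The gamma/lognormal interpolation is an assumption, not Shore (2020)."""
    k = 1.0 / max(lam, 1e-9)                          # gamma shape
    work = rng.gamma(shape=k, scale=1.0 / k, size=n)  # work-content (mean 1)
    error = rng.lognormal(0.0, 0.15 * (1.0 - lam), size=n)
    return work * error

for lam in (0.0, 0.5, 1.0):
    d = surgery_duration(lam)
    print(f"lam={lam:.1f}  skewness={stats.skew(d):5.2f}  "
          f"excess kurtosis={stats.kurtosis(d):5.2f}")
# Shape moments move from the mild lognormal values toward the
# exponential's (2, 6) as work-content instability grows.
```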

What do we hope to achieve by publishing this post (and the accompanying podcast)?

We hope that individuals engaged in developing predictive methods for surgery-duration internalize the grim reality that, unless the following two conditions are met:

  1. their predictive method allows the normal/lognormal and the exponential to serve as exact distributions of surgery-duration at the extreme states;
  2. their predictive method allows intermediate states, spanned on a continuous spectrum between the two extreme states, to converge smoothly to these states (as in Shore, 2020),

the likelihood that the accuracy of predictive methods for surgery-duration will improve anytime soon remains, as it is today, extremely slim.

Literature

[1] Shore, H (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23:157-181. Available on Shore’s ResearchGate page.

[2] Shore, H (2020). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6(2):142-166. DOI: 10.1080/23737484.2020.1740066.

[3] Shore, H (2021a). SPC scheme to monitor surgery-duration. Quality and Reliability Engineering International, 37:1561-1577. DOI: 10.1002/qre.2813.

[4] Shore, H (2021b). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. DOI: 10.1080/00207543.2021.2009141.

[5] Shore, H (2021c). “Predictive Methods for Surgery Duration”. Wikipedia. April 16, 2021.

Categories
My Research in Statistics

Where Did Statistics Go Wrong? And Why?

Update: A new free-access article, published in 2024 (“Why the Mode Departs from the Mean – A Short Communication”) adds a new dimension to the contents of the post below.

  1. Introduction

Is Statistics, a branch of mathematics that serves as a central tool for investigating nature, heading in the right direction, as other branches of science that explore nature are?

I believe it is not.

This belief is based on my own personal experience in a recent research project aimed at modeling surgery time (separately for different subcategories of surgeries). This research effort culminated in a trilogy of published articles (Shore 2020ab, 2021). The belief is also based on my life-long experience in academia: I am a professor emeritus, after forty years in academia and scores of articles published in refereed professional journals, dealing with both the theory and application of Statistics. In this post, I deliver an account of my recent personal experience with modeling surgery time, conclusions I have derived from it, and conclusions drawn from my cumulative experience in the analysis of data and in data-based modeling.

The post is minimally technical, so that a layperson with little knowledge of basic terms in Statistics can easily understand it.

We define a random phenomenon as one associated with uncertainty, for example, “Surgery”. A random variable (r.v) is any random quantitative property defined on a random phenomenon. Examples are surgery medical outcome (Success: X=1; Failure: X=0), surgery duration (X>0), or a patient’s maximum blood pressure during surgery (Max. X).

In practice, an r.v is characterized by its statistical distribution. The latter delivers the probability, P (0≤P≤1), that the random variable, X, assumes a certain value (if X is discrete), or that it falls in a specified interval (if X is continuous). For example, the probability that the surgery outcome will be a success, Pr(X=1), or the probability that surgery duration (SD) exceeds one hour, Pr(X>1).

Numerous statistical distributions have been developed over the centuries, starting with Bernoulli (1713), who derived what is now known as the binomial distribution, and Gauss (1809), who derived the “astronomer’s curve of error”, nowadays known as the Gauss distribution, or the normal distribution. Good accounts of the historical development of the science of probability and statistics to the present day appear at Britannica and Wikipedia (entry: History_of_statistics).

A central part of these descriptions is, naturally, the development of the concept of statistical distribution. At first, the main source of motivation was games of chance. This later transformed into the study of errors, as we may learn from the development of the normal distribution by Gauss. In more recent years, emphasis has shifted to describing random variation as observed in all disciplines of science and technology, resulting, to date, in thousands of new distributions. The scope of this ongoing research effort may be appreciated from the sheer volume of the four-volume Compendium on Statistical Distributions by Johnson and Kotz (First Edition 1969-1972, updated periodically with Balakrishnan as an additional co-author).

The development of thousands of statistical distributions over the years, up to the present, is puzzling, if not bizarre. An innocent observer may wonder: how is it that in most other branches of science the historical development shows a clear trend towards convergence, while in modeling random variation, the most basic concept used to describe processes of nature, the opposite has happened, namely, divergence?

Put in more basic terms: why is there in science, in general, a continuous effort to unify, under the umbrella of a unifying theory, the “objects of enquiry” (forces in physics; properties of materials in chemistry; human characteristics in biology), while in the mathematical modelling of random variation this has not happened? Why, in Statistics, does the number of “objects of enquiry”, instead of diminishing, keep growing?

And more succinctly: Where did Statistics go wrong? And why?

I have already had the opportunity to address this issue (the miserable state-of-the-art of modelling random variation) some years ago, when I wrote (Shore, 2015):

““All science is either physics or stamp collecting”. This assertion, ascribed to physicist Ernest Rutherford (the discoverer of the proton, in 1919) and quoted in Kaku (1994, p. 131), intended to convey a general sentiment that the drive to converge the five fundamental forces of nature into a unifying theory, nowadays a central theme of modern physics, represents science at its best. Furthermore, this is the right approach to the scientific investigation of nature. By contrast, at least until recently, most other scientific disciplines have engaged in taxonomy (“bug collecting” or “stamp collecting”). With “stamp collecting” the scientific inquiry is restricted to the discovery and classification of the “objects of enquiry” particular to that science, however this never culminates, as in physics, in a unifying theory, from which all these objects may be deductively derived as “special cases”. Is statistics a science in a state of “stamp collecting”?”

This question remains valid today, eight years later: why has the science of Statistics, a central tool to describe statistically stable random phenomena of nature, deviated so fundamentally from the general trend towards unification?

In Section 2, we enumerate the errors that, we believe, triggered this departure of Statistics from the general trend in the scientific study of nature, and outline possible ways to eliminate these errors. Section 3 is an account of the personal learning experience that I went through while attempting to model surgery duration and its distribution. This article is a personal account, for the naive (non-statistician) reader, of that experience. As alluded to earlier, the research effort resulted in a trilogy of articles, and in the new “Random identity paradigm”. The latter is addressed in Section 4, where new concepts, heretofore ignored by Statistics, are introduced (based on Shore, 2022). Examples are “random identity”, “identity variation”, “losing identity” (with characterization of the process), and “identity-full/identity-less distributions”. These concepts underlie a new methodology to model observed variation in natural processes (as contrasted with variation of r.v.s that are mathematical functions of other r.v.s). The new methodology is outlined, based on Shore, 2022. Section 5 delivers some final thoughts and conclusions.

  2. The historical errors embedded in current-day Statistics

Studying the history of the development of statistical distributions to date, we believe that Statistics’ departure from the general trend, resulting in a gigantic number of “objects of enquiry” (as alluded to earlier), may be traced to three fundamental, inter-related errors, historically committed within Statistics:

Error 1: Failure to distinguish between two categories of statistical distributions:

Category A: Distributions that describe observed random variation of natural processes;

Category B: Distributions that describe the behavior of statistics, namely, of random variables that are, by definition, mathematical functions of other random variables.

The difference between the two categories is simple: Category A is subject to certain constraints on the shape of the distribution, imposed by nature, which Category B is not (the latter is subject to other constraints, imposed by the structure of the mathematical function describing the r.v). As we shall soon realize, a major distinction between the two sets of constraints (not the only one) is the permissible values for skewness and kurtosis. While for Category A these fluctuate in a specified interval, confined between the values of an identity-full distribution and an identity-less distribution (like the normal and the exponential, respectively; both types of distribution shall be explained soon), for Category B such constraints do not hold.
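The two boundary values of that interval are easy to exhibit: the (skewness, excess kurtosis) pairs of the normal and the exponential are fixed numbers, independent of location and scale (a quick check with scipy):

```python
from scipy import stats

# Theoretical (skewness, excess kurtosis) of the two boundary distributions
for name, dist in [("normal (identity-full)", stats.norm),
                   ("exponential (identity-less)", stats.expon)]:
    _, _, skew, kurt = dist.stats(moments="mvsk")
    print(f"{name:28s} skewness={float(skew):.0f}  excess kurtosis={float(kurt):.0f}")
# normal: (0, 0); exponential: (2, 6) -- constants, whatever the
# location/scale parameters ("non-parametric" skewness and kurtosis).
```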

Error 2: Ignoring the real nature of error:

A necessary condition for the existence of an error, indeed a basic assumption integrated implicitly into its classic definition, is that for any observed random phenomenon, and the allied r.v, there is a typical constant, an outcome of various factors inherent to the process/system (“internal factors”), and there is error (multiplicative or additive), generated by random factors external to the system/process (“external factors”). This perception of error allows its distribution to be represented by the normal, since the latter is the only distribution whose mean/mode (supposedly determined by “internal factors”) is disconnected from its standard deviation, STD (supposedly determined by a separate set of factors, the “external factors”).

A good representative of the constant, relative to which error is defined, is the raw mode or the standardized mode (raw mode divided by the STD). As perceived today, the error indeed expresses random deviations from this characteristic value (the most frequently observed value).

What happens to the error, when the mode itself ceases to be constant and becomes random? How does this affect the observed random variation or, more specifically, how is error then defined and modelled?

Statistics does not provide an answer to this quandary, except for stating that varying “internal factors”, namely, non-constant system/process factors, may produce systematic variation, and that the latter may be captured and integrated into a model for variation, for example via regression models (linear regression, nonlinear regression, generalized linear models and the like). In this case, the model ceases to represent purely random variation (as univariate statistical distributions are supposed to do). It becomes a model for systematic variation, coupled with a component of random variation (the nature of the latter may be studied by “freezing” the “internal factors” at specified values). It is generally assumed in such models that a single distribution represents the component of random variation, though possibly with different parameter values for different values of the systematic effects integrated into the model. Thus, implementing generalized linear models, the user is requested to specify a single distribution (not several), valid for all different sets of the effects’ values, as the sketch below illustrates. As we shall soon learn (Error 3), “internal factors” may produce not only systematic effects, as currently wrongly assumed, but also a different component of variation, unrecognized to date. It will be addressed next as the third error.
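(Before moving on to the third error: the point about generalized linear models can be seen directly in standard software. A GLM fit, here with statsmodels on synthetic data, accepts exactly one family, which is then assumed to hold across all covariate values; only the mean is allowed to shift.)

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
X = sm.add_constant(rng.normal(size=(200, 2)))   # intercept + 2 covariates
mu = np.exp(X @ [0.2, 0.4, -0.3])                # systematic part (log link)
y = rng.gamma(shape=2.0, scale=mu / 2.0)         # gamma responses with mean mu

# One family for ALL cases: the distributional form never changes with
# the covariates, only the mean does.
model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().params)
```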

Error 3: Failure to recognize the existence of a third type of variation (additional to random and systematic) — “Identity variation”:

System/process factors may potentially produce not only systematic variation, as currently commonly assumed, but also a third component of variation, which has passed under the radar, so to speak, in the science of Statistics. Ignoring this type of variation is the third historic error of Statistics. For reasons to be described soon (Sections 3 and 4), we denote this unrecognized type of variation “identity variation”.

  3. Modeling surgery duration — Personal learning experience that resulted in the new “Random identity paradigm”

I did not realize the enormity of the consequences of the above three errors, committed within Statistics to date, until a few years ago, when I embarked on a comprehensive research effort to model the statistical distribution of surgery duration (SD), separately for each of over a hundred medically-specified subcategories of surgeries (the latter defined according to a universally accepted standard; find details in Shore 2020a). The subject (modeling the SD distribution) was not new to me. I had been engaged in a similar effort years ago, in the eighties of the previous century (Shore, 1986). Then, based on analysis of the available data, and given the computing facilities available at the time, I divided all surgeries (except open-heart surgeries and neurosurgeries) into two broad groups: short surgeries, which were assumed to be normally distributed, and long surgeries, assumed to be exponential. There, for the first time, I became aware of “identity variation”, though it was not so named, which resulted in modeling the SD distribution differently for short surgeries (assumed to pursue a normal distribution) and long ones (assumed to be exponential). With modern computing means, and with my own cumulative experience since the publication of that paper (Shore, 1986), I thought, and felt, that a better model might be conceived, and embarked on the new project.

Probing into the available data (about ten thousand surgery times with affiliated surgery subcategories), four insights/observations became apparent:

1. It was obvious to me that different subcategories pursue different statistical distributions, beyond just differences in values of distribution’s parameters (as currently generally assumed in modeling SD distribution);

2. Given point (1), it was obvious to me that differences in distribution between subcategories should be attributed to differences in the characteristic level of work-content instability (work-content variation between surgeries within subcategory);

3. Given points (1) and (2), it was obvious to me that this instability cannot be attributed to systematic variation. Indeed, it represents a different type of variation, “identity variation”, to date unrecognized in the Statistics literature (as alluded to earlier);

4. Given points (1) to (3), it was obvious to me that any general model of surgery time (SD) should include the normal and the exponential as exact special cases.

For the naive reader, I will explain the new concept, “identity variation”. Understanding this concept will render all of the above insights clearer.

As an industrial engineer by profession, it was obvious to me, right from the beginning of the research project, that, ignoring negligible systematic effects caused by covariates (like the surgeon performing the operation), a model for SD representing only random variation in its classical sense would not be adequate to deliver a proper representation of the observed variation. Changes between subcategories in the type of distribution, as revealed by changes in distribution shape (from the symmetric shape of the normal to the extremely non-symmetric shape of the exponential, as first noticed by me in the earlier project, Shore, 1986), made it abundantly clear that the desired SD model should account for “identity loss”, occurring as we move from a repetitive process (a subcategory of repetitive surgeries, having characteristic/constant work-content) to a memory-less, non-repetitive process (a subcategory of surgeries having no characteristic common work-content). As such, the SD model should include, as exact special cases, the exponential and the normal distributions.

What else do we know of the process of losing identity, as we move from the normal to the exponential, which accounts for “identity variation”?

In fact, several changes in distribution properties accompany “identity loss”. We relate again to surgeries. Like work processes in general, surgeries too may be divided into three non-overlapping and exhaustive groups: repetitive, semi-repetitive and non-repetitive. In terms of work-content, this implies:

  • Repetitive work-processes, with constant work-content (only error generates variation; SD is normally distributed);
  • Semi-repetitive work-processes (work-content varies somewhat between surgeries, to a degree dependent on the subcategory);
  • Memory-less, non-repetitive work-processes (no characteristic work-content; for example, surgeries performed within an emergency room for all types of emergency, or service performed in a pharmacy serving customers with a varying number of items on the prescription list).

Thus, work-content, however it is defined (find an example in Shore, 2020a), forms “surgery identity”, with a characteristic value, the mode, that vanishes (becomes zero) in the exponential scenario (a non-repetitive work-process).

Let us delve somewhat deeper into the claim that a model for SD should include the normal and the exponential as exact special cases (not merely asymptotically, as, for example, the gamma tends to the normal).

There are four observations/properties that set the two distributions, the identity-full normal and the identity-less exponential, apart from other distributions:

Observation 1: The mean and standard deviation are represented by different parameters for the normal distribution, and by a single parameter for the exponential. This difference is a reflection of a reality where, in the normal scenario, one set of process/system factors (“internal factors”) produces signal only, and a separate set (“external factors”) produces noise only (traditionally modelled as a zero-mean, symmetrically distributed error). Moving away from the normal scenario towards the exponential scenario, we witness a transition towards the merging of the mean with the standard deviation, until, in the exponential scenario, both signal and noise are produced by the same set of factors, and the mean and standard deviation merge to be expressed by a single parameter. The clear distinction between “system/process factors” and “external/error factors”, typical of the normal scenario, has utterly vanished;

Observation 2: The mode, supposedly representing the typical constant on which the classical multiplicative error is defined in the normal scenario (or rather the standardized mode), shrinks as we move away from the normal towards the exponential. This movement, in reality, represents passing through semi-repetitive work-processes, with an increasing degree of work-content instability. The standardized mode finally disappears (becomes zero) in the exponential scenario. What does this signify? What are the implications?

Observation 3: For both the normal and the exponential, skewness and kurtosis are non-parametric. Why is that, and what does this signify?

Observation 4: What happens to the classic error when the r.v moves away from the normal scenario towards the exponential? Can we still hold on to the classic definition of error, given that the “internal factors”, assumed to generate a constant mode (signal), start to produce noise? How would error (in its classical sense) then be re-specified? Can an error be defined at all?
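The author's own vehicle for these transitions is the extended exponential of Shore (2022); but the qualitative movement in Observations 1-3 can already be seen in a standard stand-in, the gamma family with shape k and scale θ, for which mode = (k-1)θ and STD = √k·θ when k ≥ 1:

```python
import numpy as np

# Gamma family, shape k >= 1: standardized mode = (k - 1)/sqrt(k),
# skewness = 2/sqrt(k) -- both scale-free (independent of theta).
for k in [100.0, 10.0, 3.0, 1.5, 1.0]:
    std_mode = (k - 1.0) / np.sqrt(k)
    skew = 2.0 / np.sqrt(k)
    print(f"k={k:6.1f}  standardized mode={std_mode:5.2f}  skewness={skew:4.2f}")
# k -> infinity approaches the normal (skewness -> 0); at k = 1, the
# exponential: the standardized mode vanishes and skewness reaches 2.
```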

All these considerations, as well as the need to include semi-repetitive surgeries within the desired model, brought me to the realization that we encounter here a different type of variation, heretofore unrecognized and not addressed in the literature. The instability of work-content (within a subcategory), which I traced to be the culprit for the change in distribution as we move from one subcategory to another, could not possibly be regarded as a cause of systematic variation. The latter is never assumed to change the shape of the distribution, only, at most, its first two moments (mean and variance). This is evident, for example, on implementing generalized linear models, a regression methodology frequently used to model systematic variation in a non-normal environment: the user is requested to specify a single distribution (normal or otherwise), never different distributions for different sets of values of the effects being modeled (supposedly delivering systematic variation). Neither can work-content variation be considered part of the classic random variation (as realized in Category A distributions), since the latter assumes the existence of a single non-zero mode (for a single non-mixture univariate distribution), not a zero mode or multiple modes (as, for example, with the identity-less exponential (zero mode), its allied Poisson distribution (two modes for an integer parameter), or the identity-less uniform (an infinite number of modes); find details in Shore, 2022).

A new paradigm was born out of these deliberations: the “Random identity paradigm”. Under the new paradigm, observed non-systematic variation is assumed to originate in two components of variation: random variation, represented by a multiplicative normal/lognormal error, and identity variation, represented by an extended exponential distribution. A detailed technical development of this methodology, allied conjectures and their empirical support (from known theory-based results) are given in Shore (2022; a link to a pre-print is given in the References section). In Section 4 we deliver an outline of the “Random identity paradigm”.

  4. The “Random identity paradigm” — “Random identity”, “Identity variation”, “identity loss”, “identity-full/identity-less distributions” (based on Shore, 2022)

The insights detailed earlier have led to the development of the new “Random identity paradigm” and its allied explanatory two-variate model for SD (Shore, 2020a). The model was designed to fulfill an a-priori specified set of requirements. Central among these is that the model include the normal and the exponential distributions as exact special cases. After implementing the new model for various applications (as alluded to earlier), we arrived at the realization that the model used in the article may, in fact, be expanded to introduce a new type of random variation, “random identity variation”, which served as the basis for the new “Random Identity Paradigm” (Shore, 2022).

A major outcome of the new paradigm is the definition of two new types of distributions, an identity-full distribution and an identity-less distribution, and a criterion to diagnose a given distribution as “identity-full”, “identity-less”, or in between. Properties of identity-less and identity-full distributions are described; in particular, the property that such distributions have non-parametric skewness and kurtosis, namely, for both types of distribution these assume constant values, irrespective of the values assumed by the distribution’s parameters. Another requirement, naturally, is that the desired model include a component of “identity variation”. However, the requirement also specifies that the allied distribution (representing “identity variation”) have support with the mode, if it exists, as its extreme left point (a detailed explanation is given in Shore, 2022). As shown in Shore (2020ab, 2021, 2022), this resulted in defining the exponential distribution anew (the extended exponential distribution), adding a parameter, α, that assumes a value of α=0 for the exponential scenario (error STD becomes zero), and a value tending to 1 as the model moves towards normality (with “identity variation”, expressed in the extended exponential by the parameter σi, tending to zero).

Sparing the naive reader the technical details of the complete picture conveyed by the new “Random identity paradigm” (Shore, 2022), we outline herewith the associated model, as used in the trilogy of published papers.

The basic model is given in eq. (1):

Haim Shore_Equations_The Problem with Statistics_January 26 2022

where R is the observed response (an r.v), L and S are location and scale parameters, respectively, Y is the standardized response (L=0, S=1), {Yi, Ye} are independent r.v.s representing internal/identity variation and external/error variation, respectively, ε is a zero-mode normal disturbance (error) with standard deviation σε, and Z is standard normal. The density function of the distribution of Yi in this model (the extended exponential) is eq. (2), where Yi is the component representing “identity variation” (caused by variation of system/process factors, the “internal factors”), CYi is a normalizing coefficient, and σi is a parameter representing internal/identity variation. It is easy to realize that α is the mode. At α=1, Yi becomes left-truncated normal (a re-located half-normal). However, it is assumed that at α=1 “identity variation” vanishes, so Yi becomes a constant, equal to the mode (1). In the exponential scenario (complete loss of identity), we obtain α=0, and the disturbance, assumed to be multiplicative, becomes meaningless, namely, it vanishes (σε=0, Ye=1). Therefore, Yi and Y then both become exponential.
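Eq. (2)'s extended exponential is not reproduced here; but the two endpoints of the model, which the text above specifies completely, can be sketched as follows (σε = 0.2 is an arbitrary illustrative value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, L, S = 200_000, 0.0, 1.0           # location and scale of eq. (1)

# alpha = 1: identity variation vanishes, Yi is the constant mode (1), so
# Y = Yi*Ye carries only the multiplicative lognormal error Ye = exp(eps).
sigma_eps = 0.2
y_full = 1.0 * np.exp(sigma_eps * rng.standard_normal(n))

# alpha = 0: complete loss of identity; the error vanishes (sigma_eps = 0,
# Ye = 1), and Yi, hence Y, is exponential.
y_less = rng.exponential(scale=1.0, size=n)

for name, y in [("alpha=1 (lognormal)", y_full), ("alpha=0 (exponential)", y_less)]:
    r = L + S * y                     # observed response R of eq. (1)
    print(f"{name:22s} skewness={stats.skew(r):4.2f}")
# The extended exponential of eq. (2) spans the alpha values in between.
```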

Let us introduce eq. (3). From (2), we obtain the pdf of Zi (eqs. (4) and (5)). Note that the mode of Zi is zero (the mode of Yi is α).

Various theorems and conjectures are articulated in Shore (2022), which deliver eye-opening insights into various regularities in the behavior of statistical distributions, previously unnoticed, and a good explanation of various theoretical statistical results heretofore considered separate and unrelated (like a logical derivation of the Central Limit Theorem from the “Random identity paradigm”).

  5. Conclusions

In this article, I have reported on the personal experience that led me to the development of the new “Random identity paradigm” and its allied concepts. It followed my research effort to model surgery duration, which resulted in a bi-variate explanatory model, with the extended exponential distribution as the intermediate tool that paved a smooth way to unify, under a single umbrella model, execution times of all types of work processes/surgeries: not only repetitive (normal) or non-repetitive (exponential), but also those in between (semi-repetitive processes/surgeries). To date, we are not aware of a similar model as capable of unifying phenomena as diverse as the three categories of work-processes/surgeries. Furthermore, this modeling effort has led directly to conceiving the new “Random identity paradigm” with its allied new concepts (as alluded to earlier).

The new paradigm has produced three major outcomes:

First, as demonstrated in the linked pre-print, under the new paradigm scores of theoretical statistical results that had formerly been derived independently and considered unrelated are explained in a consistent and coherent manner, becoming inter-related under the unifying “Random identity paradigm”.

Secondly, various conjectures about properties of distributions are empirically verified with scores of examples/predictions from the Statistics literature. For example, the conjecture that Category B r.v.s that are functions of only identity-less r.v.s are themselves identity-less, and similarly for identity-full r.v.s.

Thirdly, the new bi-variate model has been demonstrated to represent well numerous existing distributions, as has been shown for diversely-shaped distributions in Shore, 2020a (see the Supplementary Materials therein).

It is hoped that the new “Random identity paradigm”, representing an initial effort at unifying distributions of natural processes (Category A distributions), may pave the way for Statistics to join other branches of science in a common effort to reduce, via unification mediated by unifying theories, the number of statistical distributions, the “objects of enquiry” of random-variation modeling within the science of Statistics.

References

[1] Shore H (1986). An approximation for the inverse distribution function of a combination of random variables, with an application to operating theatres. Journal of Statistical Computation and Simulation, 23:157-181. DOI: 10.1080/00949658608810870 .

[2] Shore H (2015). A General Model of Random Variation. Communications in Statistics – Theory and Methods, 49(9):1819-1841. DOI: 10.1080/03610926.2013.784990.

[3] Shore H (2020a). An explanatory bi-variate model for surgery-duration and its empirical validation. Communications in Statistics: Case Studies, Data Analysis and Applications, 6(2):142-166. Published online: 07 May 2020. DOI: 10.1080/23737484.2020.1740066

[4] Shore H (2020b). SPC scheme to monitor surgery duration. Quality and Reliability Engineering International. Published online 03 December 2020. DOI: 10.1002/qre.2813.

[5] Shore H (2021). Estimating operating room utilisation rate for differently distributed surgery times. International Journal of Production Research. Published online 13 December 2021. DOI: 10.1080/00207543.2021.2009141.

[6] Shore H (2022). “When an error ceases to be error” — On the process of merging the mean with the standard deviation and the vanishing of the mode. Preprint.

Haim Shore_Blog_Merging of Mean with STD and Vanishing of Mode_Jan 07 2022

 

Categories
Podcasts (audio)

“Shamayim” — The Most Counter-intuitive Yet Scientifically Accurate Word in Biblical Hebrew (Podcast)

The deeper meaning and implications of the biblical Hebrew Shamayim (Sky; a post of the same title may be found here):

Categories
My Research on the Bible and Biblical Hebrew, Shorties

“Shamayim” — The Most Counter-intuitive Yet Scientifically Accurate Word in Biblical Hebrew

(Related podcast: “Shamayim” — The Most Counter-intuitive Yet Scientifically Accurate Word in Biblical Hebrew (Podcast).)

The word Shamayim in Hebrew simply means sky, the Rakia of biblical Hebrew (Genesis 1:8):

“And God called the Rakia Shamayim, and there was evening and there was morning second day”.

Rakia in biblical Hebrew, as in modern Hebrew, simply means sky.

So why, in the first chapter of Genesis, is the sky Divinely called Shamayim?

And why, according to the rules of biblical Hebrew, is it fundamentally counter-intuitive, yet, so scientifically accurate?

The word Shamayim comprises two syllables. The first is Sham, which simply means there, namely, that which is inaccessible from here. The second syllable, ayim, is a suffix, namely, an affix added to the end of the stem of a word. Such a suffix is added, in Hebrew, to words that represent a symmetric pair of objects or, more generally, to words that represent objects that appear in symmetry. Thus, all visible organs in the human body that appear in pairs carry the same suffix, like legs (raglayim), hands (yadayim), eyes (einayim) and ears (oznayim). However, teeth, arranged in symmetry in the human mouth, though not in pairs, also carry the same suffix: teeth in Hebrew are shinayim. Other examples may be read in Chapter 5 of my book.

Let us address the two claims in the title:

  • Why is Shamayim counter-intuitive?
  • Why is Shamayim so scientifically accurate?

The answer to the first claim is nearly self-evident. When one observes the sky at dark hours, what one observes is far from symmetric. So much so that the twelve Zodiacal constellations had to be invented, in ancient times, to give some sense to the different non-symmetric configurations of stars that to this day can be observed with the naked eye in the sky.

Yet, despite the apparent non-symmetry observed in the sky, the Divine chose to grant the sky a word indicative of its most fundamental property, as we have scientifically learned it to be in recent times, namely, its symmetry (as observed from planet Earth), or its uniformity (as preached by modern cosmology).

To learn how fundamentally uniform (or symmetric) the universe is, the reader is referred to Chapters 5 and 7 of my book, and the references therein. Another good source on the uniformity of the universe, as observed via telescopes and as articulated by modern science, is the excellent presentation by Don Lincoln on the Wondrium channel:

https://www.youtube.com/watch?v=CRQvp3XPH_s

Note the term Desert, addressed in the lecture. The term is used, in modern cosmology, to denote the uniformity of the universe at the Big Bang (“In the beginning”).

Surprisingly, the words Tohu Va-Vohu, which describe the universe “in the beginning” (Genesis 1:2), are also associated with desert, as they are employed elsewhere in the Hebrew Bible.

Consider, for example, Jeremiah (4:23, 26):

“I beheld the earth, and, lo, it was Tohu Va-Vohu…I beheld and, lo, the fruitful land has become the desert…”.

Refer also to Isaiah (34:11).

So:

  • Shamayim is counter-intuitive, at odds with the picture revealed in ancient times to the naive observer, our pre-science ancestors;
  • Yet Shamayim accurately describes the current scientific picture of the universe, as it has formed over the last hundred years or so, based on cumulative empirical data (gathered via telescopes) and on modern theories of the evolution and structure of the universe.

Articulated more simply:

Whatever direction in the sky you point to, Shamayim states that it is all the same: contrary to what the naked eye tells us, yet in conformance with what modern science tells us.

A personal confession: mind-boggling…

Categories
Podcasts (audio)

The Three Pillars of Truth (Lessons from the Hebrew Alphabet; Podcast)

What does “Truth” stand on? How do we tell truth from falsehood?

The Hebrew Alphabet conveys to us the essential ingredients of truth.

We denote these:

The Three Pillars of Truth.

What are they?

Categories
Podcasts (audio)

Free Will — The Act of Separating and Choosing (Podcast-audio)

Why is there free-will?

What are the necessary and sufficient requirements for free-will to be exercised?

How do we make decisions within the two worlds, comprising our lives, the “World of Law-of-Nature” and the “World of Randomness”?

These questions and others are addressed, supported by excerpts from the Bible.

 

Categories
My Research on the Bible and Biblical Hebrew Podcasts (audio)

What Do We Know of God? (Podcast-audio)

(Related post: Shorty*: What Do We Know of God?)

The detailed answer, based on the Jewish Hebrew Bible (Torah, the Prophets), on in-depth analysis of biblical Hebrew words and on traditional Jewish interpreters, may surprise you:

Categories
Podcasts (audio)

“Do Not Steal” – Is it in the Ten Commandments? (Podcast-audio)

The answer to this intriguing question may surprise you. The true meaning of the Eighth Commandment, according to traditional Jewish scholarship, is not what it appears to be.

So where does the prohibition on stealing, in the common sense of the word, appear in the Ten Commandments?

Find details in this podcast:

 

Categories
General

Free Will— The Act of Separating and Choosing

The essence of being human is exercising free will. This is the act by which we continuously create ourselves and form our personality and character.

The Divine has created mankind (“So God created mankind in his own image…”, Genesis 1:27); but He has also formed it (“And the Lord God formed mankind of the dust of the ground…”, Genesis 2:7). We, human beings, whether we wish it or not, are doomed throughout our lives to repeat, via exercising free will, the two acts of creating (establishing a solid link between soul and body, while we grow) and forming.

What is the needed environment for human beings to be able to exercise their free-will?

There are two conditions (necessary and sufficient):

[1] Existence of “Good” and “Bad” mixed together (as in “The Tree of Knowledge, good and bad”, Genesis 2:9);

[2] Hidden-ness of God and the concealment of God’s hidden-ness.

Prophet Isaiah delivers a succinct and stunning expression of the first condition:

“That men may know from the rising of the sun to its setting that there is none besides me— I am Jehovah and there is no one else; Forming light and creating darkness, making peace and creating the bad, I Jehovah am doing all these” (Isaiah 45:6-7).

Note that creating (“something from nothing”) precedes forming (“imprinting form on the created”), just as forming precedes making. Yet prophet Isaiah sets the absence of light (darkness) and the bad (the harmful, the evil) at a level higher than that of light: the former were created, the latter was “just” formed.

Existence of the second condition, a daily human experience revealed in countless debates on whether God exists, is evidenced both by biblical Hebrew and by the Bible. In biblical Hebrew, “World” (Olam) derives from the same root as all Hebrew words pointing to concealment. Examples: Ta’aluma (mystery); He’almut (disappearance); Ne’elam (an unknown, as in an algebraic equation); Alum (secret, adj.). In other words, the whole world is testimony to the hidden-ness of God. Prophet Isaiah repeats the same motif:

“Indeed, thou art a God who hides thyself, O God of Israel, savior” (Isaiah 45:15).

Concealment of God, however, is itself concealed (“Does God exist?”):

“And I will surely hide my face on that day…” (Haster Astir; Deuteronomy 31:18).

The repetition of the same root (in two consecutive words) is traditionally interpreted by Jewish scholars as implying concealment of the concealment, a fact of life that we have all probably experienced at one time or another (“Does God exist?”).

Having studied the two conditions for the existence of free-will, the next question to ask is:

What are the limitations to exercising free-will and what does the latter entail?

We continuously live in two worlds, intermingled and most often inseparable and indistinguishable from one another: the “World of Law-of-Nature” and the “World of Randomness”. We can exercise free-will only in an environment that allows choice, namely, in the “World of Randomness”. Unlike in the “World of Law-of-Nature”, where external constraints force us to behave in certain ways (and not others, namely, no free choice is available), in the “World of Randomness”, where randomness prevails, we are free to exercise whatever our heart desires. It is only then, in the “World of Randomness”, that we become agents of our own free will.

What does exercising free-will comprise? Two actions:

Separating;

Choosing.

We need to separate “Good” from “Bad” before choosing. Most often in our daily lives, the good and the bad are intermingled to a degree that the two can rarely be told apart; hence the need to separate first. God created darkness (per prophet Isaiah), thereby allowing the good and the bad in our world to co-exist, mixed. Consider the biblical Hebrew word for “evening” (as in “…and there was evening and there was morning…”; Genesis 1:5, for example): it derives from the same Hebrew root used for mixing (as in “mixture”). “The Tree of Knowledge, good and bad” likewise implies mixed together. In biblical terms, one may allegorically assert that we have all eaten of “The Tree of Knowledge, good and bad”, where “Good” and “Bad” are mixed together in the same fruit. Ever since, “Good” and “Bad” have been intermingled in our body and soul, handing us our mission in life: to grow and mature, to create ourselves and to form our personality and character, all via the process of separating (“Good” from “Bad”) and then choosing.

The act of separating (good from bad) is twofold, and it is expressed differently in the two worlds we inhabit:

  • In the “World of Law-of-Nature”, we need to separate “good” from “bad” because absent this separation we may choose the “bad”, thereby harming our well-being and possibly even endangering our life. Thus, buying fruit in the supermarket, we are careful to separate good apples from the bad ones (rotten apples) so that we can then make the correct choice of purchasing good apples only, benefiting our health and well-being. Separation is also inherent to many of our bodily processes (like in the kidney);
  • In the “World of Randomness”, the act of separating good from bad (or “good” from “evil”, as commonly phrased in biblical parlance) is a much harder task. Unlike in the “World of Law-of-Nature”, where science assists us in forming a clear distinction and separation between the good and the bad, in the “World of Randomness” we do not easily, clearly and immediately differentiate between the two. Let us demonstrate with a simple example. I am selling a used car, aware that the car carries a certain defect. I can inform the buyer about it, or I can withhold the information. In the latter case, the thinking goes like this: “I have allowed the buyer to inspect and check the car thoroughly, have I not? However, the defect was not exposed. It is the buyer’s responsibility to identify the defect, not mine, is it not?”. Such thinking testifies to the daily blurring, in the “World of Randomness”, of “good” and “bad” (or “good” and “evil”, in biblical terms). Therefore, Jewish Torah explicitly instructs: “Thou shalt not curse the deaf, nor GIVE a stumbling block to the blind…” (Leviticus 19:14). In other words, one cannot hide behind an argument like the one just articulated. It is the seller’s responsibility to turn the blind into non-blind by alerting the buyer to the car’s defect.

Once we understand the act of separation in the two worlds, and grasp the role of science in assisting us to separate in the “World of Law-of-Nature”, how do we separate and choose right in the “World of Randomness”?

Moses, speaking to the Children of Israel on behalf of the Divine, set before them a clear separation and a clear choice:

* Separation: “Behold, I have given thee this day life and the good, and death and the bad” (Deuteronomy 30:15);

* Choosing: “I call upon heaven and earth to witness this day against you that I have set before thee life and death, blessing and cursing; therefore, choose life that both thou and thy seed may live” (Deuteronomy 30:19).

Is free-will an endowment of the human species, granted to it for eternity?

Not according to Scripture. The free-will act bestowed on humankind, that of separating and choosing, has a limited life-span. It is not eternal. A time will come when God will reveal Himself, and then free-will, by definition, will be no more:

“For then I will convert the peoples to a non-confounded language that they all call upon the name of Jehovah to serve him shoulder to shoulder” (Zephaniah 3:9);

“And Jehovah will be king over all the earth; on that day Jehovah will be one and his name One” (Zechariah 14:9).

Furthermore, not only will the task of separating and choosing no longer be in the hands of mankind; at End-Times, the Divine will conduct a process of separation of His own. However, the separation process will not be between “Good” and “Evil” (as the latter exist in the “World of Randomness”), but rather between the righteous and the evil (who exist amidst humankind):

“I will also turn my hand against thee, and will purge away your dross as with lye and remove all thy alloy” (Isaiah 1:25);

“Therefore, thus says the Lord of hosts: Behold, I will smelt them and try them…” (Jeremiah 9:6);

“As silver is melted in the midst of the furnace, so shall you be melted in the midst of it…” (Ezekiel 22:22);

“I will bring the third part through the fire, and refine them as one refines silver and test them as one tests gold…” (Zechariah 13:9);

“But who may abide the day of his coming? and who shall stand when He appears? For He is like a refiner’s fire and like the washers’ soap; and He shall sit as a refiner and purifier of silver…” (Malachi 3:2);

“Many will be purged, and purified and refined…” (Daniel 12:10).


Categories
General Statistical Applications My Research in Statistics

Response Modeling Methodology — Now on Wikipedia

Response Modeling Methodology (RMM) is now on Wikipedia! RMM is a general platform for modeling monotone convex relationships, which I have been developing over the last fifteen years (as of May 2017), applying it to various scientific and engineering disciplines.

A new entry about Response Modeling Methodology (RMM) has now been added to Wikipedia, with a comprehensive literature review:

Response Modeling Methodology – Haim Shore (Wikipedia).

 

Categories
Shorties

Shorty*: The Human Desire to be like God

At the core of all human endeavors is the burning desire to be like God. The desire is already expressed in the third chapter of Genesis: “For God knows that on the day you eat of it” (of the Fruit of Knowledge) “then your eyes shall be opened and you shall be as God…” (Genesis 3:5).

But what does it mean to be like God?

The serpent expresses it explicitly: “You shall be like Elohim, knowing good and bad” (Genesis 3:5) (Elohim is Hebrew for God as the creator).

Jewish prophets have incessantly preached differently:

“I am Jehovah speaking righteousness, I declare things that are right” (Isaiah 45:19) (Jehovah is Hebrew for God as source of morality and virtues).

Human history is the tale of nations and individuals seeking to be as powerful as Elohim via dominating resources (whether of knowledge, of human beings (erroneously perceived as a resource) or of physical properties).

Human history is also a tale of nations and individuals ignoring the message of the Jewish prophets that to be like God also means to be like Jehovah (all the while harming the carriers of this inconvenient message).

********************************

* A “Shorty” is a newly invented word for a new idea or thought, expressed as shortly as possible.

Categories
My Research on the Bible and Biblical Hebrew Videos

Is There Linkage between Biblical Hebrew and Physical Reality? (Video; Hebrew, English captions)

This presentation addresses the key question of whether biblical Hebrew is intrinsically linked to physical reality. In this lecture, delivered at Bar-Ilan University, I explain the methodology employed and demonstrate it with some statistical results.

A lecture in Hebrew, delivered at Bar-Ilan University during a study day of the “Hamatara Emet” (“The Goal is Truth”) organization:

Is There Association Between Biblical Hebrew and Physical Reality? (Hebrew)

With English captions:

PowerPoint file used in the lecture (and some more) is linked below:

Prof Haim Shore Presentation at Bar-Ilan Univ_Hebrew-English_Nov 2015

Categories
General My Research on the Bible and Biblical Hebrew

Present-Day Ultimate Replay of Sin of Adam and Eve

In this post I show that in the present day and age we are witnessing a replay of the biblical sin of Adam and Eve.

We, as humans, exercise free will. This is made possible because our submission to the Law-of-Nature is not total. There are isolated islands in our lives where randomness prevails, allowing us to do whatever our heart desires, with apparently no moral consequences and no penalty for violating some punishable law.

For example, we cannot decide to jump out of a fifteenth-floor window of a high-rise, because the penalty would be immediate and ultimately catastrophic to our very survival. No free will here. Conversely, we may decide whether we wish (or wish not) to join a certain group of people for a shared activity, with seemingly no devastating consequence irrespective of which course of action we may have decided to pursue.

In summary, without ever being defined as such, our lives comprise two worlds, intermingled with one another and generally indistinguishable from one another: the world of the “Law of Nature” and the world of “Randomness”. Our ability to exercise free will is conditioned on the existence of the latter; however, we are prevented from exercising free will within the confines of the former.

Let us rephrase this assertion in biblical terms. The two seemingly unrelated and independent worlds, that of “Randomness” and that of “Law of Nature”, both originate in one source, which the Bible relates to as “Jehovah-Elohim”. Jehovah is source of the law of morality that prevails in the world of randomness. Elohim is source of physical creation and of Law-of-nature, within whose confines creation conducts itself since the beginning of time, at the Big Bang.

From its inception, humankind has aspired to be like God. But in what sense?

As the sin of Adam and Eve is described in the third chapter of Genesis, the serpent seduces Eve, explaining to her why it would have been beneficial to eat of the “Fruit of Knowledge”:

“For Elohim knows that on the day that you eat of it, then your eyes shall be opened, and you shall be like Elohim, knowing good and bad” (Genesis 3:5).

In other words: Gaining knowledge, by eating of the Fruit-of-Knowledge, aims at becoming like Elohim, knowing the Law-of-Nature that would grant us know-how of that which is beneficial to us (“Good”) and that which is not (“Bad”). The burning desire is dominance over nature (including dominance over other human beings), but not the study of the Law-of-Morality, which prevails in the “World of Randomness”, concealed from us so that we may exercise free will.

For that sin, the sin of wishing to know Elohim (source of Law-of-Nature), and not Jehovah-Elohim (the complete and all-encompassing manifestation of God’s leadership of his world, which is also the only name for God used by the “objective” narrator), Adam and Eve are subject to expulsion from the Garden-of-Eden.

Knowing Elohim with the objective of being Elohim-like implies knowing Law-of-Nature and gaining dominance over nature and people. Murdering another human being is the ultimate expression of dominance over nature as a result of the desire to be as powerful (and as “Great”) as Elohim.

An individual calling “Allahu Akbar” (“Allah is greatest”), while taking someone else’s life in an act of murder, commits the exact same sin as that of Adam and Eve, only taken to the extreme:

“I know Elohim (since I know Law-of-Nature)” → “Therefore I have gained dominance over nature” → “Therefore I am Elohim-like” → “Therefore I have Elohim’s privilege to take your life away”.

All wrong!! And on many counts.

The privilege to take away one’s life does not belong to Elohim but to Jehovah Elohim. Alone.

And no amount of knowledge of Elohim, supposedly leading to a state of being God-like, provides complete knowledge unless complemented by the knowledge of Jehovah and his law:

“And now, Israel, what does Jehovah, your Elohim, require of you but to fear Jehovah, your Elohim, to walk in all his ways, and to love him, and to serve Jehovah, your Elohim, with all thy heart and with all thy soul” (Deuteronomy 10:12).

The history of the human race is marked by committing, over and over again, the exact same sin of Adam and Eve: Gaining knowledge about Law-of-Nature, originating in Elohim, with utter lack of interest in knowing Law-of-Morality, originating in Jehovah.

The stated mission, indeed the role, of the Jewish nation in the world is to declare in the public square:

“The free will that you experience in the “World of Randomness” is an illusion. As there is Law-of-Nature there is also Law-of-Morality. These are not two separate worlds, one governed by Law-of-Nature and another governed by… nothing.”

And days are coming, when all will know, and aspire to know, not only Elohim but also Jehovah:

“Behold, days are coming, says Jehovah, when I will make a new covenant with the House of Israel and the House of Yehudah”,…, “and they shall teach no more every man his neighbor and every man his brother, saying, know Jehovah; for they shall all know me, from the least of them to the greatest of them, says Jehovah” (Jeremiah 31:30-33).

“Allahu Akbar”, followed by murder, is the present-day ultimate replay of the ancient sin of Adam and Eve. The latter produced the first human attempt at separating Elohim from Jehovah, learning the ways of the former (leading by Law-of-Nature) while ignoring, and neglecting to learn, the ways of the latter (leading by Law-of-Morality).

Days are coming, prophesies prophet Jeremiah, when this artificial separation of the two worlds will be no more.

**********************************************************

This post may also be read at Times-of-Israel:

Present-day Replay of the Sin of Adam and Eve

 

Categories
My Research on the Bible and Biblical Hebrew Videos

Prof. Shore’s Bible Findings – Simplified (New Short Videos, English/Spanish)

Two new video clips (Hebrew/English), each about ten minutes long, have recently been produced by Mr. Oren Evron. In these I explain, in plain language, the basic principles underlying my research on the Jewish Hebrew Bible and on biblical Hebrew. Below is the English version:

Two other videos (English/Hebrew) focus on Genesis creation narrative:

English:

 

Hebrew:

 

Categories
General

“Equality” – Sacred Cow that Distorts Observed Reality

In this post I address Equality, an originally Jewish value that has lately returned in the form of a sacred cow, which tends to distort observed reality.

“Equality”, an ancient Jewish value that has of late been “imported” back to the Jewish state in a distorted, largely reformed and deformed form, has since wreaked havoc on our ability to view reality as it is and to arrive at balanced and educated decisions.

In this post, I deliver three examples and address their implications.

A link to The Blogs on The Times of Israel:

Haim Shore_“Equality – the Sacred Cow”_The Times of Israel_August 14 2015

A PDF file of this post is attached:

Haim Shore_Equality-The Sacred Cow_Post in The Times of Israel_August 14 2015

Categories
Historical Coincidences My Research on the Bible and Biblical Hebrew

Agrippa Key for English Alphabet Gematria

A most intriguing finding, addressed in Oren Evron’s movie, is the nearly perfect correlation found, for the trio of words {moon, Earth, sun}, between the Hebrew gematria and the English Agrippa-Key gematria.

Although a table of the Agrippa Key may easily be found on the Internet, we comply with a request made by a reader of this blog and present the Agrippa Key in the table below:

Agrippa Key_English Gematria
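
As a side note, standard Hebrew gematria is mechanical to compute, and readers who wish to check the trio themselves can do so in a few lines. Below is a minimal Python sketch; the Hebrew letter values are the standard key, while the English Agrippa values would have to be taken from the table linked above:

```python
# Standard Hebrew gematria key (final letter forms share their regular values)
HEBREW_KEY = {
    'א': 1, 'ב': 2, 'ג': 3, 'ד': 4, 'ה': 5, 'ו': 6, 'ז': 7, 'ח': 8, 'ט': 9,
    'י': 10, 'כ': 20, 'ך': 20, 'ל': 30, 'מ': 40, 'ם': 40, 'נ': 50, 'ן': 50,
    'ס': 60, 'ע': 70, 'פ': 80, 'ף': 80, 'צ': 90, 'ץ': 90,
    'ק': 100, 'ר': 200, 'ש': 300, 'ת': 400,
}

def gematria(word: str, key: dict = HEBREW_KEY) -> int:
    """Sum the numerical values of the letters of a word."""
    return sum(key[letter] for letter in word)

# The trio addressed above: moon (yareach), Earth (eretz), sun (shemesh)
for word in ('ירח', 'ארץ', 'שמש'):
    print(word, gematria(word))   # 218, 291, 640
```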

Categories
My Research on the Bible and Biblical Hebrew

New Movie on Prof. Shore’s Biblical Research Findings (English-Hebrew)

A while ago I was approached by Mr. Oren Evron, who suggested producing a movie based on findings from my research on the Bible and on biblical Hebrew (as these are described in my book).

I happily accepted, but clarified that my input to the production of the movie would be restricted to ensuring that my research findings are appropriately presented (from the scientific and the statistical perspectives).

The result is a fabulous high-quality movie, which presents, in a straightforward and easy-to-understand fashion, the findings and their profound significance.

A link to the channel, where the movie is posted, is given below:

“The Torah – Math Unveils the Truth” (English)_by Oren Evron_Feb 2015

A link to the movie with built-in Hebrew subtitles is given below:

“The Torah – Math Unveils the Truth” (English-Hebrew)_by Oren Evron_Feb 2015

Categories
General

Antisemitism and “Killing the Messenger”

How is “Killing the Messenger” associated with antisemitism?

In October 1892, Asher Tzvi Ginzberg (1856-1927), better known by his pseudonym Achad Haam, published an article in the Hebrew periodical Hamelitz. The title of the article was: “Half a Comfort” (Chatzi Nechamah). The article was published half a century after the Damascus blood libel, and in it Achad Haam tries to extract a useful lesson from that anti-Semitic blood libel (if one can be extracted at all). He denotes this lesson: Chatzi Nechamah. Achad Haam hoped that his Chatzi Nechamah would help Jews worldwide to cope with the devastating psychological effects of the constant vilification of the Jews as part of the acceptable anti-Semitic “General Agreement” (in his words; today’s “General Consensus”).

In the article, linked below, I offer an additional “Half a Comfort”, to complement that of Achad Haam:

http://blogs.timesofisrael.com/anti-semitism-and-killing-the-messenger/

This article may also be downloaded as a PDF file:

Haim Shore_Antisemitism and Killing the Messenger_Oct 2014

Categories
General Statistical Applications

SPC-based monitoring of ecological processes (Presentation, Hebrew)

In a workshop on recent advances in the application of statistical methods to quality engineering and management, conducted in March 2013 by the Open University of Israel, I delivered a presentation (in Hebrew) about SPC-based modeling and monitoring of ecological processes. The lecture was based on my recently published article:

Shore, H. (2013). Modeling and Monitoring Ecological Systems: A Statistical Process Control Approach. Quality and Reliability Engineering International. DOI: 10.1002/qre.1544.

A link to the presentation is given below:

Haim Shore_SPC monitoring of ecological processes_Open University_March 2013

Categories
My Research in Statistics

“What is the significance of the significance level?”

This post delivers an in-depth analysis of the significance of the statistical term Significance Level (in response to an article in Significance magazine).

In a focus article that appeared in Significance magazine (October 2013), the author Mark Kelly delivers an excellent review of what “luminaries have to say” regarding the proper significance level to use in statistical hypothesis testing. The author then concludes:

“No one therefore has come up with an objective statistically based reasoning behind choosing the now ubiquitous 5% level, although there are objective reasons for levels above and below it. And no one is forcing us to choose 5% either.”

In a response article, sent to the editor of Significance, Julian Champkin, I made the point that, contrary to the claim made in the original article, there is an obvious method for objectively determining the optimal statistical significance level. While the editor accepted my article, he declined to include the detailed numerical example therein, since “Your illustration, though, is a little too technical for some of our readers – we have many who are not statisticians, and we try to keep heavy maths to a minimum in the magazine.”

In a further (unanswered) e-mail to the editor, I suggested a solution to the editor’s concern and stated that “Personally I feel that there are many practitioners out there who could benefit from this simple practical example and get aware that engineering considerations are part and parcel of hypothesis testing in an engineering environment. I often feel that these engineers are somewhat neglected in the statistics literature in favor of pure science.”

Based on my own experience of over thirty years of academic teaching to industrial engineering undergraduates, I feel it is important that individuals working in an engineering environment understand that the viewpoint expressed in Kelly’s article in Significance magazine, quite prevalent though it is, is not accurate in all circumstances.
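
To convey the gist of the argument in code (this is not the numerical example submitted to Significance; it is merely a generic sketch, with illustrative costs, prior and effect size): once the engineering costs of the two error types are specified, a cost-optimal significance level falls out of a simple minimization:

```python
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def expected_cost(alpha, delta=2.0, p_h1=0.3, cost_type1=1.0, cost_type2=10.0):
    """Expected cost of a one-sided z-test run at significance level alpha.

    All parameter values are illustrative assumptions:
    delta      -- standardized effect size under H1
    p_h1       -- prior probability that H1 is true
    cost_type1 -- engineering cost of a false alarm (Type I error)
    cost_type2 -- engineering cost of a missed signal (Type II error)
    """
    z_crit = norm.ppf(1.0 - alpha)      # critical value of the test
    beta = norm.cdf(z_crit - delta)     # Type II error probability
    return (1 - p_h1) * alpha * cost_type1 + p_h1 * beta * cost_type2

# The cost-optimal level need not be the ubiquitous 5%:
result = minimize_scalar(expected_cost, bounds=(1e-4, 0.5), method="bounded")
print(f"Cost-optimal significance level: {result.x:.3f}")
```

With these (arbitrary) costs the optimum lands far from 5%; change the costs and it moves, which is precisely the point.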

With this in mind, the originally submitted article, titled:

“What is the significance of the significance level?” “It’s the error costs, stupid!”

is linked below:

Haim Shore_What is the significance of the significance level_Response to Significance_March 2014

Categories
My Research on Modeling Fetal and Child Growth

SPC-based Monitoring of Fetal Growth (Presentations)

On March 6, 2014, my research team delivered a seminar (in English) at the Soroka Medical Center, describing our ongoing research project in modeling and monitoring fetal growth. The research team includes, besides myself, Dr. Diamanta Benson-Karhi (from the Open University) and Professor Asher Bashiri (from Soroka and Ben-Gurion University).

Concurrently, our graduate student, Mrs. Maya Malamud, concluded her thesis and delivered a presentation (in Hebrew) during her final-exam session.

Both presentations are now accessible here (in PDF format):

Haim Shore_SPC-based Monitoring of Fetal Growth_Presentation (English)_March 2014

Maya Malamud_Fetal Growth Study_Final Exam Presentation (Hebrew)_March 2014

Article published in Quality Engineering:

SPC-based Monitoring of Fetal Growth

Categories
General Statistical Applications

Total Quality, Quality Control and Quality by Design (Book, in Hebrew)

This book was self-published back in 1992 (2nd edition in 1995). A unique feature of the book is that each page is structured as a separate slide, which may be integrated into a presentation. Related theoretical material is deferred to the appendices.

The book had gained popularity in Israel in institutions, academic and otherwise, where courses, or workshops, in quality engineering had been taught.

It may now be downloaded free here (with bookmarks that allow easy access to each chapter):

Shore_Total Quality, Quality Control and Q by Design_1995

Categories
My Research in Statistics

Statistics and “Stamp Collecting”

What is the linkage between the science of statistics and “stamp collecting”? More than you can imagine… This blog entry (with the linked article and PowerPoint presentation) was originally posted, for a restricted time period, on the Community Blog of the American Statistical Association (ASA), where the linked items were visible to members only. The blog entry is now displayed with the linked items visible to all.

This is the fourth and last message in this series about the consequences of the continuous monotone convexity (CMC) property for statistical modeling. The new message discusses implications of the CMC property for modeling random variation.

As a departure point for this discussion, some historical perspective on the development of the principle of unification in the human perception of nature may be useful.

Our ancestors believed in a multiplicity of gods. All phenomena of nature had their particular gods, and various manifestations of the same phenomenon were merely different displays of the wishes, desires and emotions of the relevant god. Thus, Prometheus was a deity who gave fire to the human race and for that was punished by Zeus, the king of the gods; Poseidon was the god of the seas; and Eros was the god of desire and attraction.

This convenient “explanation” for the diversity of natural phenomena all but disappeared with the advent of monotheism. Under the “umbrella” of a single god, the ancient gods were “deleted”, replaced by a “unified” and “unifying” almighty god, the source of all natural phenomena.

And the three major monotheistic religions had been born.

The “concept” of unification, however, did not stop there. It migrated to science, where pioneering giants of modern scientific thinking observed diverse phenomena of nature and attempted to unify them into an all-encompassing mathematics-based theory, from which the separate phenomena could be deduced as special cases. Some of the most well-known representatives of this mammoth shift in human thinking, in those early stages of modern science, were Copernicus (1473-1543), Johannes Kepler (1571-1630), Galileo Galilei (1564-1642) and Isaac Newton (1642-1727).

In particular, the science of physics has been at the forefront of these attempts to pursue the basic concept of unity in the realm of science. Ernest Rutherford (1871–1937), known as the father of nuclear physics and the discoverer of the proton (in 1919), made the following observation at the time:

“All science is either physics or stamp collecting”.

The assertion, quoted in Kaku (1994, p. 131), was intended to convey a general sentiment that the drive to converge the four fundamental forces of nature into a unifying theory, nowadays a central theme of modern physics, represented science at its best; furthermore, that this is the only correct approach to the scientific investigation of nature. By contrast, at least until recently, most other scientific disciplines have engaged in taxonomy (“bug collecting” or “stamp collecting”). With “stamp collecting”, scientific inquiry is restricted to the discovery and classification of the “objects of enquiry” particular to that science. It never culminates, as in physics, in a unifying theory from which all these objects may be deductively derived as “special cases”.

Is statistics a science of “stamp collecting”?

Observing the abundance of statistical distributions identified to date, an unavoidable conclusion is that statistics is indeed a science engaged in “stamp collecting”. Furthermore, serious attempts at unification (partial, at least) are rarely reported in the literature.

In a recent article (Shore, 2015), I proposed a new paradigm for modeling random variation. The new paradigm, so I believe, may constitute an initial effort to unite all distributions under a single “umbrella distribution”. In the new paradigm, the “Continuous Monotone Convexity (CMC)” property plays a central role in deriving a general expression for the normal-based quantile function of a generic random variable (assuming a single mode and a non-mixture distribution). Employing numeric fitting to current distributions, the new model has been shown to deliver accurate representation of scores of differently-shaped distributions (including some suggested by anonymous reviewers). Furthermore, negligible deviations from the fitted general model may be attributed to the natural imperfection of the fitting procedure, or perceived as realizations of random variation around the fitted general model, not unlike a sample average being a random deviation from the population mean.
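
The model itself is specified in Shore (2015) and is not reproduced here. As a minimal flavor of the normal-based quantile idea, the sketch below uses only the inverse Box-Cox transformation, one special case on the CMC spectrum, to approximate the quantiles of an arbitrarily chosen skewed sample:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# A skewed sample, standing in for "some single-mode, non-mixture distribution"
rng = np.random.default_rng(1)
y = 10 * rng.weibull(1.5, size=5000)

# Transform to approximate normality (lambda estimated from the data) ...
w, lam = stats.boxcox(y)
mu, sigma = w.mean(), w.std(ddof=1)

# ... then express quantiles of Y through normal quantiles of the transformed W
p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
q_model = inv_boxcox(mu + sigma * stats.norm.ppf(p), lam)
q_empirical = np.quantile(y, p)

for pi, qe, qm in zip(p, q_empirical, q_model):
    print(f"p={pi:.2f}   empirical={qe:8.3f}   model={qm:8.3f}")
```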

In a more recent effort (Shore, 2017), a new paradigm for modeling random variation is introduced and validated, both via certain predictions about known “statistical facts” (like the Central Limit Theorem), shown to be empirically true, and via distribution fitting, using a five-moment matching procedure, to a sample of known distributions.

These topics and others are addressed extensively in the afore-cited new article. It is my judgment that at present the CMC property constitutes the only possible avenue for achieving in statistics (as in most other modern branches of science) unification of the “objects of enquiry”, as these relate to modeling random variation.

In the affiliated Article #4, I introduce, in a more comprehensive (yet minimally technical) fashion, an outline of the new paradigm and elaborate on how the CMC property is employed to arrive at a “general model of random variation”. A related PowerPoint presentation, delivered last summer at a conference in Michigan, is also displayed.

Haim Shore_4_ASA_Feb 2014

Haim Shore_4_ASA_PP Presentation_Feb 2014

References

[1] Kaku M (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps and the Tenth Dimension. Oxford University Press, NY.

[2] Shore, H. (2015). A General Model of Random Variation. Communications in Statistics – Theory and Methods, 44(9): 1819-1841.

[3] Shore, H. (2017). The Fundamental Process by which Random Variation is Generated. Under review.

Categories
Historical Coincidences

“An Outrage in Afghanistan” and One in Israel

“Outrage in Afghanistan” and a similar, almost concurrent, one in Israel. Sheer historic coincidence??

In his Talking Points on “The O’Reilly Factor” (Fox News), February 13, 2014, the anchor, Bill O’Reilly, related to the outrageous release, by President Karzai of Afghanistan, of 65 convicted Taliban terrorists who had killed or maimed Americans.

In a response comment, posted on the same day at Foxnews blog, I wrote:

“Whence the surprise that Americans have to furiously witness release of Taliban terrorists, who have killed Americans, if only a few months ago Israelis had to furiously witness the American administration forcing the Israeli government release Palestinians terrorists, who have murdered Israelis? (check the two numbers!)”

In this post I detail the two parallel cases, which have surprisingly occurred no more than six months apart.

In August 2013, the Israeli Cabinet agreed on a four-stage process by which 104 Palestinian prisoners would be released as part of a “confidence-building” measure aimed at boosting renewed Israeli-Palestinian peace negotiations. This decision was taken after US Secretary of State John Kerry, in his efforts to persuade the Palestinian side to re-embark on peace talks with Israel, posed two possible (one may say impossible) options for the Israeli government: to cease construction in Jewish villages and towns beyond the green line (Israel’s pre-1967-war borders), or to release Palestinian terrorists convicted in due judicial process in the Israeli justice system.

All of the prisoners slated for release were convicted of terrorism against Israel before the signing of the Oslo Accords in September 1993; most were directly involved in the murder of Israelis, and many were serving life sentences. On August 13, 2013, Israel released the first group of 26 convicted Palestinian terrorists. Another group of 26 was released on October 30th, and another group of 26 prisoners on December 31st, 2013.

By February 2014, altogether 78 convicted Palestinian prisoners had been released from Israeli jails in accordance with the decision of the Israeli government (a fourth group was slated to be released in April 2014).

On February 14th, Afghan President Hamid Karzai ordered the release of 65 captured Taliban terrorists who were supposed to be tried for crimes against civilians in their own country. These killers have also been linked to the deaths of 32 American and allied troops, according to the U.S. command. This decision by Karzai was termed by Bill O’Reilly “An Outrage in Afghanistan”.

The parallelism between US conduct towards the State of Israel and supposed consequences to the US is the subject of several recently published books, all pursuing a single paradigm: “As America Has Done to Israel…”.

Examples:

McTernan, J. (2008). As America Has Done to Israel. Whitaker House.

Kroening, W. R. (2008). Eye to Eye: Facing the Consequences of Dividing Israel. About Him. Revised Edition.

The above current historical coincidence may be just that (or not).

Categories
List of Posts

List of Posts

This page lists all posts in Professor Haim Shore’s blog (by category). Click any item in the list to access the linked post.

(Podcast list in Section 7; Bible Reads in Section 8, also here; Link to Shore’s YouTube podcasts: List of Professor Shore’s YouTube Podcasts; Link to Shore’s Authorized Publications List: Professor Haim Shore Authorized Publications List; Amazon link to Shore’s book, with most posts of this blog: The Bible, Biblical Hebrew, Science and Their Inter-relationships: A compendium of essays, 2010-2023)

1. Statistics

2. Bible and Biblical Hebrew

3. Fetal and Child Growth: Modeling and Monitoring

4. General Statistical Applications

5. Current Historical Coincidences

6. General

7. Podcasts (Audio; Also, Section 8 below)

8. Bible Reads (Audio; Hebrew; Hebrew/English PDF; An updated list is here)

Categories
My Research on Modeling Fetal and Child Growth

RMM-based Modeling of Child Growth

The continuous monotone convexity (CMC) property, unique to Response Modeling Methodology (RMM), delivers a versatile platform for modeling fetal (pre-birth) and child (post-birth) growth.

In the linked article, now under review, we use real data to model child growth and compare the resulting growth curves to those obtained via generalized additive models for location, scale and shape (GAMLSS).

Haim Shore_Stepwise modeling of child growth with RMM_Feb 2014_2

Categories
My Research in Statistics

CMC-Based Modeling — the Approach and Its Performance Evaluation

This post explains the central role of Continuous Monotone Convexity (CMC) in Response Modeling Methodology (RMM).

In earlier blog entries, the unique effectiveness of the Box-Cox transformation (BCT) was addressed. I concluded that the BCT effectiveness could probably be attributed to the Continuous Monotone Convexity (CMC) property, unique to the inverse BCT (IBCT). Rather than requiring the analyst to specify a model in advance (prior to analysis), the CMC property allows the data, via parameter estimation, to determine the final form of the model (linear, power or exponential). This would most likely lead to a better fit of the estimated model, as cumulative reported experience with implementation of IBCT (or BCT) clearly attests.

In the most recent blog entry in this series, I introduced the “Ladder of Monotone Convex Functions” and demonstrated that IBCT delivers only the first three “steps” of the Ladder. Furthermore, IBCT can be extended so that a single general model represents all monotone convex functions belonging to the Ladder. This transforms monotone convexity into a continuous spectrum, so that the discrete “steps” of the Ladder (the separate models) become mere points on that spectrum.

In this third entry on the subject (and Article #3, linked below), I introduce, in a more comprehensive (yet minimally technical) fashion, the general model from which all the Ladder functions can be derived as special cases. This model was initially conceived in the last years of the previous century (Shore, 2005, and references therein) and has since been developed into a comprehensive modeling approach, denoted Response Modeling Methodology (RMM). In the affiliated article, an axiomatic derivation of the basic RMM model is outlined, and specific adaptations of RMM to modeling systematic variation and random variation are addressed. Published evidence of the capability of RMM to replace current published models, previously derived within various scientific and engineering disciplines as theoretical, empirical or semi-empirical models, is reviewed. Disciplines surveyed include chemical engineering, software quality engineering, process capability analysis, ecology and ultrasound-based fetal-growth modeling (based on cross-sectional data).

This blog entry (with the linked article given below) was originally posted on the site of the American Statistical Association (ASA), where the linked article was visible to members only.

Haim Shore_3_ASA_Jan 2014

Categories
General Statistical Applications

Determining measurement-error requirements to satisfy statistical-process-control performance requirements (Presentation, English)

On January 6th, 2014, I delivered a talk carrying the title displayed above.

The talk was given in the framework of a workshop organized by the Open University of Israel (see details at the bottom of the opening screen of the presentation). It was based on my article of 2004:

Shore, H. (2004). Determining measurement error requirements to satisfy statistical process control performance requirements. IIE Transactions, 36(9): 881-890.

A link to this presentation, in PDF format, is given below:

Open University_Measurement Error and SPC_Haim Shore Presentation_Jan 2014

The lecture (in English) may be viewed at:

Categories
My Research in Statistics

The “Continuous Monotone Convexity (CMC)” Property and its ‎Implications to Statistical Modeling

In a previous post in this series, I discussed reasons for the effectiveness of the Box-Cox (BC) transformation, particularly when applied to a response variable within linear regression analysis. The final conclusion was that this effectiveness could probably be attributed to the “Continuous Monotone Convexity (CMC)” property, possessed by the inverse BC transformation. It was emphasized that the latter, comprising the three most fundamental monotone convex functions, the “linear-power-exponential” trio, delivers only partial representation of a whole host of models of monotone convex relationships, which can be arranged in a hierarchy of monotone convexity. This hierarchy has been denoted the “Ladder of Monotone Convex Functions”.

In this post (and Article #2, linked below), I address in more detail the nature of the CMC property. I specify models included in the Ladder, and show how one can deliver, via a single model, representation of all models belonging to the Ladder (with the inverse BC transformation as a special case of that model). Furthermore, I point to published evidence demonstrating that models of the Ladder may often substitute, with negligible loss in accuracy, for published models of monotone convexity that had been derived from theoretical, discipline-specific considerations. A minimal sketch of the first steps of the Ladder is given below.
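
The sketch (Python) uses the inverse BC transformation in its standard form; a single parameter traverses the linear, power and exponential models:

```python
import numpy as np

def inverse_box_cox(x, lam):
    """Inverse Box-Cox transformation: one expression spanning the first
    'steps' of the Ladder, controlled by a single parameter lam.

    lam = 1   -> linear:      1 + x
    lam = 0   -> exponential: exp(x)  (the limiting case)
    otherwise -> power-type:  (1 + lam*x) ** (1/lam)
    """
    if lam == 0:
        return np.exp(x)
    return (1.0 + lam * x) ** (1.0 / lam)

x = np.linspace(0.0, 2.0, 5)
for lam in (1.0, 0.5, 0.0):   # descending lam: increasing monotone convexity
    print(f"lam={lam}:", np.round(inverse_box_cox(x, lam), 3))
```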

This blog entry (with the linked article given below) was originally posted on the site of the American Statistical Association (ASA), where the linked article was visible to members only.

Haim Shore_2_ASA_Dec 2013

Categories
My Research on the Bible and Biblical Hebrew

Hebrew-English presentation on the Bible and on biblical Hebrew (with color graphics)
This presentation expounds on various research findings given in my book: “Coincidences in the Bible and in Biblical Hebrew” (Shore, 2 Ed., 2012).

The book is now available for free download at this blog’s home page (“About”).

The presentation is divided into eight parts:

  • “Laban – the Case of a Lost Identity” (Ch. 15 in the book; in Hebrew);
  • “Chance” and “Cold” – two separately developed scientific concepts of entropy that are actually one (and also expressed by a single word root in biblical Hebrew; Ch. 3 in the book; Hebrew);
  • Average lunar month according to Jewish sources (Ch. 18 in the book; Hebrew; See also separate blog entry on the subject);
  • “When a sample of observations are aligned on a straight line”: “A parable” about measuring temperatures on both Celsius and Fahrenheit scales (Ch. 23 in the book; English);
  • Relationships between numerical values of sets of Hebrew words and related physical traits (three consecutive examples with color plots; Hebrew-English):
    • Example 1: Time-cycles (“Day, Month, Year”; Ch. 12 in the book);
    • Example 2: Celestial diameters (“Moon, Earth, Sun”; Ch. 8 in the book);
    • Example 3: Velocity (“Light, Sound, Standstill” or “Lightning, Thunder, Silence”; Ch. 21 in the book);
  • Results from a computer simulation study aimed to estimate probabilities (Hebrew);
  • The planets example (An extensive example of relationships of size-sorted physical traits of celestial bodies to numerically sorted biblical names; Hebrew-English);
  • Genesis creation story – A statistical analysis (English);

To watch the PDF file in presentation mode, open it with Adobe Reader and go to: View -> Full Screen Mode. To navigate the slides, left-click to advance and right-click to return to the previous slide.

Prof Haim Shore presentation_Bible and biblical Hebrew research_March 2016

Categories
My Research on the Bible and Biblical Hebrew

New Articles Related to My Research on the Bible and Biblical Hebrew

In this new blog entry, I deliver links to three new documents related mostly to the statistical analyses associated with my research on the Bible and on biblical Hebrew:

1. Three chapters from my book: “Coincidences in the Bible and in Biblical Hebrew”. These chapters mostly address the statistical perspective of my research work, as expounded in the book. Bookmarks may assist navigating between chapters:

Coincidences in the Bible and in Biblical Hebrew_Book by Haim Shore_2nd Revision_2012_Three Sample Chapters

2. An article in Hebrew, published recently in “Ha-Mahapach 3”, by Rav Zamir Cohen of the Hidabrut Organization:

הידברות מקריות בתורה ובשבת הקודש (Hidabrut: “Coincidences in the Torah and in the Holy Shabbat”)

3. An article invited by Rav Zamir Cohen for the upcoming book “Ha-Mahapach 4”. The article explains how the average lunar month duration can be calculated, from ancient Jewish sources (including the Hebrew Bible), to be 29.530594 days, vs. NASA’s estimate of 29.530589 days. The core of the calculation is reproduced right below, followed by the link to the article.
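
The arithmetic rests on the traditional molad interval, 29 days, 12 hours and 793 chalakim (“parts”), with 1,080 parts to the hour:

```python
# Mean lunar month per ancient Jewish sources: 29 days, 12 hours, 793 parts
month_days = 29 + 12 / 24 + 793 / (24 * 1080)
print(f"{month_days:.6f} days")   # 29.530594 (NASA's estimate: 29.530589)
```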

פרופ שור_משך ירח הלבנה הממוצע_עבור המהפך 4_הידברות (Prof. Shore: “Average Lunar Month Duration”, for Ha-Mahapach 4, Hidabrut)

Categories
My Research in Statistics

Why is Box-Cox transformation so effective?

Comment: Read my latest peer-reviewed article on the subject (2023): DOI: 10.1002/9781118445112.stat08456.

The Box-Cox transformation, and why it is so effective, has intrigued me for many years. I have had the opportunity to talk to both Box and Cox about their transformation (Box and Cox, 1964).

I conversed with the late George Box (deceased last March at age 94) when I was a visitor in Madison, Wisconsin, back in 1993-4.

A few years later I talked to David Cox at a conference on reliability in Bordeaux (MMR’2000).

I asked them both the same question, and I received the same response.

The question was: What was the theory that led to the derivation of the Box-Cox transformation?

The answer was: “No theory. This was a purely empirical observation”.

The question therefore remains: Why is the Box-Cox transformation so effective, in particular when applied to a response variable in the framework of linear regression analysis?
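
Before turning to the article, it may help to see concretely what the transformation does. A minimal sketch with illustrative data (a log-normal sample, for which the estimated lambda should come out near zero, i.e., close to the log transformation):

```python
import numpy as np
from scipy import stats

def box_cox(y, lam):
    """The Box-Cox transformation of a positive response variable."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=2000)   # right-skewed response

lam = stats.boxcox_normmax(y)   # estimate the lambda that best normalizes y
print(f"estimated lambda: {lam:.2f}")
print(f"skewness before: {stats.skew(y):.2f}, "
      f"after: {stats.skew(box_cox(y, lam)):.2f}")
```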

In a new article, posted in my personal library at the American Statistical Association (ASA) site, I discuss this issue at some length. The article is now generally available for download here (Article #1 below).

Haim Shore_1_ASA_Nov 2013

Categories
My Research on the Bible and Biblical Hebrew

An Interview with the Author in The Jerusalem Post (Dec. 4th, 2009)

Link to an interview with Professor Haim Shore, about his statistical research on the Bible and biblical Hebrew, in The Jerusalem Post (Dec. 4th, 2009):

An Interview with the Author in the Jerusalem Post (Dec. 4th, 2009)