We wish to calculate a weighted average of a set of sample averages, given their standard deviations. How do we do that?
The objective is to find a weighting factor, alpha, that minimizes the variance of the weighted average, namely (for two averages):
Minimize { Variance[ α·Average1 + (1-α)·Average2 ] }
We first calculate the variance to obtain (Var is short for Variance; the samples behind the averages are assumed independent):
Variance[ α·Average1 + (1-α)·Average2 ] =
= α² Var(Average1) + (1-α)² Var(Average2).
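As a quick sanity check of this formula, the following Python sketch simulates two independent samples many times and compares the empirical variance of the weighted combination with the theoretical value. The sample sizes, standard deviations, and the value of α are illustrative choices, not part of the derivation.

```python
# Monte Carlo check of Var[alpha*Average1 + (1-alpha)*Average2]
# = alpha^2 * Var(Average1) + (1-alpha)^2 * Var(Average2).
# All numbers below (n1, n2, sigma1, sigma2, alpha) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 30, 50              # sample sizes
sigma1, sigma2 = 2.0, 3.0    # population standard deviations
alpha = 0.4                  # an arbitrary weight, not yet the optimal one

reps = 100_000
avg1 = rng.normal(0.0, sigma1, size=(reps, n1)).mean(axis=1)
avg2 = rng.normal(0.0, sigma2, size=(reps, n2)).mean(axis=1)
combined = alpha * avg1 + (1 - alpha) * avg2

empirical = combined.var()
theoretical = alpha**2 * sigma1**2 / n1 + (1 - alpha)**2 * sigma2**2 / n2
print(empirical, theoretical)  # the two numbers should agree closely
```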
Differentiating this variance expression with respect to α and equating to zero, we obtain:
2α Var(Average1) - 2(1-α) Var(Average2) = 0, and the optimal α is:
α* = Var(Average2) / [ Var(Average1) + Var(Average2) ],
where Var(Average i) = (variance of sample i)/ni, with ni the corresponding sample size.
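As a small illustration, here is a minimal Python sketch of the two-average case; the means, standard deviations, and sample sizes are made-up numbers used only to show the computation.

```python
def optimal_alpha(var_avg1, var_avg2):
    """Weight on Average1 that minimizes the variance of the combination."""
    return var_avg2 / (var_avg1 + var_avg2)

# Illustrative sample means, standard deviations, and sample sizes.
mean1, sd1, n1 = 10.2, 2.0, 30
mean2, sd2, n2 = 9.5, 3.0, 50

v1 = sd1**2 / n1   # Var(Average1)
v2 = sd2**2 / n2   # Var(Average2)

alpha = optimal_alpha(v1, v2)
combined = alpha * mean1 + (1 - alpha) * mean2
combined_var = alpha**2 * v1 + (1 - alpha)**2 * v2
print(alpha, combined, combined_var)
```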
We may wish to adapt this reply to specific needs. For example, for three averages we have:
Variance[ α1·Average1 + α2·Average2 + (1-α1-α2)·Average3 ] =
= α1² Var(Average1) + α2² Var(Average2) + (1-α1-α2)² Var(Average3)
To minimize this expression, we take partial derivatives with respect to α1 and α2. Equating each to zero, we obtain two linear equations in two unknowns:
2α1 Var(Average1) - 2(1-α1-α2) Var(Average3) = 0,
2α2 Var(Average2) - 2(1-α1-α2) Var(Average3) = 0,
or:
α1 = v3 / [ v1 + v3 + (v1·v3)/v2 ],
α2 = v3 / [ v2 + v3 + (v2·v3)/v1 ],
where vi is Var(Average i) (i=1,2,3).
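A short Python sketch of the three-average case, using the two closed-form expressions above; the variance values are illustrative. It also checks the easily verified fact that these weights coincide with inverse-variance weights, i.e. each αi is proportional to 1/vi.

```python
def three_average_weights(v1, v2, v3):
    """Closed-form weights from the two equations above (vi = Var(Average i))."""
    a1 = v3 / (v1 + v3 + v1 * v3 / v2)
    a2 = v3 / (v2 + v3 + v2 * v3 / v1)
    a3 = 1 - a1 - a2
    return a1, a2, a3

v1, v2, v3 = 0.5, 1.0, 2.0          # illustrative variances of the three averages
print(three_average_weights(v1, v2, v3))

inv = [1 / v1, 1 / v2, 1 / v3]       # inverse-variance weights for comparison
print([w / sum(inv) for w in inv])   # should match the line above
```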
Since “in general, a system with the same number of equations and unknowns has a single unique solution” (Wikipedia, “System of linear equations”), extension to a higher number of averages (m > 3) is straightforward: it requires solving a system of m-1 linear equations in m-1 unknowns.
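For general m, a sketch of setting up and solving that (m-1)×(m-1) system with NumPy follows; the variance values are again illustrative, and the final inverse-variance line is just an independent cross-check.

```python
import numpy as np

def min_variance_weights(variances):
    """Solve the (m-1)x(m-1) linear system described above for alpha_1..alpha_{m-1};
    the last weight is 1 minus their sum. `variances` holds Var(Average i)."""
    v = np.asarray(variances, dtype=float)
    m = len(v)
    vm = v[-1]
    # Stationarity conditions: alpha_i * v_i - (1 - sum_j alpha_j) * v_m = 0,
    # rewritten as alpha_i * v_i + v_m * sum_j alpha_j = v_m for i = 1..m-1.
    A = np.full((m - 1, m - 1), vm) + np.diag(v[:-1])
    b = np.full(m - 1, vm)
    alphas = np.linalg.solve(A, b)
    return np.append(alphas, 1.0 - alphas.sum())

v = np.array([0.5, 1.0, 2.0, 4.0])        # illustrative Var(Average i) values
print(min_variance_weights(v))
print((1 / v) / (1 / v).sum())            # inverse-variance cross-check
```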
(This post appears also on my personal page at ResearchGate)