A laboratory manager wants to obtain the best (most accurate) result for the determination of chloride in a particular sample. She assigns the problem to two of her top analysts, who report back the following results based on gravimetric and spectrophotometric methods, respectively:
x̄1 = 13.65%   x̄2 = 13.73%
s1 = 0.05%    s2 = 0.20%
To reduce the error, she decides to average the two results. If she does, what will be the standard deviation of the average of these two results? Does averaging reduce the reported error? Why or why not?
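One way to check the arithmetic is a short sketch, assuming the two results are independent so that standard propagation of error applies to the simple average, s_avg = sqrt(s1² + s2²)/2. The numbers below are taken from the problem statement; the variable names are illustrative.

```python
import math

# Standard deviations (%) reported by the two analysts
s1 = 0.05  # gravimetric
s2 = 0.20  # spectrophotometric

# Propagation of error for the mean of two independent results:
# avg = (x1 + x2)/2, so s_avg = sqrt(s1**2 + s2**2) / 2
s_avg = math.sqrt(s1**2 + s2**2) / 2
print(round(s_avg, 3))  # ~0.103
```

Since 0.103% exceeds the 0.05% of the gravimetric result alone, a simple (unweighted) average actually worsens the reported uncertainty; the more precise result dominates, which is why a weighted average (weights inversely proportional to variance) is the usual remedy.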