Applied Statistics
This page is intended to be a CrySP FAQ for statistics in papers.
How many sigfigs do I report?
Quick rule-of-thumb answer
Report the standard deviation to 1 sigfig (or, if you're feeling bold, 2 sigfigs if you've run at least 51 experiments, or 3 sigfigs if you've run at least 5,001 experiments). Report summary statistics to the precision of the first sigfig of the standard deviation, or of the measurement error, whichever is less precise. E.g., if your stddev is 0.02, you can report your mean to two decimal places.
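To make the rule of thumb concrete, here is a minimal Python sketch (the report helper and the sample data are hypothetical, and it assumes measurement error is negligible next to the sample stddev):

    import math
    import statistics

    def report(samples):
        # Round the mean and stddev to the decimal place of the stddev's
        # first significant figure, per the rule of thumb above.
        mean = statistics.mean(samples)
        s = statistics.stdev(samples)  # sample standard deviation; must be > 0
        place = -math.floor(math.log10(s))  # e.g., s = 0.03 -> place = 2
        return f"{round(mean, place)} +/- {round(s, place)}"

    print(report([10.21, 10.25, 10.23, 10.19, 10.27]))  # "10.23 +/- 0.03"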
Full answer
The point of significant figures is to represent the precision to which results are being reported. There are two limits to precision: measurement error (e.g., "how many lines are on my ruler and how accurate are they", "how many times does my clock tick in a second and how evenly spaced are they"), and variability in the distribution of the phenomenon being measured. The former requires some simple rules, which you likely learned in high school science class. The latter involves some math.
Since we want to know "how many digits actually represent something meaningful in this summary statistic" (typically the arithmetic mean, which we'll just use from here on out), an intuitive framing of what we need to figure out is: at what point is the variability in the data just as liable to 'cause' that digit as the actual underlying distribution mean is? A more correct framing is: how many digits can we report such that we'd expect someone else running this identical experiment to get the same digits, except possibly the very last one? To estimate this, we use the standard deviation. If the standard deviation were 0.02, then we could report a mean of 10.23, or 111.50, or 0.01, etc. Any more than that, say 10.234, and the extra digits would just be reporting noise from the variability in the measurements, and so would be reproducible only by sheer chance.
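One way to convince yourself of this is a quick simulation. In the sketch below (the "true" mean and stddev are invented for illustration), repeated draws from the same distribution agree on the digits down to the stddev's first sigfig, while the digits past it come out differently from draw to draw:

    import random

    random.seed(1)
    true_mean, s = 10.2340, 0.02  # hypothetical "true" values
    # With s = 0.02, anything past the second decimal place of a
    # measurement is driven by the noise, not by true_mean:
    for _ in range(4):
        a, b = random.gauss(true_mean, s), random.gauss(true_mean, s)
        print(f"{a:.4f}  {b:.4f}")  # digits past the 2nd decimal never reproduce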
The next obvious question is: how many sigfigs does the standard deviation itself have? The rule of thumb is just one digit, because that's the only thing that matters for the precision of the summary statistic. That is, it doesn't matter whether your standard deviation is 0.020 or 0.029; you still report the mean to the second decimal place, since any digits beyond that are governed more by the variability of the measurements than by the true distribution's mean (just as a stddev of 0.02 vs. 0.03 doesn't change the sigfigs you report; all we care about is the order of magnitude). The one exception is the edge case where the extra digit would change the position of the leading sigfig, e.g. 0.01 vs. 0.0095, but that's a fairly rare scenario.
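A tiny sketch of that order-of-magnitude point (leading_sigfig_place is a hypothetical helper, the same place computation as in the earlier sketch):

    import math

    def leading_sigfig_place(s):
        # Decimal place (digits after the point) of s's first significant figure.
        return -math.floor(math.log10(s))

    for s in (0.020, 0.029, 0.03, 0.01, 0.0095):
        print(s, "->", leading_sigfig_place(s))
    # 0.02, 0.029, 0.03, and 0.01 all give 2; the 0.0095 edge case gives 3.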
But if you do want to know the number of sigfigs in your stddev, the answer is that you just repeat the process: report to the precision of the first sigfig of the estimated stddev of your stddev. The formula for estimating the stddev of the stddev of a normal distribution is Δs = s/√(2n-2), where Δs is the estimated stddev of the stddev, s is the sample stddev, and n is the number of samples. (There is a more precise formula at this stats stack exchange post; the formula given here is an estimator derived independently in the comments on that post and at this statistics for physics students site.) So suppose you ran 51 experiments. Then Δs = s/10, which means you can report the standard deviation to 2 digits (the first digit of Δs will be one position to the right of the first digit of s). We get our third digit at 5,001 experiments, our fourth at 500,001 experiments, etc., but we've long since exited human interpretability at that point.
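That arithmetic is easy to mechanize. A short sketch (stddev_sigfigs is a hypothetical helper that applies the order-of-magnitude rule of thumb from this section, not an exact analysis):

    import math

    def stddev_sigfigs(n):
        # Reportable sigfigs in s given n samples, using the estimator
        # delta_s = s / sqrt(2n - 2): each factor of 10 between s and
        # delta_s buys one more significant figure.
        return 1 + math.floor(math.log10(math.sqrt(2 * n - 2)))

    for n in (10, 51, 5001, 500001):
        print(n, "->", stddev_sigfigs(n))  # 1, 2, 3, 4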