Expressing answers in terms of the original units for Box-Cox transformed data

For some measurements, the results of an analysis are appropriately presented on the transformed scale. In most cases, however, it's desirable to present the results on the original scale of measurement (otherwise your work is more or less worthless).

For example, in the case of log-transformed data, a problem with interpretation on the original scale arises because the mean of the logged values is not the log of the mean: taking the antilogarithm of the estimate of the mean on the log scale does not give an estimate of the mean on the original scale.

If, however, the log-transformed data have a symmetric distribution, the following relationships hold (the first equality because of symmetry, the second because the log preserves ordering):

$$\text{Mean}[\log (Y)] = \text{Median}[\log (Y)] = \log[\text{Median} (Y)]$$

(the antilogarithm of the mean of the log values is the median on the original scale of measurements).
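A quick numerical check of this, as a minimal sketch using simulated lognormal data (the parameters and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
# Lognormal data: log(y) ~ Normal(1, 0.8), so log(y) is symmetric
y = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)

antilog_of_mean_log = np.exp(np.mean(np.log(y)))  # back-transformed mean of the logs

print(np.mean(y))           # ~ exp(1 + 0.8**2/2) = 3.74, the original-scale mean
print(np.median(y))         # ~ exp(1) = 2.72, the original-scale median
print(antilog_of_mean_log)  # ~ 2.72: matches the median, not the mean
```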

So I can only make inferences about the difference (or the ratio) of the medians on the original scale of measurement.

Two-sample t-tests and confidence intervals are most reliable when the populations are roughly normal with approximately equal standard deviations, so we may be tempted to use the Box-Cox transformation to make the normality assumption hold (I also think it is a variance-stabilizing transformation).
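For concreteness, fitting a Box-Cox transformation is one line in SciPy (a sketch; the simulated data below stand in for one of your samples):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.5, sigma=0.7, size=200)  # skewed, positive data

# Maximum-likelihood estimate of the Box-Cox exponent lambda
y_bc, lam = stats.boxcox(y)
print(f"estimated lambda: {lam:.3f}")  # near 0 here, i.e. a log-like transform

# The transformed data should look much closer to normal
print(stats.shapiro(y_bc).pvalue)      # normality check on the transformed scale
```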

However, if we apply t-tools to the Box-Cox transformed data, we will get inferences about the difference in means of the transformed data. How can we interpret those on the original scale of measurement? (The mean of the transformed values is not the transformed mean.) In other words, taking the inverse transform of the estimate of the mean on the transformed scale does not give an estimate of the mean on the original scale.

Can I also make inferences only about the medians in this case? Is there a transformation that will allow me to go back to the means (on the original scale)?

This question was initially posted as a comment here.

If you want inferences specifically about the mean of the original variable, then don't use a Box-Cox transformation. IMO Box-Cox transformations are most useful when the transformed variable has its own interpretation, and the Box-Cox transformation only helps you find the right scale for analysis - this turns out to be the case surprisingly often. Two unexpected exponents that I found this way were 1/3 (when the response variable was bladder volume) and -1 (when the response variable was breaths per minute).

The log-transformation is probably the only exception to this. The mean on the log-scale corresponds to the geometric mean in the original scale, which is at least a well-defined quantity.

If the Box-Cox transformation yields a symmetric distribution, then the mean of the transformed data back-transforms to the median on the original scale: symmetry makes the mean and median coincide on the transformed scale, and the median is preserved under any monotonic transformation, including the Box-Cox transformations, the IHS (inverse hyperbolic sine) transformations, etc. So inferences about the means of the transformed data correspond to inferences about the medians on the original scale.
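In practice, that means a t-interval for the mean on the Box-Cox scale back-transforms into an interval for the median on the original scale. A sketch, using `scipy.special.inv_boxcox` to invert the transformation:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.6, size=80)

y_bc, lam = stats.boxcox(y)

# 95% t-interval for the mean on the transformed scale
n = len(y_bc)
m, se = np.mean(y_bc), np.std(y_bc, ddof=1) / np.sqrt(n)
lo, hi = stats.t.interval(0.95, df=n - 1, loc=m, scale=se)

# Back-transforming the endpoints gives a 95% CI for the *median* of y
print(inv_boxcox(lo, lam), inv_boxcox(hi, lam))
print(np.median(y))  # point of comparison
```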

As the original data were skewed (or you wouldn't have used a Box-Cox transformation in the first place), why do you want inferences about the means? I would have thought working with medians would make more sense in this situation. I don't understand why this is seen as a "problem with interpretation on the original scale".

If you want to do inference about means on the original scale, you could consider using inference that doesn't use a normality assumption.

Take care, however. Simply plugging through a straight comparison of means via, say, resampling (either permutation tests or bootstrapping) when the two samples have different variances may be a problem if your analysis assumes the variances to be equal (and equal variances on the transformed scale will be different variances on the original scale if the means differ). Such techniques don't remove the need to think about what you're doing.
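For example, a percentile bootstrap for the difference in original-scale means that resamples each group separately, so it doesn't lean on normality or on equal variances (a minimal sketch with simulated samples; in practice you'd also want to check how well the bootstrap performs at your sample sizes and skewness):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.5, size=60)  # group 1
y = rng.lognormal(mean=1.3, sigma=0.9, size=45)  # group 2, different variance

n_boot = 10_000
diffs = np.empty(n_boot)
for b in range(n_boot):
    # Resample each group independently, preserving the unequal variances
    xb = rng.choice(x, size=len(x), replace=True)
    yb = rng.choice(y, size=len(y), replace=True)
    diffs[b] = yb.mean() - xb.mean()

# 95% percentile interval for the difference in original-scale means
print(np.percentile(diffs, [2.5, 97.5]))
```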

Another approach to consider, if you're more interested in estimation or prediction than testing, is to use a Taylor expansion of the transformed variable to compute the approximate mean and variance after transforming back: where in the usual Taylor expansion you'd write $f(x+h)$, you now write $t[\mu + (Y-\mu)]$, where $Y$ is a random variable with mean $\mu$ and variance $\sigma^2$, and $t()$ is the back-transformation.

If you take expectations, the second term drops out (since $E[Y-\mu]=0$), and people usually keep just the first and third terms, where the third represents an approximation to the bias incurred by simply transforming the mean. Further, if you take the variance of the expansion up to the second term, the first term and the covariance terms drop out, because $t(\mu)$ is a constant, leaving you with a single-term approximation for the variance.
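Written out, with $t'$ and $t''$ evaluated at $\mu$, these are the usual delta-method approximations:

$$E[t(Y)] \approx t(\mu) + \frac{\sigma^2}{2}\, t''(\mu), \qquad \text{Var}[t(Y)] \approx [t'(\mu)]^2\, \sigma^2$$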

--

The easiest case is when you have normality on the log scale, and hence a lognormal distribution on the original scale. If your variance is known (which happens very rarely at best), you can construct lognormal CIs and PIs on the original scale, and you can give a predicted mean from the mean of the distribution of the relevant quantity.
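A sketch of that known-variance case (assuming the log-scale $\sigma$ is known; the lognormal mean is $\exp(\mu + \sigma^2/2)$, which is monotone in $\mu$, so a z-interval for $\mu$ back-transforms into a CI for the original-scale mean):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sigma = 0.5                                    # assumed *known* log-scale sd
log_y = rng.normal(loc=2.0, scale=sigma, size=50)

n = len(log_y)
mu_hat = np.mean(log_y)
z = stats.norm.ppf(0.975)

# 95% CI for mu on the log scale (known sigma, so a z-interval)
lo, hi = mu_hat - z * sigma / np.sqrt(n), mu_hat + z * sigma / np.sqrt(n)

# Point estimate and CI for the original-scale (lognormal) mean exp(mu + sigma^2/2)
print(np.exp(mu_hat + sigma**2 / 2))
print(np.exp(lo + sigma**2 / 2), np.exp(hi + sigma**2 / 2))
```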

If you're estimating both the mean and the variance on the log scale, you can construct log-$t$ intervals (prediction intervals for a new observation, say), but the back-transformed log-$t$ distribution doesn't have any moments, so the mean of a prediction simply doesn't exist.
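A log-$t$ prediction interval for a single future observation can still be formed by back-transforming the endpoints (a sketch; the interval is perfectly valid even though the back-transformed distribution has no mean):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
log_y = np.log(rng.lognormal(mean=1.5, sigma=0.6, size=30))

n = len(log_y)
m, s = np.mean(log_y), np.std(log_y, ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)

# 95% prediction interval for a new observation on the log scale
half = t_crit * s * np.sqrt(1 + 1 / n)

# Back-transformed endpoints: a valid 95% PI on the original scale
print(np.exp(m - half), np.exp(m + half))
```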

You need to think very carefully about precisely what question you're trying to answer.