1. Introduction
Transfer pricing analysis for intangible-intensive corporations requires structural econometric models, not ratio-based profit level indicators (PLIs). The standard operating margin and the Berry ratio are quotients of accounting aggregates. When the underlying structural relationship has a nonzero intercept — as is the case for most of the eleven corporations examined here — ratio-based PLIs are biased estimators. The bias equals alpha / H, where alpha is the regression intercept and H is the harmonic mean of the cost base across observations. This result is the harmonic mean bias theorem established in Silva (2024).
This article reports Huber robust regression estimates of the structural revenue equation for eleven U.S. high-technology corporations over the period 1980–2025. Several corporations have shorter histories that begin when data become available. The corporations included are AMD, Alphabet, Amazon, Apple, IBM, Intel, Meta, Microsoft, Nvidia, Oracle, and Tesla. For Amazon, the consolidated entity is used rather than the AWS division alone, which would be the more appropriate unit of analysis.
The regression model is:
REVT(i) = \alpha + \beta \cdot XOPR(i) + \varepsilon(i)
where REVT is total revenues and XOPR is total operating costs (COGS + XSGA), both in millions of U.S. dollars. The slope coefficient beta is the marginal revenue per dollar of operating cost — the structurally correct markup multiplier. The intercept alpha captures fixed-cost recovery and quasi-rents that are independent of the variable cost base. When alpha ≠ 0, ratio methods fail.
2. Estimation Results
Table 1 reports the Huber robust regression results for all eleven corporations, ranked in descending order of beta. The reported statistics are the slope coefficient beta with its standard error (SE), the coefficient of variation CV(beta) = SE / beta, the approximate t-statistic, the significance of the intercept, and the coefficient of determination R².
Table 1. Huber Robust Regression Results: REVT = alpha + beta × XOPR
| Company | Beta | SE | CV(%) | t-stat | Alpha sig. | R² |
|---|---|---|---|---|---|---|
| Nvidia | 2.7739 | 0.0461 | 1.66% | ~60 | Yes | 98.23% |
| Meta | 1.9440 | 0.0440 | 2.26% | ~44 | No | 98.17% |
| Microsoft | 1.8782 | 0.0286 | 1.52% | ~66 | No | 97.40% |
| Oracle | 1.7798 | 0.0184 | 1.03% | ~97 | Yes | 99.60% |
| Intel | 1.7028 | 0.0262 | 1.54% | ~65 | No | 92.83% |
| Apple | 1.4892 | 0.0046 | 0.31% | ~324 | Yes | 99.86% |
| Alphabet | 1.4810 | 0.0083 | 0.56% | ~178 | No | 99.40% |
| AMD | 1.2494 | 0.0164 | 1.31% | ~76 | Yes | 99.40% |
| Tesla | 1.1643 | 0.0075 | 0.64% | ~155 | Yes(68%) | 99.69% |
| IBM | 1.1529 | 0.0296 | 2.57% | ~39 | Yes | 97.12% |
| Amazon | 1.1249 | 0.0030 | 0.27% | ~375 | Yes | 99.60% |
Note: t-stat is approximate, computed as beta / SE. Alpha significance is assessed at the 68% confidence level (±1 SE), which is the operative arm’s length standard in this framework. Tesla’s intercept is significant at 68% but not at 95%.
3. The Harmonic Mean Bias of Ratio PLIs
The bias of ratio-based PLIs follows directly from the structural model. Given REVT(i) = alpha + beta × XOPR(i), define the ratio markup as m(i) = REVT(i) / XOPR(i). Substituting the structural equation:
m(i) = \frac{\alpha}{XOPR(i)} + \beta
Taking the expectation across N observations and recognizing that the mean of 1 / XOPR(i) equals 1 / H, where H is the harmonic mean of XOPR:
\mathbb{E}[m] = \beta + \frac{\alpha}{H}
The bias of the ratio mean as an estimator of beta is therefore alpha / H. This bias propagates to all quartile statistics derived from m(i), including the interquartile range. The bias term alpha / H does not vanish as N grows — it is not a sampling artifact but a structural consequence of applying a ratio to data generated by an affine model.
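The bias identity can be checked numerically. A short sketch with hypothetical parameter values (noiseless, so the identity holds exactly):

```python
import numpy as np

# Hypothetical parameters for illustration (not the article's estimates).
alpha, beta = 2_000.0, 1.5
xopr = np.array([1_000.0, 5_000.0, 20_000.0, 80_000.0])  # cost base, $M
revt = alpha + beta * xopr          # noiseless affine model

m = revt / xopr                     # ratio PLI per observation
H = len(xopr) / np.sum(1.0 / xopr)  # harmonic mean of XOPR

# The mean ratio overstates beta by exactly alpha / H (= 0.63125 here).
print(np.mean(m), beta + alpha / H)  # both equal 2.13125 up to rounding
```

Note that the individual ratios range from 1.525 to 3.5 across the size distribution, which is why quartile statistics of m(i) inherit the same structural distortion.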
Of the eleven corporations in the sample, seven have statistically significant intercepts at the 68% confidence level: AMD, Amazon, Apple, IBM, Nvidia, Oracle, and Tesla. For these seven, any ratio-based PLI applied in a transfer pricing analysis is a biased benchmark. The magnitude of the bias depends on alpha and H; both are recoverable from the regression output.
4. The Clean Cases: Alpha Insignificant
Four corporations have statistically insignificant intercepts: Alphabet, Intel, Meta, and Microsoft. For these four, the structural model is consistent with alpha = 0, meaning the ratio markup m(i) = REVT(i) / XOPR(i) is an unbiased estimator of beta. Ratio methods are defensible for these corporations, subject to the usual comparability requirements.
Alphabet (beta = 1.481, SE = 0.0083) and Meta (beta = 1.944, SE = 0.044) are pure digital advertising platforms with no significant structural fixed-revenue component outside the proportional cost relationship. Microsoft (beta = 1.8782, SE = 0.0286) spans enterprise software, cloud, and hardware, but the proportional specification holds. Intel (beta = 1.7028, SE = 0.0262) has the lowest R² in the sample at 92.83%, reflecting structural turbulence from competitive losses to AMD post-2017 and ongoing decisions about the integrated device manufacturer model.
5. Notable Individual Results
5.1 Nvidia: Structural Outlier
Nvidia has the highest slope coefficient in the sample at beta = 2.7739 (SE = 0.0461, CV = 1.66%). Every dollar of operating cost is associated with $2.77 in revenue — approximately double the sample median. This reflects GPU and AI accelerator pricing power anchored in the CUDA ecosystem, which constitutes a near-monopoly position in the market for training large language models and associated inference workloads. The 68% arm’s length range around the slope is [2.7278, 2.8200].
The significant intercept for Nvidia means that ratio-based PLIs are doubly problematic: not only is the level of beta far outside the range of any plausible comparable set, but the harmonic mean bias alpha / H additionally distorts any quartile statistic applied to the ratio.
5.2 Apple: Tight Precision, Structural Bias
Apple’s estimate beta = 1.4892 (SE = 0.0046) yields the tightest CV in the sample at 0.31% and an approximate t-statistic of 324. The 68% arm’s length range is [1.4846, 1.4938]. The high precision reflects Apple’s consistent proportional relationship between revenues and operating costs over an extended sample period. Nevertheless, the significant intercept means that a simple ratio of revenues to costs is a biased estimator of the structural markup.
5.3 The Low-Markup Cluster
Amazon (beta = 1.1249), IBM (beta = 1.1529), and Tesla (beta = 1.1643) form a coherent low-markup cluster, despite all three being classified under broad technology or technology-adjacent industry codes. Amazon’s retail and logistics operations suppress the consolidated margin even with AWS revenues included. IBM is in a services transition with compressed margins following the Kyndryl spinoff. Tesla is a capital-intensive manufacturer whose cost structure is not analogous to a software or platform company. All three have very tight standard errors (CV ≤ 0.64%), reflecting stable structural relationships.
All three corporations also have significant intercepts, which means that ratio-based operating margins or Berry ratios applied to these corporations in a comparables analysis would be biased. The direction and magnitude of the bias depend on the sign and size of alpha relative to the harmonic mean of XOPR.
5.4 Tesla: Borderline Intercept
Tesla’s intercept is reported as significant at the 68% confidence level but not at 95%. In the EdgarStat framework, the 68% CI — that is, beta ± SE(beta) — is the operative arm’s length range for a slope coefficient. By the same criterion, an intercept that clears |t| > 1 is treated as materially nonzero for bias-assessment purposes. Tesla therefore belongs in the biased group under this framework. A critic relying on the conventional 95% threshold would classify Tesla differently, so the methodological choice should be stated explicitly in any formal proceeding.
6. Pooled Estimation and the Double-Logarithmic Specification
The pooled linear regression across all eleven corporations yields an intercept that is significant and beta = 1.3686 (SE = 0.0048, R² = 95.4%). The double-logarithmic specification:
\ln(REVT) = \alpha_{\ln} + \beta_{\ln} \cdot \ln(XOPR) + \eta
yields an insignificant intercept and beta(ln) = 1.0378 (SE = 0.0056, R² = 98.6%). The log-log slope is an elasticity: it measures the percentage change in revenues per one-percent change in operating costs.
The 68% confidence interval for beta(ln) is [1.0378 − 0.0056, 1.0378 + 0.0056] = [1.0322, 1.0434]. This interval does not include unity: the lower bound 1.0322 exceeds 1.0000.
The correct interpretation is that beta(ln) = 1.0378 is economically close to unity: the departure from proportionality is 3.78 percent, which is small relative to the cross-sectional dispersion of individual beta values ranging from 1.1249 to 2.7739. The near-unit elasticity in log space supports treating the levels-linear model REVT = alpha + beta × XOPR as a well-specified first-order approximation. The higher R² of the log-log model (98.6% versus 95.4%) suggests the presence of heteroskedasticity (unequal variance) in the levels specification — consistent with the wide dispersion in corporation revenue size across the eleven corporations — without invalidating the linear model.
The insignificant intercept in log space has a direct implication for ratio methods: it means the geometric mean of REVT / XOPR is a consistent estimator of exp(alpha(ln)) × XOPR^(beta(ln) – 1). Because beta(ln) ≈ 1, this reduces to approximately exp(alpha(ln)), which is a constant. In other words, the log-log model implies that the geometric mean ratio is approximately stable across the size distribution — not because the ratio is the correct estimator of the structural markup, but because the elasticity near one makes size effects approximately cancel in geometric space. The structural markup beta from the levels regression remains the preferred estimated coefficient.
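The geometric-mean implication can be illustrated with a noiseless log-log model. The parameter values below are hypothetical, chosen near the pooled estimates:

```python
import numpy as np

# Noiseless log-log model with illustrative parameters near the pooled fit.
alpha_ln, beta_ln = 0.31, 1.0378
xopr = np.array([1e3, 1e4, 1e5, 1e6])   # cost base spanning three decades
revt = np.exp(alpha_ln) * xopr ** beta_ln

ratio = revt / xopr
geo_mean_ratio = np.exp(np.mean(np.log(ratio)))  # geometric mean of the ratio

# Identity: geometric mean of the ratio = exp(alpha_ln) * GM(XOPR)^(beta_ln - 1),
# which is nearly constant in size because beta_ln is close to 1.
gm_xopr = np.exp(np.mean(np.log(xopr)))
implied = np.exp(alpha_ln) * gm_xopr ** (beta_ln - 1)
print(geo_mean_ratio, implied)  # equal by construction
```

Because the exponent beta_ln − 1 = 0.0378 is small, the size term GM(XOPR)^(beta_ln − 1) varies slowly, which is the sense in which the geometric mean ratio is approximately stable across the size distribution.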
7. Transfer Pricing Implications
The cross-sectional dispersion in beta — from 1.1249 for Amazon to 2.7739 for Nvidia — is economically large. A comparables set that includes corporations from both ends of this distribution, and applies a quartile-based PLI, is constructing an arm’s length range from structurally incomparable entities. The IQR of the ratio markup is biased for seven of the eleven corporations, and the bias is not uniform across the sample.
The regression-based approach resolves this problem in two steps. First, the structural equation REVT = alpha + beta × XOPR identifies beta as the arm’s length markup parameter. The 68% CI around beta — that is, [beta − SE(beta), beta + SE(beta)] — constitutes the arm’s length range. Second, the significance of the intercept is a formal test of whether ratio methods are admissible for the controlled party under examination. An insignificant intercept is a sufficient condition for ratio admissibility; a significant intercept is a sufficient condition for exclusion of ratio-based PLIs.
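The two-step rule can be expressed as a small decision helper. The function below is a hypothetical sketch, with the intercept test at the 68% level (|t| > 1); the intercept figures passed in are illustrative, not estimates from the article:

```python
def assess(beta, se_beta, alpha, se_alpha):
    """Two-step rule: 68% arm's length range around beta, plus a 68%-level
    intercept test deciding whether ratio-based PLIs are admissible."""
    arm_length_range = (beta - se_beta, beta + se_beta)
    ratio_admissible = abs(alpha / se_alpha) <= 1.0  # |t| <= 1: alpha insignificant
    return arm_length_range, ratio_admissible

# Apple's slope and SE from Table 1; the intercept values are illustrative,
# chosen only so that the intercept t-statistic exceeds 1.
al_range, admissible = assess(1.4892, 0.0046, alpha=1200.0, se_alpha=300.0)
print(al_range)     # approximately (1.4846, 1.4938)
print(admissible)   # False: a significant intercept excludes ratio PLIs
```

The returned range reproduces the 68% arm's length interval reported for Apple in Section 5.2, and the boolean implements the admissibility screen described above.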
In formal proceedings, these results carry Lakatosian evidential weight: the structural model is algebraically explicit, the bias expression is derived without distributional assumptions, and the test of intercept significance is empirically refutable. Claims based on ratio PLIs for corporations with significant intercepts satisfy neither the algebraic nor the empirical admissibility criteria. They are, in the terms used by the EdgarStat research program, inadmissible as arm’s length benchmarks.
8. Conclusion
Huber robust regression of revenues on operating costs for eleven U.S. high-technology corporations over 1980–2025 yields structurally interpretable markup parameters that are directly applicable as arm’s length benchmarks in transfer pricing analysis. Seven of eleven corporations have statistically significant intercepts, establishing that ratio-based PLIs are biased estimators for the majority of the sample. The bias is alpha / H, the ratio of the regression intercept to the harmonic mean of the cost base — a result that is algebraically exact and does not depend on distributional assumptions.
Nvidia is a structural outlier with beta = 2.7739, reflecting the pricing power of AI hardware and the CUDA monoculture. The low-markup cluster of Amazon, IBM, and Tesla reflects capital intensity and services compression, not membership in a comparable peer group with Nvidia, Meta, or Microsoft. The pooled double-logarithmic elasticity of 1.0378, while not including unity in the 68% CI, is economically close to unity, which supports the proportional-in-levels specification as a first-order approximation.
The regression framework provides three things that ratio methods cannot: an unbiased estimate of the structural markup, a formal test of the admissibility of ratio methods for any specific corporation, and an arm’s length range — the 68% CI around the slope — that is grounded in econometric theory rather than in the arbitrary percentiles of a biased distribution.