May 29, 2025
Objective: Evaluate how monoclonal antibody (mAb) agents reduce risk of hospitalization due to COVID-19 via their role in viral clearance.
Objective: Decompose the total effect of mAb agents on hospitalization or death into an indirect effect (through viral RNA) and a direct effect (the remainder).
Given the limit of detection \(\lambda_{\text{LOD}}\) and known form of censoring, \(C_i = \I(M_i > \lambda_{\text{LOD}})\), the censoring mechanism is fully determined by \(M\).
Consequently, in the likelihood expressions, the censoring probability contributes only a degenerate term, \[ \Pr(C_i = c \mid M_i, A_i, L_i) = 1 \quad \text{for } c = \I(M_i > \lambda_{\text{LOD}}), \] so it drops out of the observed-data likelihood.
Then, we can express the observed-data log-likelihood as follows
Observed-data log-likelihood (under left-censoring at the LOD): \[ \begin{aligned} \ell_{\text{obs}}(\boldsymbol{\theta}) = \sum_{i : C_i = 1} &\left[\log \Pr(Y_i \mid M_i, A_i, L_i; \alpha) + \log f(M_i \mid A_i, L_i; \beta) \right] \\ + &\sum_{i : C_i = 0} \log \int_0^{\lambda_{\text{LOD}}} \Pr(Y_i \mid M_i, A_i, L_i; \alpha) \cdot f(M \mid A_i, L_i; \beta) \, dM \end{aligned} \]
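As a concrete sketch, the observed-data log-likelihood above can be evaluated numerically. The code below is illustrative, not the study's implementation: it assumes a simple logistic outcome model and a lognormal mediator model with hypothetical coefficients, and approximates the censored-unit integral over \((0, \lambda_{\text{LOD}})\) by Gauss–Legendre quadrature.

```python
import numpy as np
from scipy import stats

# Hypothetical working models (coefficients are illustrative):
#   log M | A ~ N(beta0 + beta1*A, sigma^2)
#   Y | M, A  ~ Bernoulli(expit(alpha0 + alpha1*A + alpha2*M))
def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def obs_loglik(alpha, beta, sigma, y, m, c, a, lod, n_grid=64):
    """Observed-data log-likelihood under left-censoring at the LOD.

    c == 1: M observed (M > LOD); contributes log Pr(Y|M,A) + log f(M|A).
    c == 0: M censored; contributes the log of the integral of
            Pr(Y|M,A) * f(M|A) over M in (0, LOD), via quadrature.
    """
    ll = 0.0
    # Gauss-Legendre nodes/weights mapped from (-1, 1) onto (0, lod)
    nodes, wts = np.polynomial.legendre.leggauss(n_grid)
    grid = 0.5 * lod * (nodes + 1.0)
    gwts = 0.5 * lod * wts
    for yi, mi, ci, ai in zip(y, m, c, a):
        mu = beta[0] + beta[1] * ai
        if ci == 1:                            # M observed above the LOD
            p = expit(alpha[0] + alpha[1] * ai + alpha[2] * mi)
            ll += stats.bernoulli.logpmf(yi, p)
            ll += stats.lognorm.logpdf(mi, s=sigma, scale=np.exp(mu))
        else:                                  # M left-censored at the LOD
            p = expit(alpha[0] + alpha[1] * ai + alpha[2] * grid)
            integrand = (stats.bernoulli.pmf(yi, p)
                         * stats.lognorm.pdf(grid, s=sigma, scale=np.exp(mu)))
            ll += np.log(np.sum(gwts * integrand))
    return ll

# Illustrative call with hypothetical parameter values and a tiny dataset
ll = obs_loglik(alpha=np.array([0.0, 2.0, 1.0]), beta=np.array([-1.0, 1.0]),
                sigma=0.5, y=np.array([1, 0]), m=np.array([0.5, np.nan]),
                c=np.array([1, 0]), a=np.array([1, 0]), lod=0.2)
```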
Target density: \(f(m_i^{\star(j)} \mid Y_i, C_i = 0, a_i, l_i)\)—direct estimation is expensive!
Importance weights: \[ \frac{\text{Target}}{\text{Proposal}} = \frac{ f(m_i^{\star(j)} \mid Y_i, C_i = 0, a_i, l_i)}{ f(m_i^{\star(j)} \mid a_i, l_i)} \]
Avoid explicit evaluation of the joint density via the identity: \[ \begin{aligned} &\frac{\Pr(Y_i,\ M_i = m_i^{\star(j)}, C_i = 0 \mid a_i, l_i) / f(m_i^{\star(j)} \mid a_i, l_i)}{ \sum_{k=1}^S \Pr(Y_i,\ M_i = m_i^{\star(k)},\ C_i = 0 \mid a_i, l_i) / f(m_i^{\star(k)} \mid a_i, l_i)} \\[1.2em] = &\frac{ f(m_i^{\star(j)} \mid Y_i,\ C_i = 0, a_i, l_i) / f(m_i^{\star(j)} \mid a_i, l_i)}{ \sum_{k=1}^S f(m_i^{\star(k)} \mid Y_i,\ C_i = 0,\ a_i, l_i) / f(m_i^{\star(k)} \mid a_i, l_i)} \end{aligned} \]
Normalized Importance Weights
\[ w_{ij} = \frac{\tilde{w}_{ij}}{\sum_{k=1}^S \tilde{w}_{ik}}, \quad \tilde{w}_{ij} \propto \frac{ \Pr(Y_i, M_i = m_i^{\star(j)}, C_i = 0 \mid a_i, l_i)}{ f(m_i^{\star(j)} \mid a_i, l_i) } \]
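A minimal numerical sketch of the self-normalized weights for a single censored unit. Coefficients are illustrative (covariate terms are dropped for brevity): because the proposals \(m_i^{\star(j)}\) are drawn from \(f(m \mid a_i, l_i)\), that density cancels against the mediator-model factor in the joint, leaving \(\tilde{w}_{ij} \propto \Pr(Y_i \mid m_i^{\star(j)}, a_i) \cdot \I(m_i^{\star(j)} < \lambda_{\text{LOD}})\).

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative setup for one censored unit i: draw S proposals m*_j from
# the mediator model f(M | a_i), then weight by the remaining joint factors.
S, lod = 200, 0.2
a_i, y_i = 1, 1
mu_i, sigma = -1.5, 0.5                       # log M | A ~ N(mu_i, sigma^2)
m_star = rng.lognormal(mean=mu_i, sigma=sigma, size=S)

# tilde_w_j ∝ Pr(Y_i, M=m*_j, C=0 | a_i) / f(m*_j | a_i)
#           = Pr(Y_i | m*_j, a_i) * 1(m*_j < lod),
# since the proposal density cancels with the mediator-model factor.
p_y = expit(-1.0 + 2.5 * a_i + 1.75 * m_star + 0.5 * a_i * m_star)
tilde_w = np.where(m_star < lod, p_y**y_i * (1 - p_y)**(1 - y_i), 0.0)

w = tilde_w / tilde_w.sum()                   # normalized fractional weights
```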
The numerator is the joint density of the observed data for a censored unit, which factorizes as \[ \Pr(Y_i, M_i = m_i^{\star(j)}, C_i = 0 \mid a_i, l_i) = \Pr(Y_i \mid m_i^{\star(j)}, a_i, l_i; \alpha) \cdot f(m_i^{\star(j)} \mid a_i, l_i; \beta) \cdot \I(m_i^{\star(j)} < \lambda_{\text{LOD}}). \]
We will treat these as nuisance parameters, estimating each and updating the fractional weights assigned to imputation replicates via EM.
E-Step: Update the fractional weights \(w_{ij}\) by evaluating the importance ratios at the current parameter estimates \(\hat{\theta}^{(t)}\).
M-Step: Maximize the weighted complete-data log-likelihood \(\sum_i \sum_j w_{ij} \log f(Y_i, m_i^{\star(j)} \mid a_i, l_i; \theta)\) to obtain \(\hat{\theta}^{(t+1)}\).
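The E/M alternation can be sketched on a toy problem. The example below is not the paper's full mediation model: it estimates only the mean \(\mu\) of \(\log M \sim \mathcal{N}(\mu, \sigma^2)\) (with \(\sigma\) known) under left-censoring at the LOD. In keeping with fractional imputation, the imputed draws are generated once from a fixed proposal; each EM iteration only updates the fractional weights (E-step) and re-maximizes the weighted complete-data log-likelihood (M-step).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy data: log M ~ N(mu_true, sigma^2), left-censored at the LOD
n, lod, mu_true, sigma, S = 800, 0.3, -1.0, 0.5, 200
m = rng.lognormal(mu_true, sigma, n)
c = (m > lod).astype(int)                       # 1 = observed, 0 = censored

# Fixed proposal for censored log M: N(mu0, sigma^2) truncated below log(lod),
# sampled once via the inverse-CDF method
mu0, lo = 0.0, np.log(lod)
n_cens = int(np.sum(c == 0))
u = rng.uniform(size=(n_cens, S))
z = stats.norm.ppf(u * stats.norm.cdf(lo, mu0, sigma), mu0, sigma)
h = stats.norm.pdf(z, mu0, sigma) / stats.norm.cdf(lo, mu0, sigma)  # proposal density

mu_hat = mu0
for t in range(200):
    # E-step: update fractional weights w_ij ∝ f(m*_ij; mu_hat) / h(m*_ij)
    tilde_w = stats.norm.pdf(z, mu_hat, sigma) / h
    w = tilde_w / tilde_w.sum(axis=1, keepdims=True)
    # M-step: weighted MLE of mu = weighted average of (imputed) log M
    mu_new = (np.sum(np.log(m[c == 1])) + np.sum(w * z)) / n
    if abs(mu_new - mu_hat) < 1e-10:
        break
    mu_hat = mu_new
```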
Monte Carlo EM (MCEM)
EM with Fractional Imputation
Theoretical Convergence
Under suitable regularity conditions, for a sufficiently large number of iterations \(t\) in the EM algorithm, the estimated parameter \(\hat{\theta}^{(t)}\) converges to its asymptotic limit \(\hat{\theta}^{\star}_{S}\), a stationary point of \(Q^{\star}\) for fixed \(S\); that is, \(\hat{\theta}^{(t)} \to \hat{\theta}^{\star}_{S}\,\, \text{as}\,\, t \to \infty\). Then, for a sufficiently large number of imputations \(S\), we have \(\hat{\theta}^{\star}_{S} \to \hat{\theta}_{\text{MLE}}\).
Key idea
Overcome the failure of the standard bootstrap for non-smooth/irregular estimators (e.g., the NDE/NIE functional with a fractionally imputed mediator) by drawing resamples of size \(m < n\), i.e., the \(m\)-out-of-\(n\) bootstrap.
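A generic sketch of the \(m\)-out-of-\(n\) bootstrap, shown here on the sample median as a stand-in for the target estimator (function and variable names are illustrative): resamples of size \(m < n\) are drawn with replacement, and the resampling spread is rescaled by \(\sqrt{m/n}\) to approximate the size-\(n\) sampling standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)

def m_out_of_n_bootstrap(data, estimator, m, B=500, rng=rng):
    """Draw B resamples of size m < n (with replacement) and return the
    B resampled estimates. The reduced resample size can restore
    consistency where the standard (n-out-of-n) bootstrap fails for
    non-smooth estimators."""
    n = len(data)
    idx = rng.integers(0, n, size=(B, m))    # B resamples of m indices each
    return np.array([estimator(data[i]) for i in idx])

# Usage on a toy statistic (sample median of n = 1000 standard normals)
x = rng.normal(0.0, 1.0, size=1000)
m = int(1000 ** 0.7)                         # m = n^0.7 < n, a common choice
boot = m_out_of_n_bootstrap(x, np.median, m=m)

# Rescale: sd of the size-n estimator ≈ sqrt(m/n) * sd of size-m resamples
sd_hat = np.sqrt(m / 1000) * boot.std()
```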
Simulate data from an observational study with a mediator left-censored by an assay limit of detection (LoD): \[\small \begin{aligned} L_1 & \sim \text{Bern}(0.7), \quad L_2 \sim \text{Bern}(0.5), \quad L_3 \sim \text{Bern}(0.25) \\\\ A \mid \boldsymbol{L} & \sim \text{Bern}\left( \expit(-1 + 0.5 L_1 + 1.25 L_2 + 0.75 L_3 - 1.25 L_1 L_3) \right) \\\\ \log M \mid A, \boldsymbol{L} & \sim \mathcal{N}(-3 + 1.5 A + 1.75 L_1 + 1.5 L_2 - 0.25 L_3,\; 0.25^2) \\\\ Y \mid M, A, \boldsymbol{L} & \sim \text{Bern}\left(\expit(-1 + 2.5 A + 1.75 M + 0.5 A M - 2.25 L_1 - 1.75 L_2 - 1.5 L_3) \right) \end{aligned} \]
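The data-generating process above translates directly into code; the sketch below uses an illustrative LoD of 0.1 (the display does not fix one) and returns the mediator masked below it.

```python
import numpy as np

rng = np.random.default_rng(42)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n, lod=0.1, rng=rng):
    """Simulate from the DGP above; the LoD value is illustrative."""
    l1 = rng.binomial(1, 0.7, n)
    l2 = rng.binomial(1, 0.5, n)
    l3 = rng.binomial(1, 0.25, n)
    a = rng.binomial(1, expit(-1 + 0.5*l1 + 1.25*l2 + 0.75*l3 - 1.25*l1*l3))
    log_m = rng.normal(-3 + 1.5*a + 1.75*l1 + 1.5*l2 - 0.25*l3, 0.25)
    m = np.exp(log_m)
    y = rng.binomial(1, expit(-1 + 2.5*a + 1.75*m + 0.5*a*m
                              - 2.25*l1 - 1.75*l2 - 1.5*l3))
    c = (m > lod).astype(int)            # C = 1 if M observed above the LoD
    m_obs = np.where(c == 1, m, np.nan)  # mediator masked below the LoD
    return dict(L=np.column_stack([l1, l2, l3]),
                A=a, M=m, M_obs=m_obs, C=c, Y=y)

dat = simulate(5000)
```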
Objective: Mitigate bias by applying the proposed imputation approaches with standard (e.g., plug-in) estimators of NDE/NIE statistical estimands.
Implemented five fractional imputation strategies, including a conventional comparator, with the nuisance estimators kept correctly specified for NDE/NIE estimation.
Natural direct effect (NDE) functional: \[ \Psi_{\text{NDE}}(\Pr) = \E \left[\E \left\{ \E(Y \mid A=1, M, L) - \E(Y \mid A=0, M, L) \mid A=0, L \right\} \right] \]
Plug-in (g-computation) estimator:
Evaluate plug-in estimator (g-computation): \[\small \Psi_{\text{NDE}}(\hat{\Pr}) = \hat{\E} \left[ \hat{\E} \left\{ \hat{\E}(Y \mid A=1, M, L) - \hat{\E}(Y \mid A=0, M, L) \mid A=0, L \right\} \right] \] The bootstrap may be used to obtain inference based on this plug-in estimator.
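A Monte Carlo sketch of the plug-in (g-computation) estimator of the NDE functional. For brevity this sketch plugs the true DGP coefficients from the simulation display in where fitted nuisance estimates (\(\hat{\E}(Y \mid A, M, L)\) and \(\hat{f}(M \mid A, L)\)) would normally go; the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Nuisance models; true DGP coefficients stand in for fitted estimates here.
def b_hat(a, m, l1, l2, l3):                 # E(Y | A=a, M=m, L)
    return expit(-1 + 2.5*a + 1.75*m + 0.5*a*m - 2.25*l1 - 1.75*l2 - 1.5*l3)

def draw_m(a, l1, l2, l3, size, rng):        # draws from f(M | A=a, L)
    mu = -3 + 1.5*a + 1.75*l1 + 1.5*l2 - 0.25*l3
    return np.exp(rng.normal(mu, 0.25, size))

def nde_plugin(l1, l2, l3, n_draws=500, rng=rng):
    """Monte Carlo g-computation for the NDE functional:
    E[ E{ b(1, M, L) - b(0, M, L) | A=0, L } ]."""
    out = np.empty(len(l1))
    for i in range(len(l1)):
        m0 = draw_m(0, l1[i], l2[i], l3[i], n_draws, rng)  # M | A=0, L_i
        out[i] = np.mean(b_hat(1, m0, l1[i], l2[i], l3[i])
                         - b_hat(0, m0, l1[i], l2[i], l3[i]))
    return out.mean()                        # outer average over L

# Evaluate over a simulated covariate sample
l1 = rng.binomial(1, 0.7, 2000)
l2 = rng.binomial(1, 0.5, 2000)
l3 = rng.binomial(1, 0.25, 2000)
psi_nde = nde_plugin(l1, l2, l3)
```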
Advantages
ASA Lifetime Data Science (LiDS) Conference 2025