Predictive AI can forecast what might happen. Causal AI answers the question investors actually care about: what will happen if we do X? In UK real estate, where decisions hinge on interventions such as retrofits, lease restructures, amenity upgrades, planning strategies or timing of disposals, correlations are not enough. This paper sets out a practitioner’s playbook for Causal AI: how to specify questions, build and test causal assumptions, choose estimators that survive scrutiny, and turn results into capital allocation with documented confidence.
A model may show that buildings with recent HVAC retrofits achieve higher rents. That is a correlation, not a guarantee. Prime location, newer stock or superior asset management could be driving both the retrofit decision and the rent. Spending £2m on Asset X because “retrofits correlate with +10% rent” is a gamble. Causal AI reframes the question as an effect of a decision: What is the expected rent uplift at Asset X if we retrofit now, versus if we don’t? That requires counterfactual reasoning, disciplined assumptions and evidence that those assumptions hold well enough.
1) Specify the decision and estimand.
Be concrete: “What is the effect on achieved rent over 24 months of upgrading EPC from D→B via package P, for London offices built 1975–2000?” Pick the estimand (the average effect, ATE; the effect on the treated, ATT; or conditional effects by segment, CATE) and the unit of decision (asset, unit, lease).
2) Draw the causal story (DAG).
With asset managers, planners and valuers, build a Directed Acyclic Graph mapping believed relationships (e.g., Location → {Retrofit, Rent}; Condition → {Retrofit, Rent}; Management Quality → {Retrofit, Rent}; Policy Shock → Retrofit). The DAG clarifies which confounders to adjust for and warns against controlling for colliders (variables caused by both the treatment and the outcome).
3) Identify the adjustment set.
Use back-door/d-separation on the DAG to pick a minimal, defensible set of controls. This beats throwing “everything” into the model (which can introduce bias).
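The DAG and back-door logic above can be sketched in a few lines. This is a toy check using the example DAG from step 2, not a replacement for full d-separation tooling (e.g., DoWhy or dagitty):

```python
# Toy DAG from step 2: an edge means "causes".
DAG = {
    "Location":    ["Retrofit", "Rent"],
    "Condition":   ["Retrofit", "Rent"],
    "Management":  ["Retrofit", "Rent"],
    "PolicyShock": ["Retrofit"],
    "Retrofit":    ["Rent"],
    "Rent":        [],
}

def reachable(dag, start, target, blocked):
    """Is `target` reachable from `start` along directed edges, avoiding `blocked`?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen or node in blocked:
            continue
        seen.add(node)
        stack.extend(dag.get(node, []))
    return False

def backdoor_confounders(dag, treatment, outcome):
    """Parents of the treatment with a causal path to the outcome not through it.
    A sufficient adjustment set for simple DAGs of this shape; general graphs
    need proper d-separation machinery."""
    parents = [n for n, kids in dag.items() if treatment in kids]
    return sorted(p for p in parents
                  if reachable(dag, p, outcome, blocked={treatment}))

print(backdoor_confounders(DAG, "Retrofit", "Rent"))
# PolicyShock drops out: it affects Rent only through Retrofit,
# so adjusting for it is unnecessary (and would cost precision).
```

Note the contrast with "kitchen-sink" regression: the minimal set here is three variables, and adding downstream variables could open collider paths.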
4) Check the data can support the claim.
Test overlap/positivity (treated and untreated observations look comparable on confounders), examine missingness and measurement error (e.g., NIA standards), and check time alignment (pre-/post-periods). If overlap fails, narrow the population or reconsider the question.
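A minimal overlap check fits a propensity model and flags units outside common support. The data below is a synthetic stand-in (one invented confounder); the thresholds are a common rule of thumb, not a standard:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for asset-level data: treatment (retrofit) is more
# likely for assets with a high location score, which also drives rent.
n = 2000
location_score = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-location_score))).astype(int)

X = location_score.reshape(-1, 1)
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Positivity: every unit should have a non-trivial chance of either arm.
# Flagging propensities outside [0.05, 0.95] is a common rule of thumb.
out_of_support = ((propensity < 0.05) | (propensity > 0.95)).mean()
print(f"share outside [0.05, 0.95]: {out_of_support:.1%}")
print(f"treated range:   [{propensity[treated == 1].min():.2f}, "
      f"{propensity[treated == 1].max():.2f}]")
print(f"untreated range: [{propensity[treated == 0].min():.2f}, "
      f"{propensity[treated == 0].max():.2f}]")
```

If a large share falls outside support, trimming to the overlapping region (and narrowing the claim accordingly) is more honest than extrapolating.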
5) Choose an estimator that fits the problem.
Match the estimator to the identification strategy: double/debiased machine learning (DML) when confounders are observed and adjustable; difference-in-differences or Synthetic Control for policy and infrastructure shocks; instrumental variables (2SLS) when treatment is endogenous but a valid instrument exists; causal forests or uplift models when heterogeneity drives the decision. The case studies below illustrate each.
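The orthogonalised DML recipe used in case study A can be sketched with cross-fitting: predict out the confounders from both outcome and treatment, then regress residuals on residuals. Synthetic data with an invented true uplift of +3.0; gradient-boosted nuisance models as in the case study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Synthetic example: confounders X drive both the retrofit decision and
# rent; the true causal uplift is +3.0 (in, say, £ psf). Illustration only.
n = 4000
X = rng.normal(size=(n, 3))
T = (X[:, 0] + rng.normal(size=n) > 0).astype(float)
Y = 3.0 * T + 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=n)

# Double ML, partially linear model: cross-fit the nuisance models so each
# unit's residual comes from models trained on other folds.
y_res = np.empty(n)
t_res = np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m_y = GradientBoostingRegressor().fit(X[train], Y[train])  # E[Y | X]
    m_t = GradientBoostingRegressor().fit(X[train], T[train])  # E[T | X]
    y_res[test] = Y[test] - m_y.predict(X[test])
    t_res[test] = T[test] - m_t.predict(X[test])

theta = (t_res @ y_res) / (t_res @ t_res)  # orthogonalised final stage
print(f"DML estimate of the uplift: {theta:.2f}")  # close to the true +3.0
```

Libraries such as econml package this pattern with confidence intervals; the point of the sketch is that the final stage sees only variation in treatment that the confounders do not explain.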
6) Diagnose and refute.
Run balance checks on the adjusted sample, placebo tests on pre-periods (the estimator should find nothing where no effect is possible), and sensitivity analyses such as E-values that quantify how strong an unmeasured confounder would have to be to overturn the result.
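The sensitivity measure cited in case study A is the VanderWeele–Ding E-value; a minimal implementation for a risk-ratio-scale estimate:

```python
import math

def e_value(rr: float) -> float:
    """VanderWeele-Ding E-value: the minimum strength of association (on the
    risk-ratio scale) an unmeasured confounder would need with BOTH treatment
    and outcome to fully explain away an observed risk ratio `rr`."""
    if rr < 1:
        rr = 1 / rr  # flip protective effects onto the same scale
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed risk ratio of 1.3.
print(f"E-value: {e_value(1.3):.2f}")
```

The same formula applied to the confidence limit closest to 1 shows how fragile the interval (not just the point estimate) is to hidden confounding.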
7) Communicate like an investor.
Report the effect with intervals, show heterogeneity (who benefits), state assumptions plainly, and provide a what-if calculator: “At Asset X, package P has a 70% chance of ≥£4–£6 psf uplift; 15% downside if interest rates rise and occupancy softens.” Include an action table: do now / wait / never.
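The what-if calculator can be as simple as a Monte Carlo over the estimated uplift distribution plus a discrete downside scenario. Every number below is a placeholder for illustration, not a modelled figure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs for one asset (placeholders only): uplift point
# estimate and spread from the causal model, plus an assumed downside
# scenario (rates rise, occupancy softens) with its probability and hit.
uplift_mean, uplift_sd = 5.0, 1.2       # £ psf
p_downside, downside_hit = 0.15, -3.0   # £ psf shift in the bad scenario

draws = rng.normal(uplift_mean, uplift_sd, 100_000)
hit = rng.random(100_000) < p_downside
draws = np.where(hit, draws + downside_hit, draws)

p_ge4 = (draws >= 4.0).mean()  # chance of clearing £4 psf
p_neg = (draws < 0.0).mean()   # chance the works destroy value

print(f"P(uplift >= £4 psf): {p_ge4:.0%}")
print(f"P(uplift <  £0 psf): {p_neg:.0%}")
```

Wiring this into underwriting lets the committee vary the scenario probabilities themselves, which is usually where the argument actually lives.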
A) Retrofit economics for a Midtown office (D→B).
Question: effect of EPC upgrade on achieved rent and voids over 24 months.
Approach: DAG identifies confounders location, building condition, amenity, management intensity, nearby transport upgrades. After overlap checks, estimate ATT with DML (gradient boosting for outcome/treatment models, orthogonalised final stage). Report overall ATT and CATE by micro-location.
Result: average uplift of +£3.10 psf (95% CI £1.6–£4.4), larger near recent transport improvements; placebo on pre-period shows no effect; E-value suggests an unmeasured confounder would need a risk ratio >1.8 to explain away the estimate. Decision: proceed on two buildings, defer one pending fabric works.
B) Planning reform as an instrument.
Question: causal impact of lab-ready fit-out on letting velocity.
Approach: retrofit decision endogenous; use a policy change that accelerated permissions as an instrument (strongly predicts treatment; plausibly affects letting only via treatment). 2SLS shows a local average treatment effect (LATE) of −23 days to let (CI −36 to −10) for compliers. Robustness: over-ID tests and event-study plots show no pre-trends.
Decision: prioritise pre-let strategy with light-lab spec in qualifying sub-markets.
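The instrument logic can be illustrated with a manual two-stage least squares on synthetic data (all coefficients invented): an unobserved demand factor biases OLS, while the instrument isolates the policy-driven variation in fit-out:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic illustration of case B (figures are made up): unobserved demand
# U drives both the fit-out decision and letting speed, so OLS is biased.
# The permission reform Z shifts fit-out but affects letting only via it.
n = 5000
U = rng.normal(size=n)                        # unobserved confounder
Z = rng.integers(0, 2, size=n).astype(float)  # instrument: policy change
T = (0.8 * Z + U + rng.normal(size=n) > 0.5).astype(float)  # lab fit-out
Y = -20.0 * T + 15.0 * U + rng.normal(size=n, scale=5)      # days to let

def two_sls(y, t, z):
    """Manual 2SLS with an intercept: first stage t ~ z, then y ~ t_hat."""
    Zm = np.column_stack([np.ones_like(z), z])
    t_hat = Zm @ np.linalg.lstsq(Zm, t, rcond=None)[0]  # first stage
    Xm = np.column_stack([np.ones_like(t_hat), t_hat])
    return np.linalg.lstsq(Xm, y, rcond=None)[0][1]     # coefficient on T

ols = np.polyfit(T, Y, 1)[0]
est = two_sls(Y, T, Z)
print(f"OLS (biased towards zero): {ols:.1f}")
print(f"2SLS (true effect is -20): {est:.1f}")
```

In practice the first-stage strength (and, with multiple instruments, over-identification tests) should be reported alongside the LATE, since a weak instrument trades one bias for another.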
C) Transport upgrade quasi-experiment.
Question: effect of station opening on office effective rents in a 750m catchment.
Approach: Synthetic Control at micro-market level using donor areas matched on pre-trends; cross-validated covariates include stock quality and sector mix.
Result: +6–8% rent premium emerging 12–18 months post-opening, with stronger effects where amenity upgrades accompanied the transport change. Use result as an input to pricing and as a counterfactual when assessing other corridors.
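Synthetic Control weight-fitting reduces to a constrained least squares: find non-negative donor weights summing to one that reproduce the treated pre-trend. A toy sketch with invented pre-period rent indices and six donor areas:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Toy pre-period rent indices (24 months) for 6 donor micro-markets;
# the "treated" market is constructed as a known mix for illustration.
T_pre = 24
donors = rng.normal(0, 1, size=(T_pre, 6)).cumsum(axis=0) + 100
treated_pre = donors @ np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])

def loss(w):
    # Squared pre-period gap between treated and synthetic series.
    return np.sum((treated_pre - donors @ w) ** 2)

res = minimize(loss, np.full(6, 1 / 6), method="SLSQP",
               bounds=[(0, 1)] * 6,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("donor weights:", res.x.round(3))

# Post-opening, the estimated effect path is the gap between the actual
# treated series and (post-period donors @ res.x); a gap that emerges only
# after the station opens supports a causal reading.
```

The full method also matches on covariates (stock quality, sector mix) and validates with placebo runs on the donor pool, but the optimisation core is exactly this.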
D) BTR amenity uplift and churn.
Question: impact of introducing pet-friendly floors + storage upgrades on churn.
Approach: Causal forests to estimate heterogeneous treatment effects by household profile and block condition, controlling for rent growth, service quality and local supply.
Result: churn reduction concentrated in two blocks with specific demographics; elsewhere effect is nil. Decision: target investment rather than roll-out.
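Causal forests need dedicated libraries (e.g., econml); a simpler T-learner conveys the same idea of segment-level effects. Synthetic data where only one invented household segment responds, with treatment randomised here for brevity (the real case adjusts for rent growth, service quality and local supply):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(9)

# Synthetic BTR illustration (figures made up): the amenity package cuts
# churn by 10pp for segment 1 households and does nothing elsewhere.
n = 6000
segment = rng.integers(0, 3, size=n)   # household profile
T = rng.integers(0, 2, size=n)         # amenity introduced (randomised here)
base = 0.30 + 0.05 * segment
churn = base - 0.10 * T * (segment == 1) + rng.normal(0, 0.05, n)

X = segment.reshape(-1, 1).astype(float)

# T-learner: fit separate outcome models per arm, difference the predictions.
m1 = GradientBoostingRegressor().fit(X[T == 1], churn[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], churn[T == 0])
cate = m1.predict(X) - m0.predict(X)

seg_effects = {s: cate[segment == s].mean() for s in range(3)}
for s, eff in seg_effects.items():
    print(f"segment {s}: estimated churn effect {eff:+.3f}")
```

The decision logic follows directly: invest where the estimated conditional effect clears the hurdle, rather than rolling out on the average.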
Causal models belong under the same governance as valuation-adjacent analytics.
Weeks 1–3: Pick one decision (e.g., retrofit package P). Draft the estimand and DAG with practitioners; define data products; complete baseline overlap and quality checks.
Weeks 4–6: Build the first estimator (DML or DiD/Synthetic Control, as appropriate). Produce diagnostics (balance, placebo, sensitivity). Write the model factsheet.
Weeks 7–10: Add heterogeneity (causal forests/uplift). Integrate into underwriting with a simple what-if calculator. Log decisions and outcomes.
Weeks 11–13: Validate against realised results for earlier interventions; tune retraining triggers; publish a short Causal Methods Standard so future projects reuse patterns, features and diagnostics.
Prediction helps you see the weather; causality helps you choose the route. In UK real estate, Causal AI turns interventions (retrofits, amenity changes, planning strategies, timing of disposals) into quantified effects with confidence bands and caveats. When assumptions are explicit, diagnostics are honest and results are wired into underwriting, firms upgrade from “best guess” to testable strategy. That is a durable edge: faster, clearer capital allocation and decisions that stand up in committee and audit alike.