Three of the most powerful figures in artificial intelligence spent last week warning that the productivity base underpinning capital markets is being rebuilt faster than most economic models assume - and the sponsors raising capital today have yet to show their investors they have noticed.
Sam Altman, CEO of OpenAI, called for a national wealth fund, a robot tax, and automatic safety net expansions calibrated to AI displacement metrics, framing the moment as requiring a social contract on the scale of Roosevelt's New Deal. Demis Hassabis, CEO of Google DeepMind, told Fortune the commercial chatbot race is a distraction: the real event is AGI capable of genuine planning and reasoning, and the harder problem is arriving there without catastrophic outcomes. And Mustafa Suleyman, CEO of Microsoft AI, published an essay in MIT Technology Review projecting a 1,000x expansion in effective compute by 2028, arguing that humans chronically underestimate exponential change because we evolved for a linear world.
They disagree on priorities. They agree on scale and direction.
Every CRE pro-forma rests on assumptions about employment density, occupancy, rent growth, and the durability of demand in a given submarket. Those assumptions were calibrated against a labor market and a productivity curve that may no longer hold if Suleyman's exponential is directionally correct, if Hassabis's agentic systems arrive on his timeline, or if Altman's redistribution mechanics get political traction.
A deal closing today typically carries a three-to-five-year hold at minimum. The demand assumptions written into this quarter's underwriting will be tested against a market that has absorbed several further turns of the AI curve before the asset is refinanced or sold.
Investors are reading the same coverage and forming their own views. Sponsors presenting 2026 underwriting built on 2022 demand logic are signaling something about their analytical posture, whether they intend to or not.
Confronted with a fast-moving environment, sponsors instinctively reach for faster tools. AI workflows can process market data, generate rent analyses, and produce polished acquisition memos in a fraction of the time a junior analyst requires. The temptation to treat that speed as a proxy for accuracy is, in the current climate, close to irresistible.
It is also the field's most consequential error.
Gary Marcus, the cognitive scientist who has spent 25 years arguing that pure neural network approaches would never be reliable enough for serious analytical work, published three pieces last week making this point from a fresh angle. The AI systems showing genuine progress, he observed, are not the ones that scaled the largest. They are the ones that pair pattern recognition with explicit, classical reasoning that constrains and checks the output. Pure language models remain too probabilistic and too erratic to trust on their own.
In CRE operational terms, the trap has a name: The Wizard Fallacy.
The fallacy is the assumption that because an AI workflow produces a polished, formatted, professionally laid-out output, the output is also correct. It is the same error that has surfaced in every previous wave of CRE enthusiasm - from syndication to crowdfunding to the current AI moment. A confident presentation is mistaken for a verified result.
The mechanism is simple: AI models do not reliably flag their own uncertainty. They generate the most plausible answer given their training, format it cleanly, and deliver it with the appearance of competent analytical work. In a multi-step workflow, each step's output feeds the next, so small errors compound silently into a final result that looks finished but is materially wrong. In a business where outputs carry fiduciary weight and shape capital allocation, the cost is not a rounding error.
The two problems are connected.
The macro environment is accelerating faster than most underwriting models have registered. The tools designed to help sponsors keep pace are capable of producing authoritative-looking errors at scale. The sponsors who navigate this well will be the ones who treat AI adoption as a verification discipline rather than a production exercise - who ask not how much the workflow can do unattended, but where the human checkpoints sit and whether they are adequate for the fiduciary stakes involved.
That discipline is not new. It is the same standard that has always applied to work of this kind. The current moment simply makes it easier to set aside.
***
Every cycle finds its evangelists. The syndication (crowdfunding) era had, and still has, its gurus - those who made the complexity of real estate investment look deceptively simple, and whose promises proved rather more compelling than their results.
AI has its own version: fluent, confident, and equally adept at making the difficult look effortless. The technology itself is not the problem. The incantations surrounding it are.
The AI in Real Estate Accelerator, the executive program I run, is built on a different premise - that sponsors are better served by learning to implement AI in deliberate, sequential steps than by chasing solutions that look impressive in a demonstration and prove unreliable in practice.
The next cohort, sponsored by the National Apartment Council of Canada, is scheduled, and enrollment is open.
Learn more here.
Adam
P.S. Enroll before we start on May 26 and you will have free access until then to the GowerCrowd AI in CRE inner circle - a working session held every Thursday at noon where participants bring real workflows and leave with real solutions.
More details here.