
Most failures stem not from technological limits but from unclear decisions, misaligned incentives and misplaced expectations.

Artificial intelligence has made undeniable progress across industries. It has solved protein structures that stumped biology for decades, given machines the ability to act autonomously in the physical world, and empowered knowledge workers to generate outputs in seconds that once took weeks.

In sharp contrast, people in commercial real estate are still sending money with paper checks, closing deals in cherry wood-paneled law offices, and signing towering stacks of documents by hand. Deals are not underwriting themselves, and market research is increasingly clouded by AI-fabricated data that seems plausible at first glance but proves nonsensical on inspection. As a result, many former technology optimists have begun to label AI “underwhelming,” concluding that it is little more than an overhyped bubble on the verge of bursting. As of 2026, human judgment remains indispensable.

Commercial real estate is not unique. This disillusionment is shared across industries. According to Forbes, roughly 9 out of 10 AI transformation initiatives fail to deliver meaningful return on investment, and in commercial real estate, most efforts never progress beyond the pilot stage. JLL’s industry survey from October 2025 reinforced this reality: AI pilots are widespread, yet only 5% of firms report achieving all their AI goals.

The most convenient explanation for these results is technological immaturity, a conclusion that is understandable but misguided. This framing shifts blame onto the tools themselves, spares organizations from examining how decisions are actually made, and allows familiar processes to persist unchallenged. In reality, the failure lies not in AI’s capabilities but in how firms select appropriate use cases, define success and integrate technology into real workflows.

Why Most AI Initiatives Fail in Practice

In commercial real estate, AI efforts tend to fail for a few predictable reasons.

First, when organizations lead with the solution rather than the problem, they ignore critical context. For example, in moments of panic adoption, often driven by fear of being left behind, many firms turn to AI to automate social media without defining brand positioning, audience intent or conversion goals. Even when visibility increases, the output often feels misaligned and inauthentic, potentially creating more reputational damage than value. Deployed without a clear objective, AI produces polished answers to the wrong questions.

Second, teams often layer new software onto existing processes without first removing underlying inefficiencies. When workflows are neither streamlined nor rigorously reviewed, point solutions accumulate, increasing handoffs, reconciliation and the risk of error. Any efficiency gains are short-lived, replaced by systems that are more complex, fragile and unsustainable. At its core, this is largely psychological: Organizations are terrified to abandon familiar, “good enough” processes, even when they work poorly. Simplifying the process requires ruthless clarity and the willingness to start over.

Finally, firms frequently treat symptoms rather than root causes because they have not aligned on the underlying problem or objective. In attempts to replace analysts with AI, teams celebrate underwriting a deal in minutes, then spend hours debating the assumptions the model produced. Vendors often exploit this gap, marketing “automated underwriting” while ignoring two constraints: the lack of reliable market data and the fact that underwriting judgment is inherently firm-specific. Full automation under these conditions is not only premature but structurally impractical.

Resolving these failures requires unpacking problems layer by layer and challenging assumptions and ownership at every stage — a demanding process that requires both technical fluency and the organizational maturity to confront change. When management does not understand what AI can (or should) do, they perceive it as a black-box panacea for all their operational bottlenecks. As a result, they blindly purchase AI tools even when they do not need them or when AI simply is not the right solution. AI adoption driven by ego or fear will inevitably lead to wasted time, energy and resources.

A Reality Check on Organizational Readiness

Here is a simple diagnostic. If any of the following applies, AI is unlikely to help:

  • Core workflows live in people’s heads rather than in defined decision paths.
  • Tools are being piloted without clear ownership, assumptions or decision rights.
  • Outputs (reports, content, models) are being automated instead of the decisions they are meant to inform.
  • Success cannot be defined without qualifiers or post hoc explanations.

If company personnel have attended more AI demos than workflow-mapping sessions, that imbalance explains most failed pilots. These are clarity failures, not technology failures.

Misalignment in the Innovation Supply Chain

Across industries, technological breakthroughs follow a consistent pattern that can be understood through what Proptimal calls the “Innovation Supply Chain,” which consists of distinct layers that differ by technical depth and responsibility:

Foundational invention: New scientific or technical capabilities created through deep research (e.g., ChatGPT, Gemini, Claude).

Development tools: Platforms that make foundational inventions accessible to people without deep technical knowledge, enabling them to build tools of their own (e.g., Cursor, Windsurf, Lovable).

Applications: Products that embed technology into real workflows using domain expertise and judgment (e.g., business-to-business and business-to-consumer applications).

Most firms do not need to invent in-house AI tools, but many are misled into believing they should. Building software is easy, but building good software is very difficult. This phenomenon is exacerbated by the widespread yet misleading narrative that anyone can become a software developer with AI “vibe coding” tools. This ambition turns into a distracting and costly mistake.

Custom software often feels like control, but for nontechnical organizations, it quickly becomes a liability. When teams lack the ability to prototype in-house, relatively simple work is outsourced at significant cost. Typically, this is not because the problem is complex but because vision must be translated to an external team that knows nothing about the client’s industry, company and processes. Context is lost, feedback loops slow, costs compound, and long-term maintenance ultimately falls back on the firm.

The result is counterproductivity and a strategically weakened position. Capital and attention are diverted away from core strengths and toward technical problems the organization is not equipped to solve. Bespoke tools end up digitizing workflows that were never clarified, locking firms into dependency on external contractors without the capability to sustain it.

Unless a firm is prepared to operate like a technology company, building proprietary AI tools does not create advantage; it creates distractions and liabilities.

The Transparency-Alpha Paradox

Commercial real estate is an industry built on identifying market inefficiency, and the best operators generate returns by uncovering value-creation opportunities that others don’t see.

AI underwriting demands the opposite conditions to succeed: widely available market and property data, with deal nuances transparently documented. Expecting AI to deliver precise predictions in real estate therefore requires two incompatible realities at once: a market opaque enough to generate alpha and a dataset clean enough to uncover any inefficiency.

This is the “transparency-alpha paradox.” The variables that matter most in real estate rarely live in data, and forcing prediction where judgment is required produces false confidence rather than insight. AI’s value here is not prediction but compression: reducing time, synthesizing information and sharpening human judgment instead of replacing it.

Zillow’s algorithm-driven home-buying failure illustrates the risk. The models did not collapse because pricing was difficult; they collapsed because judgment was removed from the loop. When markets shifted, no one was clearly accountable for challenging assumptions, overriding outputs or slowing the system down.

In asset management, AI is often deployed to automate variance reporting and NOI forecasting. The reports arrive quickly and appear precise, but decision rights remain unclear. No model can determine whether to push rents, defer capital or accept short-term underperformance in exchange for long-term positioning.

Such failures stem from a lack of accountability rather than a lack of efficiency. An automated process that removes human responsibility is futile, regardless of how quickly it operates.

Reframing AI’s Role in Commercial Real Estate

Firms are less frustrated with AI’s technological limits than with the mismatch among what AI is suited to do, how real estate decisions are actually made, and what the people using it expect from it. AI does not (and should not) eliminate judgment, uncertainty or accountability. Its value lies in compressing time, surfacing inconsistencies and strengthening the decisions humans remain responsible for owning.

The firms that benefit will not be those that automate the most tasks or chase the boldest predictions. They will be the ones willing to do the unglamorous work first: clarifying which decisions matter, aligning assumptions before scaling output, choosing their place in the Innovation Supply Chain, and designing workflows where accountability is explicit, all while applying AI appropriately to perform what it is suited to do. In commercial real estate, advantage has always come from judgment applied faster and with better information.

The limiting factor is no longer access to tools or data; it is the discipline to use them well. Used correctly, AI exposes weaknesses in existing processes rather than replacing expertise. The opportunity is not greater automation but rather greater intentionality about how decisions are made, who owns them and where judgment truly adds value. 

Lilian Chen is the founder and CEO of Proptimal, a CRE software platform delivering institutional-grade underwriting and analytics to operators and capital allocators.
