The Boardroom's AI Problem: Why Executive Pay Consultants Are Sounding the Alarm on Integrity
As AI tools reshape how companies design executive pay packages and conduct governance reviews, veteran compensation consultant Frank Glassner warns that boards are trading rigor for convenience — and shareholders will eventually pay the price.
There is a question circulating in Bay Area boardrooms that nobody wants to ask out loud: How much of the analysis your compensation consultant just handed you was actually written by a human?
It is not a hypothetical. Over the past eighteen months, AI-powered tools have infiltrated nearly every corner of corporate advisory work, from drafting proxy statement language to modeling long-term incentive plan scenarios. The technology is fast, cheap, and seductive. It is also, according to at least one veteran in the field, a ticking time bomb beneath the governance structures that public company boards are legally obligated to maintain.
Frank Glassner, the founder of Veritas Executive Compensation Consultants and a decades-long fixture in Northern California’s corporate governance scene, has not been shy about saying so. In a recent essay published on his firm’s website titled “CheatGPT: Outsourcing Your Integrity to AI,” Glassner lays out what he calls a “voluntary surrender” of professional standards — not by the machines themselves, but by the humans who have decided that pressing a button counts as doing the work.
“The technology amplifies existing human shortcuts rather than creating new ones,” Glassner writes. His argument is not anti-technology. It is anti-laziness dressed in technology’s clothing.
The Proxy Statement Problem
For anyone outside the world of executive compensation, the stakes might not be immediately obvious. But consider what a compensation committee actually does. These are the people — usually three to five independent board members — who decide how much the CEO gets paid. They approve base salaries, annual bonuses, stock option grants, and the golden parachute provisions that kick in during mergers or terminations. Their decisions show up in the company’s annual proxy statement, which every shareholder can read, and which the SEC reviews for compliance.
Getting this wrong is not an abstract risk. Say-on-pay votes have become a routine part of proxy season, and companies whose shareholders vote down the executive pay program in that advisory ballot face immediate reputational damage, activist investor pressure, and sometimes litigation. In 2025, more than two dozen S&P 500 companies saw say-on-pay failures or near-misses, and institutional investors like BlackRock and Vanguard have made it clear that they are paying closer attention to how boards justify executive pay.
This is the context in which AI tools are being deployed. Consulting firms, from the largest names to one-person shops trying to compete with them, are using generative AI to produce benchmarking reports, draft Compensation Discussion and Analysis sections, and model payout scenarios under different performance assumptions. The output looks professional. It reads well. And in many cases, the person signing off on it has not verified whether the underlying data is accurate, whether the peer group comparisons are appropriate, or whether the incentive plan design actually aligns with the company's strategic objectives.
Glassner’s critique zeroes in on this gap between appearance and substance. In his telling, the problem is not that AI produces bad work — it is that AI produces plausible-looking work, which is far more dangerous in a field where plausibility is not the standard. The standard is fiduciary duty.
A Governance Culture Under Pressure
The Bay Area is both the epicenter of AI development and home to a dense concentration of public companies whose boards face these decisions every quarter. Glassner, whose Northern California practice has advised companies ranging from early-stage tech firms to Fortune 500 corporations, has watched the shift happen in real time.
“We are seeing a detection arms race,” he writes in the Veritas piece. “For every plagiarism detector, workarounds emerge.” He is describing the academic world, but the analogy to corporate governance is precise. Compensation committees that rely on AI-generated reports are, in effect, outsourcing their judgment to a system that has no fiduciary obligation, no understanding of the specific company’s culture, and no ability to exercise the kind of nuanced, contextual analysis that shareholders and regulators expect.
The irony, as Glassner sees it, is that the very firms selling AI-powered efficiency to boards are creating their own obsolescence. If the value proposition of an executive compensation consultant is expertise, judgment, and independence, then automating those qualities away does not make the service cheaper. It makes it worthless.
This is not just philosophical hand-wringing. The practical consequences are already visible. Lawyers have submitted AI-generated briefs containing fabricated case citations — a scandal that made national headlines in 2023 and has continued to produce cautionary tales. Doctors have been caught pasting patient data into public chatbots. And in the compensation world, consultants have begun billing AI-generated deliverables as original analytical work, a practice that Glassner describes as “outsourcing your integrity.”
What Boards Should Be Asking
The question for compensation committees is not whether to use AI. That ship has sailed. The question is how to maintain governance rigor in an environment where the tools make it very easy to skip steps.
Glassner’s firm, Veritas Executive Compensation Consultants, has taken what he calls the “Veritas Way” approach: clear, measurable governance standards rather than buzzword-laden principles; transparent audit processes for any AI-assisted analysis; and a fundamental commitment to treating AI as an amplifier of human expertise rather than a replacement for it.
In practical terms, this means that any benchmarking data produced with AI assistance gets verified against primary sources. Peer group selections are reviewed by a human analyst who understands the specific industry dynamics at play. Proxy statement language is drafted by people who have actually read the company’s prior filings and understand the narrative arc that shareholders and proxy advisory firms like ISS and Glass Lewis are following.
These are not radical propositions. They are the baseline expectations that existed before AI tools made it tempting to cut corners. But Glassner’s argument — and it is a persuasive one — is that baseline expectations need to be explicitly restated when the technology makes it so easy to fall below them.
The Broader Pattern
Glassner’s essay touches on something bigger than executive compensation. He describes a cultural shift in which “effort is optional, lying is fine, and thinking is a hobby.” It is provocative language, deliberately so. But the pattern he identifies — institutions adopting what he calls “compliance theater,” policies that sound responsible but lack enforcement teeth — is recognizable across industries.
In higher education, students submit AI-generated essays while administrators debate honor code updates that arrive months or years after the tools do. In professional services, firms celebrate efficiency metrics while the intellectual capital that justified premium fees slowly evaporates. In corporate governance, boards adopt AI use policies that read beautifully in the annual report and mean nothing in the conference room where actual decisions are made.
What makes the executive compensation space particularly high-stakes is the legal framework. Compensation committee members are fiduciaries. They can be held personally liable for decisions that are not properly supported. An AI-generated report that contains a hallucinated data point (an incorrect total compensation figure for a peer company's CEO, say, or a mischaracterized performance metric) does not just embarrass the consultant. It exposes the directors who relied on it.
The Path Forward
None of this means that AI should be banished from the boardroom. Glassner himself acknowledges that the technology can accelerate legitimate research, surface relevant market data faster, and help model complex scenarios that would take human analysts weeks to work through manually. The tools are powerful. The question is whether the people using them are exercising the judgment that their clients — and their clients’ shareholders — are paying for.
For Bay Area companies in particular, where the talent market for board directors and compensation committee members is intensely competitive, the ability to demonstrate genuine governance rigor is becoming a differentiator. Institutional investors are asking harder questions about how boards reach their decisions, not just what the decisions are. A compensation committee that can show its work — that can point to a consultant who verified every data point and exercised independent judgment at every step — is in a meaningfully different position than one that rubber-stamped an AI-generated deliverable.
Frank Glassner has been advising boards on executive compensation for decades. His willingness to call out his own industry’s drift toward convenience over craft is notable precisely because it cuts against the prevailing current. In a landscape where the easy money is in telling clients what AI can do for them, Glassner is making the harder case: that what AI cannot do — exercise professional judgment, maintain fiduciary standards, tell a board that its proposed pay package is going to fail a say-on-pay vote and explain exactly why — is the part that actually matters.
The boards that listen will be the ones that avoid the next governance scandal. The ones that do not will eventually wonder how a machine managed to look so confident while getting the important things wrong.