The Profit-Value Disconnect
Let’s fix this before we get permanently deprioritized.
Priorities Matter
As a product manager for over two decades, I’ve made countless decisions about feature prioritization. While we discuss many factors—short-term versus long-term growth, customer retention versus acquisition, end-user needs versus buyer requirements—one factor ultimately trumps all others: profit. Products succeed when they generate revenue. Product managers who increase revenue advance; those who don’t, teach :).
This profit imperative is so fundamental to business that we rarely question it directly. Instead, we debate instrumental goals, uncertain which levers will most effectively drive financial growth. But in an era of accelerating machine intelligence, we need to confront a difficult truth: the gap between profit generation and human value creation is widening, and AI threatens to expand this chasm exponentially.
A Personal Case Study
Allow me to illustrate with a personal example. I teach adults about machine learning—how it works, practical applications, and societal implications. According to course reviews, I’m good at this, and I work hard to improve.
When planning how to invest my time, I face an unfortunate choice. Activities that would genuinely benefit my students—developing new materials, designing better experiments, writing comprehensive guides, conducting and publishing research—have no impact on my income. Conversely, activities that drive revenue—search engine optimization, creating attention-grabbing social media content, recycling existing curriculum rather than updating it—provide little additional value to my students.
My situation isn’t unique. Throughout our economy, the correlation between revenue-generating activities and value-creating activities has weakened considerably:
- Approximately 2.5% of U.S. GDP goes to advertising, with research from the Journal of Consumer Research indicating that the majority serves to manipulate rather than inform consumers.
- According to JAMA, pharmaceutical companies spend 19% of revenue on marketing compared to 16% on research and development—prioritizing selling existing drugs over developing better ones.
- A 2019 study in Health Affairs found that U.S. healthcare administrative costs consume 8.2% of healthcare expenditures versus 2.7% in Canada’s single-payer system—a difference that represents hundreds of billions in spending that doesn’t improve health outcomes.
- Federal Reserve data shows the financial sector has grown from 2-3% of GDP in the 1950s to 7-8% today, with most growth coming from trading existing assets rather than funding new productive enterprises.
- Bureau of Labor Statistics data demonstrates that while U.S. worker productivity has risen by approximately 62% since the 1970s, real median wages have increased by only about 17%.
Machine Learning: Optimization Optimized
A colleague recently reminded me that optimizing for a single value is easier than balancing multiple objectives. This observation cuts to the heart of the challenge we face with AI. Machine learning excels at optimization—continuously refining parameters to maximize a specific, measurable outcome. Anything that doesn’t directly contribute to that outcome gets systematically eliminated.
Business itself already functions as “an algorithm optimizing for profit,” as Berkeley professor and AI pioneer Stuart Russell has aptly put it. For decades, corporations have deployed machine learning to tune their business models toward financial returns, a period that coincides neatly with the productivity-wage gap described above.
What happens when we unleash superintelligent AI—built on trillions of dollars in speculative investment—to maximize returns on that investment? The already troubling divergence between profit generation and human benefit is likely to accelerate beyond our capacity to course-correct.
Alternative Paths Forward
Some organizations have demonstrated that alternative models are viable. Benefit Corporations legally commit to positive impact alongside profit. Platform cooperatives like Stocksy United distribute value more equitably among contributors. Organizations like the Mozilla Foundation develop technology explicitly for public benefit.
These examples remain outliers, but they illustrate that we can design systems that align financial incentives with human wellbeing—if we choose to.
Reimagining Our Fundamental Assumptions
Machine intelligence represents an unprecedented resource—built upon centuries of shared human intellectual achievement. We stand at a pivotal moment: we can harness this technology to optimize ever more aggressively for profit extraction, or we can deploy it to enhance human flourishing for everyone.
The latter won’t happen automatically. It requires intentional redesign of economic incentives, corporate structures, and regulatory frameworks. Most urgently, it demands that we question core assumptions about what business is for and how success should be measured.
This isn’t naive idealism—it’s pragmatism. A system that increasingly disconnects wealth creation from value creation will eventually collapse. As AI accelerates this disconnect, reconsidering our fundamental assumptions becomes not just morally necessary but practically essential.
The truly naive position would be expecting different outcomes while maintaining the same incentive structures. The smart move—even if it risks appearing radical—is to redesign those incentives before superintelligent optimization makes that task impossible.