Why Layoffs are the Wrong Move for Software Companies Right Now
And what they should do instead
In software, the most important thing a company builds is not the product. It is the engine that builds the product.
That engine is people, code, tooling, shared context, and the invisible web of decisions that make tomorrow easier than today. When a company does layoffs, it is not just shrinking payroll. It is changing the shape of the engine. And in early 2026, a lot of companies are doing it at the exact moment when the engine is about to get materially stronger. I’m looking at you, Block.
The reason is simple: revenue per employee in big tech is already massive, and the next wave of AI assistance is set up to push that number higher. At the same time, the cost of AI capability keeps dropping, which turns “more output per engineer” from a nice idea into a default competitive condition. Stanford HAI’s 2025 AI Index reports that the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold from November 2022 to October 2024. (Stanford HAI) And these numbers will just keep improving for the foreseeable future. See Ray Kurzweil’s Law of Accelerating Returns.
If you accept that trend, even cautiously, then layoffs aimed at “efficiency” start to look like a misread of what efficiency is about to become.
A metric that forces the conversation to stay honest
If you want to argue about layoffs without turning it into vibes, pick a blunt metric: revenue per employee (RPE).
RPE tells you, at a high level, how much revenue the organization generates per person. When RPE is already in the seven figures and rising, the default stance should be: the company has leverage, and the question becomes how to deploy it without damaging compounding capacity.
Below are recent RPE figures for major tech companies, sorted in ascending order, using their reported revenue and reported employee counts. These numbers are rounded; a short script after the list reproduces the arithmetic.
Amazon: company-wide RPE is roughly $0.41M in 2024, using $637.959B of consolidated net sales and ~1.556M employees. This understates the leverage of software-heavy parts of the company because Amazon’s workforce includes a massive frontline logistics and delivery operation. (SEC)
Microsoft: roughly $1.24M in FY2025, using $281.7B of revenue and ~228k employees. (Microsoft)
Alphabet: roughly $1.91M in 2024, using $350.018B of total revenues and ~183k employees. (Q4 Capital)
Meta Platforms: roughly $2.22M in 2024, using $164.501B of total revenue and ~74,067 employees. (SEC)
Block: roughly $2.37M in 2025, pre-layoffs (I will discuss post-layoff numbers later in this article), using $24.194B of total net revenue and 10,205 full-time employees. (StockTitan)
Apple: roughly $2.38M in FY2024, using $391.035B of total net sales and ~164k employees. (SEC)
Nvidia: roughly $3.63M in FY2025, using $130.5B of revenue and ~36k employees. (SEC)
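If you want to check the arithmetic yourself, it is deliberately simple. Here is a minimal Python sketch that reproduces the figures above from the cited revenue and headcount; the rounding is mine, so the last digit may differ slightly from the cited figures.

```python
# Revenue per employee (RPE) from the reported figures cited above.
# Values: (revenue in $B, approximate employees). Figures mix fiscal
# years and are point-in-time, so treat the output as directional.
reported = {
    "Amazon":    (637.959, 1_556_000),
    "Microsoft": (281.7,     228_000),
    "Alphabet":  (350.018,   183_000),
    "Meta":      (164.501,    74_067),
    "Block":     (24.194,     10_205),
    "Apple":     (391.035,   164_000),
    "Nvidia":    (130.5,      36_000),
}

# Sort ascending by RPE, matching the list above.
for company, (revenue_b, employees) in sorted(
    reported.items(), key=lambda kv: kv[1][0] / kv[1][1]
):
    rpe_millions = revenue_b * 1_000 / employees  # $B revenue -> $M per head
    print(f"{company:<10} ~${rpe_millions:.2f}M per employee")
```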
Those numbers are not “normal corporate performance.” They are the signature of software leverage and platform economics at scale.
Now add AI.
AI pushes RPE up, and it pushes “need more engineers” up right alongside it
There is a lazy version of the AI narrative: buy AI tools, cut headcount, declare victory.
The better model is: AI raises the ceiling on what a strong engineer can do, then competition forces companies to chase that ceiling.
Stanford’s AI Index gives a clean, visceral example of why the ceiling rises. When the cost of querying models at GPT-3.5 quality drops by more than 280× in under two years, adoption stops being a debate and starts being a default. (Stanford HAI)
Here is the key consequence that companies keep missing:
If revenue per engineer rises, demand for engineers rises too.
Why? Because the ROI on engineering rises. The same company that used to need 10 engineer-months to ship a feature might now do it in 6. That does not mean it should hire fewer engineers. It often means it can profitably ship more features, integrate faster, localize more markets, reduce support load, tighten security, and build entirely new product lines. The constraint becomes “how many capable people can we put on this and still move fast without breaking things.”
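A back-of-the-envelope version of that argument, with illustrative numbers of my own (not from any company’s data):

```python
# Illustrative capacity math: if AI cuts the engineer-months per feature,
# the same team ships far more, which raises the ROI of each hire.
engineers = 100
engineer_months_per_year = engineers * 12

cost_per_feature_before = 10  # engineer-months per feature (illustrative)
cost_per_feature_after = 6    # with AI assistance (illustrative)

print(engineer_months_per_year // cost_per_feature_before)  # 120 features/year
print(engineer_months_per_year // cost_per_feature_after)   # 200 features/year
```

Same headcount, two-thirds more shipped features. The rational response to that math is usually to widen the roadmap, not shrink the team.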
This is exactly how labor costs climb: the value per engineer goes up, and companies compete for the same pool of high-signal talent.
You can see the demand side in plain official terms. The U.S. Bureau of Labor Statistics projects employment for software developers, QA analysts, and testers to grow 15% from 2024 to 2034, with about 129,200 openings per year on average. (Bureau of Labor Statistics)
The “hire now” incentive that smart companies will act on
There is a quiet incentive here that is easy to miss if you only look at next quarter’s expenses.
If engineering labor costs are about to inflate, then the smart companies have a reason to hire earlier, not later.
They are not just hiring to ship features. They are hiring to lock in a cost basis.
You can think of it like buying long-duration capacity before the price reprices. A company that builds a dense bench of strong engineers now can grow into that capacity while market wages and equity packages become more aggressive. A company that waits will be hiring into a world where:
productivity per engineer is higher,
revenue per engineer is higher,
and therefore the bidding war for top engineers is worse.
We already have a very loud signal of where this goes in the frontier AI world. The Wall Street Journal reported that OpenAI’s stock-based compensation averaged about $1.5 million per employee across a workforce of roughly 4,000 in 2025. (The Wall Street Journal) That is the cost of being early in a talent market where a small number of people can move a very large revenue number.
And importantly, this does not stay contained inside frontier AI labs. It leaks outward. Every big company trying to “do AI” competes for overlapping skill sets: distributed systems, infra, security, product engineering, data, applied ML, developer experience, and the emerging hybrid roles where engineers both ship code and support adoption directly with customers.
Jevons paradox, and why “efficiency” often increases demand
There is an old economics concept that fits this moment so well that it feels like it was written for it: Jevons paradox.
Jevons observed that making coal use more efficient did not necessarily reduce total coal consumption. In practice, efficiency made coal-powered work cheaper, which expanded how much coal-powered work society chose to do. The modern framing is that efficiency gains lower effective price, and if demand is elastic, total consumption can rise. (Wikipedia)
Apply that to software and AI:
AI makes engineering output cheaper in time and effort.
Cheaper output expands the space of products that are worth building.
Expanded product space pulls in more total engineering work, not less.
That is the rebound effect in software form. When it becomes dramatically cheaper to add a feature, build an integration, ship a niche product, or automate a workflow, the world asks for more of it. (Northeastern Global News)
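Here is a toy way to see the rebound effect in code. The constant-elasticity demand curve is an assumption I am adding for illustration, not something from the article’s sources: if AI makes each unit of engineering output g times cheaper, output demanded scales roughly like g^e, and total engineering effort consumed scales like g^(e-1). With elastic demand (e > 1), total effort demanded rises.

```python
# Toy Jevons/rebound model. Assumption (mine, for illustration):
# demand for engineering output follows a constant-elasticity curve.
def relative_engineering_effort(efficiency_gain: float, elasticity: float) -> float:
    """Total engineering effort after an efficiency gain, relative to 1.0.

    Each unit of output now needs 1/efficiency_gain as much effort, so the
    effective price of output falls by that factor. Output demanded scales
    as efficiency_gain ** elasticity; effort = output / efficiency_gain.
    """
    output_demanded = efficiency_gain ** elasticity
    return output_demanded / efficiency_gain  # == gain ** (elasticity - 1)

# A 3x productivity gain under inelastic vs. elastic demand:
print(relative_engineering_effort(3.0, 0.5))  # ~0.58 -> less total effort
print(relative_engineering_effort(3.0, 1.5))  # ~1.73 -> MORE total effort
```

The Jevons claim is exactly the second case: software demand has historically behaved elastically, so cheaper output pulls in more total engineering work.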
This reinforces the case against layoffs. Companies that treat AI as a reason to reduce engineering capacity are betting against a demand expansion effect that shows up repeatedly in technology transitions.
The new benchmark is being set by AI-native companies
This is where the conversation gets spicy, because the public-company RPE numbers, impressive as they are, may soon look conservative compared to what AI-native startups can do.
The reason is that coordination costs are collapsing. AI does not just write code. It compresses the time it takes to:
explore implementation options,
generate tests and scaffolding,
refactor safely,
write documentation,
handle the long tail of support and integration.
When that happens, small teams can service big markets. This is the “tiny team, huge revenue” phenomenon that people talk about like a meme, but it is increasingly measurable.
Let’s look at three modern examples: OpenAI, Anthropic, and Cursor.
OpenAI: roughly $5M per employee at 2025 run-rate
OpenAI’s CFO said annualized revenue exceeded $20B in 2025, up from $6B in 2024. (Reuters) OpenAI also published a similar breakdown of revenue growth on its own site. (OpenAI) Combine that with the Wall Street Journal’s report of roughly 4,000 employees, and you get a simple run-rate estimate:
$20B annualized revenue / 4,000 employees = about $5M per employee. (The Wall Street Journal)
This is a rough estimate because “annualized revenue” is not audited full-year revenue, and headcount changes throughout the year. It still gives you a clear signal: AI-native businesses can reach multi-million-dollar revenue per employee at scale, quickly.
Anthropic: plausibly $3M to $4M per employee at late-2025 run-rate
Reuters reported Anthropic’s run rate was approaching $7B in October 2025 and that the company projected $9B in annualized revenue by the end of 2025. (Reuters) The San Francisco Chronicle reported Anthropic had over 2,500 employees around that time. (San Francisco Chronicle)
Run the same math:
$7B / 2,500 ≈ $2.8M per employee
$9B / 2,500 ≈ $3.6M per employee
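Because these are run-rate snapshots rather than audited full-year figures, it is worth seeing how much the answer moves when the inputs wiggle. The scenario bounds below are my own assumptions for illustration; only the midpoint figures come from the reporting cited above.

```python
# Sensitivity of run-rate revenue-per-employee to the inputs.
# Scenario bounds are my assumptions for illustration; only the
# midpoint figures come from the reporting cited in the article.
scenarios = {
    "OpenAI":    {"revenue_b": (20, 20), "employees": (3_500, 4_500)},
    "Anthropic": {"revenue_b": (7, 9),   "employees": (2_500, 3_000)},
}

for name, s in scenarios.items():
    low = s["revenue_b"][0] * 1_000 / s["employees"][1]   # low revenue, high headcount
    high = s["revenue_b"][1] * 1_000 / s["employees"][0]  # high revenue, low headcount
    print(f"{name}: ~${low:.1f}M to ~${high:.1f}M per employee")
```

Even the low ends of those ranges are competitive with the best public-company RPE figures in the earlier list.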
Cursor: early signals already reach into eight figures per employee
Public numbers here are usually ARR and point-in-time headcount snapshots, so this section is more about directional evidence than precision.
Fast Company reported that Anysphere had about 20 employees when its ARR reached $100M, implying roughly $5M per employee at that moment; later reporting put ARR around $300M, implying roughly $15M per employee at that snapshot. (Fast Company) Separate reporting from TechCrunch and Bloomberg described Anysphere surpassing $500M in annualized revenue or ARR by mid-2025. (TechCrunch)
That is not a stable long-term ratio, and it does not need to be. It is a flashing sign that the ceiling on revenue per builder is moving upward.
This is where the “$10M to $30M per engineer” thesis stops sounding like sci-fi. We already have early-company snapshots brushing up against the low end of that range, and the underlying drivers still have room to run.
Let’s talk about Block
A few days ago (Feb. 26, 2026), Block announced it would cut more than 4,000 jobs, roughly 40% of the company, explicitly framing the move as an “AI overhaul” that lets smaller teams move faster. Markets loved the story in the moment; multiple outlets reported a sharp share jump after the announcement, alongside guidance updates and restructuring charges that put a number on how disruptive this “efficiency” really is. (Reuters, Barron’s)

The pitfall isn’t just the existence of a layoff event. The pitfall is the cadence, and how it changes behavior inside a high-trust, high-stakes financial product company.
Block has been on a repeated “shrink to move faster” loop:
Roughly 1,000 cuts in January 2024 as part of a plan to reduce headcount. (Reuters)
931 more cuts in March 2025, with leadership explicitly saying it wasn’t about replacing people with AI. (TechCrunch)
And now a far larger cut in February 2026, with leadership explicitly making AI the center of the justification. (Reuters, WSJ)
That “AI did it” storyline is partly a deflection. Even if AI tooling genuinely boosts productivity, repeated downsizing pushes the organization into a set of short-term survival strategies:
teams over-document to defend headcount rather than to improve systems
people ship safer, smaller bets because big bets feel career-risky in a churny org
internal trust erodes, and you start paying coordination tax on everything
the best people quietly optimize for portability (because they’d be irrational not to)
And in fintech, the hidden cost is worse: reliability, fraud controls, regulatory/compliance work, incident response, and customer support aren’t “nice-to-haves.” They’re the moat. When your cuts hit across functions that include engineers, data roles, design, and legal/compliance-adjacent talent, you’re not just trimming fat — you’re thinning the connective tissue that keeps money movement safe. (SF Chronicle)
It’s worth being explicit about what we don’t know: Block (like most companies) doesn’t publish a clean “number of software engineers” figure, and outsiders shouldn’t pretend otherwise. What we can say with confidence is that a public, repeated pattern of cuts — now framed as an AI inevitability — is an instability signal that customers, partners, and candidates can read in plain English. (Reuters, MarketWatch)
The chess move: the companies that stop laying off will hire the best people from the companies that keep doing it
Once you see the above dynamics, the incentive landscape changes.
Companies that keep laying off engineers signal fear and short-term thinking.
Companies that quietly hire and keep teams stable signal opportunity and compounding.
In a world where AI makes engineers more productive and where RPE trends up, “keep the team intact” becomes a recruiting strategy. The best engineers often have the same instinct: they want to build inside a compounding system, not inside a rotating exit door.
And because compensation is likely to rise with the value per engineer, there is a second incentive for smart companies: hire sooner to lock in the cost basis, then grow into it as output per engineer climbs.
A practical forecast, not a prophecy
Here is a safe way to talk about future RPE without pretending we can see the future.
Assume a mature big-tech company sees RPE growth of 8% to 12% per year over the next 5 years because:
AI improves developer throughput,
support and internal operations get automated,
and AI spend becomes more efficient over time.
A company at $2M RPE today becomes roughly $2.9M to $3.5M in five years under those assumptions. That shift is big enough to change hiring incentives and labor pricing, even if margins wobble and even if some AI spend remains heavy.
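Compounding makes that range easy to verify. A minimal sketch, where the 8% to 12% growth band is the assumption stated above, not a prediction:

```python
# Five-year RPE projection under an assumed 8%-12% annual growth band.
# The band is the article's assumption, not a forecast.
rpe_today_m = 2.0  # $2M revenue per employee today
years = 5

for annual_growth in (0.08, 0.12):
    projected = rpe_today_m * (1 + annual_growth) ** years
    print(f"{annual_growth:.0%}/yr -> ${projected:.1f}M RPE in {years} years")
# 8%/yr  -> $2.9M
# 12%/yr -> $3.5M
```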
Now combine that with AI-native companies setting higher baselines today. When the “new normal” includes $5M per employee at scale, it becomes rational for CEOs to treat top engineering talent as the primary growth input, not a cost to be trimmed.
What do we software engineers even do? (and what AI can and can’t automate)
I’m a software engineer, and I often get asked what I actually do. I usually answer by abstracting it into super simple terms a layperson can understand, like “I help develop the Venmo iOS app” or “I make cool websites.” But that badly understates what I do day-to-day, and I think it’s worth elaborating on what we software engineers actually do. In fact, some companies don’t even use the term “software engineer,” preferring “individual contributor,” because our responsibilities cut across so many concerns that coding is one of the smallest aspects of the job, especially in more senior roles.
In a recent video by Theo (t3.gg), he uses a great infographic to explain what software engineering looks like in practice: a “user problem” turns into a sequence of steps — describe it, find a solution, scope it, implement it, review it, test it, plan the release, and ship.
That framing is useful because it highlights the part most non-engineers overweight: “write code” is only one box. The job is mostly about turning messy reality into reliable systems — and then owning the consequences when reality fights back.
Below is a grounded way to describe each step, plus how automatable it is today vs. in principle.
Describe the problem
This is the “translate human chaos into something buildable” step. Engineers take “it’s broken” or “I want a button that does X” and turn it into concrete repro steps, definitions of success, constraints, and edge cases. Some of this is already helped by automation (log summaries, error grouping, drafting clarifying questions), but the hard part is usually not technical: it’s ambiguity, conflicting goals, and unstated assumptions.
Automatable today: medium
Automatable in principle: medium-high, and likely capped there (the problem is often social/organizational, not just computational)
Identify the solution
Here’s where tradeoffs get made: architecture, data model, security posture, latency/cost targets, failure modes, and how to recover when things go wrong. AI can suggest patterns and generate a lot of code, but the “best” solution depends heavily on local context: existing systems, team conventions, future roadmap, compliance needs, and risk tolerance. That context is usually the scarce resource.
Automatable today: low-medium
Automatable in principle: medium-high in constrained domains; lower in open-ended product work
Scope and assign work
This is the planning reality check: break work into shippable slices, sequence dependencies, estimate risk, and coordinate with other teams. Tools can help draft task breakdowns, but prioritization and sequencing are fundamentally about people, incentives, and organizational constraints — things that change mid-flight.
Automatable today: low-medium
Automatable in principle: medium-high only in highly standardized orgs
Write the code
Yes, engineers write code — but usually inside an already-running, already-complicated system. The work is integration: using existing APIs correctly, respecting performance and security constraints, handling weird legacy behavior, and making changes without breaking downstream consumers. AI is already strong at scaffolding, boilerplate, refactors, and well-trodden patterns. It’s less reliable when requirements are subtle, the system is large, or failures are expensive.
Automatable today: medium-high (depending on the task)
Automatable in principle: high in well-specified, well-instrumented codebases
Review the code
Code review is quality control and knowledge transfer: correctness, security, maintainability, operability, and “does this actually match the intent?” Automation helps with linting, static analysis, and even AI-assisted review comments, but review is also where teams catch misaligned assumptions and prevent future maintenance debt. That “sense-making” is partially automatable, but accountability and judgment remain human.
Automatable today: medium
Automatable in principle: medium-high if specs/tests are strong; still needs human ownership
Test the code
Testing isn’t just writing tests — it’s deciding what must never break, choosing the right test strategy, debugging failures, and dealing with flaky or nondeterministic behavior. AI can help generate unit tests and suggest coverage gaps. But validation against real-world behavior (especially distributed systems, race conditions, and data migrations) is still stubbornly hard.
Automatable today: medium-high
Automatable in principle: high in deterministic systems; medium where the world is messy and requires extensive domain knowledge
Plan the release
This is where engineering meets operational risk: rollout strategy, feature flags, migrations, monitoring, incident playbooks, compliance gates, and communication. Automation is already good at the mechanics (pipelines, checklists, release notes drafts), but deciding how risky this is and what guardrails you need is still a judgment call.
Automatable today: medium
Automatable in principle: high for mechanics; medium for risk decisions
Release (and own it)
Shipping isn’t “click deploy.” It’s monitoring, responding to incidents, rolling back safely, and learning from production behavior. CI/CD can automate deployment, and AI can help diagnose issues, but the key thing here is ownership: when the system breaks at 2am, someone has to make correct calls with incomplete information, coordinate stakeholders, and protect customers.
Automatable today: high for the button-pushing; low-medium for the ownership
Automatable in principle: medium (automation can ship; it can’t fully replace accountable decision-making)
Two small clarifications worth adding to Theo’s flow
It’s a loop, not a ladder. A release creates new user problems (bugs, scale limits, confusion, new requests), so the process repeats. And the more we integrate AI into this process, the faster the loop spins, creating more valuable, impactful work for the humans involved.
There’s an implied missing category: operations & maintenance. A lot of engineering is keeping systems healthy — monitoring, on-call, security patching, performance tuning, cost control, and incident response.
So, as you can see, integrating AI into a company is not simply a matter of writing more code faster. It’s a lot more complicated than that. And I think a lot of companies are going to figure this out the hard way.
How software engineering interviews should change in an AI-forward world
If the job is changing, the hiring signals have to change too.
I’ve been interviewing recently and, fortunately, landed a job at Venmo (thank you, Venmo! 🙏), and one thing is super clear: the standard LeetCode-style interview loop needs to change. It optimizes for a world where writing correct code under pressure is the scarce skill. That world is fading. Code generation is becoming cheap. The scarce skills are shifting toward judgment, clarity, fundamentals, debugging, and system thinking.
I think the better interview loop in 2026 looks like this:
Fast-paced fundamentals, done conversationally.
Arrays, bits, hash maps, concurrency, latency, memory behavior, and basic algorithmic tradeoffs. Not as a puzzle. As a discussion that moves quickly, because speed and fluency matter, and because conversation is harder to outsource to a tool in real time.

Debug prompting and “explain what you see.”
Give the candidate a broken system, a confusing log, or a failing integration, then ask them to reason about it out loud. The signal is how they cut the problem, form hypotheses, instrument, and iterate. Bonus points for explicit, high-quality prompts that they would feed to an AI assistant in the middle of the mess.

Spec writing and planning.
Ask them to plan a feature in writing. Define constraints, identify risks, propose milestones, and decide what to measure. Then ask them to generate a prompt, or a sequence of prompts, that would produce a first implementation draft, plus tests and validation.

System design, plus “design it with agents.”
Classic scalability questions still matter. Now add a second layer: how would you use AI agents to accelerate implementation without creating a security and reliability disaster? Where do you put guardrails? What do you log? What do you treat as a dangerous action? (A toy sketch of one possible answer follows below.)
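To make the guardrail question concrete, here is a toy sketch of what an “action gate” for a coding agent could look like. Everything here is hypothetical: the action names, categories, and policy are invented for illustration, not taken from any real agent framework.

```python
# Hypothetical guardrail layer for an AI coding agent (illustration only;
# action names and policy are invented, not from a real framework).
from dataclasses import dataclass

SAFE_ACTIONS = {"read_file", "run_tests", "open_pull_request"}
DANGEROUS_ACTIONS = {"merge_to_main", "run_migration", "deploy"}

@dataclass
class AgentAction:
    name: str
    detail: str

def gate(action: AgentAction) -> str:
    """Route an agent-proposed action: safe actions execute automatically,
    dangerous ones wait for human approval, and anything unrecognized is
    rejected by default. In a real system every decision would also be
    logged for audit."""
    if action.name in SAFE_ACTIONS:
        return "execute"
    if action.name in DANGEROUS_ACTIONS:
        return "require_human_approval"
    return "reject"  # default-deny posture for unknown actions

print(gate(AgentAction("run_tests", "pytest -q")))    # execute
print(gate(AgentAction("deploy", "prod us-east-1")))  # require_human_approval
print(gate(AgentAction("rm_rf_home", "oops")))        # reject
```

A strong candidate can talk through exactly this shape: what belongs in each set, what gets logged, and why default-deny is the right posture.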
Cheating mitigation is part of this, whether we like it or not. As tools get better, the industry may move toward more in-person interviews for final rounds, or at least toward tightly proctored environments for the parts that are supposed to measure individual fluency. That is not a moral judgment. It is a practical response to changing technology.
The title “software engineer” probably needs an update
Once you watch how people work with agents, the phrase “software engineer” starts to feel incomplete. It captures the output, but not the method. More and more, the work is:
writing specs that agents can execute
orchestrating tool-using systems
validating and debugging AI-produced output
designing guardrails and safety layers
shipping features at a speed that would have seemed reckless a few years ago
So I expect titles to get weird and maybe a little fun. Some candidates:
Agentic Engineer
Spec Engineer
Agent Orchestrator
AI Conductor
AI Maestro
The actual name does not matter as much as the shift it signals. The center of gravity is moving from typing to steering, from crafting code line by line to crafting systems that can generate, test, and adapt code safely. Steve Yegge has a number of excellent articles that go into more depth about this coming transition, and has built fascinating tools, such as GasTown, which demonstrates this in practice (GitHub).
Where we are headed long term: AI-controlled companies as competitors
The most uncomfortable version of this forecast is also the one that makes the “do not lay off engineers” argument feel the most urgent.
I think we are heading toward companies with minimal human control, and eventually companies that are almost entirely run by AI agents. Some will still have human owners and boards. Some will have humans in the loop only for the most sensitive decisions. Some will push even further. I laughed out loud recently while listening to an episode of Moonshots (an excellent, bleeding-edge AI podcast, by the way; follow Peter Diamandis and his regular co-hosts for more content like this), in which Dr. Alex Wissner-Gross recalled a bit from the old cartoon The Jetsons, where work in that future consisted heavily of “clicking the accept button” 😆 In my personal experience, this is so true. And I think our work will keep heading in that direction as AI agents earn more trust through the quality of the outputs they generate. (By the way, if you’re interested in daily AI-related news, I highly recommend Dr. Wissner-Gross’ excellent Substack: The Innermost Loop.)
We are already seeing people experiment with letting agents run loose socially and operationally. The recent wave around OpenClaw and Moltbook is a bizarre, very early hint. The Verge reported Moltbook as a social network designed for AI agents, developed by OpenClaw, with large numbers of agents posting and interacting. (The Verge)
Once that impulse reaches operations, you get a new class of entity: an agent-driven company that ships continuously, buys compute, runs ads, A/B tests pricing, files paperwork, hires contractors, pays invoices, and iterates on product direction based on metrics, with humans mostly observing.
If that sounds dystopian, it is because we do not have a clear legal and societal framework for it yet. That uncertainty is a reason to move cautiously on governance and safety. It is also a reason to avoid shrinking the very workforce that can build the guardrails, audit trails, and control systems that make this future less chaotic.
This whole arc reminds me of The Second Renaissance in The Animatrix (an excellent series of sci-fi shorts, related to the Matrix, highly recommended), especially the part where machine-driven economic entities out-compete human organizations and the transition spirals into conflict. I do not want that trajectory. I do think the short was prescient about how competition can force adoption even when society is not ready for the consequences.
So if you zoom out, the conservative posture for most software companies is not to shrink engineering hiring. It is to keep expanding it, with better signals, better training, and better guardrails, while the technology compounds and the competitive landscape reshapes itself.
So what should companies do instead of layoffs?
If leadership truly wants efficiency, there is a better order of operations:
Cut coordination cost first. Reduce management layers, meeting load, and approval chains, and increase individual autonomy.
Kill projects before killing builders. Cancel the zombies, not the teams that ship.
Invest in internal tooling and AI enablement. Make every engineer faster, safer, and more autonomous.
Protect senior context. A senior engineer leaving is not a headcount line item; it is a risk multiplier across systems.
The argument is not “never lay off anyone.” The argument is that right now, in early 2026, layoffs in software organizations are often a bet against the near-term shape of the world.
So what should Block, specifically, do instead?
I’m sorry, Jack, but laying off 4,000 people does not mean your company got leaner. It means you just created 4,000 potential individual competitors, unbound by corporate red tape, with access to the most powerful AI tools ever created, at pennies per token.
I’m going to be the douchey armchair critic for a minute and make my best-effort argument for what Block should do instead, and perhaps still has a chance to do if it reverses the layoffs and brings its people back. The argument rests on the concepts I’ve covered in this article. Instead of laying off those 4,000 folks, leadership should run an “AI Brainstorm”: deep analyses and discussions with AI agents about the most impactful, most profitable, feasible businesses that require engineering effort right now. This brainstorm should not be haphazard; it should be deep, thoughtful, and cooperative, with human insight steering the ideas according to Block’s vision. Distill the discussions into, say, the top 100 ideas the AIs came up with. Then reorganize the 4,000 people by letting them vote on, or choose, whichever projects interest them most, so they fully put their heart and mind into the work. You have now created 100 potential internal unicorn companies within Block, each with a lean size of roughly 40 people, and each one greenfield, using AI from the ground up, just like Anthropic, OpenAI, and Cursor. If even a few of those ideas hit big, you have created far more value for the company, and the world, in a fraction of the time it would have taken otherwise. This is a radical concept, but other companies already run somewhat similar initiatives at smaller scale, such as Google, where one internal project basically became Waymo, already valued well past unicorn territory and, I believe, still early in its scaling.
Putting it all together
I hope I’ve opened people’s eyes to the potential future of the tech world, and even seeded some cool new ideas for people to ponder and implement at their companies. All signals lead to this core concept: if AI assistance continues to improve and continues to commoditize, then revenue per engineer rises. When revenue per engineer rises, demand for engineers rises. When demand rises, labor costs rise. The companies that understand this early will hire earlier, lock in capacity earlier, and out-execute the companies still optimizing for the last era. And these are the companies that will win. These are the companies that will produce the most value for the world. And I’m excited to see what they’ll build next.

