Contra Citrini
My thoughts on the piece that's making its way through the financial press and slamming some stocks that might find themselves disrupted by AI.
Fair warning: This is a departure from my usual posting about ways that I’m using AI in my business. It’s a response to the Citrini story that took the internet and the markets by storm a couple of days ago — if you haven’t read that, then I definitely recommend that you do, even if you don’t want to hear me opine on it. It’s a great piece of work, even if I do disagree with big chunks of it.
I told myself that I would keep this Substack focused on actual things I am doing with AI, not meta commentary, but I have a lot of thoughts and feel compelled to put them down on digital paper. (Someone is wrong on the internet, and so I must respond!) Anyway, totally understandable if you want to skip this; we’ll be back to our regularly scheduled programming next week.
Why The Citrini Piece Matters
If you’re terminally online, you know the answer to this question, but on the off chance there is someone with a healthy relationship to internet discourse who also somehow reads my Substack, I will just give a quick overview here.
I would describe the Citrini piece as a work of speculative fiction, narrating AI-driven changes to the economy in the form of a macro research memo from 2028. It is very negative, basically a story of how AI’s impact on businesses and the markets creates a severe negative spiral that crashes stocks and puts a lot of people out of work.
It immediately took over the attention of the zeitgeist, and within a day it had been covered on Bloomberg terminals and in research notes. That in turn led to a selloff of many of the stocks mentioned in the report (I’ll get into specifics on those below), which in turn led to more coverage by just about the entire financial press: Bloomberg, Reuters, the WSJ, CNBC and more.
The impact is frankly pretty insane and lays bare how desperately even the most sophisticated investors are trying to figure out what the hell the big-picture impact of AI is going to be. An actual piece of fiction took Amex shares down 7% and Mastercard down 6%. DoorDash dropped 6.8%. ServiceNow fell 3.3%.
All of this despite the fact that there are what I would call pretty substantial errors in the analysis. To be clear, I think it’s great and I commend the folks at Citrini for putting it out. We really do need more concrete predictions about what will happen, and the fictional narrative is a really great way to make it feel tangible to people. But they covered a lot of ground and did some handwaving that I think leads them to conclusions that are not reasonable.
What It Got Wrong
Let me pull out some quotes and offer my disagreement:
Qwen’s open-source agentic shopper was the catalyst for AI handling consumer decisions. Within weeks, every major AI assistant had integrated some agentic commerce feature. Distilled models meant these agents could run on phones and laptops, not just cloud instances, reducing the marginal cost of inference significantly.
The part that should have unsettled investors more than it did was that these agents didn’t wait to be asked. They ran in the background according to the user’s preferences. Commerce stopped being a series of discrete human decisions and became a continuous optimization process, running 24/7 on behalf of every connected consumer.
The technical pieces of this are clearly correct; AI agents will increasingly have the capabilities needed to buy things on behalf of people, and by early 2027 (the time period for the above quote), they’ll certainly be running on edge devices.
But the fact that the capabilities exist doesn’t mean people will use them. Humans aren’t just going to hand agents their credit cards and let them use their best judgement to buy stuff.
Consider some of the types of purchases one makes online:
I buy things I need for my house. The other day, the bulb of one of my outdoor lights died. I needed a new one, so I went to Amazon and found it. I ran low on toilet paper, so I put some more on my Costco list. My baby wasn’t sleeping that well during her daytime nap, so I ordered blackout curtains to go on top of the blackout shades I already have. My toddler’s daycare said it was pirate day this week, so we went to the store and bought him a red and white striped shirt.
Agents can’t do this kind of shopping on my behalf, because they don’t know that I need these things. Maybe they’ll be in my inbox checking for emails from my daycare in the next year, but they aren’t going to know that I need a new bulb for my light. Even for these sorts of run-of-the-mill goods where I don’t really care that much about which particular product I get as long as it’s good enough, an AI agent can’t go get them for me autonomously; it needs to be directly instructed.
Then you’ve got more discretionary purchases; for me, it’s mostly stuff for my toddler. I’m tired of reading the same books to him at bedtime, so I get more. I’m tired of playing with the same old toys with him, so I buy new ones. For myself, I realize that I only really have a couple of pairs of shorts I wear in the unseasonably hot San Diego winter, so I get a couple more. I keep reading about the Ninja Creami on X, and I finally decide to pull the trigger on it for myself.
AI can’t buy these things either. It could probably pick new toys and games for my toddler fine (in fact it definitely can, I consult Claude before buying stuff), but I wouldn’t want it making purchases unexpectedly without my knowledge. It definitely can’t handle wardrobe purchases even for an exceedingly simple dresser like myself. I absolutely do not want it seeing what I’m reading on X and making purchases accordingly!
There are just not that many things that can be purchased for a person without that person specifying either when to buy them or what to buy or both. For the stuff that can be purchased without my input, AI isn’t necessary; I already have an Amazon subscription set up for dog poop bags.
Subscriptions and memberships that passively renewed despite months of disuse. Introductory pricing that sneakily doubled after the trial period. Each one was rebranded as a hostage situation that agents could negotiate. The average customer lifetime value, the metric the entire subscription economy was built on, distinctly declined.
This is definitely more reasonable than the prior quote, but I think the idea that it would have any meaningful impact is overblown. We’ve had tools that go through your credit card charges and find recurring subscriptions for years now. AI agents aren’t going to be any better at this; it’s not like it’s a particularly complex problem.
I guess an agent that has full access to your location and financial data could tell that you’re paying for a gym that you’re not using, but let’s be real — you know you’re paying for a gym that you’re not using. The reason you haven’t cancelled isn’t because of a lack of awareness, it’s because you’re going to go next week! Really! Also, I have serious doubts about the willingness of the general public to give AI access to all of their financial and location data in the next few years.
As for the sneaky introductory pricing: sure, fine, but in this day and age, with government watchdogs and social media letting people put companies that pull this kind of thing on blast, it's just not common enough to make a real difference in the economy.
Humans don’t really have the time to price-match across five competing platforms before buying a box of protein bars. Machines do.
This already exists. Do a Google search for a specific product and look in the right sidebar. Also, agents won’t necessarily have sufficient access to do this effectively. Amazon’s blocking them. In my experience, ChatGPT has trouble searching on Costco.com because you need to be logged in to see prices. It’s useless if agents can scour the internet for the best price except they can’t access the two sites that are far and away the most likely to have the best price.
Even places we thought insulated by the value of human relationships proved fragile. Real estate, where buyers had tolerated 5-6% commissions for decades because of information asymmetry between agent and consumer, crumbled once AI agents equipped with MLS access and decades of transaction data could replicate the knowledge base instantly. A sell-side piece from March 2027 titled it “agent on agent violence”. The median buy-side commission in major metros had compressed from 2.5-3% to under 1%, and a growing share of transactions were closing with no human agent on the buy side at all.
Wouldn’t this be nice! But no, this unfortunately isn’t going to happen. We very recently had a lawsuit against the National Association of Realtors for enforcing commissions at 2.5-3%. It has not had a great deal of impact. If you really want to pay lower commissions, you already could’ve used a fixed-price broker prior to the lawsuit.
The reality is that brokers are deeply entrenched in a way that AI agents aren’t going to be able to change. They’ve got regulatory advantages; you need to be licensed to sell real estate, and they’re going to fight tooth and nail to stop AI from disintermediating them. Given how unpopular AI is becoming with the general public, it’s not difficult to imagine their concerns (and campaign donations) finding sympathetic political ears.
Beyond that, buying a home is the biggest purchase most people will ever make. The average person will do it only a few times in their life and so will have basically zero expertise. If you screw it up, it can bankrupt you. People want a friendly, knowledgeable face helping them through the process. If you're reading my Substack, you may well be fine with that being the virtual face of an AI agent, but the majority of the population is not going to be comfortable with that for many years.
DoorDash (DASH US) was the poster child.
Coding agents had collapsed the barrier to entry for launching a delivery app. A competent developer could deploy a functional competitor in weeks, and dozens did, enticing drivers away from DoorDash and Uber Eats by passing 90-95% of the delivery fee through to the driver. Multi-app dashboards let gig workers track incoming jobs from twenty or thirty platforms at once, eliminating the lock-in that the incumbents depended on. The market fragmented overnight and margins compressed to nearly nothing.
Agents accelerated both sides of the destruction. They enabled the competitors and then they used them. The DoorDash moat was literally “you’re hungry, you’re lazy, this is the app on your home screen.” An agent doesn’t have a home screen. It checks DoorDash, Uber Eats, the restaurant’s own site, and twenty new vibe-coded alternatives so it can pick the lowest fee and fastest delivery every time.
Let’s play this out a bit. I’m very good at vibe coding stuff, so I ask Claude Code to spin me up a DoorDash competitor, sign me up for Stripe, get a domain and launch it. I have my OpenClaw agent go market it to the AI agents of customers, drivers and restaurants. They all sign up, and I tell it to keep my prices at one cent below my competitors’ (we will assume for the sake of the narrative that this does not immediately trigger a price war into margin-negative territory and that I am somehow able to retain my one-cent price advantage for at least a few restaurants).
Someone orders from me. A driver goes to pick up the order and delivers it to the customer. Success! Except ten minutes later the customer sends a message to my support agent letting me know that half the items are missing. My support agent contacts the restaurant; their agent says that either the driver stole the items or the customer is lying, because they definitely packed everything.
I can’t force the restaurant to refund me what they were paid; there are so many competing services that they’ll just drop mine immediately. I could accuse the customer of lying and deny them a refund, but then they’d just charge it back on their credit card. I would lose that chargeback 100 times out of 100, and not only would I lose the money, I’d also get slapped with a $15 chargeback fee.
Thus, I have to refund the customer for the missing items. Since we are in a highly competitive environment, my margins are low single digits. I can’t afford to give refunds! Only thing to do is shutter this ill-conceived business before I lose more money.
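To make the squeeze concrete, here is a back-of-envelope sketch of the unit economics of this hypothetical vibe-coded delivery app. Every number here is an illustrative assumption of mine (order size, take rate, refund amount), not anything from the Citrini piece, though the $15 chargeback fee matches the figure above.

```python
# Back-of-envelope unit economics for a hypothetical cut-rate delivery app.
# All numbers are illustrative assumptions.

order_value = 40.00   # what the customer pays for food + delivery
take_rate = 0.05      # we pass ~95% of fees through, so we keep ~5%
gross_margin = order_value * take_rate  # $2.00 kept per clean order

missing_item_refund = 12.00  # refund for the half-missing order
chargeback_fee = 15.00       # penalty if the customer disputes the charge instead

# How many clean orders does one bad order wipe out?
orders_to_cover_refund = missing_item_refund / gross_margin
orders_to_cover_chargeback = (missing_item_refund + chargeback_fee) / gross_margin

print(orders_to_cover_refund)      # 6.0 clean orders to cover one refund
print(orders_to_cover_chargeback)  # 13.5 if it goes to chargeback instead
```

Under these assumptions, a single disputed order erases the margin on six to fourteen good ones, which is why a thin-margin marketplace with no fraud controls bleeds out fast.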
Scale is a huge advantage in a marketplace. Even if you can somehow trivially recruit drivers, users and restaurants with no acquisition cost, if you’re tiny you don’t know anything about anybody and have no recourse against bad faith actors.
Are you going to background check every driver? You can’t, both because it costs too much in a cutthroat business and because it introduces too much friction in driver signup; drivers are only going to add your app as the 57th one on their roster if there is literally zero friction, because their agent can handle the entire process. The second they have to sign a form agreeing to a background check or upload a video selfie to prove their identity, they’re not going to bother.
Then you’ve got fraud to deal with. If there are 100 newly-minted DoorDash competitors and I steal a credit card, I’m going to have my agent place an order with every single one simultaneously. Some will get through, and when that card gets reported stolen, the services that took those orders will be holding that bag.
Throughout this piece there are repeated assumptions that agents will remove all kinds of friction. They will certainly get rid of some, but AI is not a panacea for every bottleneck and intermediary that exists.
There was only so much price-matching and aggregating to do. The biggest way to repeatedly save the user money (especially when agents started transacting among themselves) was to eliminate fees. In machine-to-machine commerce, the 2-3% card interchange rate became an obvious target.
Agents went looking for faster and cheaper options than cards. Most settled on using stablecoins via Solana or Ethereum L2s, where settlement was near-instant and the transaction cost was measured in fractions of a penny.
This grossly misunderstands the reason that we’ve ended up with our current consumer payment infrastructure.
The phrase “machine-to-machine commerce” is doing some heavy lifting here; I don’t disagree with the idea that two agents doing business directly might use crypto, but that’s not going to represent a meaningful amount of economic exchange. If my agent goes to buy something from Amazon, it’s going to use the set of payment methods Amazon allows. If Amazon wanted to accept stablecoins in order to avoid paying interchange fees, they would do that today. AI agents change nothing about that picture.
Even if Amazon accepted stablecoins, my agent still wouldn’t use them, because doing so would be actively detrimental to me. The thing I’m buying costs the same whether I pay with crypto or my Amex, but my Amex gives me points for the purchase and the ability to get a refund if there’s some issue with the thing that I’m buying. Amazon could try to incentivize me with a crypto discount, since they’re saving on fees, but they can’t offer enough for it to be worth it.
It doesn’t make sense to give me the full 2-3% they’d be paying; might as well just take the payment from the well-established credit card company and pay them the fee instead. So they’re gonna offer what? 1%? I’m getting that back in points anyway so it’s a negligible benefit to me. And that’s assuming that the credit card companies even allow it — their merchant agreements already forbid charging different prices for a product on the credit card vs. in cash.
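The arithmetic behind that claim is worth spelling out. Here is a small sketch with assumed rates (2.5% interchange, 1% card points, a hypothetical 1% crypto discount; none of these figures come from the report beyond the 2-3% interchange range it cites):

```python
# Who actually benefits if a merchant offers a stablecoin discount?
# All rates are illustrative assumptions.

price = 100.00
interchange = 0.025     # merchant's card fee, in the 2-3% range
points_rate = 0.01      # ~1% back to the cardholder in points
crypto_discount = 0.01  # merchant shares 1% of its interchange savings

# Net cost to the consumer under each payment method
card_net_cost = price * (1 - points_rate)        # plus purchase protection
crypto_net_cost = price * (1 - crypto_discount)  # no protection, no recourse

# What the merchant keeps
merchant_keeps_card = price * (1 - interchange)
merchant_keeps_crypto = price * (1 - crypto_discount)

print(card_net_cost, crypto_net_cost)  # 99.0 99.0 -> consumer is indifferent at best
```

Under these assumptions the merchant pockets an extra $1.50 per $100, but the consumer pays the same net price and gives up chargeback protection, so a rational agent shopping on the consumer's behalf has no reason to switch.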
This is another one of these places where Citrini imagines that the reason something exists is because of some friction that will disappear with AI agents, but friction isn’t the reason merchants pay credit card fees.
There are a number of other things I disagree with, but I’ll skip those to focus on the one big thing that isn’t stated in the report (and frequently goes unaddressed in many projections of the negative impact of AI on the economy): deflation.
If all of the events in this piece come to pass, there will almost certainly be an unprecedented deflationary cycle. This is touched on in the form of a drop in real estate prices but otherwise ignored. If white collar firms cut most of their employees and have AI doing what would previously be highly-paid partner work, that’s going to drive the cost of services down.
If the layoffs then push a large number of white collar workers into the labor pool, that’s going to push wages down everywhere else in the economy, which will in turn drive prices down. That leads me to take issue with this section:
As in every downturn, outlays rise just as receipts fall. The difference this time is that the spending pressure is not cyclical. Automatic stabilizers were built for temporary job losses, not structural displacement. The system is paying benefits that assume workers will be reabsorbed. Many will not, at least not at anything like their prior wage. During COVID, the government freely embraced 15% deficits, but it was understood to be temporary. The people who need government support today were not hit by a pandemic they’ll recover from. They were replaced by a technology that continues to improve.
The government needs to transfer more money to households at precisely the moment it is collecting less money from them in taxes.
Most downturns aren’t deflationary, and while it’s fair that the government will need to give out money as receipts fall, the amounts needed will be relatively less than would be required in a typical recession. Beyond that, the structural change will make this a very different situation. You’ve got lower employment but greatly increased productivity overall; the big-picture math there says that overall we have more resources to give out to our society, even if those accrue to a smaller number of people and firms.
Overall the economy may shrink from a monetary perspective but not in terms of productive capacity. That means it’s a matter of getting the government to distribute resources to the people who need them, and given how wildly popular it’ll be to take money from AI firms and provide it to the dispossessed, the situation seems likely to come to a much more rapid resolution than the piece suggests.
What I Agree With
While there is a lot that I disagree with in those specifics, I do think that this gets a lot of the big picture correct.
“Technological innovation destroys jobs and then creates even more”. This was the most popular and convincing counter-argument at the time. It was popular and convincing because it’d been right for two centuries. Even if we couldn’t conceive of what the future jobs would be, they would surely arrive.
ATMs made branches cheaper to operate so banks opened more of them and teller employment rose for the next twenty years. The internet disrupted travel agencies, the Yellow Pages, brick-and-mortar retail, but it invented entirely new industries in their place that conjured new jobs.
Every new job, however, required a human to perform it.
AI is now a general intelligence that improves at the very tasks humans would redeploy to. Displaced coders cannot simply move to “AI management” because AI is already capable of that.
I’m always surprised to hear prominent economists, AI leaders, etc., pitch this idea that AI will create new jobs because that’s what has happened in the past. You’re sitting here telling me that we’re going to have these agents that are smarter than humans and capable of learning the same things humans can and with access to all the digital tools and knowledge that exist and that they’re going to take over huge numbers of jobs that exist today, but it’s fine because they’ll create new jobs — okay, but… why won’t they be able to do those new jobs better than people too?
It’ll happen for a while; agents will be able to write the code, but a senior developer will need to guide the big picture architectural decisions. Except that then the AI will be able to guide the big picture architectural decisions, and then what? The human will keep moving up and up and up the stack, but eventually there will be no further up to go.
I don’t think AI will replace all jobs. There will be some where humans will be naturally preferable just because they’re humans. I would posit that once we have humanoid robots that can do the physical job of a nurse better than any human, many people will still want to have a human caring for them when they’re sick.
I suspect we’ll also have humans involved in business, but pretty much just relegated to making the subjective decisions. AI will be better at figuring out how to architect a software product, but somebody will still have to tell it what kind of software to build. I do think there will be AIs doing this too; basically someone will provide the capital, spin up an AI agent and say, “go do whatever makes me the most money!” Probably other humans will want to be more involved, though maybe they’ll get outcompeted by AI-only companies.
Insurance renewals, where the entire renewal model depended on policyholder inertia, were reformed. Agents that re-shop your coverage annually dismantled the 15-20% of premiums that insurers earned from passive renewals.
Insurance is a bizarre industry to me. It should be this highly automated and precise thing, in which firms have some information about your business or car or home or whatever it is that you want to insure, and they quote you and you pick the best quote. Instead it’s a tangled mess of having to talk to one guy who routes you to another guy until eventually you answer a bunch of questions and get a quote, and you’re so drained that even the thought of doing it again to see if you can get a better price is exhausting so you just say yes and move on with your day.
This is actually one place where I think agents are a solution to the friction that exists. My agent can simultaneously call every broker and rattle off whatever information they need until they get the absolute best quote. There are reasons why people would use a real estate broker — it’s a huge, often emotional transaction — but none of those apply to insurance. You just want the best price! There is no reason it should be a relationship business.
Financial advice. Tax prep. Routine legal work. Any category where the service provider’s value proposition was ultimately “I will navigate complexity that you find tedious” was disrupted, as the agents found nothing tedious.
I don’t quite agree with the value proposition put forth here; my CPA does my taxes not primarily because it would be a tedious task for me but because I do not possess the requisite knowledge of the tax code to do them optimally.
Still, the overall point is valid. White collar service industries make up an enormous part of the US economy, and they’re ripe for disruption. There will be some regulatory protection, since their respective lobbying groups will do whatever they can to ensure an AI agent cannot act as legal counsel or file taxes on your behalf, but the nature of these markets and the enormous numbers of players in them mean that AI will still lead to price cuts and job losses.
Maybe my AI agent can’t file my taxes for me (though who’s to say it can’t prepare them for me to file myself?), but tech-savvy CPAs should be able to use AI to do most of their work, lower their prices drastically and take market share. They won’t need employees, except perhaps for a few people to handle client relations with those customers who want a human face. Since the market is somewhat opaque and inefficient, firms not using AI will still go on for a while as their longtime customers stick with them out of inertia, but in the end capitalism wins here. If you don’t lean on AI and cut your prices, you’re just not going to last in this new world.
India was the inverse. The country’s IT services sector exported over $200 billion annually, the single largest contributor to India’s current account surplus and the offset that financed its persistent goods trade deficit. The entire model was built on one value proposition: Indian developers cost a fraction of their American counterparts. But the marginal cost of an AI coding agent had collapsed to, essentially, the cost of electricity. TCS, Infosys and Wipro saw contract cancellations accelerate through 2027. The rupee fell 18% against the dollar in four months as the services surplus that had anchored India’s external accounts evaporated. By Q1 2028, the IMF had begun “preliminary discussions” with New Delhi.
Yeah, India is not going to have a good time for the next few years. If your company’s value proposition is that you can offer mediocre to pretty good software developers at a very cheap price, you’re gonna have a bad time.
I’ll just close by saying again that I applaud the folks at Citrini for putting this out. We live in crazy times and there is tremendous value in anything that gets people thinking about what the future is going to look like.
Thanks for indulging me on this one. Back to me using AI to run my business next week!