In this issue:

- Algorithmic Collusion—Companies in an industry can't all get together to set prices, but they can buy software products that accidentally or intentionally do this for them. It's hard to avoid as software gets better and businesses get more comfortable sharing some of their critical data. And, ironically, it's a more dangerous problem when it happens entirely by accident.
- Operationalizing Secrecy—Corporate leaks are surprisingly rare, but for the companies that successfully keep secrets, it's a hard-won battle.
- Product Convergence—LinkedIn grudgingly considers copying TikTok.
- Black Box Update—The biggest ad platforms keep pushing advertisers to share more data, while the platforms themselves share less.
- Data Scarcity—Once we've trained AI models on all the world's data, what's next? Training them on less, or manufacturing more.
- Cash Management—The inconvenience of doing banking for companies selling an almost-legal product.
Algorithmic Collusion
There are a few ways that competitors will end up charging exactly the same prices. The first is the easiest to track: if they're in a competitive business with low margins, and they have similar cost structures, prices are set by forces outside their control; Exxon would love it if motorists felt that there was some essential Exxon-ness that Chevron just couldn't provide, but no, motorists choose their gas based mostly on price and convenience. More ambiguously, for products that are high-margin, prices get set in reference to consumer expectations, and those get set in part by what's already out there. This newsletter's original pricing was set by looking at the Substack leaderboard, finding the most expensive newsletter that covered a similar topic area, and choosing the same price.
If, instead, I'd gotten in touch with that newsletter writer, and the two of us had agreed that we'd charge the same price and would coordinate our price increases, it would have been a straightforwardly anticompetitive act. And if this were widespread—if it were suspiciously hard to find any newsletter covering finance and technology that was priced below $19.99 or above $20.01, legal action would be a possibility.
What if we try something else, though? We could subscribe to some revenue-management product, upload data on our conversion rates, churn, the return on ads, elasticity revealed by temporary discounts, etc. That product might suggest periodically discounting things, or might argue that a 25% increase in price and a 20% drop in readers would mean equal revenue but less time spent on customer support. The product would get better as more writers used it and shared more data. And if its share got high enough, and its simulation of elasticity got good enough, it might suggest that we all converge on exactly the same price and never discount.
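The arithmetic behind that kind of recommendation is worth making explicit. A minimal sketch, using an illustrative price and subscriber count rather than any real newsletter's numbers:

```python
# A 25% price increase exactly offsets a 20% drop in readers,
# since 1.25 * 0.80 = 1.00. All numbers here are illustrative.
base_price, base_readers = 20.0, 1000
new_price, new_readers = base_price * 1.25, base_readers * 0.80

print(base_price * base_readers)  # 20000.0
print(new_price * new_readers)    # 20000.0: same revenue, 200 fewer readers to support
```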
This is close to what a group of Atlantic City casino hotels have been accused of. The complaint is a good read, though it veers between interesting thoughts on algorithmic collusion (with unclear levels of deliberate effort) and some kind of fuzzy thinking about the underlying economics. The basic risk is that if 80% of the businesses in a given area are using the same pricing model, and they're all sharing data, this is just a slightly more efficient (but also slightly more deniable) alternative to getting together in a smoke-filled room and agreeing to set prices and split the market.
From a consumer welfare standpoint, it doesn't particularly matter if this is an instance of accidental, emergent collusion or a deliberate conspiracy. But for understanding how this happens, it's quite relevant. The temptation to collude is universal: industry insiders have more in common with each other than with one another's customers, and have an information advantage. If some supplier announces that they're hiking prices 10%, and you find that every other supplier has done the same, it's nontrivial to determine whether this is because the cost of whatever they're supplying has gone up or because they've formed a lively chat group on Signal and decided who has market share where. Sometimes it's so institutionalized that it's hard to reverse, as in the GE case in the 1940s: GE had a rule against price-fixing, and executives often repeated that rule to salespeople, always with an ostentatious wink:
In May of 1948, for example, there was a meeting of G.E. sales managers during which the custom of winking was openly discussed. Robert Paxton, an upper-level G.E. executive who later became the company’s president, addressed the meeting and delivered the usual admonition about antitrust violations, whereupon William S. Ginn, then a sales executive in the transformer division, under Paxton’s authority, startled him by saying, “I didn’t see you wink.” Paxton replied firmly, “There was no wink. We mean it, and these are the orders.”
Collusion is, in a time of more sophisticated pricing and above-normal inflation, not the main hypothesis anyone should start with when prices rise in tandem. But it happens.
If algorithmic collusion happens in a cynical, intentional way, it's easier to prosecute. One thought experiment proposed in the Atlantic City complaint is The Bob Test: replace the word "algorithm" with "a guy named Bob" and see if it sounds illegal to make a plan like "All of us will tell Bob what our unit economics look like, and how fully-booked we are, and we'll ask Bob to tell us how much we should all charge to make as much money as possible." But that cuts both ways: if Bob is an experienced consultant who's good at pricing, and who gets hired by all of the companies in an industry, it's not inherently suspicious that they'd all want to hire him, any more than it's suspicious that they all use Google and accept Visa. But if they're all sharing data with a third party with the explicit intent to raise prices, there are fewer ethical guardrails. And if they aren't conspiring, there isn't a conspiracy to defect from!
That's the real risk of accidental algorithmic collusion: companies hire lots of service providers, and generally keep the ones who produce good returns. If their ad agency works well, they keep using them; if the accountant keeps the books in good shape, that accountant sticks around. If a cartel is an emergent property of business decisions that are entirely defensible on their own merits, then there are two problems:
- It's obviously harder to prosecute if Hard Rock Café says something like "this is good for Hard Rock" rather than "this will get Caesar's to finally stop undercutting us on suites."
- Cartels are extremely unstable, and only tend to last in high-trust situations. (Beware any concentrated industry with a reputation for really wild conferences—the more fun they have at the conference, the more mutual blackmail they have after.) But defecting only works when members know they're in a cartel! If they don't recognize this, they won't realize that they can do the classic cartel-busting move of cynically betraying their industry buddies by cutting prices to run away with the market.
There is a partial solution to this. Revenue-optimizing software will have a very easy time creating cartel behavior if the pricing model it uses assumes that everyone else uses the same pricing model. But if it doesn't make that assumption, then any given customer can be the one who benefits from defecting from the agreement and cutting their own prices. This is why it's historically been hard to get the low-cost producer to agree to high prices: they're the ones who know they'll benefit most from grabbing that last 1% of market share.
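To see why that assumption matters, here's a toy simulation, with a made-up demand curve and cost rather than anything from the complaint. When the shared model knows every seller runs it, the natural recommendation is the joint-profit-maximizing price; a single defector who undercuts that recommendation takes the whole market:

```python
# Toy model: identical sellers, and buyers all go to the lowest price.
# The demand curve and unit cost are illustrative assumptions.
COST = 40.0

def demand(price: float) -> float:
    """Units sold at the market's lowest price (toy linear demand)."""
    return max(0.0, 1000 - 8 * price)

def profit(price: float, units: float) -> float:
    return (price - COST) * units

# Case 1: the pricing model assumes universal adoption, so it recommends
# the joint-profit-maximizing (i.e. monopoly) price to everyone.
cartel_price = max(range(40, 126), key=lambda p: profit(p, demand(p)))
cartel_profit_each = profit(cartel_price, demand(cartel_price)) / 2  # two sellers

# Case 2: one seller quietly undercuts the recommendation by a dollar
# and captures the entire market.
defect_price = cartel_price - 1
defector_profit = profit(defect_price, demand(defect_price))

print(f"cartel price: {cartel_price}, profit per seller: {cartel_profit_each:,.0f}")
print(f"defector price: {defect_price}, defector profit: {defector_profit:,.0f}")
# The defector roughly doubles their take, but only if they realize the
# "optimal" price they were handed was a cartel price to defect from.
```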
So antitrust will have to get more technically sophisticated, and will have to audit the source code and run econometric simulations to see whether the algorithm encodes implicit cartel behavior or is just smarter about modeling how elasticity changes during three-day weekends or something. Meanwhile, the temptation to use SaaS as a cover story for actual collusion is high, particularly in consolidating businesses. So companies will probably want to start setting internal policies that limit the risk that they'll become accidental monopolists.
This newsletter has argued before that almost everyone likes price discrimination, and its effects, as long as they're not aware of what's happening. Casino hubs are a great example of this: a Vegas vacation is an incredible bargain if you don't like gambling or drinking very much, since basically everything you can buy is priced in reference to the attach rate of some high-margin vice. But it ceases to be benign when monopolistic pricing is an emergent result of algorithms that are sophisticated in their execution and naive in their implications.

As the world gets more networked and more software-mediated, engineers are in the same position that someone working at, say, DuPont was a century ago: today's clever hack is next decade's scandal-turned-class-action. And the economics of the software providers create an incredible temptation: the marginal cost of selling this casino hotel optimization product to one more casino company is low, and if the marginal benefit goes up as the market buys more of it, the software provider has incredible pricing power.

Monopolies are in some sense a good and unavoidable sign of progress: any time you build something that's never been built before, you also monopolize a market. But the other kind of monopoly, the antisocial kind, arises when you try to arrange the world so the financial benefits of that innovation accrue to someone whose key insight was some new way to manipulate the market: keeping new competitors out and getting existing ones aligned on exactly the right way to soak consumers. Even though this particular case is weak, it's a sign of what is to come.
Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

- A company reinventing the way Americans build wealth for the long-run by enabling them to access "Universal Basic Capital" is looking for a GTM / growth lead. (NYC)
- A successful crypto prop-trading firm is looking for new quantitative developers with experience building high-performance, scalable systems in C++. (Remote)
- A well funded seed stage startup founded by former SpaceX engineers is looking for full stack engineers previously employed by Anduril or Palantir. (LA)
- A seed-stage startup is helping homebuyers assume the homeseller’s low-rate mortgage, and is in need of a product designer and a product manager. (NYC)
- A private credit fund denominated in Bitcoin needs a credit analyst that can negotiate derivatives pricing. Experience with low-risk crypto lending preferred (i.e. to large miners, prop-trading firms in safe jurisdictions). (Remote)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.
If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere
Operationalizing Secrecy
In some ways, it's hard to believe that corporate nondisclosure policies ever work at all. Many companies don't lock down their systems comprehensively, people leave poorly-secured laptops in cabs, it's simple to load something up on a secure system and then take a screenshot of it on a personal phone, and, of course, the really juicy stuff is memorable enough that no silicon-only secrecy policy can stop it from getting out. And yet, surprises happen, and companies release unleaked products fairly frequently. One way that happens is by taking aggressive legal action against someone who does leak, as Apple is doing right now. For a company like Apple, secrecy is partly mystique, but it's also closely tied to their business model: close integration between hardware and software means that the lead time for copying some features is in the quarters-to-years range of hardware-constrained businesses, not the hours-to-weeks range for software. So, for them at least, it pays to treat leakers as harshly as possible.
Product Convergence
Last week's Diff briefly noted that when the "stories" format got popular, even LinkedIn used it (though they quickly killed their version). Now, LinkedIn is testing a "TikTok-like" short video feature. That's potentially more native to the platform than disappearing videos, but it also illustrates that even mature social platforms have to respond when there's a new interaction model.
Black Box Update
Since 2018, Google has been showing advertisers a measure of their "ad strength," which "measures the relevance, quality, and diversity of your ad copy." This has historically been an internal metric surfaced to customers for information only, but now Google is reducing the reach of campaigns that don't use enough kinds of creative. Since advertisers often omit certain formats specifically to avoid running ads in certain venues, this move has two effects:
- It encourages advertisers to run the same campaign across as many properties as possible, increasing the bid density on whichever platforms monetize worse.
- It continues the trend towards black box ad models over more targeted ones. Google has more control over where ads show up, and the advertiser is increasingly forced to use whatever targeting approach Google prefers over what they think works best.
Data Scarcity
AI models are increasingly constrained by the quantity of unique data they can access ($, WSJ). There are roughly three solutions:
- Researchers are getting better at building models with smaller datasets, particularly by pruning the data inputs to get better outputs. (And this is a case where bootstrapping can work—LLMs can speed up the process of trawling through lots of data to find which parts of it are at least written coherently; a sketch of one such filter follows this list.)
- Paying people to generate new data, or licensing big sources of it—this is part of the bull case for Reddit's recent IPO, and seems to be an important element of Stack Exchange's efforts to navigate AI.
- Training models on model-generated data. This feels dangerous, but it also describes the academic fields of math and philosophy, where the previous generation's output is the next generation's input, and progress (or at least activity) has been happening for millennia.
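On the first of those: one hypothetical sketch of what pruning for coherence might look like is to use a small language model's perplexity as a cheap quality filter, keeping only documents that score below some threshold. The model choice and threshold below are illustrative assumptions, not anything from the article:

```python
# Hypothetical coherence filter: coherent prose tends to get lower
# perplexity from a language model than garbled or boilerplate text.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # small model: this is a cheap pre-filter
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return math.exp(loss.item())

def keep(doc: str, threshold: float = 80.0) -> bool:
    """Crude quality gate; the threshold would be tuned on held-out data."""
    return perplexity(doc) < threshold

corpus = ["A clearly written paragraph about corporate finance.",
          "asdf qwer zxcv uiop"]
pruned = [doc for doc in corpus if keep(doc)]  # likely drops the keyboard-mash doc
```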
Cash Management
The WSJ highlights the difficulties legal cannabis companies have had in interacting with a banking system that doesn't necessarily want to process their transactions ($, WSJ). They often end up moving large amounts of physical cash, which can be a very high-risk activity (a few months ago, a small bank had 90% of its quarterly earnings wiped out when someone robbed an armored car carrying $9.5m). One thing this illustrates is that in networks, risk aversion is contagious: doing business with a bank that works with high-risk customers has a similar risk profile to taking that business directly. This encourages financial intermediaries to specialize: the inconvenience and risk of taking on one legal cannabis customer is high, and the marginal risk of the next one is lower.