In this issue:
- AI Ruins Education the way Pulleys Ruin Powerlifting—AI makes it much harder to test certain things, both within school and between schools. But the biggest negative impact is on people who were trying to maximize their grades-to-learning ratio. For students who genuinely want to learn, AI will accelerate that learning—and there are already compelling test cases.
- The Other Kind of Bank Unwind—Banks can fail surprisingly fast, but it's not necessarily an improvement when they fail slowly.
- AI Asset Plays—AI raises the scrap value for failed social networks.
- Retail Rides Again—A pandemic trend that hasn't mean-reverted.
- Single-Family Rentals—Corporate landlords may have to divest.
- AI Math—It's expensive to find out whether an AI idea is a good idea.
AI Ruins Education the way Pulleys Ruin Powerlifting
A good framework for evaluating the impact of some technologies starts with listing the relevant superlatives. In education, for example:
On the other hand, let's try some other superlatives, starting with the more formally academic:
- It's never been faster to check an essay for grammar problems, whether as a student or a grader.
- It's also never been easier to grade unstructured text assignments based on checklists—if the assignment is to explain the causes of the English Civil War or the Great Depression, it's a lot faster to check thirty essays of three pages each to see if they all reference Divine Right or rising tariffs.
- It's never been easier to get last-minute help on tricky concepts, especially if the confusion about them arises at some inconvenient time, like late at night the day before an assignment is due.
- Handouts, quizzes, lesson plans, and other textual artifacts of teaching are easier to make. Yes, there is the risk of hallucinations here, but "read this short text about the topic you teach and correct any misconceptions it contains" is a core competency that teachers already have.
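As a toy illustration of that checklist-grading workflow, here is a sketch that uses plain substring matching as a crude stand-in for an LLM call; the checklist items and the sample essay are invented for the example, and a real setup would send the rubric and each essay to a language model instead.

```python
# Crude stand-in for LLM-assisted checklist grading: flag which required
# concepts each essay mentions. (Substring matching is just for shape;
# a real pipeline would ask a model whether each concept is addressed.)
CHECKLIST = ["divine right", "ship money", "religious conflict"]

def checklist_report(essay: str) -> dict:
    """Return {checklist item: whether the essay mentions it}."""
    text = essay.lower()
    return {item: item in text for item in CHECKLIST}

essay = ("Charles I's belief in the Divine Right of kings, and his levying "
         "of ship money without Parliament, both fed the conflict.")
report = checklist_report(essay)
print(report)
```

Run over thirty essays, this produces a per-essay coverage table in seconds—the grader's job shifts from hunting for mentions to judging how well each concept is used.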
These are all ways that AI helps with schooling. But it's also good for education, which can, but does not necessarily, occur alongside schooling. What is education? A good working definition for being educated is that it's the ability to meaningfully participate in humanity's pursuit of knowledge. That's broad enough to cover poetry, econometrics, baking, multivariate calculus, how to comfort a friend after the death of their beloved pet, etc. There's a lot to it; it's never done. Another good working definition is that education is a measure of the recursion depth you can handle when a curious child keeps asking you "Why?"
And AI is really, incredibly good for education, especially the kind that involves exploring topics independently. (It's still a useful tool for cheating, but the students who were cheating were also doing their best to minimize learning, so all it really does is make that process more efficient. You can put up impressive deadlift numbers by using a pulley instead of getting strong, but if you do that, you weren't really trying to get strong in the first place, were you?) You can trivially write your own lesson plan, which will not be perfect but will include plenty of information that you don't know you're missing. Similarly, you can dive into advanced textbooks without fear, because ChatGPT is great at telling you what the prerequisite concepts are for something you're trying to learn.
This process takes some getting used to, but it's quite doable. ChatGPT doesn't replace teachers, but does offer exactly what a good part-time tutor does: enough breadth to recognize holes in your knowledge, and to point you towards filling them. It does sometimes hallucinate the textbooks it recommends, but if you search for the topic rather than the exact title and add "Reddit" or "Metafilter" or "LessWrong" to the query, you'll probably find what you're looking for.
This kind of education doesn't work for everyone, and it doesn't work for every topic. For example, if you're trying to pick up some kind of tacit knowledge, no amount of text is a substitute for an in-person demonstration or a video. And I’ve also noticed that LLMs tend to have limited knowledge of obscure programming languages (though their knowledge of the principles behind them is comprehensive, unless the language in question is truly weird).
That combination of good theoretical grounding and less practical knowledge about the details of specific libraries is not optimal for productivity—you just want to write the code, not to finally learn what's getting hashed in a hash table and why it matters! But learning-through-doing is all about structuring work so there's an optimal drag from not knowing quite what you're doing. This is very hard in a classroom environment; it's very unlikely that twenty-five people in the same age range and zip code will all happen to be capable of exactly the same academic work, or will have the same level of motivation in every class. If instruction is mass-produced at the classroom level—one teacher addressing a class and giving everyone the same assignments—the best you can get is a class that's too slow-paced for the best students and still too fast for the slowest. And even then, although academic skills broadly correlate, students do have different relative strengths and weaknesses. So even in that optimal scenario, the same student can be bored in a fourth-grade math class but struggling in a fourth-grade reading class.
As with customer service, AI in education could focus less on replacing workers and more on freeing them to concentrate on things that are still hard to automate; a teacher who teaches five classes with twenty students each has one hundred students to keep track of, and just isn't all that likely to figure out that one of them misunderstands a particular fundamental concept while another is struggling with the mechanics of a problem and should probably do a bunch of practice drills until it starts to click. It's hard to find room for individualized instruction if there's a lot of mass instruction, grading, etc. in the mix. But if students are interacting with AI tutors most of the time, and talking to a human being when they're truly stuck or can't find the motivation to keep going, teachers will have about as much to do as before, but will be doing more valuable things.
In that model, where students work at their own pace (at least if that pace is fast enough to consistently perform at grade level) and teachers are in a support role, there isn't much need to lock students into the Nth grade they're chronologically suited for if they're capable of N+2nd- or N+5th-grade work. And this can vary significantly by subject; the extremes are more visible in math and music than elsewhere, but there's a lot of variance in talent.
And that variance presents an important problem: performing well in an academic domain requires some combination of crystallized intelligence (what you know) and fluid intelligence (how fast you think). These can be substituted a bit—it's a rite of passage for mathy kids to neglect to study for something, forget a critical formula, and then re-derive it during the test, and it's a rite of passage for conscientious and ambitious people to have some class they aced through brute-force memorization when they couldn't get the concepts down pat. The more a field advances, the more crystallized knowledge is required to get to the frontiers. Isaac Newton was able to make material contributions to math and physics as an undergraduate, in part because there was so much low-hanging fruit from the absence of a Newton. A modern Newton wouldn't be making that kind of contribution at that age simply because there's a whole tower of giants-on-the-shoulders-of-giants to scale. Over time, the accumulated-knowledge barrier gets higher and higher, and more people don't scale the peak until after their fluid intelligence has started to decline. So any time a sufficiently talented young person is held back in school, we're statistically reducing the likelihood that they'll be at the peak of their thinking abilities by the time they know where to apply them.
Education policy tends to focus less on the needs of 99th-percentile students than on everyone else's, in part from the healthy egalitarian view that they will do just fine without extra instruction. But that's a narrow and present-focused view: raising the ceiling on potential academic achievement today raises the floor on standards of living for the next generation, since so much economic growth comes from scaling and refining a handful of big inventions, and those big inventions are in short supply. But AI is also great for the 50th-percentile student. 99th-percentile kids will tend to appreciate this kind of customization because they can quickly outstrip anything a standard grade-level curriculum has to offer in their chosen subject, but the student-level impact is probably bigger for the 50th-percentile student, since there's so much variation in which specific pluses and minuses put them at that level, and thus so much room to improve their worst subject. For an educator with infinite patience and infinite attention, the optimal student grade on any given test is probably something like 70%: better than random but worse than adequate is the most informative grade someone can get, since it illustrates exactly what they need to move to the next level.
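One way to formalize that 70% intuition: treat each test item as a pass/fail signal and measure its Shannon entropy. A toy calculation (my illustration, not a claim from the piece) shows that scores near 100% or 0% carry almost no information, while intermediate scores carry the most. Pure entropy actually peaks at a 50% success rate; the higher 70% target reflects the added constraint that the student should still be performing adequately.

```python
import math

def item_information(p: float) -> float:
    """Shannon entropy (in bits) of a single pass/fail test item
    that a student answers correctly with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a guaranteed outcome conveys nothing new
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Near-perfect scores tell the tutor almost nothing about what to teach next;
# scores away from the extremes reveal the most about where the student stands.
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p={p:.2f}: {item_information(p):.3f} bits")
```

At a 70% success rate each item still carries about 0.88 bits, versus roughly 0.08 bits at 99%—a test the student nearly aces is mostly wasted measurement.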
It's challenging to actually get a school to adopt something like this. There's the standard reluctance for people who get paid for their labor to adopt a labor-saving technology, especially if—as is the typical case—its output is something slightly worse but astonishingly cheaper. But it's necessary: US schools tend to get funded based on the number of students they have, and their costs are mostly fixed; the sixteenth student in a fifteen-student classroom needs a little more effort, but not 1/16th of the total. So if there's a productivity improvement that competing private schools or homeschooling parents discover, they have no choice but to adopt it. Education is labor-intensive, and the default for any labor-intensive job in a growing economy is to see continuous declines in productivity; even if teachers' work doesn't change, if the jobs that compete for their talents get more productive and pay more, schools have to raise wages to compensate. (It's of course quite unfair that every time ASML has a breakthrough, it increases the wages of the median React programmer. But that's life in a complex economy—you're always benefiting from and suffering from random changes that are outside of your control, though the benefit outweighs the suffering on average over time. The best you can do is engineer your job so it's a complement to the things that are constantly getting cheaper and better.)
The approach of AI-enhanced adaptive learning is something I've seen in action, and while N is small it seems to work well. I've used ChatGPT to fill in lots of gaps in theoretical knowledge, to convert mathematical formulas into programs that I can play around with until I understand them, to confirm some intuitions, disconfirm others, and find the right name for a concept so I can read the work of someone who understands it better than I do, etc. This is all useful to me, but hard to quantify. It's the ideal middleware between "I had a fascinating conversation with someone who's an expert in a field I know very little about" and answering all of the follow-up questions that emerged from that conversation.
Meanwhile, there are already promising signs that an app-first education can work. My oldest daughter attends Alpha School, whose pitch is: two hours of typical schoolwork per day (reading, math, etc.), all in apps, and then enrichment activities like learning karate, programming robots, practicing public speaking, etiquette, etc. It's tricky to generalize from a small sample, and especially so in education—the parents who select into unusual schools will be different from the ones who don't, and randomized controlled trials are expensive and controversial. But: she's in second grade chronologically, and when she finished her second-grade coursework, her next assignment was simply at the third-grade level, and that continued until she reached fourth. It's possible for a patient parent to lobby their school to put their child in a more advanced grade, but it's hard to arrange, and that grade will still be set by their worst subject rather than their best. This is a better default, where students are continuously challenged, get feedback on their weaknesses, and advance when they're ready, at whatever pace works. (In fact, one of the questions this model has to deal with is: when they're chronologically in tenth, eighth, or sixth grade, and they've finished the K-12 curriculum, how do you keep them busy?)
This is not cheap, at least for now; if I didn't place a high value on my kids not being bored for a decade and a half, I'd probably spend the money on something else. But it scales in a way that traditional education doesn't: improvements in the efficiency of app-based education increase the number of students a given teacher can work with by decreasing the amount of time that any given student needs. It also leads to some flexibility for other fixed costs, like real estate. When kids are doing self-paced learning in apps, it's simply not that big a deal if they have to stay home for a day, or go on a family vacation; I've worked silently at my Laptop Job while my daughter works silently on her Laptop School a few feet away.
Schools move slowly, and it's hard for edtech to make inroads—schooling in the US is an $800bn market of which 120% is already spoken for. But education is hard to resist; ultimately, the people who prioritize it are capable of things that other people aren't, and they are the ones who end up in charge. So take it as a given that the most ambitious parents will be adopting AI-based education, that their kids will be learning faster and will never get around to associating learning with boredom, and that there is no meaningful policy decision that will stop this. The question is: how fast will the rest of the schooling system catch up?
Diff Jobs
Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:
- A seed-stage startup is using blockchains to enforce commitments and is in need of a fullstack developer with Solidity experience. (Remote)
- A company building ML-powered tools to accelerate developer productivity is looking for a frontend engineer with product and UX experience. (Washington DC area)
- A company reinventing the way Americans build wealth for the long-run by enabling them to access "Universal Basic Capital" is looking for fullstack engineers with prior experience in fintech. (NYC)
- A fintech company using AI to craft new investment strategies seeks a portfolio management associate with 2+ years of experience in trading or operations for equities or crypto. This is a technical role—FIX proficiency required, as well as Python, C#, and SQL. (NYC)
- An investment company using AI to accelerate investment in esoteric asset classes is looking for a product engineer with Python and Typescript experience; preferably someone with a track record of building on their own (Bay Area, remote also a possibility).
Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up. If you're at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.
Elsewhere
The Other Kind of Bank Unwind
A little over a year ago, there was a quick banking crisis, with lots of "bad news Thursday, worse news Friday, bank under government supervision Monday morning" sequences. But that's not the only way banks fail. The other option is the sixteen-year workout of tiny Liberty Bank. Small banks can take their time reorganizing because they're less of a nexus of systemic risk—their depositors will be able to find another bank, and their failure probably won't lead to cascading defaults elsewhere. A quick liquidation is expensive, because whoever buys the bank is underwriting the risk that it's even worse than it looks, and that even if it isn't, it will get worse when the best people leave. But in a long process, the bank is still making subpar loans and the capital remains tied up for a long time. It also encourages the bank to take risks: if it's failing anyway, a high-risk bet can get it back to breakeven, and if the bet fails, the bank was headed for insolvency either way.
AI Asset Plays
The eventual fate of any company is that its assets are worth more when applied to some other purpose, and it shuts down. This usually happens when the economics of the business itself deteriorate, and whatever it bought in more flush times (especially real estate) can be sold. Occasionally, it's the opposite: the assets appreciated faster than the business improved, so the sensible thing to do is to sell them. And sometimes it's both: a photo-sharing app, EyeEm, went bankrupt and was sold last year; now it's requiring users to manually delete their photos if they don't want them to be licensed for AI training. This is uncomfortable for people who uploaded photos for some other purpose, but in general your attitude towards posting things online should be to assume that after a while, they're all either public or deleted ($, Diff).
Retail Rides Again
Retail day-trading is the rare phenomenon that picked up during Covid, had an eye-catching peak, and then reset to a much higher level than pre-pandemic, rather than reverting to the original trend ($, WSJ). Some of this predated the impact of the pandemic (The Diff was writing about WallStreetBets in February 2020), but stimulus checks and a lot of time indoors exposed many more people to the joys of volatile stocks and short-dated options. Gambling is one of a few industries that try to optimize for customers rounding the cost of their service down to zero, at least over the timescales that matter when they're asking themselves whether it's a bad habit or not. In trading, zero-commission is an incredibly powerful way to do that.
Single-Family Rentals
State and Federal legislators are considering plans to force institutional single-family landlords—i.e. companies owned by PE or public markets that scoop up suburban homes for rental—to divest their portfolios to homeowners ($, WSJ). This is a business that The Diff has covered a few times ($): to the extent that these landlords compete with homebuyers, they're increasing prices, but if they compete with smaller landlords, they increase the supply of rentable property because their occupancy rate is higher than the nationwide vacancy rate. The relevant inventory number is not the easy-to-calculate count of houses sold, but the harder-to-estimate number of available home-months that can be sold to willing buyers. One thing the WSJ piece illustrates is that, despite their high growth, these larger landlords typically account for 1-2% of all homes purchased by investors, and they've actually reduced their share in the last few quarters. The typical home is still owned by its occupant, and the typical home that isn't is owned by the kind of small-scale landlord who has existed for a long time. But even if the larger landlords are a small share of the market, they can still push prices up at the margin. On still another hand: they tend to buy in markets where they see favorable demographic trends and tight housing supply, so part of their social function is to raise home prices today in places where they'd otherwise rise more gradually over time.
AI Math
One of the drivers of the consumer Internet renaissance of the early 2000s was just how cheap it was to start a company. Open-source software and affordable consumer hardware meant that the cost of launching was low, so the pace of iteration went up. AI has completely inverted that: there's a high fixed cost just to participate, so small companies have had to, and been able to, raise substantial rounds before they have any real business ($, WSJ). This creates some troubling incentives: hardware and foundation model companies want the next layer of the supply chain to be as fragmented as possible, so they offer easier terms to startups. This means that early AI companies, when they do have a business, can easily find themselves addicted to fantasy economics. In a structurally high-margin business, this is tolerable for a while, but in AI it's entirely possible for a company to iterate into a situation where it produces 50% gross margins, but only because more than half of its costs are subsidized by suppliers.
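The arithmetic behind that last scenario is worth making explicit. With hypothetical numbers (the 50% reported margin from the text, plus an assumed subsidy larger than the startup's paid costs), the gross margin flips negative the moment suppliers stop picking up the tab:

```python
# Hypothetical numbers for illustration only: a startup reporting 50%
# gross margins while suppliers absorb more than half of its true costs.
revenue = 100.0
reported_cogs = 50.0   # what the startup actually pays its suppliers today
subsidy = 60.0         # assumed costs currently absorbed by suppliers

reported_margin = (revenue - reported_cogs) / revenue
true_cogs = reported_cogs + subsidy
true_margin = (revenue - true_cogs) / revenue

print(f"reported gross margin: {reported_margin:.0%}")        # 50%
print(f"gross margin if subsidies end: {true_margin:.0%}")    # -10%
```

The company can iterate its way to a product whose unit economics only exist at its suppliers' pleasure—which is exactly the dependency those suppliers want.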