[Inverted Passion] Usefulness grounds truth

Usefulness grounds truth

By Paras Chopra on Jul 07, 2024 09:38 am

Are LLMs intelligent?

Debates on this question often, but not always, devolve into debates on what LLMs can or cannot do. To a limited extent, the original question is useful because it creates an opening for people to go into specifics. But, beyond that initial use, the question quickly empties itself because (obviously) the answer to whether X is intelligent depends on how you define intelligence (and how you define X).

Even though it is clear that words are inherently empty, the internet is full of such debates. People focus on syntax, when semantics is what runs the world.

There’s no Platonic realm teeming with truth that’s disconnected from the world we inhabit. If it existed, the debate on what’s true would shift to the question of who has access to that realm. Is it the scientists? Is it the Pope? Or is it your neighbourhood aunty?

We, fortunately, live in the modern world where everyone is entitled to their opinions. Someone says God exists. The other person says it’s clear that God doesn’t exist. (I say it depends on the definition, but that’s boring and nobody wants to hear that.)

So, in a sea of opinions, how do you distinguish truth?

The trick is to reframe the question: instead of asking what’s true, ask what’s useful.

The kind of usefulness I’m talking about here is like, but not limited to, the usefulness of a kitchen knife. Just like a knife helps you slice tomatoes to make a sandwich for yourself when you’re hungry, “truths” are different tools in your arsenal that you could use to (potentially) make a difference in your life or the world at large (if that’s the kind of thing you care about).

We know 1+1 is 2 because it enables us to do simple accounting of objects and get ahead of other animals who can’t count. We know the sun rises in the east because this knowledge enables us to build houses with windows that stream sunlight into our bedroom just as we’re waking up (and, of course, also launch satellites).

I am walking in the footsteps of William James, who founded Pragmatism. Breaking away from the philosophical tradition of swimming in abstractions, he preached asking whether something makes a real difference or not. Without a focus on real-world impact, questions and debates often remain circular. Take, for example, the innocuous question: “do you love me?” It’s an empty question because love has no meaning beyond how it manifests. If I say I love you (whatever that means) but never do anything for you, should I defend my inaction by saying: “But I told you I love you”?

As you can imagine, nobody talks like this. Very soon, the cross-questioning about love gets into the specifics (like it should): “You said you love me, but you never give me roses”. Now, this is a better conversation because it is useful and actionable. It reveals the previously unstated assumption that the lover expects love to mean roses every now and then, thereby helping both parties get what they want (to love and be loved via an exchange of roses).

Science is a beautiful example of how truth emerges from usefulness. The scientific community has agreed that its stated goal is to study how the world works and its preferred method is nullius in verba. Opinions be damned, let’s see whose theory makes predictions that the real world agrees with.

Truth in science is nothing but predictions about what we will observe when we perform a certain action in the world. So, when we say that mass bends the fabric of spacetime, we’re explicitly saying that there are locations in space so dark that even light cannot escape, and so our telescopes should observe total darkness there.

Through an elaborate chain of cause-and-effect, the grounding of the truth of general relativity ultimately happens in the prediction of what we should or should not observe with our eyes peering into the optical telescope when we point it at different locations in space.

How do predictions in science relate to usefulness? Well, if I make a prediction X, and you make a prediction Y, I have an edge over you if mine tends to be the one that agrees with what the experiment reveals. The usefulness here finally emerges from its (potential) applicability. The theory of general relativity is true because it ultimately enabled us to build things like GPS satellites.
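To see how concretely that applicability cashes out, here is a rough back-of-the-envelope sketch (a minimal illustration using approximate textbook figures, not a definitive derivation) of why GPS would drift without relativistic clock corrections:

```python
# Rough check of the claim that general relativity is "useful" for GPS:
# without relativistic clock corrections, satellite clocks would drift by
# tens of microseconds per day, i.e. kilometres of ranging error.
# All figures below are approximate textbook values.

GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
R_earth = 6.371e6       # mean Earth radius, m
r_sat = 2.6571e7        # GPS orbital radius (~20,200 km altitude), m
v_sat = 3.874e3         # GPS orbital speed, m/s
seconds_per_day = 86400

# Gravitational blueshift: the satellite clock runs FASTER higher up in the potential.
grav = (GM / c**2) * (1 / R_earth - 1 / r_sat) * seconds_per_day

# Special-relativistic time dilation: the satellite clock runs SLOWER due to its speed.
vel = -(v_sat**2 / (2 * c**2)) * seconds_per_day

print(f"gravitational: {grav * 1e6:+.1f} microseconds/day")
print(f"velocity:      {vel * 1e6:+.1f} microseconds/day")
print(f"net drift:     {(grav + vel) * 1e6:+.1f} microseconds/day")
print(f"ranging error: ~{(grav + vel) * c / 1000:.1f} km/day if uncorrected")
```

Running this gives a net drift of roughly +38 microseconds per day, which at the speed of light is on the order of ten kilometres of positioning error per day: the prediction is “true” precisely because correcting for it keeps the system useful.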

Experiments with no immediate real-world usefulness, like the discovery of the Higgs boson, are useful to the extent I believe they confer on me an edge in a head-to-head battle about a real-world issue with someone else. So, truths are, ultimately, bets about what could turn out to be useful.

One can argue that many theories of the past turned out to be wrong. For example, people argue that Ptolemy’s epicycles don’t depict reality even though they made correct predictions. But, then, which theory depicts reality? What, ultimately, is the arbiter of reality? What is reality, anyway?

We are back to the circular logic of definitions. Reality is simply a collection of everything that impacts us (or could potentially impact us). And the only way for us to define it is via our tools and models. Models of reality (that work) are reality. Newton’s laws didn’t stop working (or, equivalently, being useful) once Einstein proposed relativity. Einstein simply expanded the repertoire of tools we have to intervene in reality.

(Embedded video: https://www.youtube.com/embed/kjxF6rcblTw)

Even though truth doesn’t exist independently of utility, that doesn’t mean it’s subjective. You can’t simply think you can fly and jump out of the window. Reality will intervene, and truth will emerge from the usefulness of the theory that no matter how hard you think, you can’t manifest flight out of thin air. So the question “can you fly?” is actually “will you survive if you jump?” in disguise.

Truth, here, is a prediction that enables you to get what you want in life (which, in this case, is not dying).

All our truths finally ground into what they do to the world we inhabit. Symbols require grounding in the real world. Without grounding, words are mere utterances.

Back to our original question: are LLMs intelligent? Let’s reframe it.

Can LLMs help summarise an article? Can they drive a car safely? Can they write a scientific paper that gets published in Nature?
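To make the reframing concrete, here is a minimal sketch of what one of those questions looks like once operationalised. The `llm_summarise` function is a hypothetical stand-in (a crude placeholder, not any real model’s API), and the usefulness criterion is arbitrary; the point is only that “can it summarise?” has teeth once you say what a useful summary would let you do.

```python
# Pragmatist reframing: instead of debating whether a model "is intelligent",
# pose a concrete task and check whether the output is useful by a criterion
# you actually care about. `llm_summarise` is a hypothetical stand-in.

def llm_summarise(article: str) -> str:
    # Placeholder: a real version would call whatever LLM you use; here we
    # crudely take the first two sentences as a stand-in "summary".
    sentences = article.split(". ")
    return ". ".join(sentences[:2]).strip()

def is_useful_summary(article: str, summary: str, must_mention: list[str]) -> bool:
    # "Useful" is operationalised, not defined in the abstract:
    # the summary should be much shorter and keep the facts we care about.
    shorter = len(summary) < 0.5 * len(article)
    keeps_facts = all(term.lower() in summary.lower() for term in must_mention)
    return shorter and keeps_facts

article = (
    "Einstein proposed general relativity in 1915. It predicts that mass bends "
    "spacetime. That prediction later made GPS corrections possible. "
    "The theory was confirmed by the 1919 eclipse observations."
)
summary = llm_summarise(article)
print(summary)
print("Useful?", is_useful_summary(article, summary, must_mention=["relativity"]))
```

Whatever criterion you pick (brevity, factuality, whether a reader downstream makes a better decision), the debate moves from the word “intelligence” to an outcome someone can check.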

See, words like “intelligence” don’t matter. At best, they’re pointers to tools, hypotheses, and models one can choose to adopt to increase the odds of getting what one wants.

TLDR: forget about truth. Ask what is useful, instead.

PS: all philosophy is politics.



The post Usefulness grounds truth appeared first on Inverted Passion.





