
Defending Habit Streaks

2026-04-06 12:34:01

I have a lot of habit streaks. Some of the streaks I have going at the moment:

  • Studied Anki cards for Chinese every day for 8 months*
  • Meditated every day for the past 1.5 years*
  • Flossed every day for 6+ months*

In fact I think quite a lot of my identity is connected to these streaks at this point, and that’s part of what sustains them [1]. But there are a lot of other things you can do to make habits and their associated streaks more sustainable.

It’s helpful if they are small enough and flexible enough to be done even on days where you are extra busy, or forgot about them until the evening. It’s good to schedule time for them in advance, both so you have a designated time to start, and so you know you’ll have enough time to finish. It can help to do the habit literally every day so you don’t have to think about whether today’s a day to do it, and so the streak feels more visceral. It’s also helpful if you actually want to do the habit, because it’s enjoyable or clearly linked to your larger goals.

Here I want to focus on what to do if, god forbid, you do actually break a habit streak. There’s an argument to be made that planning for what to do in the event of a break makes it psychologically easier to then skip a day. A lot of the power of a habit streak comes from making it unthinkable to break the streak. I think this is true, but accidents happen. Sometimes you just plumb forget, or are sick, or are on a transatlantic flight and the concept of well-defined, discrete days starts to break down. And, as may be obvious, the value of habit streaks comes not from having a perfect unbroken chain, but from consistently doing the activity. So one of the most important parts is how to recover.

To me, the primary line of defense is: don’t fail twice [2]. Put in a special effort the next day to make sure that you actually perform the habit. Make it your primary goal, leave extra time for it, and get it done. If you’ve done that, and you get right back on the streak, then I think you should give yourself permission to think of the streak as still alive. (You may have noticed asterisks in my initial list of habits – for all three of those, I have had a day where it’s at least ambiguous whether I did the habit: for Anki, I just totally forgot on one day while I was traveling; for meditation, it was, ironically enough, the first day of a meditation retreat, and we didn’t do a formal sit; for flossing, I was on a flight to London and slept on the plane.)

But what if you’re really sick, or something unexpected happens, and you miss two days in a row? This is where I think it’s helpful to hold a hierarchy of goals in mind at once. You could decide to care about keeping the habit alive at multiple levels:

  • Whether the streak remains unbroken.
  • Whether you’ve failed two days in a row.
  • How reliable you were in the past month.
  • Your overall 9s of reliability.

By shifting focus to a higher level goal, there’s always something at stake – you can’t just say “Oh well, the streak’s over, I guess there’s no point continuing until I decide to make a new streak.” There’s always some nearby goal that you could meaningfully affect; it’s never time to fail with abandon. Even if you broke the streak, you can revive it. And even if you missed twice, you can aim for a good month. And even if the month starts off badly, you shouldn’t write the whole month off because that’d damage your long-run average.

There are a bunch of variations you could do on which specific metrics to track, and how much to weight each in your definition of "doing a good job at the habit". But honestly I don't think it matters to get the incentives perfectly right, and in fact maintaining some strategic ambiguity there might be helpful – it'll be harder for your subconscious to exploit the details of your system. For me, collecting enough data that I could in theory compute whichever metrics I want is helpful enough, without actually having to compute them (partly because I haven't failed my habits enough recently to make that necessary, not to brag or anything).
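To make the hierarchy concrete, here's a minimal sketch of the bookkeeping it implies, assuming all you log is one yes/no per day (the data and function names are illustrative):

# Sketch of the goal hierarchy, assuming all you log is one yes/no per
# day (the data and names here are illustrative).
history = [True, True, False, True, True, True, True]  # oldest first

def current_streak(days):
    """Level 1: length of the unbroken run ending today."""
    streak = 0
    for done in reversed(days):
        if not done:
            break
        streak += 1
    return streak

def failed_twice(days):
    """Level 2: the 'don't fail twice' line of defense."""
    return any(not a and not b for a, b in zip(days, days[1:]))

def reliability(days, window=None):
    """Levels 3 and 4: fraction of days done, optionally over a trailing window."""
    window_days = days[-window:] if window else days
    return sum(window_days) / len(window_days)

print(current_streak(history))          # is the streak alive?
print(failed_twice(history))            # did I ever miss two in a row?
print(reliability(history, window=30))  # past-month reliability
print(reliability(history))             # long-run "9s" of reliability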

I’m not sure how to articulate how it feels to actually change the shape of your motivational system so it reflects these rules. A lot of it feels like subtly manipulating my motivational system by strategically making different things salient. The whole purpose of building streaks is to make a deal with an irrational part of the mind to achieve our rational goals, and trying to analyze it in rational terms often falls flat.

  1. Discussed in Atomic Habits. ↩︎

  2. Probably also discussed in Atomic Habits. ↩︎




Estimates of the expected utility gain of AI Safety Research

2026-04-06 12:19:28

When thinking about AI risk, I often wonder how materially impactful each hour of my time is, and I think that this may be useful for other people to know as well, so I spent a couple of hours making a couple of estimates. I basically expect that a tonne of people have put a bunch more time into this than me, but this is nice to have as a rough sketch to point people to.

I'm going to make 3 estimates: an underestimate, my best-guess estimate and (what I think is) an overestimate.

Starting facts[1]

  • Currently 8.3 billion people on planet Earth
  • Current median age: 31.1 years
  • Current life expectancy: 73.8 years

I am going to commit statistical murder and assume this means that everyone on the planet lives ~42.7 years from this point onwards. 

  • Underestimate: 40 years of life left/person
  • Median: 42.7 years + ~15 years' increase in life expectancy (20 years' growth in the past 60 years) = about 60 years of life left
  • Overestimate: Everyone gets life extension and lives to heat death of universe: 10^100 years

Since the population is growing, we should take that into account:

  • Underestimate: We only care about the lives of people currently alive
  • Median: We keep growing at the current ~1% growth rate per year
  • Overestimate: Population growth of 2% per year until the heat death of the universe

Given these parameters, we can figure out the total expected years of life we care about for each scenario: 

  • Under: 40 years x 8.3 B = 332 Gyr
  • Median: 

    Current population: 60 years x 8.3 B = 498 Gyr

    Additional population (linear approximation): 8.3 B x 1%/year ≈ 83 M new people per year

    Additional population life span: 73.8 years + ~1/3 yr added per year of life expectancy growth; solving L = 73.8 + L/3 gives about 110 years

    Total expected years of life: 498 Gyr for the current population, plus ~8.93 Gyr per additional year of births (83 M/year x ~110 years)

  • Overestimate: 10^100 years x 1.02^(10^100) = broken calculator.

I think it might be best to skip the overestimate. For the underestimate, we'll go with ~20 years of research to produce a 1% chance of a 1% decrease in the final risk for the entire field, with extinction occurring 30 years from now. For the median estimate, we'll go with 5 years of research against an extinction that happens 10 years from now, with a 50% chance of a 5% reduction in risk.

Expected years of life available to be saved:

  • Under: 332 Gyr x ((40-30)/40)  = 83 Gyr
  • Median: 498 Gyr x (60-10)/60 + 8.93Gyr x 10 = 415 Gyr + 89.3 Gyr = about 500 Gyr 

Expected years of life actually saved:

  • Under: 83 Gyr x 0.01 x 0.01 = 8.3 Myr
  • Median: 500 Gyr x 0.5 x 0.05 = 12.5 Gyr

Number of AI Safety researchers: 

  • Under: 10k researchers
  • Median: 2.5k researchers (to account for the growth of the field, current estimates are closer to 1-2k).

Expected impact per researcher:

  • Under: 830 yrs
  • Median: 5Myr

We've said the researchers have 20/5 years to make an impact, which gives us:

  • Under: ~40 years of life saved/year
  • Median: 1 Myr of life saved / year

Going back to the ~40 years of life expected for the modern median human, this gives an underestimate of 1 year of work to save one life, or a median estimate of 5 mins/life. This is a pretty broad range funnily enough.

1 year of work to save one life is just a tad worse than the 1.2 lives/year saved by donating £3000/year as advertised by Effective Altruism UK. If we take that value as given and assume 1 life = £2500, this means that on the median estimate, you should be earning £2500 x 10^6 / 40 = £62.5 million/year. If only the world was more sensible.
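The whole chain is easy to check in a few lines of Python if you want to poke at the assumptions (note that the 110-year lifespan makes the growth term ~9.1 Gyr/year, slightly above the 8.93 Gyr rounded figure used above):

# Reproducing the arithmetic above (all assumptions from the post).
POP = 8.3e9  # people alive today

# Underestimate: 40 years left each, extinction in 30 years,
# 20 years of research for a 1% chance of a 1% risk reduction, 10k researchers.
under_total = 40 * POP                         # 332 Gyr
under_savable = under_total * (40 - 30) / 40   # 83 Gyr
under_saved = under_savable * 0.01 * 0.01      # 8.3 Myr
print(under_saved / 10_000 / 20)               # ~40 years of life saved per researcher-year

# Median: 60 years left each, ~83M births/year each living ~110 years,
# extinction in 10 years, 50% chance of a 5% risk reduction, 2.5k researchers.
median_total = 60 * POP                        # 498 Gyr
growth = POP * 0.01 * 110                      # ~9.1 Gyr/year (the post rounds to 8.93)
median_savable = median_total * (60 - 10) / 60 + growth * 10   # ~500 Gyr
median_saved = median_savable * 0.5 * 0.05     # ~12.5 Gyr
print(median_saved / 2_500 / 5)                # ~1 Myr of life saved per researcher-year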

  1. ^

    All population data comes from https://www.worldometers.info




The slow death of the accelerationist.

2026-04-06 11:40:00

The year is 2024. Summer has just begun. National discourse, for now, is solely focused on the upcoming presidential election, with many a journalist or political commentator critiquing the current, rather fiery state of political affairs. Tech and its associated public commentary has centered upon artificial intelligence as its new darling, hailing OpenAI as a savior for what was once deemed an idea stuck in science fiction, and looking to burgeoning startups such as Cursor and Windsurf as early examples of how agents could automate software engineering tasks. Logging onto Twitter, one would catch glimpses of Beff Jezos, an aptly named satirical account, relentlessly posting optimistic odes about how our own silicon creations will soon enable us to solve all of our problems, allowing us to truly accelerate. Beff's social posts were not just the isolated ramblings of an overtly verbose anon; they were slowly becoming a zeitgeist of their own, inspiring an entire independent cohort of individuals who slowly began appending their public profiles with five characters: e/acc.

The e/acc community was, despite its single overarching belief, surprisingly diverse. You could find bootstrapped startup founders working on their next B2B SaaS play, far-right exhibitionists who were enjoying both the attention and money that Twitter's creator program had bestowed upon them, and renowned venture capitalists, all espousing the same ideas. One could argue that the central tenet of e/acc was rebellion: rebellion against the status quo, rebellion against the government (or the broader powers that be), and rebellion against those who may have doubted them in the past. This haphazard group slowly began to gain momentum, with Beff Jezos, who was later doxxed and revealed to be former Google scientist Guillaume Verdon, creating his own hardware startup, Extropic.

The year is now 2026. The e/acc movement is now, for the most part, dead, with little to no mention of it on Twitter or on popular technology podcasts. The remnants of the community no longer sing praises for a technology yet to come; they instead attempt to convince each other that their applications of said technology are morally, ethically, or technically superior to those of the anonymous Discord user typing below them.

The history of AI, albeit short, is already incredibly rich. Never before has a technology changed this quickly, or brought such rapid alterations in how we perceive the world and ourselves. The summer of 2024 remains an interesting and somewhat unique time in this history: ChatGPT and its counterparts had been around long enough to become a part of the public discourse, yet were still close enough to their infancy that it was not quite certain what they could become, or where the technology would eventually go. This effect was felt across the social, economic, and philosophical extensions of the colloquial world of "tech", which at that time seemed to be all-encompassing. Indeed, it might be years until we understand the extent to which this particular circle of individuals affected cultural norms, politics, and more during this period, largely as a result of the optimism everyone felt at the time.

AI is no longer an optimistic technology. As with any new technology, the honeymoon period has effectively ended. The same university students who raved about the latest release of GPT to their classmates are now dreading the prospect of entering a job market that is both challenging and ever-changing. The same tech bros who were early to vibe-coding are now lamenting the loss of technical moat for their businesses. The looming threat of economic risk, a risk that was once dismissed as hearsay and doomerism by those in the techno-bubble, is now very real. The national pride that once came from the advancement of AI being solely in the hands of American startups has evaporated, with Chinese labs such as DeepSeek and MiniMax shipping equally capable, open-source models at a fraction of the price.

As we continue to grapple with and lament the changes that AI has brought us, I am often reminded of some conversations I had with friends who were around for the early days of the internet, back when AOL was the primary messaging app, and back when you could apparently find early drafts of internet-based currencies that predated Bitcoin. The internet at that time was, to many, special. Being a hacker, a person who knew their way around computers, networks, and the like, was a social boon, not a black mark. Yet, as the internet evolved, as it became commoditized and invaded by corporate whims and infrastructure, it became plain: a tool that enhanced productivity, but did little else for the soul. Being a hacker meant being embroiled in controversy or criminality, or worse, being a social outcast or nerd who could barely hold a conversation with their fellow man. The internet had produced an identity, one which got lost and was eventually cast aside once the underlying technology became commoditized. Even within the subset of the internet that still considered themselves true hackers, there were now various gatekeepers, gatekeepers whose standards you had to meet before you could publicly proclaim yourself a member of the broader hacker community. And with that, "hackerism" went from a cultural norm back to a colloquial term associated with men in dark rooms, wearing black jackets and typing away at neon keyboards. A movement can weaken at the very moment its central object becomes more important, because what made the movement compelling was never just belief in the object's importance. It was the sense that belief itself distinguished you. Once everyone agrees that the technology matters, the movement loses one of its primary functions.

A similar thing happened to accelerationism. The technology of the moment, AI, came along, became special, and then became mainstream, but this time with nothing to replace it. Popular sentiment went from blissful glee to unabashed debate: debate over whether the environmental costs of developing better AI models were worth it, debate over whether the economic uncertainty caused by increasingly autonomous models would become more severe. The market was flooded by a wave of startups building AI-based tools for a plethora of use-cases. It is difficult to sustain a politics of unbounded technological optimism once the technology in question no longer feels singular. It is difficult to maintain the romance of acceleration when what acceleration mostly seems to produce is an endless stream of mediocre products, collapsing defensibility, and a strange sense that capability is everywhere while meaningful progress remains harder to locate than expected.

And that, more than any technical disappointment, is what the accelerationist could not survive. While the downturns of other recent movements, such as the hackers, the NFT shills, and the toxic masculinity stans, were due to our broader potpourri of culture either rejecting them or to their movements failing under economic or social pressures outside their control, AI accelerationists have neither assimilated nor been rejected. They have simply been left to be, left to wallow in an ironic reality in which their special technology progressed faster than anyone could have hoped, yet became known not for enabling unprecedented societal progress, but for becoming part of the stack: the same stack of software-aided productivity that society slowly began to accept as a norm, a norm that became more associated with its negatives in public opinion than its positives, just like social media and cryptocurrency before it.

The summer of 2024 may remain a small footnote, if that, in the broader history of the development of AI. Yet, for those who were, for lack of a better term, "chronically online", it may represent the last peak of the accelerationists, the tech bros, the culture of builders. While past trends such as the dot-com bubble and social media applications had created similar microcosms of closed-off cults that either died off or assimilated into a wider societal group, the progress of AI is different altogether. Accelerationism did not get proven wrong (AI hype is at an all-time high), nor did it fizzle out: it simply became normal. Being an optimistic accelerationist is fruitless when the technology you are interacting with is no longer special, no longer a science-fiction dream come to reality. It is not a badge of honor when it feels that everyone can solve or do anything, yet nothing is actually getting done. As the umpteenth vibe-coded app hits the market, it is worth wondering what happened to the collective optimism of the tech community just a year and a half ago. For as it stands today, it seems that we are living through not unbounded accelerationism, but rather the slow death of the accelerationist.




New Fatebook Android App

2026-04-06 11:05:14

tl;dr: get the new Fatebook Android app!

What is Fatebook?

Fatebook.io is a website[1] for easily tracking your predictions and becoming better calibrated at them. I like it a lot, and find it convenient for practicing probabilistic thinking.

[Image: The Fatebook.io dashboard]

That said, I've found Fatebook's mobile version to be clunky, and its email-based notifications to be less-than-ideal...which leads me to:

The New Android App

Over the past two weeks, I've made an Android app that wraps the Fatebook API, allowing you to easily make new forecasts, leave comments, resolve old forecasts, and view your stats.
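(For the curious, "wrapping the Fatebook API" boils down to calls like the following, shown in Python for brevity. The endpoint and parameter names are my best reading of Fatebook's public API docs at fatebook.io/api-setup, so treat them as assumptions:)

# Rough sketch of a Fatebook API call. Endpoint and parameter names are
# assumptions based on Fatebook's public API docs (fatebook.io/api-setup).
import requests

API_KEY = "your-fatebook-api-key"  # generated on the Fatebook website

def create_question(title: str, resolve_by: str, forecast: float) -> str:
    """Log a new prediction, e.g. create_question("Rain tomorrow?", "2026-04-07", 0.3)."""
    resp = requests.get(
        "https://fatebook.io/api/v0/createQuestion",
        params={
            "apiKey": API_KEY,
            "title": title,
            "resolveBy": resolve_by,  # ISO date
            "forecast": forecast,     # probability in [0, 1]
        },
    )
    resp.raise_for_status()
    return resp.text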

[Image: The default screen]

[Image: A (non-resolved) prediction card]

[Image: Making a new prediction]

[Image: Statistics]


A beautiful and intuitive UI combined with a fast offline-first database makes it easy to pull open the app and log a prediction within fifteen seconds of thinking of one, while once-daily "remember to predict!" and "x is ready to resolve!" notifications help you remember to make and review predictions.

Give it a try if you'd like![2]

https://github.com/JapanColorado/fatebook-android

Feedback or development help is very much appreciated! (so far it's just been Claude Code and I)

  1. ^

    Made by the fabulous folks over at Sage Future, also behind the AI Village, Quantified Intuitions, and the Estimation Game!

  2. ^

    It does currently require installing from the GitHub Releases APK file (aka enabling "Install from unknown sources"). Let me know if being non-Google Play Store is a deal breaker for you and I'll bump getting it published in priority!




My forays into cyborgism: theory, pt. 1

2026-04-06 09:13:07

In this post, I share the thinking that lies behind the Exobrain system I have built for myself. In another post, I'll describe the actual system.

I think the standard way of relating to LLM/AIs is as an external tool (or "digital mind") that you use and/or collaborate with. Instead of you doing the coding, you ask the LLM to do it for you. Instead of doing the research, you ask it to. That's great, and there is utility in those use cases.

Now, while I hardly engage in the delusion that humans can have some kind of long-term symbiotic integration with AIs that prevents them from replacing us[1], in the short term, I think humans can automate, outsource, and augment our thinking with LLM/AIs.

We already augment our cognition with technologies such as writing and mundane software. Organizing one's thoughts in a Google Doc is a kind of getting smarter with external aid. However, LLMs, by instantiating so many elements of cognition and intelligence (as limited and spiky as they might be), offer so much more ability to do this that I think there's a step change of gain to be had.

My personal attempt to capitalize on this is an LLM-based system I've been building for myself for a while now. Uncreatively, I just call it "Exobrain". The conceptualization is an externalization and augmentation of my cognition, more than an external tool. I'm not sure if this framing changes anything in practice, but part of what it means is that if there's a boundary between me and the outside world, my goal is for the Exobrain to be on the inside of that boundary.

What makes the Exobrain part of me vs a tool is that I see it as replacing the inner workings of my own mind: things like memory, recall, attention-management, task-selection, task-switching, and other executive-function elements.

Yesterday I described how I use Exobrain to replace memory functions (it's a great feeling not to have to worry you're going to forget stuff!):

  • Before (no Exobrain): Retrieve phone from pocket, open note-taking app, open a new note or find an existing relevant note.
    After (with Exobrain): Say "Hey Exo", phone beeps, begin talking. Perhaps instruct the model which document to put a note in, or let it figure it out (it has guidance in the stored system prompt).

  • Before: Remember that I have a note; either have to remember where it is or muck around with search.
    After: Ask the LLM to find the note (via basic key-term search or vector embedding search).

  • Before: If the note is lengthy, you have to read through all of it.
    After: The LLM can summarize and/or extract the relevant parts of the note.

Replacing memory is a narrow mechanism, though. While the broad vision is "upgrade and augment as much of cognition as possible", the intermediate goal I set when designing the system is to help me answer:

What should I be doing right now?

Aka, task prioritization. In every moment that we are not being involuntarily confined or coerced, we are making a choice about this.

Prioritization involves computation and prediction – start with everything you care about, survey all the possible options available, decide which options to pursue in which order to get the most of what you care about... it's tricky.

But actually! This all depends on memory, which is why memory is the basic function of my Exobrain. To prioritize between options in pursuit of what I care about, I must remember all the things I care about and all the things I could be doing...which is a finite but pretty long list: a couple of hundred to-do items, 1-2 dozen "projects", a couple of to-read lists, a list of friends and social stuff.

The default for most people, I assume, and certainly for me, is that task prioritization ends up being very environmentally driven. My friend mentions a certain video game at lunch, which reminds me that I want to finish it, so that's what I do in the evening. If she'd mentioned a book I wanted to read, I would have done that instead. And if she'd mentioned both, I would have chosen the book. In this case, I get suboptimal task selection because I'm not remembering all of my options when deciding.

I designed my Exobrain with the goal of having in front of me all the options I want to be considering in any given moment. Actually choosing is hard, and so far I haven't gotten the LLMs to be great at automating the choice of what to do, but just recording and surfacing the options isn't that hard.

Core Functions: Intake, Storage, Surfacing

Intake

  1. Recordings initiated by the Android app are transcribed and sent to the server, then processed by an LLM that has tools to store info.
  2. Exobrain web app has a chat interface. I can write stuff into that chat, and the LLM has tool calls available for storing info.
  3. Directly creating or changing Notes (markdown files) or Todo items in the Exobrain app (I don't do this much).

Storage

  • "Notes" – freeform text documents (markdown files)
  • Todo items – my own schema
  • "Projects" (to-do items can be associated with a project + a central Note for the project)

Surfacing

  • "The Board" – this abstraction is one of the distinctive features of my Exobrain (image below). In addition to a chat output, there's a single central display of "stuff I want to be presented with right now" that has to-do items, reminders, calendar events, weather, personal notes, etc. all in one spot. It updates throughout the day on schedule and in response to events. The goal of the board is to allow me to better answer "what should I be doing now?"
    • A scheduled cron job has an LLM update it automatically four times a day, and any other LLM calls within my app (e.g., post-transcript or in-chat) have tool calls to update it. (A rough sketch of this update loop follows, after the image below.)
    • Originally, what became the board contents would be output into a chat session, but repeated board updates makes for a very noisy chat history, and it meant if I was discussing board contents with the LLM in chat, I'd have to continually scroll up and down, which was pretty annoying, hence The Board was born.
  • Reminders / Push notifications to my phone.
  • Search – can call directly from search UI, or ask LLM to search for info for me.
  • Todo Item page – UI typical of Notion or Airtable, has "views" for viewing different slices of my to-do items, like sorted by category, priority, or recently created.

(An image of The Board is here in a collapsible section because of size.)

[Image: The Board (desktop view)]

There are a few more sections, but they weren't quite worth the effort to clean up for sharing.
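Here's the rough shape of that update loop as a runnable sketch; every name below is a hypothetical stand-in for the real implementation:

# Rough shape of the scheduled Board update. All names here are
# hypothetical stand-ins, stubbed so the sketch runs.
class Store:
    def open_todos(self): return ["book flights", "review draft"]
    def todays_events(self): return ["10:00 standup"]
    def latest_weather(self): return "14C, light rain"
    def pinned_notes(self): return ["gym at 18:00?"]
    def write_board(self, board): print(board)

def llm_complete(prompt: str) -> str:
    # In the real system this is an LLM call with tool access;
    # stubbed here so the sketch runs end to end.
    return "BOARD:\n" + prompt.split("\n", 1)[1]

def refresh_board(store: Store):
    """Called by the cron job four times a day, and via tool calls elsewhere."""
    context = {
        "todos": store.open_todos(),
        "calendar": store.todays_events(),
        "weather": store.latest_weather(),
        "notes": store.pinned_notes(),
    }
    board = llm_complete(
        "From everything below, assemble the Board: the short list of "
        "things I should be presented with right now.\n" + str(context)
    )
    store.write_board(board)

refresh_board(Store())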

What is everything I should be remembering about this? (Task Switching Efficiency)

Suppose you have correctly (we hope) determined that Research Task XYZ is the thing to be spending your limited, precious time on; however, it has been a few months since you last worked on this project. It's a rather involved project where you had half a dozen files, a partway-finished reading list, a smattering of todos, etc.

Remembering where you were and booting up context takes time, and if you're like me, you might be lazy about it and fail to even boot up everything relevant.

Another goal of my Exobrain, via outsourcing and augmenting memory, is to make task switching easier, faster, and more effective. I want to say "I'm doing X now" and have the system say "here's everything you last had on your mind about X". Even if the system can't read the notes for me, it can have them prepared. To date, a lot of "switch back to a task" time is spent just locating everything relevant.

I've been describing this so far in the context of a project, e.g., a research project, but it applies just as much, if not more, to any topic I might be thinking about. For example, maybe every few months, I have thoughts about the AI alignment concept of corrigibility. By default, I might forget some insights I had about it two years ago. What I want to happen with the Exobrain is I say to it, "Hey, I'm thinking about corrigibility today", and have it surface to me all my past thoughts about corrigibility, so I'm not wasting my time rethinking them. Or it could be something like "that one problematic neighbor," where if I've logged it, it can remind me of all interactions over the last five years without me having to sit down and dredge up the memories from my flesh brain.
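A minimal key-term version of that surfacing step looks something like this (paths and names are illustrative; the real system also has embedding search):

# Minimal key-term version of topic surfacing (illustrative paths/names;
# a vector-embedding search would also catch paraphrases).
from pathlib import Path

def surface(topic: str, notes_dir: str = "notes/") -> list[tuple[Path, str]]:
    """Return every note that mentions the topic, for the LLM to digest."""
    hits = []
    for path in Path(notes_dir).glob("**/*.md"):
        text = path.read_text()
        if topic.lower() in text.lower():
            hits.append((path, text))
    return hits

for path, _text in surface("corrigibility"):
    print(path)  # hand the matching notes to the LLM to summarize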

Layer 2: making use of the data

Manual Use

It is now possible for me to sit down[2], talk to my favorite LLM of the month, and say, "Hey, let's review my mood, productivity, sleep, exercise, heart rate data, major and minor life events, etc., and figure out any notable patterns worth reflecting on."

(I'll mention now that I currently also have the Exobrain pull in Oura ring, Eight Sleep, and RescueTime data. I manually track various subjective quantitative measures and manually log medication/drug use, and in good periods, also diet.)

A manual sit-down session with me in the loop is a more reliable way to get good analysis than anything automated, of course.

One interesting thing I've found is that while day-to-day heart rate variability did not correlate particularly much with my mental state, Oura ring's HRV balance metric (which compares two-week rolling HRV with long-term trend) did correlate.
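That kind of check is a one-liner once the data is in one table. A sketch, assuming a daily log with columns like these (column names are illustrative):

# Sketch of the pattern check, assuming a daily log CSV with
# illustrative column names: date, mood, hrv, hrv_balance.
import pandas as pd

log = pd.read_csv("daily_log.csv")
print(log["mood"].corr(log["hrv"]))          # day-to-day HRV: weak correlation for me
print(log["mood"].corr(log["hrv_balance"]))  # Oura's HRV balance: noticeably stronger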

Automatic Use

Once you have a system containing all kinds of useful info from your brain, life, doings, and so on, you can have the system automatically – and without you – process that information in useful ways.

Coherent extrapolated volition is:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were...

I want my Exobrain to think the thoughts I would have if I were smarter, had more time, and was less biased. If I magically had more time, every day I could pore over everything I'd logged, compare with everything previously logged, make inferences, notice patterns, and so on. Alas, I do not have that time. But I can write a prompt, schedule a cron job, and have an LLM do all that on my data, then serve me the results.

At least that's the dream; this part is trickier than the mere data capture and more primitive and/or manual surfacing of info, but I've been laying the groundwork.

There's much more to say, but one post at a time. Tomorrow's post might be a larger overview of the current Exobrain system. But according to the system, I need to do other things now...

  1. ^

    Because the human part of the system would, in the long term, add nothing and just hold back the smarter AI part.

  2. ^

    I'm not really into standing desks, but you do you.




Unmathematical features of math

2026-04-06 06:40:19

(Epistemic status: I consider the following quite obvious and self-evident, but decided to post anyways.[1])

Mathematics is a social activity done by mathematicians.

— Paul Erdős, probably

There've been a few attempts to create mathematical models of math. The examples that come to my mind are Gödelian Numbering (GN) and Logical Induction (LI). Feel free to suggest more in the comments, but I'll use those as my primary reference points. In this post, I want to contrast them with the way human mathematicians do math by noticing a few features of their process, the ones that are hard to describe with the language of math itself. Those features overlap a lot and reinforce each other, so the distinction I make is subjective. There are also probably more of them; these are just the ones I was able to think of. What unites them is that they make mathematical progress more tractable.

Theorem Selection

The way in which Kurt Gödel proved his incompleteness theorems was by embedding math into the language of a mathematical theory (number theory in that particular case, but the trick can be done with any theory that's expressive enough). But this way of describing mathematics is very eternalistic: it treats math as one monolith. It does not give advice on how to make progress in math. How could we approach it in a systematic way?

Fighting the LEAN compiler

What if we just try to prove all statements we can find proofs for?

Let's do some back-of-the-envelope Fermi estimations. Here's a LEAN proof of the statement "if aₙ → t and if 0 < c, then c·aₙ → c·t" (sorry for JavaScript highlighting):

example (a : ℕ → ℝ) (t : ℝ) (h : TendsTo a t) (c : ℝ) (hc : 0 < c) :
    TendsTo (fun n ↦ c * a n) (c * t) := by
  simp [TendsTo] at *
  intro ε' hε'
  specialize h (ε' / c) (by exact div_pos hε' hc)
  obtain ⟨B, hB⟩ := h
  use B
  intro N hN
  specialize hB N hN
  calc
    |c * a N - c * t| = |c * (a N - t)| := by ring
    _ = |c| * |a N - t| := by exact abs_mul c (a N - t)
    _ = c * |a N - t| := by rw [abs_of_pos hc]
    _ < ε' := by exact (lt_div_iff₀' hc).mp hB
It's 558 bits long in its current form. I didn't optimize it for shortness, but let's say that if I did we could achieve 200 bits. Let's say that we run a search process that just checks every possible bitstring, starting from short ones, for whether it is a valid LEAN proof. There are ~2^200 possible bitstrings shorter than this proof. So even if the search process checks a billion proofs a second, we will reach this particular proof in roughly 5 x 10^43 years. Not great.
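The arithmetic, with the checking rate made explicit as an assumption:

# The search-space arithmetic, spelled out. The checking rate is an
# assumption (a generous billion proofs per second).
bits = 200                      # assumed length of an optimized proof
proofs_per_second = 1e9
seconds_per_year = 3.15e7

candidates = 2.0 ** bits                         # ~1.6e60 bitstrings to try
years = candidates / proofs_per_second / seconds_per_year
print(f"{years:.1e} years")                      # ~5.1e+43 years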

That marks the first and most important unmathematical feature of math: the selection of theorems. We do not prove, nor do we strive to prove, every possible theorem. That would be slow and boring. GN enumerates every statement regardless of its importance. LI prioritizes short sentences, which is an improvement, as it does allow us to create a natural ordering in which we can try to prove theorems and therefore make progress over time. But it's still very inefficient.

Naming

The way we name theorems and concepts is important. Most of the time we name them after a person (though often it's not even the person who discovered it), but if you think about it, the Pythagorean theorem is actually called "the Pythagorean theorem about right triangles." Each time we need to prove something about right triangles, we remember Pythagoras.

LI and GN both name sentences by their entire specification, and that shouldn't come as a surprise. There wouldn't be enough short handles because, as described above, they try to talk about all sentences.

Naming allows us to build associations between mathematical concepts, which helps mathematicians think of a limited set of tools for making progress in a specific area.

Step Importance

When we teach math, we do not go through literally every step of a proof. We skip over obvious algebraic transformations; we do not pay much attention when we treat an element of a smaller set as an element of a larger set with all properties conserved (when doing 3D geometry and using 2D theorems, for example); we skip parts of a proof that are symmetrical to the already proven ones ("without loss of generality, let X be the first...").

We do that because we want to emphasize the non-trivial parts. And the feeling of non-triviality is a human feeling, not identifiable from a step's description alone. This same feeling is also what guides mathematicians to prove more useful lemmas.

GN doesn't do that — it checks every part of the proof. I'm not as sure about LI; there might be traders that do glance over obvious steps but check more carefully for less trivial ones.

Lemma Selection

Some theorems are more useful and more important than others because they help prove more theorems. This score could hypothetically be recovered from some graph of mathematics, but it is usually just estimated by math professors creating the curriculum. This taste is then passed on to the next generation of mathematicians, helping them find more useful lemmas.

GN doesn't try to do that. LI might do that implicitly via selecting for rich traders.

Real-world Phenomena

The reason humans started doing math was that they noticed similar structures across the real world. The way you add up sheep is the same way you add up apples. Pattern-matching allowed mathematicians to form conjectures and target their mathematical efforts. ("Hmm, when I use 3 sticks to form a triangle, I end up with the same triangle. What if that's always true?")

GN and LI do not do that because they do not have access to the outside world. Though there is a mathematical theory that attempts to do precisely that, which is Solomonoff Induction.

Categorising

This is very similar to Naming: we separate math into topics, and when we need to prove some statement, we know where to look for tools. GN and LI do not attempt to do that.

An important caveat, applicable to most of the features above: there should be a balance. If you stick too much within a topic, you will never discover fruitful analogies (algebraic geometry being helpful for proving Fermat's Last Theorem is a great example). Too much reliance on any one feature and you lose creativity.

Curiosity/Beauty

There isn't much I can add about this one, but it's arguably the most important. It both guides the formation of conjectures and helps with intermediate steps.

GN and LI definitely lack it.

Conclusion

All of this is to support the point that math is invented rather than discovered. I agree that there is a surprising amount of connection between the different types of math humans find interesting, and there is probably more to learn about this phenomenon. But I wouldn't treat it as a signal that we are touching some universal metaphysical phenomenon: this is just human senses of beauty and curiosity, along with real-world utility and patterns echoing each other (partly because human intelligence and the senses were shaped to seek usefulness and real-world patterns).

  1. ^

    Because of this and this.


