Frames: Tradeable Tools For Metacognition
“Sometimes the dissonance between reality and false beliefs reaches a point when it becomes impossible to avoid the awareness that the world no longer makes sense.” — Gregory Bateson, 1952
Premises
- AI has a trust problem: LLM outputs from the AGI labs are notoriously sycophantic and overconfident in areas where the models' epistemology is poor.
- AI has an abundance problem: The space of ways to apply AI to information is high-dimensional, varying across both form and content.
- AI personification is a local maximum: In the long run, people will use AI as a tool rather than a companion. Personification also has a cost: every exchange with a personified AI carries information about the relationship itself, which is wasteful.
- Social media, and our broader online information diet, is largely unframed, subjecting people to a steady onslaught of harmful, pervasive memes that wreck their mental health.
- Sam Altman’s transistor analogy for LLMs is closer to the truth than Marc Andreessen’s microprocessor analogy.
- Until 2023, only humans could apply ideas. With the advent of LLMs, for the first time in history, software can apply them too.
- In other words: thoughts have become programs. (A minimal sketch follows this list.)
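To make that last premise concrete, here is a minimal sketch of a thought running as a program, assuming nothing about how frames are actually built: the `Frame` class, the `steelman` example, and the stubbed `complete` function are hypothetical names for illustration, not an API from this project.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Frame:
    """A thought encoded as a program: a named, tradeable lens for reading content."""
    name: str
    instruction: str  # the idea itself, written as an instruction to an LLM

    def apply(self, content: str, complete: Callable[[str], str]) -> str:
        """Run this frame over content via any text-completion function."""
        return complete(f"{self.instruction}\n\n---\n{content}")

# Hypothetical frame: read a feed item through a steelmanning lens.
steelman = Frame(
    name="steelman",
    instruction=(
        "Restate the strongest honest version of the argument below, "
        "then list the beliefs a reader must hold for it to land."
    ),
)

def complete(prompt: str) -> str:
    # Placeholder: swap in a real LLM call here (any provider or local model).
    return f"[LLM output for a {len(prompt)}-character prompt]"

print(steelman.apply("Everyone who disagrees with us argues in bad faith.", complete))
```

The point is only that the lens, not the person applying it, becomes the transferable artifact: the same frame can be shared, versioned, and run over any stream of content.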
If this resonates, write to us.