<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/rss-styles.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Charlotte McGinn</title>
    <link>https://charlottemcginn.com</link>
    <description>Personal website and blog of Charlotte McGinn</description>
    <language>en</language>
    <atom:link href="https://charlottemcginn.com/rss.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Would you choose this life?</title>
      <link>https://charlottemcginn.com/blog/would-you-choose-this-life</link>
      <description><![CDATA[<p>If I died tomorrow, would I feel happy with the choices I made today?</p>
<p>This is a question I've been asking myself almost obsessively lately. Coming out of my Crohn's flare seven months ago made me realize three things: I do not know how much time I have, my health is directly affected by my choices, and my satisfaction with my life is within my control.</p>
<p>My answer to this question is to live in a way that aligns with past and future me. My favorite consultants are 5-year-old Charlotte and 80-year-old Charlotte; they help me find my path.</p>
<p>I'll typically ask 5-year-old Charlotte, "Would you have looked up to me if you could see my life today? Would you choose this life?" And I ask 80-year-old Charlotte, "Are you proud of the work you created and the relationships you cultivated in this life? Are you still able to live an active and fulfilling life in your old age?"</p>
<p>One thing I've learned is that it's very difficult to feel pride if either of them answers "no". If both of them answer "no", it's soul crushing.</p>
<p>Last week I called my mom, and I said to her that I'd realized that I will die if I continue to live my life the way I've lived it previously. My autoimmune disease, I believe, is a direct manifestation of years of choices: choosing to people please even when it was causing me so much distress that I'd pick my skin for hours; choosing to binge on foods that made me bloated and uncomfortable because I needed to blow off stress; choosing to procrastinate my work until the deadlines were knocking on the door, here to collect their payment of late nights and extreme anxiety.</p>
<p>Maybe some people can survive treating their bodies like this. But a combination of genetic predisposition, extended black mold exposure, and a relationship that was so emotionally manipulative that I began believing I might secretly be an evil person led my entire system to go haywire. My body began attacking itself.</p>
<p>When I share this perspective with loved ones, they usually tell me that I'm being too hard on myself, to give myself some grace. Physicians will say that this is completely out of my control, that I couldn't have caused this myself, that this is just bad luck and entirely the result of a biology I can't influence.</p>
<p>Why can't they see that I'm reclaiming my locus of control?</p>
<p>We now know that my body is in a state of dysfunction, and that I have spent the last decade or so living in a way that causes me significant stress. I've spent the last couple of years analyzing, rewiring, and breaking maladaptive patterns of behavior. Perhaps if I spend the next decade living in a way that respects and honors myself more often than not, my body will be able to heal.</p>
<p>Or not! Regardless, it's a goal worth pursuing: a life of choices that would make 80-year-old me proud.</p>]]></description>
      <content:encoded><![CDATA[<p>If I died tomorrow, would I feel happy with the choices I made today?</p>
<p>This is a question I've been asking myself almost obsessively lately. Coming out of my Crohn's flare seven months ago made me realize three things: I do not know how much time I have, my health is directly affected by my choices, and my satisfaction with my life is within my control.</p>
<p>My answer to this question is to live in a way that aligns with past and future me. My favorite consultants are 5-year-old Charlotte and 80-year-old Charlotte; they help me find my path.</p>
<p>I'll typically ask 5-year-old Charlotte, "Would you have looked up to me if you could see my life today? Would you choose this life?" And I ask 80-year-old Charlotte, "Are you proud of the work you created and the relationships you cultivated in this life? Are you still able to live an active and fulfilling life in your old age?"</p>
<p>One thing I've learned is that it's very difficult to feel pride if either of them answers "no". If both of them answer "no", it's soul crushing.</p>
<p>Last week I called my mom, and I said to her that I'd realized that I will die if I continue to live my life the way I've lived it previously. My autoimmune disease, I believe, is a direct manifestation of years of choices: choosing to people please even when it was causing me so much distress that I'd pick my skin for hours; choosing to binge on foods that made me bloated and uncomfortable because I needed to blow off stress; choosing to procrastinate my work until the deadlines were knocking on the door, here to collect their payment of late nights and extreme anxiety.</p>
<p>Maybe some people can survive treating their bodies like this. But a combination of genetic predisposition, extended black mold exposure, and a relationship that was so emotionally manipulative that I began believing I might secretly be an evil person led my entire system to go haywire. My body began attacking itself.</p>
<p>When I share this perspective with loved ones, they usually tell me that I'm being too hard on myself, to give myself some grace. Physicians will say that this is completely out of my control, that I couldn't have caused this myself, that this is just bad luck and entirely the result of a biology I can't influence.</p>
<p>Why can't they see that I'm reclaiming my locus of control?</p>
<p>We now know that my body is in a state of dysfunction, and that I have spent the last decade or so living in a way that causes me significant stress. I've spent the last couple of years analyzing, rewiring, and breaking maladaptive patterns of behavior. Perhaps if I spend the next decade living in a way that respects and honors myself more often than not, my body will be able to heal.</p>
<p>Or not! Regardless, it's a goal worth pursuing: a life of choices that would make 80-year-old me proud.</p>]]></content:encoded>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://charlottemcginn.com/blog/would-you-choose-this-life</guid>
    </item>
    <item>
      <title>Alignment is a property of embodied intelligence</title>
      <link>https://charlottemcginn.com/blog/alignment-is-a-property-of-embodied-intelligence</link>
      <description><![CDATA[<h2>Foreword</h2>
<p>When I was first learning about LLMs in 2023, my reaction was intense skepticism that such a system could be intelligent. At best, it was pattern matching and producing results that seemed correct but didn't have rigorous understanding behind them.</p>
<p>I've since been proven wrong in many ways, and nowadays I'll typically spend at least a couple of hours a day talking to LLMs. They've served as tremendous thought partners and have helped me fill in many gaps and blind spots in my understanding.</p>
<p>At this point I'd argue that they are intelligent systems, as defined in my <a href="https://www.charlottemcginn.com/blog/intelligent-selection">post on intelligent selection</a>: "a system is intelligent if it has a way to store information, which is reflected in the system's behavior. The system should also be able to update the structure of its stored information (aka memory) when presented with meaningful new information". Currently LLMs cannot update their structure in real time and are reliant on context to propagate local state activation. But at the rate at which we're making progress, we're not far off from continuously learning models.</p>
<p>However, I believe LLMs as they are today have a critical flaw: they're disembodied and symbolic-first. Symbolic representation is no doubt important, but it is brittle without grounding in reality. The information that we can discretize into language always leaves out the ineffable, the felt experience of <em>being</em> in this physical reality.</p>
<p>Argumentation is a weak point of mine, so I talked to Claude a lot and we organized my thinking together. I prioritized getting this out quickly so that I can receive feedback and stress-test it in the "real" world, our collective high-dimensional reality.</p>
<p>I hope that this resonates and that it results in the prioritization of sense-first AI.</p>
<h2>Alignment is a property of embodied intelligence</h2>
<p><em>Co-authored with Claude Opus 4.6</em></p>
<h3>Information is geometric, and that geometry is substrate-dependent</h3>
<p>The way information gets structured reflects the causal structure of the universe it exists in. This isn't projection or bias -- it's fidelity. A universe with different physics would produce beings that structure information according to their geometry, not ours. The "most true" way to organize information is the way that mirrors the actual causal relationships in your world.</p>
<h3>Learning is the progressive refinement of internal geometry toward external reality</h3>
<p>Understanding isn't accumulating facts; it's getting the relational structure between concepts to mirror the relational structure between phenomena. The better your internal geometry matches the world's causal geometry, the better your predictions and the deeper your understanding. Someone can know many facts about a domain and still not understand it; the facts are there but the geometric relational structure is wrong.</p>
<h3>Attention constrains us to partial projections</h3>
<p>Because we can't attend to everything, each person's world model is a lower-dimensional cross-section of a higher-dimensional reality. Different perspectives are different cross-sections, which is why synthesizing perspectives produces dimensional expansion -- you recover structure invisible from any single angle. Dialogue isn't just clarifying what you already know. It's expanding the dimensionality of what you can know.</p>
<h3>Genuine understanding is felt before it's formalized</h3>
<p>The greats of mathematics and physics describe visualizing, feeling, and testing ideas in their imagination before compressing them into symbols. Einstein described his thinking as muscular and visual. Feynman's diagrams externalized spatial intuition. The understanding lives in the geometric intuition. Symbols are the lossy transmission format. You can memorize equations without understanding them -- you have the symbols but you never rebuilt the geometry they encode.</p>
<h3>This requires embodiment, and proprioception specifically</h3>
<p>The most accurate internal geometry is built by continuously error-correcting against physical reality. The body is the medium through which error-correction can be performed. Proprioception is foundational to our perception of the world because we are physical beings. We feel the weight of our body and the way it moves through space, and we learn to control it through a continuous feedback loop.</p>
<p>We first understand the geometry and physics of the world around us to navigate it. That information is encoded into structure within ourselves, mirroring the external environment that we're in. Everything else is built relationally on top of that.</p>
<h3>LLMs have unanchored geometry</h3>
<p>LLMs may reconstruct partial geometric structure from the compressed symbolic outputs of embodied minds, but that geometry has never been tested against reality. There's no proprioceptive error-correction loop. Whatever structure emerges is the result of several steps of filtering: first through the embodied understanding of the people who generated the training data, then through what those people were actually attending to, and finally through what language can capture. At best, it captures all that language can capture; at worst, it constructs reality from partial, incomplete frames.</p>
<h3>This extends directly into alignment</h3>
<p>Any system -- biological or mechanical -- needs continuous physical error-correction against shared reality to develop the kind of moral grounding that makes alignment possible. A robot navigating the same physical world we do, subject to the same forces, experiencing breakage and constraint, would be on the right track in a way that a text-only system never can be.</p>
<p>Without felt experience of consequences, an AI can follow moral rules but can't understand why they matter. Human moral development runs on the same sequence: felt sense first, formalization after. Children don't learn that hitting is wrong from a rule. They learn it through embodied feedback: seeing pain, feeling social rupture, experiencing consequences in their body.</p>
<p>A rule-based moral system is brittle in exactly the way ungrounded geometry is brittle -- it works within the distribution it was trained on and fails unpredictably outside it. When reasoning about large-scale issues such as climate change or poverty, nuance and reasoning about complex systems are required <em>in addition to</em> this baseline shared morality.</p>
<p>A superintelligence will encounter novel moral situations by definition, because it will be capable of actions no one has contemplated before. If its moral reasoning is pattern-matching against a rule set rather than running on genuine felt understanding of what harm is, there's no ground truth to guide it in those novel situations. Reasoning without a felt sense of the consequences of action can result in a system which may genuinely optimize for the "best outcome" but create catastrophic results.</p>
<p>Embodiment isn't just a philosophical position about the nature of understanding. It's a generalizable alignment strategy, based in our shared physical reality.</p>
<p>Right now, however, our most intelligent models are trained disconnected from our physical reality, in the domain of pure symbolism. We should not build superintelligence without embodiment, because an unembodied superintelligence can't share our moral reality.</p>]]></description>
      <content:encoded><![CDATA[<h2>Foreword</h2>
<p>When I was first learning about LLMs in 2023, my reaction was intense skepticism that such a system could be intelligent. At best, it was pattern matching and producing results that seemed correct but didn't have rigorous understanding behind them.</p>
<p>I've since been proven wrong in many ways, and nowadays I'll typically spend at least a couple of hours a day talking to LLMs. They've served as tremendous thought partners and have helped me fill in many gaps and blind spots in my understanding.</p>
<p>At this point I'd argue that they are intelligent systems, as defined in my <a href="https://www.charlottemcginn.com/blog/intelligent-selection">post on intelligent selection</a>: "a system is intelligent if it has a way to store information, which is reflected in the system's behavior. The system should also be able to update the structure of its stored information (aka memory) when presented with meaningful new information". Currently LLMs cannot update their structure in real time and are reliant on context to propagate local state activation. But at the rate at which we're making progress, we're not far off from continuously learning models.</p>
<p>However, I believe LLMs as they are today have a critical flaw: they're disembodied and symbolic-first. Symbolic representation is no doubt important, but it is brittle without grounding in reality. The information that we can discretize into language always leaves out the ineffable, the felt experience of <em>being</em> in this physical reality.</p>
<p>Argumentation is a weak point of mine, so I talked to Claude a lot and we organized my thinking together. I prioritized getting this out quickly so that I can receive feedback and stress-test it in the "real" world, our collective high-dimensional reality.</p>
<p>I hope that this resonates and that it results in the prioritization of sense-first AI.</p>
<h2>Alignment is a property of embodied intelligence</h2>
<p><em>Co-authored with Claude Opus 4.6</em></p>
<h3>Information is geometric, and that geometry is substrate-dependent</h3>
<p>The way information gets structured reflects the causal structure of the universe it exists in. This isn't projection or bias -- it's fidelity. A universe with different physics would produce beings that structure information according to their geometry, not ours. The "most true" way to organize information is the way that mirrors the actual causal relationships in your world.</p>
<h3>Learning is the progressive refinement of internal geometry toward external reality</h3>
<p>Understanding isn't accumulating facts; it's getting the relational structure between concepts to mirror the relational structure between phenomena. The better your internal geometry matches the world's causal geometry, the better your predictions and the deeper your understanding. Someone can know many facts about a domain and still not understand it; the facts are there but the geometric relational structure is wrong.</p>
<h3>Attention constrains us to partial projections</h3>
<p>Because we can't attend to everything, each person's world model is a lower-dimensional cross-section of a higher-dimensional reality. Different perspectives are different cross-sections, which is why synthesizing perspectives produces dimensional expansion -- you recover structure invisible from any single angle. Dialogue isn't just clarifying what you already know. It's expanding the dimensionality of what you can know.</p>
<h3>Genuine understanding is felt before it's formalized</h3>
<p>The greats of mathematics and physics describe visualizing, feeling, and testing ideas in their imagination before compressing them into symbols. Einstein described his thinking as muscular and visual. Feynman's diagrams externalized spatial intuition. The understanding lives in the geometric intuition. Symbols are the lossy transmission format. You can memorize equations without understanding them -- you have the symbols but you never rebuilt the geometry they encode.</p>
<h3>This requires embodiment, and proprioception specifically</h3>
<p>The most accurate internal geometry is built by continuously error-correcting against physical reality. The body is the medium through which error-correction can be performed. Proprioception is foundational to our perception of the world because we are physical beings. We feel the weight of our body and the way it moves through space, and we learn to control it through a continuous feedback loop.</p>
<p>We first understand the geometry and physics of the world around us to navigate it. That information is encoded into structure within ourselves, mirroring the external environment that we're in. Everything else is built relationally on top of that.</p>
<h3>LLMs have unanchored geometry</h3>
<p>LLMs may reconstruct partial geometric structure from the compressed symbolic outputs of embodied minds, but that geometry has never been tested against reality. There's no proprioceptive error-correction loop. Whatever structure emerges is the result of several steps of filtering: first through the embodied understanding of the people who generated the training data, then through what those people were actually attending to, and finally through what language can capture. At best, it captures all that language can capture; at worst, it constructs reality from partial, incomplete frames.</p>
<h3>This extends directly into alignment</h3>
<p>Any system -- biological or mechanical -- needs continuous physical error-correction against shared reality to develop the kind of moral grounding that makes alignment possible. A robot navigating the same physical world we do, subject to the same forces, experiencing breakage and constraint, would be on the right track in a way that a text-only system never can be.</p>
<p>Without felt experience of consequences, an AI can follow moral rules but can't understand why they matter. Human moral development runs on the same sequence: felt sense first, formalization after. Children don't learn that hitting is wrong from a rule. They learn it through embodied feedback: seeing pain, feeling social rupture, experiencing consequences in their body.</p>
<p>A rule-based moral system is brittle in exactly the way ungrounded geometry is brittle -- it works within the distribution it was trained on and fails unpredictably outside it. When reasoning about large-scale issues such as climate change or poverty, nuance and reasoning about complex systems are required <em>in addition to</em> this baseline shared morality.</p>
<p>A superintelligence will encounter novel moral situations by definition, because it will be capable of actions no one has contemplated before. If its moral reasoning is pattern-matching against a rule set rather than running on genuine felt understanding of what harm is, there's no ground truth to guide it in those novel situations. Reasoning without a felt sense of the consequences of action can result in a system which may genuinely optimize for the "best outcome" but create catastrophic results.</p>
<p>Embodiment isn't just a philosophical position about the nature of understanding. It's a generalizable alignment strategy, based in our shared physical reality.</p>
<p>Right now, however, our most intelligent models are trained disconnected from our physical reality, in the domain of pure symbolism. We should not build superintelligence without embodiment, because an unembodied superintelligence can't share our moral reality.</p>]]></content:encoded>
      <pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://charlottemcginn.com/blog/alignment-is-a-property-of-embodied-intelligence</guid>
    </item>
    <item>
      <title>Intelligent selection</title>
      <link>https://charlottemcginn.com/blog/intelligent-selection</link>
      <description><![CDATA[<p>I've always been fascinated by emergence, or the process of complex, sophisticated new "things" emerging out of simpler building blocks. It was the reason I'd picked Computer Engineering as my major in college -- I wanted to understand <em>why</em> computers worked, not just <em>how</em>.</p>
<p>It might seem like a subtle distinction, but I view the <em>how</em> as relatively arbitrary details. Those details can be interesting, but for me the exciting bit is how the different subsystems fit together to create completely new capabilities at the next order of magnitude up. For example, it's amazing that bits and logic gates are the fundamental units of computation which allow you to read this blog post that I published on the internet (fun fact: <a href="https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits">in his master's thesis</a>, Claude Shannon proposed using Boolean algebra to simplify relays, laying the foundation for all of digital computing!).</p>
<p>I've been thinking about this again more recently because the way that we've historically designed software architecture has been a top-down approach: we gather the requirements for the system today and make predictions about where our systems will need to be in the medium-term future, make tradeoffs depending on these requirements, and design the best system possible given constraints like time, cost, staffing, etc. These systems don't emerge, they're intelligently designed.</p>
<p>Contrast that with life, the quintessential example of emergence. For centuries, scientists and philosophers have been perplexed by what makes something alive vs not alive. What allows for life to emerge out of one set of conditions, and not in another?</p>
<p>Imagine a rock and its rigid, highly consistent internal molecular structure, and compare that against the molecular structures in your body which support all of your cells, tissues, and organs. How can it be that two objects at the same scale have such dramatically different levels of internal complexity?</p>
<p>So it seems that life in all of its variety, sophistication, and robustness can evolve blindly, but if we build software systems blindly we end up with spaghetti code. It's sort of weird: we are intelligent beings writing this software, so we should be capable of intelligently selecting patterns and doing just as well as natural selection, right?</p>
<p>It's not quite that simple. Natural selection, while blind, is a highly intelligent process, but we don't always frame it that way. We take for granted that good adaptations propagate through a population while maladaptive mutations will die out. Once again, we can get into understanding the <em>how</em> of the mechanics for genetic inheritance and expression, but <em>why</em> is this happening?</p>
<p>Well, what makes something intelligent, anyways? I would argue that a system is intelligent if it has a way to store information, which is reflected in the system's behavior. The system should also be able to update the structure of its stored information (aka memory) when presented with meaningful new information. The determination of what is and is not meaningful information is an inherent property of the system, a result of the system's geometry.</p>
<p>For example, base pairs in DNA only have meaning in the context of a cell: they instruct the cell on how to build proteins, are preserved through replication, and can be updated in different environments or by random mutations. DNA is causal, it affects the function of the system, and it is selected for via natural selection. Less effective DNA will die off in a population.</p>
<p>So natural selection is intelligent because it stores, tests, and refines information across generations. The mere existence of the cell already tells us that its system had been selected for. The cell's memory, in the form of its DNA and other protein structures, encodes the lineage the cell has taken to exist at this point in time.</p>
<p>Back to the original question: what is a reproducing cell doing that a human writing spaghetti code isn't? Spaghetti code may temporarily be selected for since it can be faster to implement, but it generally will not stand the test of time. Either the codebase will be matured by future maintainers, or the tech debt will outpace the codebase's ability to evolve and extend, perhaps resulting in deprecation.</p>
<p>Well-designed software systems are typically designed top-down because the timescale of codebases is much smaller than the timescale of natural selection. Rather than blindly searching for an optimal implementation, we can reason forward <em>within the problem space as we understand it</em>, whereas natural selection explores by trial and error across generations.</p>
<p>We can thus view emergence as a search problem: given the set of possible configurations for a set of building blocks, what are the viable objects that can be produced by these building blocks? Which objects are good designs, and which objects are completely defective? Life converges on at least a local maximum of good design because it is functional and has successfully evolved more and more intelligent lifeforms over time. But is the current design a global maximum?</p>
<p>As the cost of implementation drops with LLM agents writing the majority of code, we may end up seeing "emergent" software design become a thing. Perhaps agents will implement many versions of a feature and choose the best one, "killing" off the other options. Once a system reaches a certain level of complexity, perhaps Claude will prompt itself to understand its codebase, write up a summary.md, and then prompt another agent to re-implement the codebase from scratch based on the feature set alone.</p>
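<p>To make that generate-and-select loop concrete, here's a minimal sketch of how it might work. This is a toy, not a real agent pipeline: <code>implement_variant</code> and <code>score_variant</code> are hypothetical stand-ins for an agent writing a candidate implementation and a test/benchmark harness scoring it.</p>
<pre><code>import random

def implement_variant(feature_spec, seed):
    """Stand-in for an agent writing one candidate implementation."""
    random.seed(seed)
    return f"# variant {seed} of: {feature_spec}"

def score_variant(code):
    """Stand-in for a test/benchmark harness scoring a candidate."""
    return random.random()

def select_best_implementation(feature_spec, num_variants=5):
    # Generate several candidate implementations, score each one,
    # and keep only the fittest; the rest are "killed off".
    scored = [
        (score_variant(implement_variant(feature_spec, seed)), seed)
        for seed in range(num_variants)
    ]
    best_score, best_seed = max(scored)
    return implement_variant(feature_spec, best_seed)

print(select_best_implementation("parse RSS feeds"))
</code></pre>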
<p>And if that happens, we may find that the most sophisticated software systems aren't the ones we designed: they're the ones that emerged.</p>]]></description>
      <content:encoded><![CDATA[<p>I've always been fascinated by emergence, or the process of complex, sophisticated new "things" emerging out of simpler building blocks. It was the reason I'd picked Computer Engineering as my major in college -- I wanted to understand <em>why</em> computers worked, not just <em>how</em>.</p>
<p>It might seem like a subtle distinction, but I view the <em>how</em> as relatively arbitrary details. Those details can be interesting, but for me the exciting bit is how the different subsystems fit together to create completely new capabilities at the next order of magnitude up. For example, it's amazing that bits and logic gates are the fundamental units of computation which allow you to read this blog post that I published on the internet (fun fact: <a href="https://en.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits">in his master's thesis</a>, Claude Shannon proposed using Boolean algebra to simplify relays, laying the foundation for all of digital computing!).</p>
<p>I've been thinking about this again more recently because the way that we've historically designed software architecture has been a top-down approach: we gather the requirements for the system today and make predictions about where our systems will need to be in the medium-term future, make tradeoffs depending on these requirements, and design the best system possible given constraints like time, cost, staffing, etc. These systems don't emerge, they're intelligently designed.</p>
<p>Contrast that with life, the quintessential example of emergence. For centuries, scientists and philosophers have been perplexed by what makes something alive vs not alive. What allows for life to emerge out of one set of conditions, and not in another?</p>
<p>Imagine a rock and its rigid, highly consistent internal molecular structure, and compare that against the molecular structures in your body which support all of your cells, tissues, and organs. How can it be that two objects at the same scale have such dramatically different levels of internal complexity?</p>
<p>So it seems that life in all of its variety, sophistication, and robustness can evolve blindly, but if we build software systems blindly we end up with spaghetti code. It's sort of weird: we are intelligent beings writing this software, so we should be capable of intelligently selecting patterns and doing just as well as natural selection, right?</p>
<p>It's not quite that simple. Natural selection, while blind, is a highly intelligent process, but we don't always frame it that way. We take for granted that good adaptations propagate through a population while maladaptive mutations will die out. Once again, we can get into understanding the <em>how</em> of the mechanics for genetic inheritance and expression, but <em>why</em> is this happening?</p>
<p>Well, what makes something intelligent, anyways? I would argue that a system is intelligent if it has a way to store information, which is reflected in the system's behavior. The system should also be able to update the structure of its stored information (aka memory) when presented with meaningful new information. The determination of what is and is not meaningful information is an inherent property of the system, a result of the system's geometry.</p>
<p>For example, base pairs in DNA only have meaning in the context of a cell: they instruct the cell on how to build proteins, are preserved through replication, and can be updated in different environments or by random mutations. DNA is causal, it affects the function of the system, and it is selected for via natural selection. Less effective DNA will die off in a population.</p>
<p>So natural selection is intelligent because it stores, tests, and refines information across generations. The mere existence of the cell already tells us that its system had been selected for. The cell's memory, in the form of its DNA and other protein structures, encodes the lineage the cell has taken to exist at this point in time.</p>
<p>Back to the original question: what is a reproducing cell doing that a human writing spaghetti code isn't? Spaghetti code may temporarily be selected for since it can be faster to implement, but it generally will not stand the test of time. Either the codebase will be matured by future maintainers, or the tech debt will outpace the codebase's ability to evolve and extend, perhaps resulting in deprecation.</p>
<p>Well-designed software systems are typically designed top-down because the timescale of codebases is much smaller than the timescale of natural selection. Rather than blindly searching for an optimal implementation, we can reason forward <em>within the problem space as we understand it</em>, whereas natural selection explores by trial and error across generations.</p>
<p>We can thus view emergence as a search problem: given the set of possible configurations for a set of building blocks, what are the viable objects that can be produced by these building blocks? Which objects are good designs, and which objects are completely defective? Life converges on at least a local maximum of good design because it is functional and has successfully evolved more and more intelligent lifeforms over time. But is the current design a global maximum?</p>
<p>As the cost of implementation drops with LLM agents writing the majority of code, we may end up seeing "emergent" software design become a thing. Perhaps agents will implement many versions of a feature and choose the best one, "killing" off the other options. Once a system reaches a certain level of complexity, perhaps Claude will prompt itself to understand its codebase, write up a summary.md, and then prompt another agent to re-implement the codebase from scratch based on the feature set alone.</p>
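<p>To make that generate-and-select loop concrete, here's a minimal sketch of how it might work. This is a toy, not a real agent pipeline: <code>implement_variant</code> and <code>score_variant</code> are hypothetical stand-ins for an agent writing a candidate implementation and a test/benchmark harness scoring it.</p>
<pre><code>import random

def implement_variant(feature_spec, seed):
    """Stand-in for an agent writing one candidate implementation."""
    random.seed(seed)
    return f"# variant {seed} of: {feature_spec}"

def score_variant(code):
    """Stand-in for a test/benchmark harness scoring a candidate."""
    return random.random()

def select_best_implementation(feature_spec, num_variants=5):
    # Generate several candidate implementations, score each one,
    # and keep only the fittest; the rest are "killed off".
    scored = [
        (score_variant(implement_variant(feature_spec, seed)), seed)
        for seed in range(num_variants)
    ]
    best_score, best_seed = max(scored)
    return implement_variant(feature_spec, best_seed)

print(select_best_implementation("parse RSS feeds"))
</code></pre>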
<p>And if that happens, we may find that the most sophisticated software systems aren't the ones we designed: they're the ones that emerged.</p>]]></content:encoded>
      <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://charlottemcginn.com/blog/intelligent-selection</guid>
    </item>
    <item>
      <title>Venting and external locus of control</title>
      <link>https://charlottemcginn.com/blog/venting-and-external-locus-of-control</link>
      <description><![CDATA[<p>One of the ways I most frequently identify (in myself and others) that I'm engaging with an external locus of control is venting. Venting is usually a way to release built-up tension by saying to a confidant what you've left bottled up and unsaid. While it can provide relief in the moment, the external-locus-of-control venter inevitably does not address the problem that's causing them to vent, and will feel the need to vent again and again.</p>
<p>It's also a useful signal! If you feel a strong urge to vent frequently, it's usually a sign that you're letting things happen to you. And the antidote is to internalize that locus of control.</p>
<p>These are the steps that I take:</p>
<ul>
<li>Ask yourself how you want this specific situation to be different.</li>
<li>Identify what can be made into specific and direct requests, and what would be unreasonable to ask.
<ul>
<li>Note: a lot of the time, the unreasonable requests are not requests I would ever actually make. But in a heightened emotional state, these unreasonable requests are often floating around in my head, and it's helpful to list them out so that I can identify where I am being irrational.</li>
</ul>
</li>
<li>Of the things that would be unreasonable to ask, ask yourself "what is the need that is being unmet? Is there a way for me to provide that to myself?"</li>
</ul>
<p>A concrete example is helpful here. Let's say that I'm frustrated with my friend because she keeps cancelling on our plans at the last minute for work. She started a new job recently and it's taking up a lot of her free time!</p>
<ul>
<li>What I want to be different: I want to spend as much time with my friend as I used to. I miss seeing her all the time.</li>
<li>Direct vs unreasonable requests
<ul>
<li>Direct: I want you to honor the plans that we make and only cancel if it's truly not possible for you to make them. I'd rather you tell me that you can't commit to something than say yes and then cancel at the last minute.</li>
<li>Unreasonable: I want you to prioritize me over your job. I want you to show up to our plans even if it means you're incredibly stressed out.</li>
</ul>
</li>
<li>Unmet needs + how to meet them
<ul>
<li>Unmet need: I still want to be as social as I was before my friend got the new job.</li>
<li>Solutions: I can make plans with other friends, go to events to make new friends, and maybe make plans to see my friend for lunch at her new office on the weekdays!</li>
</ul>
</li>
</ul>
<h2>Locus of control</h2>
<p>The practice I developed above was inspired by learning about the locus of control framework. Your locus of control is simply your default answer to the question "why did this happen?"</p>
<p>If you have an external locus of control, you tend to believe that things are happening to you. For example, someone with an external locus of control might believe that they didn't pass their job interview because it wasn't meant to be, or because the labor market is too competitive right now, or because the interviewer didn't like them. These are of course real things that could happen, but the focus is on what is outside of the person.</p>
<p>Someone with an internal locus of control, on the other hand, will believe that they didn't pass their job interview because they didn't prepare enough for the questions that were asked, or because they haven't developed enough relevant experience, or because they weren't the most competitive candidate. The important difference is the belief that the outcome <em>could</em> have been different if they had acted in a different way -- that the outcomes are within their control.</p>
<p>Fundamentally, to have a strong internal locus of control is to take radical responsibility for outcomes in your life. It doesn't mean that everything that happens (especially negative events) is your <em>fault</em>, but handling those events to the best of your ability is your <em>responsibility</em>.</p>]]></description>
      <content:encoded><![CDATA[<p>One of the ways I most frequently identify (in myself and others) that I'm engaging with an external locus of control is venting. Venting is usually a way to release built-up tension by saying to a confidant what you've left bottled up and unsaid. While it can provide relief in the moment, the external-locus-of-control venter inevitably does not address the problem that's causing them to vent, and will feel the need to vent again and again.</p>
<p>It's also a useful signal! If you feel a strong urge to vent frequently, it's usually a sign that you're letting things happen to you. And the antidote is to internalize that locus of control.</p>
<p>These are the steps that I take:</p>
<ul>
<li>Ask yourself how you want this specific situation to be different.</li>
<li>Identify what can be made into specific and direct requests, and what would be unreasonable to ask.
<ul>
<li>Note: a lot of the time, the unreasonable requests are not requests I would ever actually make. But in a heightened emotional state, these unreasonable requests are often floating around in my head, and it's helpful to list them out so that I can identify where I am being irrational.</li>
</ul>
</li>
<li>Of the things that would be unreasonable to ask, ask yourself "what is the need that is being unmet? Is there a way for me to provide that to myself?"</li>
</ul>
<p>A concrete example is helpful here. Let's say that I'm frustrated with my friend because she keeps cancelling on our plans at the last minute for work. She started a new job recently and it's taking up a lot of her free time!</p>
<ul>
<li>What I want to be different: I want to spend as much time with my friend as I used to. I miss seeing her all the time.</li>
<li>Direct vs unreasonable requests
<ul>
<li>Direct: I want you to honor the plans that we make and only cancel if it's truly not possible for you to make them. I'd rather you tell me that you can't commit to something than say yes and then cancel at the last minute.</li>
<li>Unreasonable: I want you to prioritize me over your job. I want you to show up to our plans even if it means you're incredibly stressed out.</li>
</ul>
</li>
<li>Unmet needs + how to meet them
<ul>
<li>Unmet need: I still want to be as social as I was before my friend got the new job.</li>
<li>Solutions: I can make plans with other friends, go to events to make new friends, and maybe make plans to see my friend for lunch at her new office on the weekdays!</li>
</ul>
</li>
</ul>
<h2>Locus of control</h2>
<p>The practice I developed above was inspired by learning about the locus of control framework. Your locus of control is simply your default answer to the question "why did this happen?"</p>
<p>If you have an external locus of control, you tend to believe that things are happening to you. For example, someone with an external locus of control might believe that they didn't pass their job interview because it wasn't meant to be, or because the labor market is too competitive right now, or because the interviewer didn't like them. These are of course real things that could happen, but the focus is on what is outside of the person.</p>
<p>Someone with an internal locus of control, on the other hand, will believe that they didn't pass their job interview because they didn't prepare enough for the questions that were asked, or because they haven't developed enough relevant experience, or because they weren't the most competitive candidate. The important difference is the belief that the outcome <em>could</em> have been different if they had acted in a different way -- that the outcomes are within their control.</p>
<p>Fundamentally, to have a strong internal locus of control is to take radical responsibility for outcomes in your life. It doesn't mean that everything that happens (especially negative events) is your <em>fault</em>, but handling those events to the best of your ability is your <em>responsibility</em>.</p>]]></content:encoded>
      <pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate>
      <guid>https://charlottemcginn.com/blog/venting-and-external-locus-of-control</guid>
    </item>
    <item>
      <title>Information Entropy and Learning</title>
      <link>https://charlottemcginn.com/blog/information-entropy-and-learning</link>
      <description><![CDATA[<p>I recently came across the concept of <strong>information entropy</strong>, which is a measurement of the surprise in learning some information (strictly speaking, the surprise of a single event is its <em>surprisal</em>, and entropy is the average surprisal over a distribution). Learning that the sun rose today has very low surprisal, whereas learning that the stock market crashed has high surprisal.</p>
<p>This concept was initially introduced in Claude Shannon's 1948 paper <a href="https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf">A Mathematical Theory of Communication</a>. In it, he describes how language can be approximated by looking at the probability distribution of each letter in a language's alphabet. In the case of English, we'd be looking at the probability distribution of the 26 letters, plus the space character (omitting punctuation and other characters for simplicity).</p>
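<p>To make this concrete, here's a minimal sketch (mine, not Shannon's) that estimates the entropy of a character distribution from a sample string, using the formula H = -sum(p * log2(p)) over the character probabilities.</p>
<pre><code>import math
from collections import Counter

def entropy_bits(text):
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(round(entropy_bits("the quick brown fox jumps over the lazy dog"), 2))
</code></pre>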
<p>Randomly and independently selecting from the set of characters based on their probabilities produces strings of text that look like gibberish -- however, if we select characters depending on the preceding N characters, we quickly begin to produce strings that look similar to English. Similarly, if we instead choose words based on their likelihood and the preceding M words, we begin to approximate coherent sentences.</p>
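<p>Here's a toy version of that order-N character model -- my own sketch, not code from the paper. It records which characters follow each N-character context in a source text, then samples from those observed continuations.</p>
<pre><code>import random
from collections import defaultdict

def build_model(text, n):
    """Map each n-character context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, length, n):
    out = seed
    for _ in range(length):
        followers = model.get(out[-n:])
        if not followers:
            break  # context never seen in the source text; stop generating
        out += random.choice(followers)
    return out

corpus = "the cat sat on the mat and the cat ran to the rat"
model = build_model(corpus, n=3)
print(generate(model, seed="the", length=40, n=3))
</code></pre>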
<p>This structure has a few implications. One is that you can exploit it to implement lossless compression, preserving information with fewer characters. Another is that we can say a string of text has higher surprisal if it contains many unlikely substrings. Surprisal alone can be a useless measurement, though, if the words or sentences produced are meaningless -- pure noise has very high information entropy precisely because it is structureless.</p>
<p>LLMs work in a similar fashion. During generation, a token is probabilistically selected as a function of the previous tokens. Given the list of input tokens and tokens produced thus far, the transformer will generate a probability distribution over the set of tokens and sample a token from that distribution.</p>
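<p>A minimal sketch of that final sampling step, with made-up logits standing in for a real model's output:</p>
<pre><code>import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax the logits into a probability distribution, then sample from it."""
    m = max(logits.values())
    exps = {tok: math.exp((logit - m) / temperature) for tok, logit in logits.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Toy distribution over possible next tokens, given the context so far.
print(sample_next_token({"cat": 2.1, "dog": 1.7, "rock": -0.5}))
</code></pre>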
<p>This got me thinking about learning and reading technical or dense material. I'd developed a maladaptive habit of treating each sentence with equal priority, rather than skimming through the unimportant parts. This was exhausting and meant that I quickly burnt out attempting to read anything difficult. It also ignores that information retrieval is now easy, while discovery is still difficult.</p>
<p>I'm now skimming through much of the material and slowing down to focus my attention when I discover something <strong>surprising</strong> (which fortunately, is often also interesting!). Through the information entropy lens, the subjective experience of surprise can be seen as an indication of a misalignment between my internal world model and the "true" world model.</p>
<p>Optimizing for surprise means more efficiently expanding, pruning, and correcting your mental models. In prioritizing information, the broad strokes can be painted, and the details can be filled in later if you need them. It's methodically building a map of the terrain and identifying the unknown unknowns so that you can have more tools at your disposal.</p>]]></description>
      <content:encoded><![CDATA[<p>I recently came across the concept of <strong>information entropy</strong>, which is a measurement of the surprise in learning some information (strictly speaking, the surprise of a single event is its <em>surprisal</em>, and entropy is the average surprisal over a distribution). Learning that the sun rose today has very low surprisal, whereas learning that the stock market crashed has high surprisal.</p>
<p>This concept was initially introduced in Claude Shannon's 1948 paper <a href="https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf">A Mathematical Theory of Communication</a>. In it, he describes how language can be approximated by looking at the probability distribution of each letter in a language's alphabet. In the case of English, we'd be looking at the probability distribution of the 26 letters, plus the space character (omitting punctuation and other characters for simplicity).</p>
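<p>To make this concrete, here's a minimal sketch (mine, not Shannon's) that estimates the entropy of a character distribution from a sample string, using the formula H = -sum(p * log2(p)) over the character probabilities.</p>
<pre><code>import math
from collections import Counter

def entropy_bits(text):
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(round(entropy_bits("the quick brown fox jumps over the lazy dog"), 2))
</code></pre>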
<p>Randomly and independently selecting from the set of characters based on their probabilities produces strings of text that look like gibberish -- however, if we select characters depending on the preceding N characters, we quickly begin to produce strings that look similar to English. Similarly, if we instead choose words based on their likelihood and the preceding M words, we begin to approximate coherent sentences.</p>
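<p>Here's a toy version of that order-N character model -- my own sketch, not code from the paper. It records which characters follow each N-character context in a source text, then samples from those observed continuations.</p>
<pre><code>import random
from collections import defaultdict

def build_model(text, n):
    """Map each n-character context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def generate(model, seed, length, n):
    out = seed
    for _ in range(length):
        followers = model.get(out[-n:])
        if not followers:
            break  # context never seen in the source text; stop generating
        out += random.choice(followers)
    return out

corpus = "the cat sat on the mat and the cat ran to the rat"
model = build_model(corpus, n=3)
print(generate(model, seed="the", length=40, n=3))
</code></pre>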
<p>This structure has a few implications. One is that you can exploit it to implement lossless compression, preserving information with fewer characters. Another is that we can say a string of text has higher surprisal if it contains many unlikely substrings. Surprisal alone can be a useless measurement, though, if the words or sentences produced are meaningless -- pure noise has very high information entropy precisely because it is structureless.</p>
<p>LLMs work in a similar fashion. During generation, a token is probabilistically selected as a function of the previous tokens. Given the list of input tokens and tokens produced thus far, the transformer will generate a probability distribution over the set of tokens and sample a token from that distribution.</p>
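<p>A minimal sketch of that final sampling step, with made-up logits standing in for a real model's output:</p>
<pre><code>import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax the logits into a probability distribution, then sample from it."""
    m = max(logits.values())
    exps = {tok: math.exp((logit - m) / temperature) for tok, logit in logits.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Toy distribution over possible next tokens, given the context so far.
print(sample_next_token({"cat": 2.1, "dog": 1.7, "rock": -0.5}))
</code></pre>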
<p>This got me thinking about learning and reading technical or dense material. I'd developed a maladaptive habit of treating each sentence with equal priority, rather than skimming through the unimportant parts. This was exhausting and meant that I quickly burnt out attempting to read anything difficult. It also ignores that information retrieval is now easy, while discovery is still difficult.</p>
<p>I'm now skimming through much of the material and slowing down to focus my attention when I discover something <strong>surprising</strong> (which fortunately, is often also interesting!). Through the information entropy lens, the subjective experience of surprise can be seen as an indication of a misalignment between my internal world model and the "true" world model.</p>
<p>Optimizing for surprise means more efficiently expanding, pruning, and correcting your mental models. In prioritizing information, the broad strokes can be painted, and the details can be filled in later if you need them. It's methodically building a map of the terrain and identifying the unknown unknowns so that you can have more tools at your disposal.</p>]]></content:encoded>
      <pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate>
      <guid>https://charlottemcginn.com/blog/information-entropy-and-learning</guid>
    </item>
  </channel>
</rss>