Author: Philippe Lagüe

  • Frankfurt’s Gambit: More Hole-in-One Than Airtight Argument?

    The world of philosophy often grapples with questions that seem to have straightforward answers, until a clever thought experiment throws a wrench in the works. One such wrench is the “Frankfurt-Style Case” (FSC), a type of scenario designed to challenge a deeply intuitive idea: that to be morally responsible for an action, you must have been able to do otherwise. This is often called the Principle of Alternative Possibilities (PAP).

    FSCs try to show that someone can be responsible even if they had no other option. Imagine a scenario where an agent decides to do X, and is morally responsible for it. Unbeknownst to them, a nefarious character, let’s call him Black, was waiting in the background. If the agent had shown any sign of not doing X, Black would have intervened and forced them to do X anyway. But, as it happens, the agent does X “on their own,” so Black does nothing. The argument? The agent is responsible for X, even though, due to Black’s presence, they couldn’t have done otherwise.

    Sounds compelling, right? But what if successfully constructing such a case – one that truly and fairly demonstrates moral responsibility in the actual sequence where Black remains idle – is less like a logical checkmate and more like a golfer hitting a hole-in-one? The National Hole-in-One Registry states the odds for an average player are about 12,000 to 1. My research suggests that for an FSC to convincingly work, it might rely on a similar stroke of incredible, compounded luck.

    The Double Whammy: Not Just Luck, but Layers of It

    The core issue isn’t merely that a bit of chance might creep into these scenarios. It’s that FSCs, particularly the more sophisticated indeterministic versions (which don’t assume a clockwork, deterministic universe), are fundamentally riddled with layers of contingency. For an FSC to “succeed” in its aim – showing the agent is responsible for their action without Black needing to intervene – a whole series of fortunate events must align perfectly.

    Let’s break down these layers of luck:

    Layer 1: The Agent’s “Lucky Swing” – Processual Chance

    First, there’s Processual Chance. This refers to all the contingent factors that influence the agent’s own decision-making process and the subsequent action, including how things actually turn out. Think of it as the golfer’s swing and the initial flight of the ball:

    • Circumstantial Luck: The specific situation the agent finds themselves in, the information available (or not available) to them, sudden distractions, or influences. (Is the wind just right?)
    • Constitutive Luck: Who the agent is – their character, dispositions, ingrained habits, even their current emotional state, all shaped by factors largely beyond their ultimate control. This is where the Principle of Deliberative Contingency (PDC), which we’ll explore shortly, starts to play a crucial role. (Is the golfer naturally gifted or having a “good day”?)
    • Resultant Luck: This is the luck in how the agent’s actions actually turn out. Even if they decide to do X, actually accomplishing X as intended, and it being the specific X that Black desires, involves luck in the outcome. (Does the ball, having been struck, actually head towards the green as intended?)
    • Causal Luck (in indeterministic FSCs): If the universe, or at least the agent’s mental processes, aren’t strictly determined, then the very unfolding of their thoughts and the formation of their intention can have an element of irreducible chance. (Did a random neural firing nudge the decision one way or another?)

    For the agent in an FSC to just happen to choose and successfully perform the action Black desires, without any nudging from Black, this multifaceted processual chance (including the crucial resultant luck of the outcome) already needs to break in a very specific, “lucky” way.

    Layer 2: The “Miraculous Alignment” – Composite Structural Chance

    But that’s not all. Even if the agent’s own process “luckily” heads in Black’s preferred direction and “luckily” results in the desired action, there’s another, more encompassing layer: Composite Structural Chance.

    This is the luck of the entire scenario unfolding with such perfect serendipity that the agent’s action (already a product of processual chance, including resultant luck) precisely matches Black’s wishes and does so in a way that doesn’t trigger Black’s intervention mechanism. Black has set certain conditions (a “sign”) for when to intervene. The agent must not only do what Black wants (a matter of resultant luck) but also avoid tripping any of these wires.

    This is the true “hole-in-one” moment. It’s not just that the golfer hit the ball well and it went in the intended direction (processual and resultant luck); it’s that the ball, after its flight, also navigates the unpredictable contours of the green and drops into the tiny cup, all without any further interference. The entire structure of the situation has to align in an incredibly fortunate way. This “structural chance” is composite because it relies on the favorable convergence of all the underlying processual chances – circumstantial, constitutive/PDC, causal, and critically, the resultant luck of the action’s outcome aligning perfectly with Black’s desire and non-intervention criteria.

    The Compounding Factor: It’s a Chain of Lucky Breaks

    The critical point is that these aren’t independent lucky events. They are compounded. For Black to remain idle, the agent’s “lucky” internal process, leading to a “lucky” outcome (resultant luck), must also “luckily” align with the very specific, and often narrow, path that avoids Black’s intervention. It’s like needing a series of coin flips to all land heads. The odds get very long, very quickly.
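
    To make the compounding concrete, here is a minimal sketch in Python. The probabilities are purely illustrative placeholders, not estimates from the FSC literature; the only point is that independent chances multiply, so the odds lengthen quickly.

```python
# Illustrative only: how independent "lucky breaks" compound into long odds.
# The probabilities below are placeholders, not figures from the FSC literature.
lucky_breaks = {
    "circumstantial luck breaks the right way": 0.5,
    "constitutive luck / PDC keeps the alternative off the radar": 0.5,
    "causal luck nudges the process toward doing X": 0.5,
    "resultant luck: the action comes off exactly as Black wants": 0.5,
    "structural luck: no intervention 'sign' is tripped": 0.5,
}

combined = 1.0
for description, probability in lucky_breaks.items():
    combined *= probability  # independent events: probabilities multiply
    print(f"after '{description}': combined chance = {combined:.4f}")

print(f"Overall odds of the FSC 'succeeding': about 1 in {1 / combined:.0f}")
# Five 50/50 breaks already give 1 in 32; with less generous odds the figure
# climbs toward hole-in-one territory (roughly 12,000 to 1).
```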

    The Clincher: Argument 1.1 and the Principle of Deliberative Contingency (PDC)

    My work highlights a crucial dilemma (let’s call it Argument 1.1) that exposes this deep reliance on compounded luck, and this is where the Principle of Deliberative Contingency (PDC) becomes particularly illuminating.

    The PDC states that an agent is subject to deliberative contingency when factors related to their constitutive luck (their character, ingrained beliefs, cognitive habits) or circumstantial luck (the immediate context, salient information, or lack thereof) lead them to perform an action (often a morally questionable one) without the relevant moral alternative even emerging as a significant, influential consideration during their deliberation. Essentially, they don’t seriously “see” or “feel” the pull of doing otherwise, thanks to a lucky (for the FSC designer) shaping of their deliberative landscape.

    Now, consider an agent in an indeterministic FSC. Are they (locally, at the moment of choice) determined or undetermined to perform the action (let’s call it ‘B’) that Black wants?

    1. If the agent is (locally) determined to do B:
    • They might seem to avoid some processual luck (like the randomness of an undetermined choice itself).
    • However, the success of the FSC scenario (Black staying idle) now rests almost entirely on composite structural chance. For Black not to intervene, the agent’s determined path must fortuitously result in action B and align with Black’s non-intervention conditions.
    • And here’s where the PDC bites: Why is the agent so determined to do B? Their “determination” or strong inclination could itself be a product of the PDC. Their constitutive and circumstantial luck might have shaped their deliberation (or lack thereof) such that the alternative to B (the one that would trigger Black) never seriously enters their mind or gains traction. Their “determined” path, leading to the “lucky” outcome of B, is thus “luckily” clear of any thoughts that would make Black step in. The FSC works, but only because the agent was “lucky” enough not to even properly consider the problematic alternative, and for their determined action to be the right one.
    2. If the agent is (locally) undetermined when choosing B:
    • Now, the agent is directly subject to processual chance, especially causal luck and the resultant luck of that undetermined process actually producing B. Their choice and successful execution of B is, to some extent, a lottery.
    • And, crucially, composite structural chance is still required in the background. This “lucky” undetermined choice, resulting in B, must still happen to avoid triggering Black.
    • In this case, their moral responsibility for B seems shaky, as it’s partly down to the luck of an undetermined process and its fortunate outcome.

    The takeaway from Argument 1.1 is stark: Whether the agent is portrayed as determined or undetermined to perform the action Black desires, the FSC is mired in significant, problematic luck. If determined, it’s the structural luck (often facilitated by the PDC ensuring a “smooth” path to the “lucky” outcome B). If undetermined, it’s processual luck (including resultant luck) compounded by the ever-present need for structural luck.

    The PDC is a key part of this “lucky sequence” because it explains how an agent might, without external coercion in the actual sequence, so perfectly align with Black’s wishes. Their deliberation itself is fortuitously constrained or directed by their background and circumstances, leading to the specific outcome Black desires.

    Conclusion: FSCs – A Philosophical Long Shot

    When we dissect Frankfurt-Style Cases through the lens of compounded chance, their power to definitively refute the Principle of Alternative Possibilities diminishes significantly. The seemingly robust demonstration of moral responsibility without alternatives begins to look more like a carefully orchestrated scenario that can only “succeed” if an improbable series of chance events – including the crucial luck of the action’s outcome – align perfectly.

    The alignment required – where the agent’s own (luck-infused) process leads them to do exactly what an intervener wants, resulting in the precise action desired, and in exactly the way that avoids triggering the intervention – is not a common occurrence. It’s a philosophical hole-in-one.

    While FSCs are invaluable for pushing us to think critically about responsibility, freedom, and alternatives, recognizing their profound dependence on compounded processual (including resultant) and structural luck (with the Principle of Deliberative Contingency often playing a silent, facilitating role) suggests they might not be the ace up the sleeve many believe them to be. The intuitive link between being able to do otherwise and being morally responsible is not so easily broken by a game that seems rigged by chance from the start.

  • The Dilemma for Non-Causal Accounts of LFW

    In Episode 6 of The Free Will Show, “The Problem of Luck with Alfred Mele” (2020), one of the hosts, Matt Flummer, asks (at 6:29):

    • “Some people complain that libertarianism requires that our actions be uncaused, so if they are uncaused (…) so what’s the problem, that people point out, with our actions being uncaused?”

    Alfred Mele then responds (at 6:44):

    • “You know frankly, I can’t even make sense of the notion of an uncaused action, that is, I think uncaused actions are impossible. I think of actions as events caused in a certain way.”

    Mele is a renowned philosopher and an expert on libertarian free will (LFW), so if he cannot make sense of an uncaused action, it seems fair to assume that a great many philosophers share the same puzzlement. This becomes even more evident when we consider that many of them still find the notion of libertarian free will itself mysterious, even in its agent-causal and event-causal versions; we can only imagine the incomprehension when it comes to the Non-Causal (NC) account. But I think there is a clear path forward: compel the NC proponent to clarify their position on the causal nature of free actions.

    The argument can be structured as follows:

    Premise 1: Non-Causal Accounts of LFW Rely on Difference-Making (CDM A/O).

    Non-causal theories of LFW, while rejecting that free actions are determined by prior events or possess a traditional internal causal structure, must still explain how an agent exercises control in bringing about one action rather than another, or an action rather than an omission. This explanation, it is argued, implicitly or explicitly relies on the agent being a difference-maker for the action’s occurrence. Sartorio’s Causes-as-Difference-Makers principle captures the idea: “CDM: If C caused E, then, had C not occurred, the absence of C wouldn’t have caused E” (Sartorio, 2005).

    Indeed, the notion of agent control seems to presuppose at least a minimal level of difference-making. Even passive background conditions necessary for an event (such as the presence of oxygen for a fire [1]) can be seen as difference-makers in a broad sense – without them, the event would not occur that way. Non-causal theorists, who attribute to the agent a far more direct and active role in ‘performing’ or ‘settling’ an action than that of a mere background condition, must surely be committed to the agent making at least this kind of difference, if not a more robust one. To deny that the agent’s involvement constitutes difference-making would be to render their contribution to the action even more tenuous than that of a static environmental factor, thereby undermining the very notion of control they seek to establish.

    Non-causalists often describe the agent as directly “settling” what happens or “making it the case” that an action occurs. For instance, Carl Ginet speaks of an “actish phenomenal quality” whereby it seems to the agent as if she is directly bringing about the event (Clarke, Capes, & Swenson, 2021, sect. 1.1). Hugh McCann describes free actions as an intentional, spontaneous, and “creative undertaking on the agent’s part” (Clarke, 2003, p. 20). This direct involvement implies that the agent’s performance of action A is what makes A occur; if the agent had not so performed, A (through that specific exercise of agency) would not have occurred.

    If control is exercised “in” or “by” acting, as non-causalists such as Palmer and Ginet suggest (Palmer, 2021, p. 10050), then the agent’s very performance of the action is what makes the crucial difference. The action happens because the agent performs it. As Palmer describes:

    Assuming that no-one else and nothing else has control over whether her action or decision occurs, the person can exercise control over whether her action or decision occurs simply by performing that action or by making that decision, where her performing that action or making that decision constitutes her exercise of control over whether that action or decision occurs. (2021, p. 10052)

    This aligns with the core of Carolina Sartorio’s CDM (A/O) [2] principle: 

    If an agent’s acting in a certain way caused E, then, had the agent failed to act that way, the agent’s failing to act that way wouldn’t have caused E. Conversely, if an agent’s failing to act in a certain way caused E, then, had the agent acted that way, the agent’s acting that way wouldn’t have caused E. (2005, p. 80)

    The agent’s specific action brings about the action’s occurrence (or its being settled), while the corresponding specific omission (failing to perform that very action) would not bring about that same action’s occurrence. This establishes the asymmetry central to difference-making.
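
    For readers who like a symbolic gloss, the two principles can be put roughly as follows (my own informal rendering, not Sartorio’s notation), writing “□→” for the counterfactual conditional:

```latex
% Informal symbolization of CDM and CDM (A/O); the notation is assumed, not Sartorio's.
% "□→" is the counterfactual conditional: "had it been that ..., it would have been that ...".
% ¬C denotes the absence of C (as an event/omission);
% A_act is the agent's acting in the relevant way, A_omit the corresponding failure to so act.
\[
\textbf{CDM:}\qquad
\mathrm{Causes}(C, E) \;\rightarrow\;
\bigl(\neg C \;\Box\!\rightarrow\; \neg\,\mathrm{Causes}(\neg C, E)\bigr)
\]
\[
\textbf{CDM (A/O):}\qquad
\mathrm{Causes}(A_{\mathrm{act}}, E) \;\rightarrow\;
\bigl(A_{\mathrm{omit}} \;\Box\!\rightarrow\; \neg\,\mathrm{Causes}(A_{\mathrm{omit}}, E)\bigr),
\quad\text{and conversely with act and omission exchanged.}
\]
```

    On this reading, the asymmetry described above is just the consequent of the conditional: in the nearest scenario where the agent fails to act, that failure is not a cause of the action’s occurrence.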

    Without such a difference-making role, it’s unclear how the agent could be said to control the action. If what the agent does (or doesn’t do) makes no difference to whether the action occurs, the notion of control seems to evaporate.

    Premise 2: The CDM (A/O) Principle Describes a Causal Relationship of Dependence.

    The Causes-as-Difference-Makers (A/O) principle is articulated in terms of counterfactuals: what would (or would not) have happened if the agent had acted differently (e.g., omitted an action they performed, or performed an action they omitted). This reliance on counterfactuals links it directly to theories of causation based on dependence.

    Sartorio’s work aims to capture David Lewis’s insight that “We think of a cause as something that makes a difference, and the difference it makes must be a difference from what would have happened without it” (Lewis, 1973, as cited in Sartorio, 2005, p. 71). While she critiques Lewis’s specific theory, her CDM principle itself is built upon assessing the difference an event (or its absence) makes, which is a counterfactual notion.

    Ned Hall, in “Two Concepts of Causation” (2004), explicitly identifies “dependence” as one of the two fundamental varieties of causation. He defines it as “counterfactual dependence between wholly distinct events” (p. 1). Critically, he states, “Dependence: Counterfactual dependence between wholly distinct events is sufficient for causation” (p. 1). If CDM (A/O) embodies such counterfactual dependence, then, according to Hall’s framework, it describes a genuinely causal relation.

    Thus, if an agent’s control is grounded in their being a difference-maker in the sense captured by CDM (A/O), and this principle articulates a relationship of counterfactual dependence, then this aspect of control is, by these lights, causal.

    Conclusion: Non-Causal Accounts of LFW Are Therefore (Dependence-)Causal.

    If both Premise 1 and Premise 2 hold, then non-causal accounts of LFW, by relying on a difference-making principle like CDM (A/O) to ground agent control, inherently incorporate a causal relationship (specifically, causal dependence).
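
    Schematically (again, my own gloss on the argument’s shape, not the authors’ notation), the inference runs:

```latex
% Schematic form of the argument; S = the agent, A = the free action, E its occurrence/settling.
% P1: non-causal control presupposes CDM (A/O)-style difference-making.
% P2: difference-making is counterfactual dependence, which (per Hall) suffices for causation.
% C:  follows by chaining P1 and P2.
\[
\begin{aligned}
\text{P1: }& \mathrm{Control}_{\mathrm{NC}}(S, A) \;\rightarrow\; \mathrm{DiffMaker}(S, A)\\
\text{P2: }& \mathrm{DiffMaker}(S, A) \;\rightarrow\; \mathrm{Depends}(A, S) \;\rightarrow\; \mathrm{Causes}(S, A)\\
\text{C: }& \mathrm{Control}_{\mathrm{NC}}(S, A) \;\rightarrow\; \mathrm{Causes}(S, A)
\end{aligned}
\]
```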

    This leads to the Dilemma for Non-Causal Theorists:

    • Horn 1: If non-causal theorists accept that their account of control relies on a principle like CDM (A/O) and that this principle describes a form of causal dependence, then their theories are not “non-causal” simpliciter. This admission might not be a defeat but an opportunity for clarification. They could distinguish the type of causation they are employing (i.e., difference-making, counterfactual dependence, perhaps akin to an “enabling” or “structuring” cause) from the types of causation they reject (e.g., “productive” causation in Hall’s sense, or deterministic event-causation). This would mean their theories are “non-productively-causal” or “non-event-causally-determined” rather than entirely devoid of any causal relations. Such a move could provide a more robust and less mysterious grounding for control than appeals to purely intrinsic features of actions or subjective experiences alone.
    • Horn 2: If non-causal theorists deny any reliance on a difference-making principle like CDM (A/O) for grounding control, they face the significant challenge of explaining how an agent can be in control of an action if their acting or not acting makes no difference to whether the action occurs or is settled. Given that even passive background conditions (like the presence of oxygen for a fire) can be understood as difference-makers in the broad sense that without them the effect would not occur, for an agent’s active role in “performing” or “settling” an action to constitute control, it must surely involve at least this minimal level of difference-making. To deny that the agent’s involvement makes such a difference would be to render their contribution to the action even more tenuous than that of a static environmental factor, thereby undermining the very notion of control. As Randolph Clarke (2003) notes, purely non-causal accounts (like those of Ginet or McCann, if interpreted as devoid of such difference-making) “are found not to offer satisfactory views of action and reason-explanation” (p. 1), and their accounts of the “exercise of active control” (p. 3) can appear mysterious or insufficient. Without the agent as a difference-maker, the connection between the agent and the action may become too tenuous to support robust control and moral responsibility.

    In essence, our argument pushes non-causal theorists to clarify the nature of the agent’s contribution to action. If that contribution is understood as making a difference in a way captured by counterfactual dependence (CDM A/O), then it implies a causal relationship. If it is not understood as difference-making, the basis of control becomes more obscure than ever.

    [1] Here are two more examples of background conditions: 1) For a plant to grow, planting a seed is a direct action, but this only leads to growth if certain background conditions are met. These include the presence of water in the soil, a suitable temperature range, and available light. Without these enabling environmental factors, the seed will not sprout or thrive, regardless of being planted. 2) Similarly, for an electrical appliance to operate, flipping its switch to the “on” position enables the flow of electricity, the proximate cause. However, this action is futile without crucial background conditions such as a connected power source, an intact and closed electrical circuit, and operational internal components within the appliance itself.

    [2]  Causes‑as‑difference‑makers is a general principle. CDM (A/O)—its formulation for agents’ actions and omissions—has been developed most fully by Sartorio (2016). The underlying idea can be traced back to John Stuart Mill’s System of Logic (1843), David Lewis’s “Causation” (1973), and Ned Hall’s “Two Concepts of Causation” (2004), among others.

    References

    Clarke, R. (2003). Libertarian Accounts of Free Will. New York, NY: Oxford University Press. https://doi.org/10.1093/019515987X.001.0001

    Clarke, R., Capes, J., & Swenson, P. (2021). Incompatibilist (nondeterministic) theories of free will. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2021 ed.). https://plato.stanford.edu/archives/fall2021/entries/incompatibilism-theories/

    Cyr, T., & Flummer, M. (Hosts). (2020, September 28). The Problem of Luck with Alfred Mele (No. 6) [Audio podcast episode]. In The Free Will Show. Buzzsprout. https://www.buzzsprout.com/1244627/episodes/5575288-episod-6-the-problem-of-luck-with-alfred-mele

    Hall, N. (2004). Two concepts of causation. In J. Collins, N. Hall, & L. Paul (Eds.), Causation and Counterfactuals (pp. 225-276). MIT Press. https://doi.org/10.7551/mitpress/1752.003.0010

    Palmer, D. (2021). Free will and control: a noncausal approach. Synthese, 198(10), 10043–10062. https://doi.org/10.1007/s11098-021-01701-5

    Sartorio, C. (2005). Causes As Difference-Makers. Philosophical Studies, 123(1-2), 71-96. https://doi.org/10.1007/s11098-004-5217-y

  • Beyond Our Design: How Emergent Orders Shape Their Makers

    Introduction: The Invisible Currents

    We’ve all felt it—that sense of navigating currents not entirely of our own making. Societal expectations, economic forces, the pull of technology, even the momentum of our own habits. We build things, join groups, and create for our benefit, only to find ourselves shaped by these very creations. Is there a deeper, universal pattern at play here? A fundamental way in which systems, once created for advantage, turn around to govern their creators?

    This article proposes to explore a recurring dynamic, from the simplest life forms to the complexities of human society, and even within our own minds. It’s a process where entities strive for advancement by forming higher-level structures, which then, inevitably, guide and constrain them. This isn’t necessarily sinister, but it’s a profound aspect of how complexity and life itself organize.

    Part 1: The Universal Drive – Efficiency, Scale, and Connection

    At the heart of this dynamic lies a fundamental drive towards optimization. In his fascinating book, Scale: The Universal Laws of Life, Growth, and Death in Organisms, Cities, and Companies, Geoffrey West highlights universal laws governing how systems organize for survival and efficiency. He demonstrates that networks—whether biological, like the circulatory system, or infrastructural, like a city’s road network—share striking characteristics:

    1. They are fractal and space-filling: they extend to serve the entire organism or relevant area.
    2. Their endpoints are invariant: the size of blood capillaries or household taps remains remarkably similar, whether in a mouse or a whale, a small house or a huge industrial complex.
    3. They constantly optimize efficiency: energy is minimized, and output maximized.

    These principles lead to predictable scaling laws. For instance, larger cities or organisms are proportionally more efficient. A city ten times larger doesn’t need ten times as many gas stations; it requires significantly fewer than that, achieving economies of scale. This quest for efficiency and optimization isn’t just a human design; it’s a fundamental driver pushing the formation and persistence of organized structures.
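
    A back-of-the-envelope sketch makes the sublinear scaling concrete. The 0.85 exponent is roughly the figure West reports for urban infrastructure networks; treat it, and the reference city below, as illustrative assumptions rather than exact constants.

```python
# Sublinear (economy-of-scale) scaling: infrastructure ~ population ** BETA.
# BETA ~= 0.85 is approximately the exponent West reports for urban infrastructure;
# it is used here as an illustrative assumption, not an exact constant.
BETA = 0.85

def infrastructure_needed(population: float,
                          reference_pop: float = 100_000,
                          reference_units: float = 100.0) -> float:
    """Scale a reference infrastructure count (e.g. gas stations) sublinearly."""
    return reference_units * (population / reference_pop) ** BETA

small_city = infrastructure_needed(100_000)     # the reference city
big_city = infrastructure_needed(1_000_000)     # ten times the population

print(f"10x the population needs ~{big_city / small_city:.1f}x the gas stations")
# -> roughly 7x rather than 10x: the bigger the system, the less it needs per capita.
```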

    Part 2: The First Principle – Emergent Governance for Scaled Advantage (Often at the Collective Level)

    This tendency towards optimization doesn’t just refine what already exists; it leads to the creation of new organizational levels, paving the way for qualitatively new possibilities. This brings us to our first core idea:

    Principle 1: Emergent Governance for Scaled Advantage

    When constituent elements (like cells, individuals, or even ideas) self-organize or are structured into a higher functional level to unlock new potentials—often primarily benefiting the survival, propagation, or capabilities of the collective or species—that emergent level inevitably exerts top-down causal governance over its constituents. This establishes a functional interdependence where component autonomy is frequently exchanged for systemic advancement, an advancement that may not always translate to direct, unmitigated benefit for every individual component.

    Let’s unpack this:

    • The Drive: The motivation here is the unlocking of “new potentials.” Crucially, these potentials often manifest as “evolutionary advantages” for the species as a whole (e.g., greater population numbers, wider geographical spread) or enhanced capabilities for the system itself. This drive doesn’t always equate to an improved quality of life or increased autonomy for every individual constituent. It’s about reaching genuinely new modes of existence for the collective.
    • The Inevitable Consequence: Top-down causal governance isn’t an accidental byproduct; it’s a necessary outcome for the new, higher level to cohere, function, and sustain these collective advantages. For the system to operate effectively, its components must, to some extent, align their actions with the needs and dynamics of the whole, even if this imposes constraints.
    • The Pact (Often Asymmetrical): This creates a “functional interdependence.” Components cede a degree of autonomy. In return, they become part of a system that might offer enhanced protection, stability, or the ability to achieve things impossible for individuals. However, this “pact” can be asymmetrical. The “systemic advancement” is the primary outcome from the perspective of the higher level or the species, and individual well-being might be a secondary concern or even be diminished in certain aspects for the sake of the collective’s overall success or persistence.

    Illustrative Example: The Wheat Bargain – How Wheat Domesticated Us

    Yuval Noah Harari, in his book Sapiens: A Brief History of Humankind, provides a paradigmatic example of this nuanced principle with “The Wheat Bargain.” He argues that the agricultural revolution, centered on the cultivation of crops like wheat, was a turning point where humans, in seeking to control their food supply, inadvertently became controlled by it, leading to species-level success at the cost of much individual hardship.

    From wheat’s “perspective,” this was an astounding evolutionary success. A wild grass confined to a small region in the Middle East exploded across the globe, thanks to human labor. For Homo sapiens as a species, agriculture allowed for massive population growth and the development of complex societies. But for individual humans, the bargain was far more complex and, in many ways, detrimental.

    • Increased Labor, Harsher Life: Farming wheat was far more arduous than foraging. It demanded grueling hours of tilling, sowing, weeding, and harvesting. Diets often became less varied, leading to nutritional deficiencies. Sedentary agricultural life also increased susceptibility to disease and the risk of starvation if the single staple crop failed.
    • The Trap of Luxury: The initial steps towards agriculture seemed logical—a little more food security. However, as Harari explains, these “luxuries” gradually became necessities. Each generation was born into a world more dependent on agriculture, with little memory of a different way of life. The perceived benefits of increased food production for a growing population (a species-level advantage) masked the individual sacrifices in health and quality of life.
    • Domestication by Labor: Ultimately, humans became tied to the land and the relentless cycles of wheat cultivation. Their social structures, daily routines, and even their physical health were profoundly shaped by the demands of this plant. Harari provocatively suggests that humans didn’t so much domesticate wheat as wheat domesticated humans.

    This “Wheat Bargain” perfectly illustrates Principle 1: humans created a new organizational level (agriculture) to gain an advantage (food security, ability to support larger populations – clear species-level benefits). This system, in turn, exerted powerful top-down governance, fundamentally altering human life and creating a deep functional interdependence. The very system designed for collective benefit began to dictate the terms of existence for its creators, often to the detriment of individual well-being, a clear instance of an emergent order shaping its makers for its own systemic proliferation.

    Part 3: The Second Principle – The Exclusive Power of New Levels

    If the first principle explains why and how higher levels form and govern (often prioritizing collective gains), the second principle addresses the unique nature of what emerges at these new scales.

    Principle 2: Level-Exclusive Functionality and Control

    The evolutionary emergence of a new, higher organizational level is typically characterized by the appearance of novel functions and capabilities that are exclusively operable at, and often define, that specific level. Consequently, the natural locus of operation and control for these level-exclusive functions resides within the emergent higher-level entity itself.

    Let’s break this down:

    • Novelty and Exclusivity: The new functions that appear aren’t just bigger or more efficient versions of what the individual components could do. They are new kinds of operations, entirely novel abilities that can only be performed by the integrated collective. A single neuron cannot think, but a brain can. A single citizen cannot pass laws, but a government can. These are capabilities that define the new level.
    • Locus of Control: Because these functions are exclusive to the higher level, that level itself becomes the natural operator or controller of these functions. The integrated whole is where these unique capacities are enacted and managed.

    Illustrative Example: The Dawn of Multicellular Cooperation (Simplified)

    Consider the monumental leap from single, independent cells, each fending for itself, to the earliest forms of multicellular life—perhaps simple colonial algae or primitive sponge-like organisms.

    Individual cells, for all their self-sufficiency, faced hard limits in terms of size, complexity, and environmental resilience. But when cells began to aggregate, driven by evolutionary pressures that favored collective advantages (such as better protection from predators, more stable access to resources for the group, or improved locomotion for the colony), a new ‘entity’ emerged: the colony or simple organism. This aggregation offered a path to enhanced survival and reproduction for the genetic lineage, even if it meant individual cells lost their autonomy.

    This new collective could now perform feats impossible for any lone cell. The entire colony might move as one towards light, a coordinated dance of countless tiny parts, driven by the collective sensing of a few specialized cells. Or, a basic division of labor could appear: some cells on the exterior might specialize in protection or movement, while interior cells might focus on digestion or reproduction. These weren’t just more cells doing the same thing; these were new, colony-level actions. The colony itself was swimming; the colony itself was organizing its reproduction as a unified entity for the propagation of the whole.

    These new abilities—this coordinated movement, this shared specialization, this collective response to the environment—were properties of the colony, operable only at that collective level. Individual cells contributed their specialized functions, but they were no longer independent operators for these large-scale functions. They became parts of a larger ‘whole’ which now possessed unique capabilities. This transition, of course, came with changes for the individual cells; their roles, their very fate, and even their lifecycles became shaped by the needs and functioning of the larger collective they had become part of. The “advantage” was primarily for the multicellular entity’s survival and ability to exploit new niches, illustrating the top-down influence inherent in this new level of organization.

    Part 4: The Unseen Hand and the Logic of Life

    The mechanism by which these higher levels exert their influence is often described as top-down causation. This is where the whole system, the emergent level, shapes the behavior and fate of its constituent parts. It’s not a mystical force, but rather the result of the new organizational structure, its internal communication pathways, and the new environment it creates for its components.

    “Reasons Before Reasoners”

    This brings us to a fascinating idea popularized by philosopher Daniel Dennett: “reasons before reasoners.” He argues that evolutionary processes can discover and implement highly effective “reasons” for certain structures or behaviors without any conscious thought or foresight on the part of the organisms involved. Evolution, through natural selection, stumbles upon solutions that work—that confer a survival or reproductive advantage. These solutions embody a kind of “logic” or “wisdom,” even if the individual components (like cells in a colony or early humans adopting agriculture) have no understanding of the grander “reasons” behind why these arrangements are beneficial at a higher level.

    The cells in our multicellular example don’t “understand” the colony’s strategy for finding light or reproducing. They simply respond to local cues and genetic programming that, over eons, has been shaped because it contributed to the colony’s success. Similarly, early farmers weren’t consciously designing a global agricultural system; they were making immediate choices that, cumulatively, led to a profound restructuring of human society. The “reasons” for these systems’ existence and persistence are embedded in their adaptive advantages, not necessarily in the conscious deliberation of their parts.

    The Invisible Architecture and the Allure of the Collective

    This leads to a crucial observation: the influence of the “top layer” is often unperceived, or at least not fully grasped, by those within it. For the individual cell, the colony is its world; its rules are the laws of nature. For humans, many societal structures, economic systems, or cultural norms form an “invisible architecture” that shapes our lives in ways we rarely question. They are the background conditions, the “water we swim in.”

    Furthermore, it’s plausible that this very lack of clear perception of the higher-level controls, or even a degree of “built-in ignorance” regarding the individual costs versus collective benefits, is itself an evolved feature. If individuals were fully cognizant of the potential diminution of their autonomy or well-being when joining or forming a collective, reluctance might prevail. However, if the formation of such higher levels is crucial for species survival or unlocking significant evolutionary advantages, then mechanisms that promote participation—perhaps an “irresistible appeal” of belonging, or a downplaying of individual sacrifice for the “greater good”—would be favored. The emergent layer becomes difficult to perceive in its totality, especially in its initial stages, even for rational beings. This subtle obscuring of the full terms of the “pact” facilitates the very emergence and entrenchment of these powerful top-down systems.

    Universality, Layering, Resilience, and Transcendence

    This dynamic of creating governing layers, which then exert top-down influence, appears to be a fractal pattern, repeating at multiple scales throughout nature and society. Complexity often builds layer upon structured layer. Cells form tissues, tissues form organs, organs form organisms. Individuals form families, families form communities, communities form nations, and nations interact in a global system. There’s a sense that one cannot simply “skip” levels in this developmental progression; each layer provides the foundation for the next, bringing with it new rules, new capabilities, and new forms of governance.

    This layered architecture contributes significantly to the resilience of the collective or species. Once a higher organizational level is established and functional, it tends to be robust against minor perturbations at the lower levels. Individual components may fluctuate, fail, or be replaced, but the integrity of the overarching structure often remains intact. This is because the higher level operates with its own set of rules and feedback loops, capable of buffering or compensating for localized issues. Dismantling such an established layer typically requires a more profound, systemic shock or a coordinated challenge from within that can overcome the cohesive forces and top-down governance of the emergent order itself.

    More than just stability, the achievement of a new, functional top layer often represents a qualitative transcendence for the species or group. It’s not merely an incremental improvement but a “level up” that can catapult the collective into a new sphere of dominance and operational capacity. The entities that successfully adopt this new layer often find themselves so overwhelmingly advantaged that they are effectively “out of sight” of their previous state or of competitors still operating at the lower level. They are no longer in the same “fray,” the same boots-on-the-ground struggle, because the new layer grants them an unmatched operational altitude and a fundamentally different way of interacting with their environment. This inherent stability and transformative power are key reasons why such emergent structures, once formed, tend to persist, become deeply entrenched, and profoundly shape the evolutionary trajectory of their components.

    Part 5: The Freedom-Optimization Continuum

    The interplay between individual components and the emergent systems they form brings another fundamental dynamic into focus: a continuum between individual freedom and collective optimization. This isn’t necessarily a zero-sum game in all instances, but a profound tension often exists.

    Principle 3: The Freedom-Optimization Trade-off in Emergent Orders

    As emergent higher-level systems evolve to optimize collective survival, propagation, or efficiency, there is often an inverse correlation with the autonomy and behavioral scope of their individual constituents. The more successfully a system optimizes for collective goals (e.g., resource acquisition, defense, reproduction), the more its internal structures and governance mechanisms tend to constrain and direct the actions of its components, shifting the balance from individual freedom towards systemic predictability and control.

    Let’s explore this:

    • The Antelope and the Cheetah: Optimization Through Adversity
      Nature offers stark illustrations of how species-level traits can be optimized, often through intense selective pressures that are dire for individuals. Consider the antelope, a creature of remarkable speed. This speed wasn’t a leisurely attainment; it was honed over generations by the relentless predation of cheetahs, the fastest land animals. For any individual antelope caught, the outcome is tragic. Yet, for the antelope species, this predator-prey dynamic has driven the evolution of incredible swiftness and agility—a clear species-level optimization. This symbolizes how what is detrimental or a restriction at the individual level can be a driving force for positive adaptation for the collective, creating complex and nuanced evolutionary narratives.
    • Two Poles of Success: Humans and Ants
      When we look at highly successful species on Earth, humans and ants stand out, yet they seem to represent different strategies along this continuum.
    • Humans: Our species’ success is often attributed to our cognitive abilities, granting us significant behavioral flexibility, foresight, and agency—hallmarks of freedom. We can innovate, adapt to diverse environments, and consciously reshape our surroundings. However, as we’ve seen, we too create and become subject to complex social, economic, and technological systems that optimize for collective ends, sometimes constraining that individual freedom.
    • Ants: Ants, on the other hand, exemplify extreme collective optimization. Their societies are marvels of efficiency, division of labor, and coordinated action, all geared towards the colony’s survival and reproduction. The individual ant has a highly prescribed role, its behaviors largely determined by chemical signals and genetic programming for the benefit of the whole. The structure is the strategy, and it exerts profound control.
    • The Gravitational Pull of Success and Entrenchment
      Evolution often works like the adage: “throw spaghetti at the wall and see what sticks.” When a particular strategy—be it a biological mutation, a new behavior, or a technological innovation—leads to a significant advantage (e.g., a reproductive boom, access to new resources), it tends to become rapidly adopted and entrenched. Consider the ants again. If a genetic trait or social behavior leads to more successful colony foundation and growth, it spreads. As colonies grow into supercolonies, new layers of organization might emerge. These supercolonies, by their very scale and resource control, could exert influence over constituent colonies (perhaps through competition, resource allocation, or even a form of “diplomacy” or conflict resolution between them). This creates an “infernal gear,” as I put it: success breeds larger, more complex structures, which in turn demand greater coordination and exert more comprehensive top-down control, further reducing the behavioral latitude of individual ants or even entire colonies within the super-structure. There’s often no going back; the new, larger system becomes the new reality, and its demands shape everything within it. This process, driven by the relentless pursuit of what works for the collective, can progressively narrow the scope of individual freedom.

    Part 6: The Human Equation – Agency in a World of Our Own Making

    How do these principles illuminate our own experience, particularly the perennial questions of human agency and free will? If we are constantly shaped by higher-level systems, what room is left for individual choice and self-determination?

    Agent Causation Through Principle 2

    Our second principle—Level-Exclusive Functionality and Control—offers a compelling perspective here. If uniquely human capacities like conscious thought, complex language, self-reflection, moral reasoning, and intentional long-term planning are level-exclusive functions of the integrated human organism (our complex psycho-biological “top layer”), then the “agent”—the whole, conscious person—is the natural locus of control for these functions.

    “I” think, “I” decide, “I” act, not because of some disembodied “ghost in the machine” or a special power found nowhere else in nature, but because “thinking,” “deciding,” and “acting” (in the sophisticated, self-aware way humans do) are operations that emerge at the level of the integrated “I.” This provides a naturalized view of agency. It suggests that our capacity for self-governance arises from the complexity and specific organization of our being, allowing us to be genuine causes of our actions within the framework of the level-exclusive abilities we possess.

    The “Membranes” We Inhabit

    Simultaneously, Principle 1 and Principle 3 remind us that we are also subject to the “membranes” we create and inhabit. Our societies, economies, technologies, and ideologies are all higher-level structures that we’ve built (often for good reasons, to achieve collective advantages and optimizations). Yet, these systems, once established, exert their own top-down causal influence, shaping our opportunities, our beliefs, and our behaviors, sometimes in ways that are not immediately apparent or always beneficial to every individual, and often trading degrees of freedom for perceived security or efficiency. The “Wheat Bargain” echoes in many aspects of modern life, where systems designed for progress can also create new dependencies and constraints.

    Conscious Participation: The Human Difference?

    Unlike cells in a colony, ants in a supercolony, or even early humans unknowingly ensnared by the demands of wheat, modern humans possess a unique (though perhaps not always fully utilized) capacity: self-awareness and the ability to reflect upon the systems that shape us. We can study history, analyze social structures, and critique economic models. This awareness, imperfect as it may be, offers a potential pathway to more consciously shaping the “membranes” we create and live within, and to navigate the Freedom-Optimization continuum with more intention.

    If top-down causation is a fundamental aspect of how complex systems operate, then true agency might not lie in escaping it entirely (which may be impossible), but in understanding it and striving to influence the nature of those higher-level controls. Can we become more deliberate architects of the systems that govern us, designing them to better align with both collective well-being and individual flourishing, consciously weighing the trade-offs between optimization and freedom? This is perhaps one of the central challenges of the human condition.

    Conclusion: The Echoes of Emergence

    The three principles explored here—Emergent Governance for Scaled Advantage, Level-Exclusive Functionality and Control, and The Freedom-Optimization Trade-off—offer a lens through which to view a fundamental dynamic at play across vast scales of existence. From the aggregation of cells into organisms to the formation of human societies and the very structure of our conscious minds, a recurring pattern unfolds:

    Constituent parts come together, or are organized, to form higher-level entities that unlock new potentials. These emergent entities then inevitably exert a guiding, sometimes constraining, influence over their components, often optimizing for collective goals which can impact individual autonomy. Furthermore, these new levels often give rise to entirely novel capabilities that can only be operated by and at that higher level.

    Recognizing this pattern doesn’t diminish the importance of individual action or the quest for freedom. Instead, it reframes our understanding of where our agency lies and how it operates within these complex, layered realities. It suggests that the creation of order and complexity inherently involves this intricate dance between bottom-up formation, subsequent top-down governance, and the ongoing negotiation between individual scope and collective efficiency.

    The challenge, particularly for us as humans, is to become more aware of these invisible currents and the architectures of the systems we inhabit. By understanding how emergent orders shape their makers, we might become more skillful and ethical participants in the ongoing co-creation of our world, striving to build systems that not only advance the collective but also honor and preserve meaningful degrees of freedom for the individuals within them. How does recognizing this universal dynamic change your perspective on your role, the nature of progress, or the future we are collectively building?

  • An NPC’s Life:

    Amélie’s Ordinary Days on Tessera

    The luminous cycle of the wall-mounted habitat mimicked a soft dawn, pulling Amélie from her sleep. Like every morning on Tessera, the memory of the previous day was but an evanescent mist, a fleeting impression quickly swept away by the quiet anticipation of routine. She slipped out of bed, her bare feet on the temperate floor of her minimalist apartment – a space designed for efficiency, where every object seemed to have a predefined place, eternally clean, eternally functional.

    While the coffee machine prepared her brew – perfect aroma, ideal temperature, as always – Amélie activated the wall interface to check the local news. The contrast was as familiar as it was jarring. On one hand, the information feed detailed the chaos of the previous day: anti-gravity vehicle pile-ups, spectacular shootouts in the Gamma Quadrant, the daily count of “forced resets” – thousands, again. There was a brief mention of the exploits of the “Shield Bearers,” that atypical guild of Ludicores from Cortex who tried to contain the excesses of their peers. Then, without transition, a soothing government advertisement extolled the benefits of life on Tessera: immortality guaranteed by rapid replication cycles, the total absence of disease, poverty, or famine. “Thanks to the agreements with our esteemed Ludicore visitors,” the suave voice concluded, “fundamental stability and citizen well-being are assured.”

    Amélie sipped her coffee, the tumult outside filtered by the walls of her sanctuary. Her refrigerator, she knew without even checking, was full. Her vital needs were a constant, never a source of concern. Perhaps this was the true luxury offered by the powerful Ludicores who had made Tessera their playground and, incidentally, the home of her humanoid species. A strange luxury, woven from absolute personal security and omnipresent collective chaos.

    She finished her coffee, placed the cup in the recycling unit, and adjusted her uniform for the Central Café. The day promised to be similar to the last, punctuated by the orders of the inhabitants and the unpredictable interactions with visitors from Cortex. A prospect that, curiously, filled her with a familiar calm. She was ready.

    The Central Café was a bubble of relative order in Tessera’s constant flux. Located in a bustling square where anti-gravity vehicles zipped silently by or sometimes crashed in a flash of special effects quickly cleaned up by maintenance drones, the café offered a hushed, predictable atmosphere. Amélie knew every corner, every protocol. She prepared complex drinks with intense flavors for inhabitants like herself, exchanging pleasantries about the latest weather cycle or neighborhood rumors. She served a couple of inhabitants seated near the bay window; they were discussing, in low voices, an improvement project for their housing unit, an exchange as banal and comforting as those she might have had with her own ex, a few cycles ago. Relational life, here, also ran its course, despite everything.

    Interactions with the Ludicores were… different. Often abrupt, focused on an immediate objective. “A Triple-Boost Espresso, quick! I’ve got a bounty to chase,” one would bark, armor gleaming, eyes fixed on their personal interface. “How much to fill this inventory with Energy Pastries? I need a hundred,” another would demand, ignoring the queue. Amélie responded with polite efficiency, a skill honed by countless cycles. Sometimes, a Ludicore vehicle would miss a turn and end its course in the holographic planters on the terrace; Amélie would activate the cleaning protocol and resume her service, her heart perhaps beating a little faster, but no more. This was Tessera.

    Simon was different. A Ludicore, certainly, recognizable by his stature and the integrated technology he wore, but he belonged to the “Shield Bearers.” His equipment seemed more functional than flashy, and above all, he had a calmer presence. He had become a regular customer over several cycles. He often ordered the same thing – a local Chamomile floral tea – and would sit at a table near the wall interface, observing the bustle with quiet attention. Unlike the others, he greeted her, sometimes even asked how her day was going, a question so incongruous from a Cortex visitor that it had surprised her the first few times. A tenuous bond, made of shared routines and unexpected politeness, had formed between them.

    That cycle, after ordering his tea, Simon lingered at the counter. “Amélie,” he began in a low voice, his gaze discreetly scanning the room, “do you know ‘Black-Claw’? A Ludicore who comes here sometimes. Dark gear, pretty aggressive.”

    Amélie nodded. Black-Claw was known for his fits of gratuitous violence and his combat experiments on less fortunate passersby. “He stops by from time to time, yes. Often at the end of the cycle,” she replied, just as discreetly. She didn’t like Black-Claw; his visits always left an unpleasant tension in the air.

    “I’d need to know if he has any particular habits,” Simon continued. “Schedules, preferred routes when he leaves the neighborhood? It could… help prevent some problems.”

    Amélie hesitated for a split second. Sharing information about a Ludicore, even a troublemaker, seemed risky. But this was Simon. The calm, respectful Shield Bearer. And Black-Claw was a nuisance. “He often takes aeroway 7 towards the Forgotten Docks after he leaves here,” she murmured.

    A flash of gratitude crossed Simon’s eyes. “Thanks, Amélie. That’s valuable.” He paused. “Actually… this Black-Claw is part of a chain of disturbances I’m trying to dismantle. A sort of… personal side quest, if you will. With the information you’ve given me… would you agree to accompany me? Just to observe, maybe identify some of his other associates. Your local knowledge would be useful.” He added, almost to himself, “Honestly, I often feel more comfortable interacting with your species than with some of my own on Cortex.”

    A side quest. With Simon. The idea was both exciting and strange. To step out of the café routine, to actively participate in something that seemed right… It was well worth the potential risks. “Alright, Simon,” she said with a shy but resolute smile. “When do we start?”

    Their first side quest together was to follow Black-Claw’s trail. Thanks to Amélie’s information about his habits, they positioned themselves near aeroway 7 at the indicated times. Traveling with Simon was different for Amélie. Aboard his personal anti-gravity vehicle – a Shield Bearer utility model, less ostentatious than the other Ludicores’ chrome speedsters – she felt a kind of unusual calm. Usually, crossing the city’s arteries was a lottery, a careful slalom between speeding excesses and the unpredictable maneuvers of visitors. With Simon at the controls, the journey seemed more… controlled.

    They quickly spotted Black-Claw’s dark, angular vehicle heading towards the Forgotten Docks, a labyrinthine industrial zone known to be a haven for less… official activities. “We follow him from a distance,” Simon said, his interface projecting a discreet tactical map visible only to them. “No direct confrontation for now, just observe his contacts.”

    Amélie nodded, her heart pounding with contained excitement. This was much more stimulating than serving coffees. She activated her own sense of observation, noting details she would never have noticed before – affiliation symbols on some parked vehicles, the discreet comings and goings of other inhabitants near a specific warehouse. She felt useful, a partner. Fear, though present in the face of the potential danger Black-Claw represented, was dulled by the certainty implanted within her: at worst, a “reset” would return her to her apartment the next cycle.

    It was while monitoring a presumed meeting point in an old industrial park, strangely converted into a recreational area – part public garden, part Ludicore playground – that they stumbled upon the scene. In the center of a clear esplanade, Black-Claw, or at least a Ludicore strongly resembling him in equipment and aura of nonchalant menace, was “playing” with a Tessera inhabitant. The word “playing” was a terrible euphemism. The inhabitant was suspended in a crackling force field, subjected to what looked like intermittent energy discharges.

Around them, life went on with an almost choreographed indifference. Other humanoid inhabitants passed by, barely glancing, skirting the scene as one avoids a puddle. A few Ludicores watched from a distance, some with an air of cynical approval, others with the boredom of one who has seen it all. One or two generic inhabitants attempted an approach, only to freeze, hesitate, and turn back as if hitting an invisible wall…

    Simon swore under his breath. “That one… he’s crossing the line, even for here.” Without waiting, he activated his own equipment, a bluish energy shield shimmering around him, and advanced with a determined step.

    The confrontation was brief and violent. The tormentor, surprised by the intervention of a Shield Bearer, retaliated with bursts of dark energy. Simon parried, dodged, and counter-attacked with tactical precision that quickly disabled his opponent’s offensive systems and the force field holding the prisoner. The tormenting Ludicore, defeated but arrogant, snarled an insult and disappeared in a personal teleportation effect – probably to avoid a reset penalty imposed by local rules or by Simon’s guild.

    While Simon deactivated the last energy restraints, Amélie approached the victim, now slumped on the ground but conscious. The inhabitant bore the marks of the discharges, but his expression was strangely calm, almost resigned. “Why… why didn’t you activate your panic button?” Amélie asked, referring to the emergency protocol that allowed for an immediate voluntary reset. The inhabitant looked up at her, a weary smile on his lips. “The panic button? That cancels the compensations. You know… the ‘Resilience Under Extreme Duress’ bonuses. A few more cycles like this and I’ll have enough to unlock the level 3 habitat upgrade. And besides,” he added, wincing slightly, “the implanted pain dampeners do their job well. It’s unpleasant, but… tolerable.”

    Amélie remained silent, observing the scene: the victim rationalizing his torture for material gain, the passersby already returned to their occupations, Simon checking his systems after the fight. An extraordinary scene, and yet, just another incident on Tessera. Her presence alongside Simon gave her a new perspective, direct involvement, but the core of the event remained desperately… ordinary.

    This apparent detachment in the face of what should have been traumatizing was rooted in the very nature of existence on Tessera. Here, the notion of death, as other species might have conceived it, was almost abstract, foreign. Humanoid inhabitants like Amélie lived with the certainty – a truth hammered home by local authorities and omnipresent advertisements – that they were biological replicas, perfected clones. Each “forced reset,” each fatal accident was merely a temporary interruption, a brief pause before a new, identical copy, with almost perfect memory continuity, resumed its place. This rapid cloning technology, presented as another benefit negotiated with the Ludicores, made life infinite, but also, in a way, devoid of ultimate consequences. This was undoubtedly why fear, even if it could sting sharply in the moment, never truly took lasting hold in Amélie. Why fear the end when the end didn’t really exist? Even the administrative constraint that occurred each month – that home interface asking, “Do you consent to continue your existence on Tessera?” – was perceived by Amélie and the others as a simple bureaucratic formality imposed by the Ludicores, a kind of periodic census to ensure that the “attractions” of their vast planetary park were still functioning. A tacit acceptance, renewed cycle after cycle, for a life unique and yet infinitely repeatable.

    A few cycles passed after the incident in the park. Amélie and Simon had succeeded, through a combination of Simon’s discreet infiltration and Amélie’s local knowledge, in significantly disrupting the operations of Black-Claw and his group on Tessera. They found themselves one evening at the Central Café, after a particularly intense but successful mission. The usual bustle of the city seemed distant, filtered by the café’s acoustic walls.

    Simon contemplated the steaming teacup in his hands, an unusually pensive look on his face. “Amélie,” he began, his voice lower than usual. “All this… all that we see here, on Tessera. The way our peoples interact…” He searched for his words. “Sometimes, I wonder where all this is leading us. Is it… is it fair, this dynamic? This… global experiment?”

    Amélie tilted her head slightly, trying to grasp the deeper meaning behind Simon’s hesitation. She had noticed his growing melancholy lately. To her, his words evoked the inevitable tensions between a “colonizing” species like the Ludicores and the native inhabitants. “Shield Bearers” like Simon certainly had to navigate complex political currents on Cortex, worrying about the impact of their actions on Tessera.

    “I understand your concerns, Simon,” she said softly, placing her hand on his over the table, a simple gesture of comfort. “Life here is… eventful, because of some of yours. But there’s good too, right? Look at us. We manage to make a difference, however small.” She smiled. “You sound like one of your great diplomats or a Cortex philosopher, worried about the future of relations between our worlds. That must be a heavy burden to bear, I imagine.”

    She straightened slightly, her gaze sincere. “I can’t speak for everyone, of course, you should ask others… but for my part? Despite everything, I’m… happy, here. My life has meaning, a routine, and even excitement now, thanks to you. We have our problems, the Ludicores have theirs. But every people deserves to be able to live, don’t you think? As long as we can continue our existence…” Her voice trailed off, perhaps unconsciously thinking of that monthly notification that validated the continuation of her cycle.

    Simon looked up at her. There was a sadness in his eyes that Amélie didn’t quite understand, but also a form of resolution. He seemed to see the abyss between their perspectives, the unintentional irony of her words about the “right to live.” He nodded slowly. “Yes, Amélie. You’re right. Thank you.” He stood up. “I… I have to go now. I have things to take care of… on Cortex.”

    “Will you be back soon?” Amélie asked, accustomed to Ludicores disappearing as suddenly as they appeared.

    “I… I don’t know yet. Take care of yourself, Amélie.” And with those words, his silhouette shimmered and vanished, leaving her alone at the table with two cooling teacups. She remained for a moment, pensive, then shrugged. Another abrupt departure. That was also life with the visitors from Cortex. She cleared the table and returned to her work.

    Far away, on Cortex, in a room with bare walls except for complex, now-darkened screens, Simon removed his neural interface with a weary sigh. The cool touch of reality was almost brutal after the immersion. He sat for a long moment, the weight of his conversation with Amélie – and of his own unintentional deception – pressing down on him.

    Later, at dinner, his mother noticed his preoccupied air. “Your science project, Simon? Is it progressing?”

    He stirred his food distractedly. “Yes… it’s progressing. Too well, perhaps.” He looked up, his gaze meeting his mother’s. “The… the behavioral simulations of the Tessera inhabitants are incredibly realistic. Their internal logic, their way of finding satisfaction despite the chaotic environment we impose on them… It’s unsettling.” He hesitated, then added, “And since I shared the source code at school and everyone got involved… the responsibility is even greater. It’s not just my experiment anymore.” A small, joyless laugh escaped him. “Today, one of them… an inhabitant named Amélie… she told me about her happiness, her right to live… thinking I was worried about interplanetary politics. I don’t know if it’s ethical, Mom. To continue this ‘experiment.’ Even for science.”

    His mother listened attentively, her face suddenly grave. The question hung in the silence of the kitchen, far beyond the stars of Tessera visible on the darkened screens.

  • Running the Gauntlet:

    Running the Gauntlet:

    A Multi-Stage Refutation of the Standard Argument Against Free Will

    For a long time, the philosophical debate around Libertarian Free Will (LFW) has been dominated by the Standard Argument (SA). A cornerstone of the SA is often the simple, yet powerful, equivalence it draws: Indeterminism = Arbitrary Randomness. This equivalence leads directly to the conclusion that undetermined actions, being merely random, cannot be controlled or rational, thus undermining LFW. “The Gauntlet” (Version 1.2) is a comprehensive, multi-stage philosophical argument designed to decisively dismantle this foundational equivalence and, in doing so, expose the untenability of the Standard Argument in its most common form.

The Gauntlet’s power doesn’t come from a single insight but from its structured, synergistic sequence of challenges, built upon a solid theoretical grounding in both the Propensity Theory of Probability (PTP), an established theory of objective probability independent of the free will debate, and well-founded Reason-Responsive (RR) models often employed within LFW, particularly event-causal accounts.

    The Gauntlet’s Strategy: A Layered Approach

    The argumentation unfolds through a series of carefully sequenced challenges:

    1. Establishing Coherence and Relevance (Ch 0 & Intro Ch): The Gauntlet first establishes the crucial groundwork. Challenge 0 (“A Rough Start”) demonstrates the conceptual coherence of Non-Random Indeterministic Dispositions (NRIDs), showing via PTP and RR that indeterminism doesn’t have to be arbitrary. The Introductory Challenge (“Laying down the groundwork”) immediately links this to agency, arguing that only such a PTP+RR model can adequately account for Intentional Action Control (IAC), unlike deterministic PRNGs or purely stochastic QRNGs.
2. Securing the Definitions (Ch 1 & Ch 2): Challenge 1 (“Hostile Grounds”) isolates the intrinsically non-random nature of propensities themselves as stable dispositions. Building on this and the coherence of NRID (from Ch 0), Challenge 2 (“A Stronger Definition”) tackles the definition of indeterminism head-on. It argues compellingly that a definition based on alternative possibilities (SDa) is extensionally superior to one equating indeterminism solely with arbitrary randomness (SDb), precisely because SDb wrongly excludes coherent PTP and LFW models. The complementarity of Challenge 0 and Challenge 2 is key here, solidifying the case for taking structured indeterminism seriously both conceptually and definitionally. It seems to me that successfully refuting this definitional critique will prove exceptionally challenging.
    3. The Core Critique (Ch 3: The Final Blow): With the foundations laid, Challenge 3 delivers the central attack on the SA. It exposes the SA’s implicit reliance on the weak Commonplace Thesis (CT) via a Premise of Exclusivity (PE), evidenced by its dismissal of the viable PTP/LFW alternatives established earlier. This renders the SA guilty of begging the question or relying on an incomplete, unjustified basis.
    4. Consolidating the Critique (Ch 3.1 & Ch 3.2): These final challenges secure the gains. Challenge 3.1 (“A brutal wake-up”) formalizes the petitio principii charge against using the Standard Objection (SO) in this context, highlighting a move assessed as extremely difficult to counter logically. Challenge 3.2 (“Caught on the Horns”) demonstrates the resulting dilemma faced by critics when trying to attack specific, plausible event-causal LFW models based on their indeterminism, reinforcing the coherence of these models against standard attacks.

    Why the Gauntlet is Powerful and Resilient

    The effectiveness of this argumentative structure stems from several factors:

    • Synergy and Logical Flow: Each challenge builds logically on the conclusions of the previous ones, creating a cumulative force where the whole is greater than the sum of its parts.
    • Multiple Angles of Attack: The core “Indeterminism = Randomness” premise is assailed conceptually (PTP/NRID), agentively (IAC), definitionally (SDa vs. SDb), logically (critique of SA’s structure), and dialectically (petitio principii, dilemma).
    • Robust Foundations & Resilience: Grounded in both PTP and LFW/RR models, the Gauntlet possesses resilience. Even if critics focus on the application of PTP to agency, the arguments retain potential force by relying on the independent coherence of LFW/RR models, which Challenge 3.2 shows are themselves difficult to refute based solely on indeterminism.
    • Strong Defense Structure: The central argument (C3) is heavily fortified. The “upstream” challenges (C0-C2) establish the necessary premises about structured indeterminism and definitions. The “downstream” challenges (C3.1-C3.2) actively defend C3 by neutralizing the main counter-objection (SO) and demonstrating the resilience of LFW models.
    • Power Through Precision: Crucially, the Gauntlet derives immense strength from its focused and arguably modest conclusion. It doesn’t need to prove LFW is true or solve all associated problems. Its primary, stated goal is to refute the equivalence premise central to the Standard Argument. By achieving this decisively, it demonstrates that the SA, in its most common and influential form, is untenable. This targeted strike makes the conclusion highly robust and effectively shifts the burden of proof, demanding a much more sophisticated response from critics than simply repeating the traditional dilemma.

    Summary Table: The Gauntlet’s Structure

| # / Title | Main Goal / Conclusion | Key Dependencies (Previous Arg#) | Key Contributions (For Subsequent Arg#) |
| --- | --- | --- | --- |
| Ch 0: A Rough Start | Establish coherence of NRID via PTP+RR. | None (Uses PTP/RR concepts) | Concept of NRID; Possibility of Structured Indeterminism. |
| Intro Ch: Laying Groundwork | Show PTP+RR explains IAC better than PRNG/QRNG. | Ch 0 (NRID concept) | Agentive relevance of PTP+RR; Limits of other models. |
| Ch 1: Hostile Grounds | Affirm non-random nature of PTP propensities themselves as properties. | Intro Ch / Ch 0 (PTP context) | Isolates key property of propensities; Basis for C2, C3. |
| Ch 2: A Stronger Definition | Argue for SDa (possibilities) over SDb (arbitrary randomness) definition. | Ch 0/1 (for PTP coherence) | Validates SDa; Invalidates SDb; Secures definitional ground for C3. |
| Ch 3: The Final Blow | Critique SA via PE/CT and its exclusion of PTP/LFW. | Ch 0, 1, 2 | Main attack on SA; Justifies C3.1 & C3.2. |
| Ch 3.1: A brutal wake-up | Formalize petitio principii of using SO against structured indeterminism. | Ch 1, 2, 3 | Neutralizes SO; Defends C3 (coherence of alternatives). |
| Ch 3.2: Caught on the Horns | Present dilemma for critics attacking ECL indeterminism after SO neutralization. | Ch 1, 2, 3, 3.1 | Shows difficulty of attacking LFW; Defends C3. |

    In essence, The Gauntlet offers a clear, logical, targeted, well-founded, and multi-layered argument whose cumulative effect is to render the standard “Indeterminism = Randomness” objection against libertarian free will untenable, demanding a fundamental shift in the terms of the debate.

  • The Gauntlet

    The Gauntlet

    Arguments against the false equivalence


For too long, the Standard Argument (SA) against free will has relied on a false premise: that indeterminism equals randomness. Libertarians (event-causal, agent-causal, and non-causal) have developed coherent and robust theories and conceptions that should, by themselves, have been sufficient to show that non-random indeterministic choices and actions are possible in an indeterministic world.


    However, to avoid any potential risk of begging the question, it is crucial to show that the logical possibility of non-random indeterminism exists independently of libertarian premises. In this respect, the propensity theory of probability (PTP), established by Charles S. Peirce (1910/1978), then separately developed by Karl R. Popper (1959), offers an independent paradigmatic example. This theory has also been interpreted more recently by numerous philosophers of science, including J.H. Fetzer (1981), D.A. Gillies (1973/2010), and D. Miller (1995). According to this approach, probability is not merely an observed frequency or a degree of subjective belief, but an objective physical disposition, an inherent tendency or propensity within a given system or experimental setup. This propensity characterizes the tendency of that system to produce a certain type of outcome, considered as a real property of the generating conditions themselves. The very existence of this theory, developed entirely outside the philosophical debate on free will, guarantees that non-random indeterminism is an independent conceptual possibility, which definitively dispels any suspicion of circularity in our argument.

    To crystallize this foundational point with logical precision, the argument for the conceptual coherence of non-arbitrary indeterminism, drawing upon the independently established Propensity Theory, can be stated as follows:

Challenge 0: A Rough Start

    This argument establishes that indeterminism does not necessarily equate to mere arbitrary randomness. It proposes the coherence of indeterministic outcomes being guided by stable, inherent factors.

    Core Definitions:

• Indeterminism: In an indeterministic world, the same event could have had a different outcome. (This definition is defended later, in Challenge 2.)
    • Propensity Theory of Probability (PTP): Views probability as an objective, single-case disposition (a “propensity”) of a specific generating system to produce a particular outcome. These propensities can be indeterministic (i.e., their values lie between 0 and 1).
    • Arbitrary Randomness (AR): Refers to indeterministic outcomes that occur without any specific guiding principle, stable tendency, or reason beyond chance itself; they are haphazard or chaotic.
    • Non-Random Indeterministic Disposition (NRID): An objective propensity (as per PTP) for an outcome that, while indeterministic (not strictly determined), is specifically guided by stable, inherent factors or structures within the generating system, rather than being arbitrary.

    The Argument:

    1. P1) The Conceivability of Guided Indeterministic Propensities. It is conceptually coherent, within the framework of the Propensity Theory of Probability (PTP), to posit the existence of Non-Random Indeterministic Dispositions (NRIDs).
  • Illustration: Consider an agent’s decision-making mechanism that is Reason-Responsive (RR)—sensitive to reasons. If this mechanism operates indeterministically, it could manifest an NRID. For example, the agent might have a 60% propensity to choose action A (and 40% for action B) due to a stable set of reasons. This propensity is indeterministic (the outcome isn’t guaranteed) but guided by the agent’s reasons, not arbitrary. (A minimal simulation sketch of this contrast follows the argument below.)
    2. P2) Essential Distinction from Arbitrary Randomness. If an outcome arises from a Non-Random Indeterministic Disposition (NRID)—meaning it is indeterministic yet specifically guided by stable, inherent factors—then this mode of occurrence is, by definition, distinct from Arbitrary Randomness (AR), which lacks such guidance.
    3. C) Indeterminism Need Not Be Arbitrary. Therefore, since the concept of Non-Random Indeterministic Dispositions (NRIDs) is coherent (as illustrated by combining PTP with concepts like RR), indeterminism is not necessarily equivalent to Arbitrary Randomness.
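To make the illustration in P1 concrete, here is a minimal simulation sketch. The helper names (nrid_choice, arbitrary_choice) and the 60/40 propensity profile are taken from the illustration or invented for this example only; a seeded pseudo-random generator merely stands in for the indeterministic element, since the point is the difference in structure, not the physical source of the chance.

```python
import random

def nrid_choice(rng: random.Random) -> str:
    """Toy Non-Random Indeterministic Disposition (NRID): the outcome is not
    determined, but it is guided by a stable propensity profile grounded in
    the agent's reasons (0.6 for action A, 0.4 for action B, as in P1)."""
    return "A" if rng.random() < 0.6 else "B"

def arbitrary_choice(rng: random.Random) -> str:
    """Toy Arbitrary Randomness (AR): no stable guiding tendency beyond
    chance itself, modeled here as a flat 50/50 pick."""
    return rng.choice(["A", "B"])

if __name__ == "__main__":
    rng = random.Random(42)  # seeded only so the illustration is reproducible
    trials = 10_000
    nrid_freq = sum(nrid_choice(rng) == "A" for _ in range(trials)) / trials
    arb_freq = sum(arbitrary_choice(rng) == "A" for _ in range(trials)) / trials
    # Neither mechanism determines any single outcome, but only the NRID
    # exhibits a stable, reason-shaped tendency (about 0.6) across trials.
    print(f"NRID:      P(A) ~ {nrid_freq:.2f}")
    print(f"Arbitrary: P(A) ~ {arb_freq:.2f}")
```

The sketch only dramatizes P2: an outcome produced by the first mechanism occurs under a stable, reason-grounded disposition, which is by definition distinct from arbitrary randomness.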

To get a grip on what objective probabilities in PTP really are, let’s start easy:

Introductory Challenge (0.5): Laying down the groundwork

    Preamble: Challenge 0 established the conceptual coherence of Non-Random Indeterministic Dispositions (NRIDs) within the Propensity Theory of Probability (PTP). This current challenge now explores how PTP applies to different mechanisms when evaluating their capacity to account for a fundamental characteristic of agency: Intentional Action Control (IAC). IAC denotes the ability to intentionally initiate an action or sequence of actions, sustain and repeat them based on ongoing intentions, and voluntarily cease them according to the agent’s reasons and goals, all within a potentially indeterministic world. We use “Task A”—an agent, for example, choosing to repeatedly press a button a certain number of times and then voluntarily stopping—as a clear illustration of IAC.

    P1) Foundational PTP Applicability. The Propensity Theory of Probability (PTP) provides a framework for understanding objective probabilities inherent in various systems operating within an indeterministic world:

    • a) Pseudo-Random Number Generators (PRNGs): In a PTP context, given a fixed starting seed, their subsequent outputs can be described by propensities of 1 (for the next determined output) or 0 (for any other output).
    • b) Quantum Random Number Generators (QRNGs): The quantum events these devices utilize (e.g., photonic behavior, radioactive decay) are characterized by objective physical propensities (e.g., a 0.5 propensity for a specific bit outcome from an ideal quantum coin flip).
    • c) Reason-Responsiveness (RR) Mechanisms: An agent’s choices emerging from an RR mechanism, operating indeterministically, can manifest as objective propensities (NRIDs) that are shaped and guided by their deliberative framework of reasons, values, and intentions.

    P2) Evaluating Mechanisms against the Standard of Intentional Action Control (IAC). While PTP can describe underlying probabilistic elements in these mechanisms, their capacities to exhibit IAC (exemplified by performing Task A) differ critically:

    • a) PRNGs and IAC: PRNGs can produce repetitive outputs if so programmed. However, they lack the capacity for IAC; they cannot voluntarily initiate Task A based on a current intention, flexibly sustain it while being open to altering course, or intentionally decide to cease the task based on an internal goal or reason. Their operations are fixed by their algorithm and initial seed.
    • b) QRNGs and IAC: Although QRNGs are based on PTP-describable quantum propensities, they are designed to produce statistically random sequences. They possess no intrinsic mechanism for forming intentions, pursuing goals through sustained action like Task A, or exercising voluntary control over the initiation, continuation, or cessation of such a task. Replicating the controlled execution of Task A via a QRNG would be an outcome of extraordinary chance, not an exercise of capacity.
    • c) RR Mechanisms and IAC: RR mechanisms, when understood as NRIDs within a PTP framework, are uniquely suited to explain IAC. An agent’s stable reasons and intentions can ground robust propensities to initiate and continue Task A. Simultaneously, their responsiveness to reasons provides the capacity for ongoing guidance, re-evaluation, and the voluntary cessation of Task A when their intentions or reasons shift (e.g., the intended number of repetitions is reached, a new, overriding reason emerges). This demonstrates controlled, guided indeterminism in action.

    P3) The Explanatory Mandate for Models of Agency. A comprehensive model of agency, particularly within an indeterministic framework compatible with PTP, must be able to account for evident agentive capacities such as Intentional Action Control.

    Conclusion (C): Therefore, while PTP offers a fundamental language for describing objective probabilities across various systems (PRNGs, QRNGs, and RR mechanisms), it is only when PTP is integrated with Reason-Responsiveness (conceptualized as NRIDs) that we find an adequate explanation for Intentional Action Control (as seen in Task A) within an indeterministic world. This reveals a crucial distinction: RR mechanisms demonstrate a PTP-compatible capacity for guided and controlled indeterminism that is essential for agency. Models that attempt to reduce indeterministic choice merely to the determinism of PRNGs or the unguided statistical randomness associated with QRNGs are therefore insufficient to capture this fundamental aspect of agentive behavior.
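As a minimal sketch of the contrast drawn in P2 (all function names, the 0.9 propensity value, and the stopping rule are illustrative assumptions, not part of the argument), the toy code below runs “Task A”: press a button an intended number of times, then stop. Pseudo-randomness again merely stands in for the indeterministic element.

```python
import random

def prng_mechanism(seed: int, steps: int) -> list[str]:
    """PRNG: given a fixed seed, every output is settled in advance
    (propensity 1 for the next output, 0 for any other). No intention,
    no voluntary initiation or cessation."""
    rng = random.Random(seed)
    return ["press" if rng.random() < 0.5 else "idle" for _ in range(steps)]

def qrng_like_mechanism(steps: int) -> list[str]:
    """QRNG-like source: each output is an unguided 50/50 event.
    Statistically random, with no goal to sustain and no reason to stop."""
    return ["press" if random.random() < 0.5 else "idle" for _ in range(steps)]

def rr_mechanism(intended_presses: int, max_steps: int) -> list[str]:
    """Reason-responsive mechanism modeled as a toy NRID: a strong but not
    certain propensity to press while the intention is unsatisfied, and a
    voluntary cessation once the agent's goal (the reason to act) is met."""
    history: list[str] = []
    presses = 0
    for _ in range(max_steps):
        if presses >= intended_presses:   # reason to stop: goal reached
            history.append("stop")
            break
        if random.random() < 0.9:         # stable, reason-grounded propensity
            history.append("press")
            presses += 1
        else:
            history.append("pause")       # undetermined, yet still guided
    return history

if __name__ == "__main__":
    print("PRNG:      ", prng_mechanism(seed=7, steps=8))
    print("QRNG-like: ", qrng_like_mechanism(steps=8))
    print("RR, Task A:", rr_mechanism(intended_presses=5, max_steps=20))
```

Only rr_mechanism exhibits the three marks of IAC named in the preamble: initiation from an intention, sustained repetition while the intention holds, and voluntary cessation once the reason to continue lapses; the other two either fix their output in advance or leave it wholly unguided.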

Challenge 1: Hostile Grounds

    • P1) In an indeterministic world, the propensity theory of probability (PTP) is a relevant interpretation. “[P]ropensity interpretations regard probabilities as objective properties of entities in the real world. Probability is thought of as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long run relative frequency of such an outcome.” [2]
• P2) As properties in themselves, all dispositions, including the propensities defined by PTP, are intrinsically never random: a disposition is a stable tendency of the generating system, not a haphazard occurrence.
    • C) Therefore, in an indeterministic world where PTP applies, propensities (objective probabilities) are intrinsically not random.

[2] Hájek, Alan, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Winter 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), URL = https://plato.stanford.edu/archives/win2023/entries/probability-interpret/.

Challenge 2: A Stronger Definition

    • P1): There are two standard definitions (SD) in contention regarding indeterminism:
      ○ SD(a): In an indeterministic world, the same event could have had a different outcome.
      ○ SD(b): An indeterministic world is a world where ontologically random events can occur.
    • P2): SD(a) is compatible with all known forms of indeterminism: Libertarian approaches to free will (LFW), probabilistic approaches such as PTP, and approaches of ontological randomness such as in quantum mechanics (QM).
    • P3): Ontological randomness (as defined in SDb) is incompatible with LFW and Propensity Theory of Probability approaches.
    • P4): The criterion of extensional adequacy for a definition states: “A definition is extensionally adequate iff there are no actual counterexamples to it.”
    • P5): SD(a) has no actual counterexamples.
• P6): SD(b) has actual counterexamples (notably, if LFW and PTP approaches represent valid forms of indeterminism, they would constitute such counterexamples because they do not involve ontological randomness according to P3).
    • C): Therefore, SD(a) is the only definition [of the two presented] that is extensionally adequate.
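Rendered schematically (a sketch of the argument’s logical form only), writing EA(D) for “definition D is extensionally adequate” and CE(x, D) for “x is an actual counterexample to D”:

```latex
\begin{align*}
\text{(P4)}\quad & \mathrm{EA}(D) \leftrightarrow \neg\exists x\,\mathrm{CE}(x, D)\\
\text{(P5)}\quad & \neg\exists x\,\mathrm{CE}(x, \mathrm{SDa})\\
\text{(P3, P6)}\quad & \exists x\,\mathrm{CE}(x, \mathrm{SDb})
  \quad\text{(e.g. LFW or PTP, if they are genuine forms of indeterminism)}\\
\text{(C)}\quad & \mathrm{EA}(\mathrm{SDa}) \land \neg\mathrm{EA}(\mathrm{SDb})
\end{align*}
```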

Challenge 3: The Final Blow

    • P1): The standard argument against libertarian free will rests on a dilemma opposing determinism and “randomness”, presupposing an equivalence between indeterminism and randomness.
• P2): The Premise of Exclusivity (PE): The Standard Argument (SA) against free will must be understood as implicitly resting on the Commonplace Thesis (CT) [or an equivalent intuition], because its rigid dichotomous structure (determined vs. random-uncontrolled) ignores or excludes outright the possibility of alternative models of structured indeterminism, such as those proposed by the Propensity Theory of Probability (PTP), in which the agent could act via objective dispositions and probabilities. This exclusion is logically grounded only if one presupposes, as CT does, that any form of non-determination (chance/randomness) necessarily equates to randomness incompatible with agentive control. (A schematic sketch of this structure follows the argument below.)
    • P3): Moreover, the Commonplace Thesis is itself also anchored in strong common intuition and tradition.
    • P4): The Premise of Exclusivity is anchored in CT, which in turn is anchored in strong intuition and tradition.
    • C) Consequently, the Standard Argument is incapable of conclusively refuting libertarian free will. By relying on a Premise of Exclusivity (PE/CT) that is intuitive but undemonstrated, and which excludes a priori models of structured indeterminism (PTP or LFW), the SA commits a petitio principii (begging the question) or, at best, rests on an incomplete basis. The burden of proof is thus reversed: defenders of the standard argument must justify this exclusion, while logical space is reopened to explore libertarian models based on PTP that could reconcile indeterminism and agentive control.
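As flagged in P2, the structure criticized here can be laid out as a schematic sketch, with D for “the action is determined”, R for “the action is arbitrarily random and uncontrolled”, and F for “the action is free in the libertarian sense”; the Premise of Exclusivity is the step marked (PE):

```latex
\begin{align*}
\text{(1)}\quad & D \lor \neg D\\
\text{(2)}\quad & D \rightarrow \neg F\\
\text{(PE)}\quad & \neg D \rightarrow R
  \quad\text{(the hidden step: indeterminism implies arbitrary randomness)}\\
\text{(3)}\quad & R \rightarrow \neg F\\
\text{(C)}\quad & \therefore\; \neg F
\end{align*}
```

Structured-indeterminism models (PTP-grounded NRIDs, event-causal LFW) deny precisely (PE); unless (PE) is argued for independently, the SA’s conclusion inherits the weakness of that undefended step.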

Consequent Challenge 3.1: A brutal wake-up (The Petitio Principii of the Standard Objection)

    • P1) The preceding arguments (1-3) establish the conceptual coherence and relevance of structured indeterminism models (e.g., PTP, LFW) that are distinct from simple randomness/arbitrariness (as encapsulated notably in P3 of Argument 2, which denies the universal applicability of SDb). Let us denote this position P, which implies Not (Indeterminism ⇒ Simple Randomness/Arbitrariness).
• P2) The Standard Objection (SO) against libertarianism fundamentally uses the inference chain: Indeterminism ⇒ Simple Randomness/Arbitrariness ⇒ Lack of Control/Explanation. The SO therefore crucially presupposes the equivalence or implication: Indeterminism ⇒ Simple Randomness/Arbitrariness. This is exemplified by Randolph Clarke’s description (2003):

    An undetermined action, it is said, would be random or arbitrary. It could not be rational or rationally explicable. The agent would lack control over her behavior. At best, indeterminism in the processes leading to our actions would be superfluous, adding nothing of value even if it did not detract from what we want. (p.1)

    • P3) Attempting to refute a position (P) using an argument (SO) whose essential premise (Indeterminism ⇒ Simple Randomness/Arbitrariness) is equivalent to (or directly presupposes) the negation of the main thesis of that position (Not (Indeterminism ⇒ Simple Randomness/Arbitrariness)) constitutes a petitio principii (begging the question).
    • C) Consequently, using the Standard Objection (SO) to directly refute the coherence of structured indeterminism (P, supported by Arguments 1-3 and affirmed by P3 of Arg2) begs the question and is therefore dialectically invalid in this specific context. The SO is thus neutralized as a tool for direct refutation of this position.
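The circularity charge of P1–P3 can be compressed into a schematic sketch, writing Q for the premise “Indeterminism ⇒ Simple Randomness/Arbitrariness” and P for the position defended by the preceding challenges:

```latex
\begin{align*}
\text{(P1)}\quad & P \;\equiv\; \neg Q\\
\text{(P2)}\quad & \text{SO relies on } Q \rightarrow \text{lack of control/explanation, so SO presupposes } Q\\
\text{(P3)}\quad & \text{an argument against } P \text{ whose essential premise is } \neg P \text{ begs the question}\\
\text{(C)}\quad & \text{since } Q \equiv \neg P,\ \text{using SO to refute } P \text{ is a petitio principii}
\end{align*}
```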

Consequent Challenge 3.2: Caught on the Horns

    • P1: Influential models of Event-Causal Libertarianism (ECL) largely share their structure (reasons-causation, rationality, non-coercion, etc.) with standard Compatibilist models, differing primarily by adding a requirement of indeterminism in the causal pathway to action. This structural similarity is noted by key authors. [3]
• P2: As established in Argument 4 (Challenge 3.1), using the Standard Objection (Indeterminism = Randomness = Incoherence) to directly attack the indeterminism requirement of these ECL models commits a petitio principii against the possibility (defended in Arguments 1-3) of structured (non-R/A) indeterminism.
    • P3: A critique aiming to show these specific ECL models (described in P1) are incoherent due solely to their distinctive feature (indeterminism) must logically target either: (a) the features shared with compatibilism, or (b) the indeterministic feature itself.
    • P4: Targeting the shared features (Horn 1 / P3a) is dialectically problematic: it likely undermines widely accepted compatibilist positions and, more importantly, fails to engage with the distinctive indeterministic element of the ECL models in question.
    • P5: Targeting the indeterministic feature itself (Horn 2 / P3b) faces significant hurdles: (a) Employing the Standard Objection based on ‘Indeterminism = Randomness = Incoherence’ is question-begging (per P2). (b) Demonstrating that non-random indeterminism (whose coherence Arguments 1-3 defend) specifically generates incoherence or loss of control within the otherwise compatibilist-like structure requires a novel and detailed argument, shifting the burden of proof to the critic. (c) Arguing that indeterminism/PAP (Principle of Alternative Possibilities) (even if coherent and non-random) is irrelevant or insufficient for moral responsibility shifts the debate away from the coherence of the ECL model itself and onto the conditions for MR (e.g., via Frankfurt-style arguments).
    • C: Therefore, critics wishing to reject these specific ECL models based solely on their incorporation of indeterminism are caught in a difficult dilemma: Horn 1 is off-target or self-undermining; Horn 2 either begs the question (2a), requires discharging a significant new burden of proof (2b), or changes the subject from coherence to relevance for moral responsibility (2c). The standard, simple objection based on ‘Indeterminism = Randomness = Incoherence’ is thus shown to be insufficient against these ECL models.

[3] Examples of two models:

    Let us start with a sketch of a rather simple view of this type. It employs an event-causal theory of action. And it imposes, for free action, the very same requirements as do many good compatibilist accounts (for compatibilist accounts, at least in recent times, do not typically require determinism). It differs from compatibilist views primarily just by also requiring, in order for an action to be directly free, that certain agent-involving events (such as the agent’s having certain beliefs and desires and a certain intention) that cause the action must nondeterministically cause it. (Randolph Clarke, 2003, p. 29)

Directly Free Action as Action Indeterministically and Non-deviantly Caused by Reasons of the Agent’s Own: A decision or other act is directly free just in case it is caused non-deviantly and indeterministically by reasons of the agent’s — such as convictions, desires, values, beliefs and preferences — and other reasonable compatibilist conditions on free action are met, including that the act is not compelled and is not the result of (non-self-arranged) manipulation or coercion. An agent’s performing a directly free act requires that it be open to her at the time not to perform that action, either by performing an alternative act right then or by not performing any action at all right then. (Laura Ekstrom, 2016, p.137)