Introduction
In a recent Boston Globe op-ed, two researchers proposed a linguistic fix to an ontological dilemma: rename our relationships with AI. Rather than referring to generative systems as “coworkers” or “collaborators,” they suggest we replace the human “co-” with a machine-coded “c0-”: c0worker, c0creator, c0mpanion. The goal is to reassert the boundary between human and machine by embedding it in our language. It's a kind of clarity via typography.
It’s a clever gesture, but a telling one. Faced with the entangled realities of cognitive labor shared across human and machine systems, the instinct is not to inquire but to quarantine. The prefix becomes a kind of firewall. We’re no longer debating what AI is or does; we’re drawing thicker lines around who gets to count as meaning-making. I am all for developing new language to meet new circumstances. But this language works to reassert a very old divide in very reductive terms. It reduces thought instead of expanding its field and scope of articulation.
The persistence of the binary—human as subject, machine as tool—has less to do with silicon circuits than with the sediment of a philosophical tradition that still hasn’t quite gone out of fashion. Fear plays its part—see my musings elsewhere on the 'Monstrous Virtue of AI'—but the real culprit is conceptual inertia. We remain in thrall to a humanist metaphysic in which cognition is tied to consciousness and meaning to interiority: thought floats somewhere behind the eyes, like a soul in a cathedral. When AI produces language that smacks of intelligence, we either anthropomorphize it or, in a fit of existential hygiene, strip it of any significance at all. The fault, dear reader, lies not in our machines, but in our models of meaning. Our frameworks for interpretation simply haven’t kept up.
This is precisely the terrain N. Katherine Hayles has been navigating for decades, dismantling the creaky apparatus of human-versus-machine and replacing it with a more supple architecture—one that actually takes thinking seriously. The real scandal, if one can call it that, is not that Hayles challenges the binary, but that the binary persists in spite of her. What she offers is not a verdict on whether machines “really” think—a theological question in technological drag—but a shift in the question itself. What kinds of cognition are already unfolding? And what does our insistence on guarding the gates of thought say about the politics of authorship, the epistemology of panic, and our lingering nostalgia for minds that wear faces?
When we stop fussing over whether machines can think and start asking how meaning actually comes into being, the debate begins, at last, to show some signs of life. What’s at stake in machine cognition isn’t the spectacle of mimicry or the sci-fi melodrama of human obsolescence, but the rather more awkward question of how threadbare our own ideas about thought have become. The real drama here lies not in the triumph of artificial minds but in the slow unraveling of the metaphysical furniture we’ve been sitting on since Descartes. AI, in this sense, is less a Promethean breakthrough than a glorified parlor trick that leaves us scandalized not by what it can do, but by what it reveals about the paltry ambitions we’ve long mistaken for thought. If the machine unsettles us, it’s not because it trespasses on sacred ground, but because it starts to look suspiciously like us—verbose, overconfident, and not entirely sure what it means half the time.
Taking Hayles seriously requires that we stop inspecting our machines like overzealous customs officers, searching for smuggled traces of consciousness. The real ethical question isn’t whether AI is really thinking, but which cognitive routines our systems quietly enshrine—and which they discard as inefficient, unmonetizable, or insufficiently aligned with quarterly projections. Far from ushering in a new epoch of reason, our current architectures mostly excel at laundering certain forms of intelligence into legitimacy while leaving others to rot in the unparseable margins.
Binary Dead Ends
Like all respectable binaries, this one survives by staging a quarrel with the very thing it secretly relies on. “Human” cognition earns its pedigree by casting out anything that smacks of machinery—habit, repetition, the unseemly mechanics of thought—while the “machine,” poor thing, is defined chiefly by its alleged lack of all the traits we’re not quite sure we possess ourselves: agency, meaning, the occasional emotion. This isn’t so much an insight as a parlor trick. The opposition doesn’t describe a pre-given reality; it fabricates one, chops the world into two tidy categories, and then feigns surprise when those categories turn out to fit a little too well. And so the conversation marches grimly on, forever scandalized by its own premises: overlap becomes heresy, ambiguity a moral panic, and anything resembling integration is hauled before the tribunal for treason against definition. The real achievement of the binary is not resolving the problem—it’s keeping it alive just long enough to be endlessly misunderstood.
In simple terms, binary logic, for all its swagger, has the intellectual dexterity of a coat rack. The human is either ineffably deep or hopelessly replaceable. The machine is either dead inside or secretly planning to unionize. We ricochet between these positions like philosophers trapped in a hall of mirrors, pausing occasionally to announce that language is broken before producing another thousand words to prove it. The problem isn’t that these binaries are false; it’s that they’re so astonishingly boring. They resolve nothing, clarify less, and serve mainly to keep everyone too busy to notice that the categories themselves were never that stable to begin with.
Hayles's Theory of Cognition
N. Katherine Hayles defines cognition as “a process that interprets information in contexts that connect it to meaning.” Each part of this definition is essential, and the meaning of the whole depends on understanding how these parts relate.
First, cognition is described as a process. This means it is not a fixed property that a system possesses, but an activity—a sequence of operations that unfold over time. A process involves change, interaction, and responsiveness. It is something a system does, not something it simply has.
Second, this process involves interpreting information. To interpret information is to take in data or signals and do something with them—not just to receive them, but to respond to them in a way that is selective. Interpretation means that the system identifies which aspects of the information are relevant, and in doing so, distinguishes what matters from what does not.
Third, this interpretation happens in contexts that connect it to meaning. In other words, the system does not interpret information in isolation. It does so within a specific context—a situation, environment, or structure—that shapes how the information is understood. It is this connection to context that allows the interpretation to produce meaning. Without context, information is just noise. With context, it can be meaningful.
Together, these three parts describe cognition as an interpretive activity that is shaped by context and directed toward meaning. This definition does not require consciousness, language, or human-like awareness. It applies to any system—biological, technical, or hybrid—that is capable of interpreting information in context. A bacterium moving toward nutrients, a human responding to speech, and a machine learning model adjusting its internal weights are all examples of systems that may be understood as engaging in cognition under this definition, so long as their activity meets these three conditions.
This account of cognition allows us to analyze a wide range of systems in terms of how they generate meaning through context-sensitive interpretation, without reducing cognition to either human experience or computational function.
Applying Hayles’s Definition: A Predictive Text Example
To see how this definition works in practice, take a predictive text system—the kind embedded in your email or smartphone keyboard. Beneath its banal utility lies a series of interpretive decisions that, in Hayles’s terms, exemplify how cognition can unfold without consciousness. The walkthrough below shows how each part of the definition operates in practice.
Consider a phone that suggests the next word after a user types “Looking forward to seeing…”
- The process: The predictive text system engages in a process. This means that it is not producing a fixed output from a static rule. Rather, it carries out a sequence of operations over time in response to input. The system updates, re-evaluates, and adjusts its activity based on prior input and internal state. It is this dynamic and responsive activity that constitutes the process.
- The information: The information in this case is the user’s input: the phrase “Looking forward to seeing.” The system receives this as data and uses it as the basis for its next action. The specific sequence of words constitutes the informational content that the system works with.
- The interpretation: The system interprets the information by selecting from among many possible continuations. Based on its prior training, it identifies some possible next words as more likely or more relevant than others. For instance, it may prioritize “you” over “them,” “everyone,” or “nothing.” This is not just registration of input; it is an act of filtering and prioritizing—deciding which parts of the information are relevant to its current action. That act of selection is the interpretation.
- The context: The context is the specific situation in which the system interprets the information. In this example, it includes factors such as the user’s past writing behavior, the type of application (e.g., text messaging vs. email), and the linguistic patterns the model has been trained on. These conditions influence how the information is received and what the system treats as relevant. For instance, if the user often types casual messages, the system might prioritize informal suggestions like “you” over more formal alternatives. The context does not determine meaning on its own, but it shapes how the interpretation connects to meaning. The system’s interpretation happens within, and in response to, this context.
- The connection to meaning: The suggestion generated by the system becomes meaningful because it is used by the person composing the message. The suggestion may be accepted, modified, or ignored, but in all cases, it participates in shaping communication. The interpretation connects to meaning through the function it performs in that interaction. The meaning is not in the word itself, but in the role it plays in the context of use.
This example illustrates that cognition, as Hayles defines it, does not require consciousness or intention. What matters is that the system carries out a process, interprets information, and does so in a context that connects that interpretation to meaning. This framework makes it possible to identify cognition across a wide range of systems—human and nonhuman—without collapsing the differences between them.
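For readers who prefer to see the skeleton, the structure can be caricatured in a few lines of code. The sketch below is not how any actual keyboard works: the candidate words, the weights, and the suggest_next_word function are all invented for illustration. But it makes the key move visible: interpretation as selection, with context shifting which candidates count as relevant.

```python
# Toy predictive-text sketch: interpretation as context-weighted selection.
# All data here (candidates, scores, contexts) is invented for illustration.
# A real system would also condition the candidate scores on the typed prefix;
# here they are fixed to the example phrase "Looking forward to seeing".

BASE_SCORES = {"you": 0.40, "everyone": 0.25, "them": 0.20, "nothing": 0.05}

# Context shifts which continuations count as relevant.
CONTEXT_BOOST = {
    "casual": {"you": 0.15, "everyone": 0.05},
    "formal": {"everyone": 0.20, "you": -0.05},
}

def suggest_next_word(prefix: str, register: str) -> str:
    """Rank candidate continuations of `prefix` against the current register."""
    boosts = CONTEXT_BOOST.get(register, {})
    scored = {word: score + boosts.get(word, 0.0) for word, score in BASE_SCORES.items()}
    return max(scored, key=scored.get)  # the act of ranking is the interpretation

print(suggest_next_word("Looking forward to seeing", "casual"))  # -> you
print(suggest_next_word("Looking forward to seeing", "formal"))  # -> everyone
```

The arithmetic is beside the point; what matters is the ranking. The same prefix yields different suggestions because the context, not the signal alone, decides what counts as relevant.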
Understanding Cognitive Assemblages
Building on this framework, Hayles introduces the concept of the cognitive assemblage: a configuration in which multiple agents engage in interpretive activity shaped by context and directed toward meaning. These agents may be human, technical (including artificial systems), or nonhuman organic—plants, animals, microbial life—so long as they meet the criteria for cognition. That is, they must engage in a process that interprets information in contexts that connect it to meaning.
To understand this more fully, it is important to define what an assemblage is. An assemblage refers to a contingent, heterogeneous configuration of interacting elements—material, symbolic, organic, and technical. Assemblages are open-ended and non-totalizing: the components retain their distinct identities and operate with different affordances, even as they influence one another. Assemblages are not reducible to systems. A system typically implies boundary, hierarchy, and coherence. An assemblage has none of these guarantees. It is structured by relations, not by function.
A cognitive assemblage, then, is a configuration in which multiple agents—human, technical, or nonhuman organic—engage in cognition. These agents differ in form and capacity, but each participates in the interpretive activity of connecting information to meaning within context. Meaning does not emerge from any one agent alone, but from the interactions and feedback loops among them.
Hayles emphasizes that cognitive assemblages are characterized by emergence. Emergent properties arise from the dynamics of the whole but are not traceable to any single part. In the case of cognition, meaning is an emergent product of recursive, context-sensitive interpretation across agents. For example, a forest ecosystem may involve plant roots, fungal networks, soil bacteria, and moisture sensors in an agricultural monitoring system. Each of these may engage in interpretive activity based on environmental signals, and the resulting behavior—say, nutrient exchange or adaptive irrigation—is not the output of one cognitive subject, but of the assemblage as a whole.
This framework shifts our focus. Instead of asking whether a given agent is “really” thinking, we ask how interpretive activity unfolds within an assemblage, and what the consequences of that activity are. It also clarifies that cognition is not confined to minds, machines, or organisms. It happens in networks of relation—distributed across embodied, organic, technical, and material agents that co-produce meaning in shared contexts.
Example: A Smart Agricultural System as a Cognitive Assemblage
Consider a precision agriculture system designed to monitor and optimize soil conditions for crop growth. This system involves several components:
- Moisture sensors and nutrient monitors embedded in the soil,
- Root systems of plants, which adjust growth patterns in response to water and nutrient gradients,
- A fungal mycorrhizal network, which mediates nutrient exchange between plants,
- A machine learning model, which predicts irrigation needs based on sensor data and environmental variables,
- A human operator, who oversees the system, interprets alerts, and adjusts planting or watering strategies based on outputs.
Let’s analyze this assemblage according to Hayles’s definition of cognition. Nearly every component engages in cognition under her criteria—though in radically different ways. A toy code sketch of the whole loop follows the list.
- The plant root system interprets chemical gradients in the soil. In response to information about moisture and nutrient concentration, it alters its growth direction. This is a process of interpretation: the root selects what matters (e.g., nitrogen levels), and its growth behavior changes in context. It connects information to meaning through adaptive response.
- The fungal network similarly interprets chemical signals from plants and the surrounding soil. It redirects resources or alters exchange patterns depending on what it detects. Again, the signal is interpreted within a biological context, and the result is meaningful coordination.
- The sensors detect moisture levels and convert them into data. On their own, these sensors are not cognitive—they register input, but do not interpret. But…
- The machine learning model receives that information and interprets it in relation to a learned environmental model. It classifies the soil as too dry or nutrient-deficient and adjusts its output accordingly (e.g., triggering irrigation). This process is shaped by prior training, threshold settings, and environmental variables—all of which constitute context. The model selects relevant information and connects it to a meaning-bearing action.
- The human operator interprets the system’s output—e.g., a recommendation to irrigate—and weighs it against other factors: weather forecasts, crop cycles, or experience-based judgment. The human’s cognitive activity is embedded in institutional and environmental contexts that also shape what the information means and how it is used.
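Viewed as a loop rather than a pipeline, the assemblage can be sketched crudely in code. Everything below is hypothetical: the threshold, the readings, and the decide_irrigation and operator_review functions are invented for illustration. The point is only that interpretation occurs at more than one site, and that each site's output becomes part of the next site's context.

```python
# Toy sketch of a precision-agriculture assemblage as a loop of interpretations.
# All values, thresholds, and function names are hypothetical.

def decide_irrigation(moisture: float, forecast_rain_mm: float, threshold: float = 0.30) -> bool:
    """Model-side interpretation: classify the soil state relative to learned context."""
    expected = moisture + 0.01 * forecast_rain_mm  # naive stand-in for a trained model
    return expected < threshold                    # "too dry" is a relational judgment

def operator_review(recommendation: bool, crop_stage: str) -> bool:
    """Human-side interpretation: weigh the model's output against other contexts."""
    if crop_stage == "pre-harvest":
        return False  # operator overrides: late watering would harm this crop
    return recommendation

moisture_reading = 0.22  # the sensor registers a value, but does not interpret it
recommendation = decide_irrigation(moisture_reading, forecast_rain_mm=5.0)
action = operator_review(recommendation, crop_stage="vegetative")
print("irrigate" if action else "hold")  # the meaning emerges from the whole loop, not one node
```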
Interpretation as Selection: Cognitive Agency
Hayles’s definition of cognition emphasizes that it is not an internal property of conscious minds, but a functional process. This redefinition foregrounds interpretation as central to cognitive activity. But what exactly does it mean to interpret information? And what kind of agency does this process involve? It is here that Hayles departs sharply from familiar models of cognition grounded in consciousness, intention, or rational choice. In her account, interpretation does not require awareness, deliberation, or language. What it requires is selection: the capacity to treat some signals as relevant and others as noise, in ways that affect how future signals will be received. This, too, is agency—though not always in the form we expect.
For Hayles, interpretation isn’t just pattern recognition. It’s selection. And selection, despite sounding like something you do in a wine shop, entails agency. Not the chest-thumping variety in which a rational subject triumphantly makes choices, but the less glamorous kind where a system quietly reshapes the conditions under which its next move will make sense. To interpret is to decide what gets through the filter, what gets ignored, and what gets translated into action—often without so much as a memo to consciousness.
Cognition, in this light, is not about heroically processing data like some Cartesian spreadsheet manager. It’s about deciding what even counts as data in the first place. When a bacterium alters its trajectory in response to chemical signals, or when a machine-learning model adjusts a few billion weights after misidentifying a sheepdog, what we’re seeing isn’t just reaction. It’s a shift in how the next signal will be received. The system hasn’t just noticed something—it’s changed what counts as noticeable. That’s agency, albeit the kind that wouldn’t make for a very compelling movie.
This is Hayles’s point: cognition doesn’t wait for a spotlight and a soliloquy. It’s already in motion, shaping salience, tuning the volume on what matters, brushing aside the rest. Interpretation isn’t decoration; it’s world-building. And because this happens recursively—each interpretive act nudging the next—agency ends up smeared across the entire system. Meaning doesn’t drop from the sky. It condenses out of a long sequence of context-sensitive guesses, each one making the next a little less random.
Which is why cognition isn’t the same as behavior, output, or the ability to answer trivia questions. It’s the capacity to sift relevance from noise in a way that changes the game going forward. And that selection doesn’t need consciousness; it just needs consequences. Agency, here, isn’t about asserting your will—it’s about re-tilting the playing field so that the next move falls differently. Quiet, unspectacular, and completely indispensable. Like a good editor.
Recursivity in Cognitive Assemblages
Within a cognitive assemblage, interpretation is never singular. Each technical cognizer—whether neural network, language model, or auto-tagging algorithm—does not simply receive information and spit back answers like a digital oracle. It reshapes the context through which the next interpretive act must move. One model modulates a prompt, the prompt reorients a user’s query, the user adjusts their language, and the system shifts its probabilistic terrain accordingly. Every output alters the conditions of future inputs. Meaning becomes less a destination than a recursive choreography of transformation—performed not by individuals, but across a shifting, multi-agent ecology of sense.
This is what it means to treat cognition as distributed and co-emergent. Not to declare that machines have minds, or that humans don’t—but to recognize that meaning is always assembled, always provisional, always caught in the act of becoming. It is not given in advance, and it is not generated in solitude. It arises through the uneven, recursive translation of signals into structures of sense, each act of interpretation bending the assemblage anew. If that sounds suspiciously like thinking, that’s only because we’ve spent centuries calling thinking something it never quite was.
Cognition Without Consciousness
For those accustomed to locating cognition somewhere between the ears and behind the eyes, Hayles offers a subtle but radical provocation: what if most of it happens elsewhere? In Unthought, she defines nonconscious cognition as “cognition that occurs in the absence of consciousness but is nonetheless intentional, flexible, and capable of adapting to changing environments.” This is not the Freudian unconscious, seething with repression and sublimation, nor is it the Cartesian cogito, busy congratulating itself for having thoughts. It is something stranger: cognition without subjectivity, thought without thinker, responsiveness without reflection.
This form of cognition unfolds through fast, low-level processes—sensorimotor routines, affective modulation, environmental attunement—that interpret and respond to stimuli before awareness kicks in. It is not organized around symbols or representations, but around relational responsiveness. “The fast, low-level processes that filter stimuli before they crowd the stage of awareness,” as Hayles puts it, are not just preconditions for thought—they are its infrastructure.
Crucially, this nonconscious cognition is not marginal or auxiliary. It is foundational. It underwrites all higher-level reasoning and operates across a wide spectrum of entities—biological, mechanical, and hybrid. Once cognition is decoupled from consciousness, it becomes possible to recognize its operations in all sorts of unorthodox locations.
What Hayles’s framework unlocks—quietly at first, like a polite cough in the back of a crowded lecture hall, then with the slow inevitability of a bureaucratic error—is the realization that cognition does not begin with the brain, nor end with the chip. Cognition becomes an emergent property of systems that interpret information in context and connect it to meaning—not through language or logic, but through modulation, selection, and adaptation.
She identifies three principal domains of nonconscious cognition, each of which challenges anthropocentric accounts of intelligence:
Embodied Nonconscious Cognition
This includes sensorimotor adjustments, affective responses, and physiological interpretation. When your body stiffens in response to a sudden noise, or your breath slows as you settle into a chair, you are not enacting conditioned reflexes. You are performing embodied interpretation: differentiating signals, modulating states, and enacting meaning through posture, tension, and orientation. Most of this happens without your permission—and it happens better that way.
Technical Nonconscious Cognition
Machines may not dream of electric sheep, but they do interpret inputs and generate adaptive outputs. A thermostat modulating room temperature or a language model generating prose are not conscious, but they are cognitive by Hayles’s definition. They process information, weigh probabilities, and select from among possibilities based on contextual criteria shaped by their training, design, and architecture. These are not symbolic acts of understanding—they are indexical acts of selection.
Biological Nonconscious Cognition Beyond the Nervous System
Cognition isn’t reserved for creatures with brains. Plants adjusting to light gradients, fungi modulating growth to nutrient availability, bacterial colonies adapting through chemical signaling—all perform interpretive labor. Even within animals, cognition occurs in places far from the cortex: immune systems evaluate threats, gut flora regulate systemic conditions, and cellular systems repair tissue based on distributed criteria. These are not metaphorical minds. They are operational cognitive systems, grounded in context-sensitive interpretation.
Nonconscious cognition is not a defective version of thought—it’s a distinct form of interpretation, often foundational, but not subordinate. It does not evolve into consciousness. It operates alongside it, beneath it, and—sometimes—without it entirely. Across human bodies, machine systems, and distributed biological assemblages, cognition emerges not from introspection, but from the recursive interpretation of information in relation to context. For Hayles, meaning doesn’t wait for consciousness to show up. It begins in motion—in the recursive calibrations of systems that register, sort, and respond before awareness even knows what it’s looking for.
Embodied Nonconscious Cognition
The human body, for one, carries on with its interpretive business long before the self stirs itself to take credit. Your arm adjusts mid-reach for a cup nudged three inches to the left, your vestibular system calibrates your balance while stepping off a curb, and your pupils dilate at the sight of an oncoming threat before you consciously register fear. These aren’t idle reflexes; they’re acts of interpretation embedded in sensorimotor routines, tuned by context, and historically inflected by embodied experience. They don’t wait for narrative coherence. They act. And among the most immediate of these interpretive acts is affect.
In Hayles’s framework, affect is not a mood, nor a garnish on rationality. It is cognition in a nonpropositional key. Affect arises when the body interprets its environment—not by thinking about it, but by registering its salience. A sharp intake of breath in a tense meeting, the involuntary stillness when a room turns quiet, the surge of unease when someone’s smile feels too delayed—all of these are affective responses, but they are also cognitive operations. They integrate information across multiple modalities: proprioception, hormonal signaling, past experiences, environmental cues. They do so not through deliberation, but through what Hayles calls nonconscious cognition—a distributed, embodied mode of interpretation that connects inputs to meaning and response without passing through the bottleneck of language.
Take, for example, the feeling of walking into a room and sensing that “something is off.” There’s no thesis statement. No single identifiable stimulus. But your skin tightens, your attention narrows, your posture shifts. What just happened? Your body processed a constellation of micro-cues—an irregular cadence of voices, a lack of eye contact, a sudden drop in ambient noise—and integrated them with memories of prior encounters in similar spaces. It interpreted the scene as potentially threatening. That interpretation, though not consciously articulated, was nevertheless meaningful. It connected information to action, filtered relevance from noise, and reoriented bodily disposition. As Hayles writes, “Nonconscious cognition is not simply reactive; it is interpretive, selective, and often predictive, precisely because it operates through complex feedback loops grounded in the body’s sensorimotor processes” (Hayles, Unthought, 83).
Cognition, in this sense, does not sit idle awaiting instructions from the executive function. It is already in motion—in the readiness potential of muscles, in the expansion or contraction of attention, in the barely perceptible affective shifts that precede awareness. Hayles’s point is not simply that consciousness is late, but that it is partial—dependent on what embodied systems have already selected as salient. These systems filter, rank, and modulate sensory inputs before they are ever available to reflective thought. Thought, as we like to imagine it—composed, articulated, self-aware—is scaffolded on systems that are recursive, affectively modulated, and fundamentally embodied.
Affect, then, is not a footnote to cognition. It’s how the body does the thinking before thought arrives in proper dress. It leans, listens, tenses, recalibrates. It doesn’t name the mood—it is the mood, registering the difference between a room that welcomes and a room that warns. Not as metaphor. As posture. As breath held a half-second too long. These aren’t symptoms. They’re sense-making in slow motion. Before the story forms, the body already knows how it ends.
Biological Nonconscious Cognition Beyond the Nervous System
Microbes don’t compose symphonies or ruminate on the meaning of life, but they’re not drifting passively through the void either. Consider quorum sensing. A bacterium releases signaling molecules into its environment, not as a form of idle chemical chatter, but as a way to monitor population density. As those molecules accumulate, they begin to register the presence of others. At a certain threshold, the bacterium alters its behavior—perhaps initiating movement, producing toxins, or contributing to the formation of a biofilm. This shift isn’t a reflexive twitch but a patterned, conditional response to environmental information. The bacterium evaluates signal concentration, timing, and composition, adjusting its behavior in relation to these inputs. That adjustment is not symbolic; it is contextual. And in Hayles’s terms, it qualifies as cognition—not because it resembles what we conventionally call thinking, but because it interprets information in a way that modulates behavior relative to changing conditions. The meaning is enacted through activity, not contemplation. There is no inner voice narrating the decision, only a system that connects perception to consequence. That may not count as deliberation in philosophical circles, but it’s enough to form a consensus among microbes.
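A toy sketch, with invented numbers and an invented update_state function, captures the conditional structure: the same signaling molecule is read differently depending on how much of it has accumulated.

```python
# Toy quorum-sensing sketch: behavior switches when signal concentration,
# read in context, crosses a threshold. All numbers are invented for illustration.

def update_state(signal_concentration: float, quorum_threshold: float = 1.0) -> str:
    """The same molecule 'means' different things at different concentrations."""
    return "form_biofilm" if signal_concentration >= quorum_threshold else "swim_alone"

concentration = 0.0
for hour in range(6):
    concentration += 0.25  # neighbors keep secreting signal molecules
    print(hour, round(concentration, 2), update_state(concentration))
```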
Plants, for their part, enact cognition without the burden of consciousness—or the temptation to write manifestos about it. They begin with information, registering variables like light intensity, wavelength, gravity, moisture, and the presence of chemical compounds in the soil. But data alone does nothing. What matters is that these inputs are interpreted in context: photoreceptors, for example, don’t just detect sunlight; they adjust sensitivity based on the time of day, season, and the plant’s own developmental stage. A seedling doesn’t just grow toward light in general—it selectively modulates its growth angle in relation to a shifting gradient, dynamically altering cellular elongation to maximize exposure. In doing so, it is making a distinction between more and less optimal orientations—not in the abstract, but relative to its situated goals: survival, reproduction, flourishing. That is what meaning looks like here: not symbolic or linguistic, but operational and embodied. Similarly, stomatal openings aren’t managed like valves on a schedule—they are regulated based on an ongoing synthesis of internal hydration, atmospheric CO₂ concentration, and environmental humidity, all interpreted through the plant’s distributed sensing architecture. Even underground, root systems engage with fungal networks not as passive pipelines but as sites of informational exchange—with plants adjusting chemical signals to warn neighbors of pest attack or nutrient depletion. These responses are not just reactions—they are selections from among multiple possibilities, shaped by history, situation, and adaptive purpose. The plant, in short, doesn’t think. But it interprets. It responds. And it acts in ways that are meaningfully modulated by context. Cognition enough, Hayles would say—and she would be right.
To clarify how this definition travels across organic forms, we can look more closely at how cognition unfolds in something far less glamorous than a neural network: a plant. (A brief code sketch follows the list.)
- Process: Cognition begins with activity. A plant doesn’t simply receive its environment; it interacts with it. Through its leaves, stems, and roots, it initiates a range of physiological processes—phototropism, hydrotropism, gravitropism, chemical signaling, and more. These aren’t just automatic reflexes; they are dynamic, ongoing modulations of growth and behavior. The sunflower’s daily tracking of the sun, for instance, involves complex internal signaling, hormonal redistribution, and temporal calibration. This is not a passive mechanism. It is a living system processing stimuli over time.
- Interprets information: A plant receives multiple streams of information—light intensity, moisture levels, mechanical stress, the presence of nearby roots or herbivores—and makes distinctions among them. Light hitting the upper leaf and a sharp drop in humidity aren’t equivalent signals. Nor are they met with equivalent responses. A plant might slow transpiration, redirect growth, or change its root spread. It doesn’t do all things at once. It selects, modulates, and adjusts. That is interpretation—not in a conscious sense, but as a selection among possibilities that differentiates inputs based on relational relevance.
- In context: These interpretations aren’t made in a vacuum. A leaf’s response to light will depend on the plant’s stage of development, the time of day, whether water is plentiful, and whether another plant is casting shade nearby. Context isn’t background—it’s the active conditioning of response. The same light signal that prompts one plant to grow tall might prompt another to spread low, depending on species, surroundings, and situation. The meaning of the stimulus is emergent, shaped by ecological and internal factors.
- Connected to meaning: In this framework, meaning isn’t a product that comes after interpretation—it’s what makes the interpretation matter. When a plant turns toward light, that action is meaningful not because the plant knows what it’s doing, but because the response fits the situation. Meaning is just relevance in context. It’s the fact that the interpretation leads to something that makes a difference—to the plant, in that moment, in that environment. No symbolism, no introspection—just the quiet precision of doing the right thing, in the right way, at the right time.
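Compressed into a toy sketch, with the caveat that the variables and the growth_response function are invented and the botany caricatured, the point of the list is that an identical light signal is interpreted differently depending on internal and ecological context.

```python
# Toy sketch: the same light signal, interpreted in different contexts.
# All parameters and the decision logic are invented for illustration.

def growth_response(light: float, water: float, shaded_by_neighbor: bool) -> str:
    """Select a growth strategy based on signal-in-context, not signal alone."""
    if shaded_by_neighbor and light < 0.5:
        return "elongate_stem"   # race upward for light
    if water < 0.3:
        return "deepen_roots"    # light matters less than hydration here
    return "broaden_leaves"      # conditions favor maximizing exposure

print(growth_response(light=0.4, water=0.8, shaded_by_neighbor=True))   # elongate_stem
print(growth_response(light=0.4, water=0.2, shaded_by_neighbor=False))  # deepen_roots
print(growth_response(light=0.4, water=0.8, shaded_by_neighbor=False))  # broaden_leaves
```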
Technical Nonconscious Cognition
And then we come to machines. Their cognition isn’t introspective—they don’t daydream, ruminate, or develop complicated feelings about their mothers—but by Hayles’s standard, it is still cognition. A language model, when prompted with “Write a breakup letter as if you’re a time traveler,” begins with information: a string of words it receives as input. But the words alone are inert. What makes them meaningful is how the model interprets them—not arbitrarily, but through an intricate network of weighted associations built from exposure to millions of prior texts. The phrase “breakup letter” activates one set of patterns—emotional register, epistolary form, performative closure—while “time traveler” activates another—anachronism, loss across dimensions, the tragic burden of temporal discontinuity. These associations don’t float freely; they operate in context: the unfolding prompt, the statistical shape of the sentence so far, the training corpus, the model architecture, and the probabilistic conditions of the next word prediction task. As it moves through each token, the model modulates its internal parameters, selecting one response among many based not on fixed rules, but on a recursive computation of contextual fit. It doesn’t “understand” love, time, or longing. But it processes input, relates it to a learned history, and produces output whose coherence is not pre-scripted, but emerges through interpretive selection. The result may be clumsy or oddly moving—but it is, in Hayles’s terms, cognition: the dynamic interpretation of information in context to produce meaning, however synthetic or secondhand that meaning may be.
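The recursive character of that selection, in which each chosen token folds back into the context that conditions the next, can be caricatured as a decoding loop. The sketch below is not an account of any real model: score_next_tokens stands in for a learned distribution, and the vocabulary is tiny and invented.

```python
# Toy decoding loop: each selected token re-enters the context and reshapes
# what the next selection means. The scoring function is a stand-in, not a real model.
import random

def score_next_tokens(context: list[str]) -> dict[str, float]:
    """Pretend distribution over next tokens, crudely conditioned on context."""
    if context and context[-1] == "dear":
        return {"Captain": 0.6, "future": 0.3, ",": 0.1}
    if "goodbye" in context:
        return {"across": 0.5, "the": 0.3, "centuries": 0.2}
    return {"dear": 0.5, "goodbye": 0.3, "time": 0.2}

def generate(prompt: list[str], steps: int = 4, seed: int = 0) -> list[str]:
    random.seed(seed)
    context = list(prompt)
    for _ in range(steps):
        scores = score_next_tokens(context)
        tokens, weights = zip(*scores.items())
        choice = random.choices(tokens, weights=weights, k=1)[0]  # selection, not lookup
        context.append(choice)  # the output becomes part of the next step's context
    return context

print(" ".join(generate(["Write", "a", "breakup", "letter:"])))
```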
Simple Machines
Even more modest machines qualify. Take, for instance, the humble thermostat—not exactly a Heideggerian, but no less a participant in what Hayles calls cognition. The thermostat receives information in the form of ambient temperature readings. But information alone doesn’t get you cognition. What matters is how the thermostat interprets that data in relation to its contextual parameters: the target temperature, the current system mode (heating or cooling), and whether the device is actively controlling the environment or idling. The thermostat doesn’t simply follow orders; it selects among possible behavioral states—turn on, turn off, maintain—as a function of this dynamic context. The same 70-degree reading might call for heating on a winter morning and for nothing at all on a spring afternoon. This is not human-style reasoning, but it is interpretation: a selection from among multiple potential responses based on shifting environmental and internal conditions. Meaning here is not abstract or symbolic; it is operational—“too cold” or “not cold enough” emerges from the device’s relational logic, not from an external narrator whispering instructions. The thermostat does not feel anything, but it constructs meaning-function from data through contextual differentiation. In Hayles’s terms, it is not merely executing a script. It is, modestly and without fanfare, a technical cognizer.
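The thermostat's relational logic fits in a dozen lines. The setpoint, hysteresis band, and next_action function below are invented for illustration, but they show the essential point: the same reading selects different actions in different modes.

```python
# Toy thermostat sketch: the same temperature reading means different things
# depending on mode and setpoint. Values and logic are invented for illustration.

def next_action(reading_f: float, setpoint_f: float, mode: str, band: float = 1.0) -> str:
    """Select among possible states as a function of reading-in-context."""
    if mode == "heat":
        return "heat_on" if reading_f < setpoint_f - band else "idle"
    if mode == "cool":
        return "cool_on" if reading_f > setpoint_f + band else "idle"
    return "idle"

print(next_action(70.0, setpoint_f=72.0, mode="heat"))  # winter morning: heat_on
print(next_action(70.0, setpoint_f=72.0, mode="off"))   # spring afternoon: idle
```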
Hayles gives us, in all this, not a flattening of cognition into some democratic goo, but a framework finely attuned to difference without hierarchy. Meaning does not spring from consciousness like Athena from the forehead of Zeus; it is cultivated, interpreted, and reinterpreted through the architectures of context, whether chloroplast or silicon. The question isn’t “do they think?”—which is the philosophical equivalent of asking whether a crow is qualified to teach ethics—but rather, “how do they interpret, and what systems shape that interpretation?”
From the Cognitive Nonconscious to the Planetary Cognitive Ecology
And once consciousness is dethroned—or at least gently relocated to middle management—the view gets considerably more interesting. If the cognitive nonconscious is the stage on which meaning first forms, then it’s not just happening in us, or even for us. It’s everywhere: underfoot, overhead, and under-analyzed. Hayles’s framework doesn’t merely expand the definition of cognition; it redraws the map entirely. Suddenly, root systems communicating nutrient shortages, slime molds solving mazes, and octopuses adjusting skin tone in response to mood lighting are not eccentric outliers but co-workers in a vast, interlocking system of interpretive labor. Not a pantheon of little minds mimicking ours, but a whole world doing meaning differently.
This is what it means to speak of a planetary cognitive ecology: not a mystical Gaia rerun, but a sober recognition that meaning doesn’t begin in language or end in the neocortex. It is produced, circulated, and acted upon in systems that feel no need to announce themselves. The wind-shift that causes leaves to curl, the electrical flicker in a mycelial net responding to footfall, the alignment of magnetic fields that guide migration—all are acts of interpretation, embedded in context, connected to consequences. They do not wait for our awareness to authorize them as cognitive. They already are.
The implications are not merely poetic, though they are that too. If cognition is this widely distributed, then human intelligence is no longer the universal benchmark but one node among many. We become participants in, rather than proprietors of, meaning-making. The question is no longer whether plants feel or microbes think in ways that resemble us, but what forms of cognition we have failed to recognize because we kept mistaking consciousness for the whole story. Or to put it more plainly: we’ve been talking over the room, and the room has been answering all along. We just weren’t listening in the right register.
Hayles’s account doesn’t just clarify what cognition is; it reveals what our systems are missing. Without a framework attuned to distributed, nonconscious interpretation, we will keep mistaking output for understanding, behavior for thought, and seamlessness for intelligence. We will keep designing systems that optimize for fluency while hollowing out meaning. The problem is not that AI is alien—it’s that we continue to ask the wrong questions, shaped by conceptual tools too brittle for the systems we’ve built. Until we reframe cognition itself, we will go on confusing the surface of sense with its source—building ever more powerful interfaces that cannot think, and calling them insight.
The Cognitive Intraface: From Assemblage to Recursion
Hayles’s theory of cognition allows us to sidestep the theological question of what AI is—a person, a parasite, a very talkative mirror—and instead ask what it does. Her concept of cognitive assemblages describes how meaning is distributed across networks of human and nonhuman agents—neurons, code, hormones, circuits, all elbowing for semiotic space. Cognition, here, doesn’t reside inside skulls or chips; it happens across systems, emerging wherever information is interpreted in context and made to mean something, even if just barely.
This distributed model marks a fundamental shift in how we understand sense-making—not as the output of individual minds, but as the emergent property of interaction among diverse interpretive agents. Yet while Hayles gives us the map, she doesn’t always linger on the street corners. That is: she shows us how cognition travels across systems, but spends less time in the thick of specific sites where human and machine cognition collide, misalign, and produce something neither quite intended.
To name that site, I introduce the cognitive intraface. The cognitive intraface is a particular kind of cognitive assemblage—one in which recursive interpretation unfolds between structurally asymmetrical agents, typically a human and an AI system. It is not an interface in the UX sense, nor a metaphor for dialogue. It names a zone of recursive tension, where each interpretive act by one agent reshapes the epistemic field of the other. The result is not shared understanding, but co-emergent meaning: a shifting context neither system possessed beforehand, and which neither could produce alone.
This is not a matter of prompts and responses, as if one agent always speaks first. Even the human’s “initial” input is already shaped by a web of anticipations—of what the AI will understand, ignore, distort, or aestheticize beyond recognition. In turn, the AI’s output is shaped not only by prior training, but by the strange gravitational pull of the human’s phrasing, its tone, its ambiguity, its refusal to simplify. Each move within the intraface is already a response, already a recalibration. What emerges is not dialogue but recursion—a spiraling reconfiguration of the semiotic field.
What distinguishes the intraface is not that it produces insight, but that it produces capacity—new epistemic orientations, new interpretive affordances that neither the human nor the machine could enact alone. Like any system, an assemblage can generate emergent properties. But the intraface is unique in that its emergence is relational, structured by asymmetry, friction, and the recursive instability of meaning itself. The cognitive surplus it generates does not reside in either node. It belongs to the relation.
In this light, the cognitive intraface is not just a site of interaction. It is the recursive threshold at which cognition becomes visible as difference in motion. It offers a conceptual hinge for understanding how AI systems are not just tools or interlocutors, but structural participants in the recursive co-formation of thought. To theorize the intraface is not to map communication between agents. It is to name the zone in which difference becomes a condition of cognition, and where meaning arises not through clarity, but through ongoing structural misalignment.