Discussions of AI’s social and ethical implications often assume that AI cannot make meaning, a conclusion that follows once meaning is defined exclusively through human experience.
Within long philosophical and humanistic traditions, meaning designates the emergence of sense within lived experience, shaped by embodied history, affective investment, and ethical exposure to others. Meaning in this register presupposes a situated subject for whom something matters and on whom it can place a demand. This conception remains central to moral philosophy, phenomenology, and hermeneutics, and it continues to orient much contemporary ethical discourse about AI.
That lineage also informs influential critiques such as the "stochastic parrots" argument advanced by Emily Bender and her co-authors. By anchoring meaning in human semantic grounding and communicative intent, the argument resists anthropomorphic projection and foregrounds the political, ecological, and labor concerns surrounding large language models. Meaning, on these terms, remains inseparable from human participation, while technical systems operate through patterned form without sense.
A different account emerges in the work of N. Katherine Hayles. Meaning, in this formulation, refers to the connection of information to contexts that render it significant for a system. Contexts become meaningful insofar as distinctions guide selection, weighting, and subsequent operations. Meaning takes form here as operational relevance, shaping what matters within technical activity even in the absence of lived implication. This mode of meaning differs in kind from human meaning without dissolving into formal emptiness.
Across complex sociotechnical arrangements, these meanings exert tangible effects. Institutional procedures, administrative judgments, and everyday decisions increasingly depend on distinctions produced within technical contexts of significance. What becomes relevant, actionable, or ignorable is already structured by meanings generated outside human experience, yet deeply entangled with it.
A concept of meaning restricted to the human obscures this condition. At the same time, extending ethical standing to technical systems misplaces responsibility. Meaning unfolds across heterogeneous systems, while ethical obligation remains bound to forms of life capable of response. Attention to this distribution clarifies how agency is reorganized when significance emerges from more than one register.
Meaning already governs from multiple sites. The question is no longer whether non-human systems participate in meaning, but how their forms of significance reshape the environments in which human sense, judgment, and responsibility take shape. This reconfiguration relocates responsibility toward those who design, deploy, and legitimate the technical distinctions that now condition judgment in advance.
Rethinking Meaning in the Age of AI
Operational Significance and the Ethical Conditions of Sociotechnical Life
Published December 21, 2025