
Francesco Varanini’s response to my essay Rethinking Meaning in the Age of AI, published on Stultifera Navis, approaches its subject with a seriousness of ethical intent that deserves recognition. His concern is clearly directed toward responsibility, judgment, and the conditions under which ethical agency remains possible in a technological environment increasingly shaped by automated systems. The difficulty lies elsewhere. His critique repeatedly misplaces the object under discussion, relying on a metaphor of enclosure that substitutes for analysis and attributes to my work positions it explicitly resists.


Varanini organizes his response around the image of the box, presenting my argument as an instance of what he calls “pensiero in scatola” (boxed thinking). He writes that my thought “mi sembra chiuso in una scatola” (seems to me closed inside a box), and asks why reflection should “considerare solo il pensiero formulato all’ombra di quella cosa cui diamo il nome immaginifico di intelligenza artificiale” (consider only thought formulated in the shadow of that thing to which we give the imaginative name of artificial intelligence). From this diagnosis follows a prescription. Thought, he argues, must exit the box, abandon what he calls a “fumosa filosofia che giustappone oggi umani e macchine” (a hazy philosophy that today juxtaposes humans and machines), and return to “la saggezza umana” (human wisdom), expressed through “parole libere” (free words). The metaphor carries rhetorical force, yet it begins to displace the work of explanation when the act of naming an enclosure is treated as evidence that one has already escaped it.

The most consequential misrepresentation in Varanini’s critique concerns his claim that my work treats humans and machines as ethically or ontologically equivalent. He repeatedly characterizes my position as one that places humans and nonhumans together, “chiusi insieme in una scatola” (closed together in a box), and later describes it as presenting “una scena dove appaiono alla pari umano e non umano” (a scene in which human and nonhuman appear on equal footing). This attribution does not merely stretch my argument. It inverts it. My work is organized around a sustained effort to dismantle precisely those anthropomorphic projections through which machines are imagined to possess moral, intentional, or experiential capacities they do not have. The analysis of mediation and infrastructure in my writing exists in order to prevent such projections, by locating ethical responsibility within human institutions, practices, and forms of life rather than dispersing it into technical artifacts.

That this inversion occurs becomes especially clear when Varanini himself cites and endorses my statement that “l’obbligo etico rimane legato a forme di vita capaci di rispondere” (ethical obligation remains tied to forms of life capable of responding). This formulation directly contradicts the symmetry he attributes to my position. The disagreement, then, does not concern whether humans remain responsible. It concerns whether an analysis of how judgment is organized under contemporary conditions implies a collapse of that responsibility. It does not. Reading an account of infrastructural mediation as an assertion of ethical equivalence shifts the argument from how judgment is structured to who is accountable, thereby attributing to my work a position it explicitly works to undo.

My analysis addresses the conditions under which judgment is exercised, not the source of ethical obligation. To say that meaning and judgment are increasingly shaped within technical and institutional arrangements that operate prior to individual deliberation is to describe how responsibility is organized, distributed, and constrained under present conditions. It does not assign agency to machines, nor does it dilute human accountability. On the contrary, it seeks to clarify where responsibility resides by making visible the systems through which judgment is preformatted. Treating this analytic effort as a denial of responsibility replaces engagement with the argument by suspicion of the language used to articulate it.

A second overreach appears in Varanini’s repeated framing of my analysis as an emphasis on novelty that mistakes a historically specific configuration for something fundamentally new. He asks whether the condition I describe is genuinely distinctive, and answers by reframing it as a familiar dynamic of speculative capitalism, writing that “il capitalismo speculativo, orientato a preformattare il giudizio, finanzia lo sviluppo di macchine atte a preformattare il giudizio” (speculative capitalism, oriented toward preformatting judgment, finances the development of machines designed to preformat judgment). This observation aligns closely with my own argument, which treats the present as a configuration in which longstanding economic and institutional forces are recomposed through technical systems rather than replaced by them.

My work has consistently drawn on Marxist critical traditions, including Benjamin and Horkheimer among others, precisely to show how contemporary technologies extend and intensify earlier arrangements of media, administration, and capital. Within that tradition, continuity never licenses abstraction. Marxist critique has always insisted on attending to emergent material conditions as they take form, rather than retreating to inherited categories whose critical force has been drained by historical change. Where readers have occasionally inferred a rhetoric of rupture from my work, I have consistently corrected that inference as a misunderstanding, in order to reassert the historical continuity of the processes under analysis. The overreach occurs when attention to the present configuration is treated as an exaggeration of novelty rather than as a requirement of historical analysis. To specify how judgment is currently organized is not to elevate the present above history. It is to situate it within history.

Varanini’s ethical emphasis carries genuine weight. He insists that obligation remains tied to forms of life capable of response and extends responsibility beyond engineers and legislators to “ogni cittadino, che è chiamato a capire, a pensare da sé” (every citizen, who is called upon to understand and to think for themselves). This insistence belongs to a long moral tradition suspicious of delegating judgment to abstractions. The difficulty arises when ethical affirmation becomes a substitute for analysis, as though declaring responsibility were sufficient to explain how responsibility now operates. Ethical responsibility gains substance through an account of the institutional and technical arrangements that shape interpretation, authorization, and decision prior to reflection. Assertion alone leaves those arrangements unexamined.

This is where Varanini’s appeal to an outside becomes problematic. His call to exit the box through human wisdom, free language, and citizenship presumes a standpoint untouched by mediation or institutional formation. These idioms possess their own histories, vocabularies, and exclusions. Treating them as self-evident grants them an exemption from the historicity applied elsewhere in the argument. What is contemporary is rendered suspect, while what is familiar appears self-grounding. Constraint then recedes from view, presenting itself as common sense rather than arrangement.

It is in this context that Varanini invokes a Nietzschean image of chains, suggesting that “la condizione digitale è forse l’ultima catena” (the digital condition is perhaps the last chain). He does not cite a source for this formulation, and I therefore cannot say with certainty which passage he has in mind. For that reason, it would be inappropriate to treat the phrase as a quotation or to respond through close textual exegesis. What can be addressed is the methodological use to which Nietzsche is being put in the argument, and whether that use aligns with the genealogical orientation that characterizes Nietzsche’s work across multiple texts.

I assume Varanini is gesturing toward a family of reflections that appear in works such as Daybreak, The Gay Science, and Beyond Good and Evil, where Nietzsche develops a sustained critique of morality, habit, and the internalization of constraint. In these texts, Nietzsche’s concern is neither with ranking constraints by age nor with dismissing attention to particular historical configurations as superficial. His concern is genealogical. Genealogy examines how values, practices, and forms of judgment come to feel natural, inevitable, or beyond history through repetition and inheritance. It insists that what appears most obvious often exerts the greatest power precisely because it escapes scrutiny.

Read in this light, Nietzsche’s reflections on constraint demand historical specification rather than retreat from it. Genealogical critique reconstructs the histories through which moral categories acquire the appearance of timelessness. In Daybreak, Nietzsche traces how moral habits form through long accretion. In The Gay Science, he explores how responsibility and conscience emerge from historically contingent practices rather than stable essences. In Beyond Good and Evil, he extends this analysis to philosophy itself, showing how claims to universality often mask unexamined historical and psychological conditions. Nietzsche’s method cuts against appeals to an outside secured by familiarity. It asks how familiarity becomes authoritative.

This is where Varanini’s rhetorical use of Nietzsche diverges from its methodological context. When contemporary technical systems are described as historically suspect while inherited moral categories are treated as self-grounding, the genealogical burden is unevenly distributed. Nietzsche’s method reverses this asymmetry. It subjects precisely those categories that present themselves as obvious, humane, or foundational to historical scrutiny. My argument proceeds in that genealogical spirit. Attention to contemporary technical and institutional arrangements does not elevate them to ultimate causes. It subjects them to analysis as historically specific formations that reorganize judgment under present conditions. Naming those formations does not enclose thought within them. It makes their influence visible.

For this reason, my reference to the age of artificial intelligence identifies a configuration rather than a rupture. It marks a moment in which longstanding dynamics of capital, administration, and governance acquire new technical forms. Varanini himself gestures toward this continuity when he observes that speculative capitalism finances machines designed to preformat judgment. The divergence arises when attention to mediation is cast as a distraction from ethical self-reflection, summarized in the claim that “guardare alla macchina e alla nostra relazione con la macchina finisce per essere un modo per non guardare a noi stessi” (looking at the machine and at our relation to the machine ends up being a way of not looking at ourselves). Ethical responsibility does not flourish through withdrawal from analysis of the conditions shaping judgment. It depends on such analysis.

A space such as Stultifera Navis earns its importance through its willingness to host disagreements that resist premature harmony and easy moral alignment. Its editorial vocation lies in sustaining exchanges that approach contemporary conditions with analytic patience rather than retreating toward idioms that present themselves as exempt from history. The present exchange illustrates that task. It shows how appeals to responsibility and citizenship remain inseparable from the arrangements that organize judgment in advance. Critique proceeds through such visibility, since responsibility gains purchase only where the conditions guiding interpretation receive sustained attention. In that sense, the disagreement marks the work such a space exists to support.

Note: For readers interested in the broader body of work informing this response, I am happy to provide links to prior essays and posts where these positions are developed in detail.

Published December 24, 2025

Owen Matson

Owen Matson / Designing AI-Integrated EdTech Platforms at the Intersection of Teaching, Learning Science, and Systems Thinking