


We are living in very troubled times. Knowledge and its production are under severe attack by liberal capitalism, which incentivizes the use of statistical abstraction. Statistical abstraction, in the form of computational rationality (AI), produces probabilistic knowledge that is operational. This is to say that the question this sort of knowledge production presents us with—or the way it portrays science—is no longer one concerned with what is to be said or known objectively. Now the question is, rather, what to do with the knowledge produced (hence operational knowledge).

today the real question is what to do with the knowledge produced in a statistical and probabilistic way

The production of statistical probabilistic knowledge produces abstractions—local to the data from which they arise—which come to be treated as real (real abstractions). Hence the epistemic detachment from the real-world particulars that such knowledge production is supposedly meant to represent. Statistical abstraction being subjective (by way of its being a product of the researcher's subjective choices of ideals and data), it may be left in want of a foundation outside of itself (a metaphysical ground) as well as being epistemologically invalid. Therefore, applying the methodology of computational rationality to science is a fundamental challenge to the pursuit of science itself.
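To see how local such abstractions are, consider a minimal sketch in Python. Everything in it is invented for illustration (the variables, the numbers, and the age cutoff are assumptions, not anything from the author or any study): two analyses of the same simulated world, differing only in the researcher's subjective selection of data, each yield a clean "finding" that contradicts the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up "world": 1,000 people whose income rises with age up to a
# point and then declines. All numbers are invented for illustration.
age = rng.uniform(20, 70, 1000)
income = 20_000 + 800 * age - 8 * age**2 + rng.normal(0, 5_000, 1000)

def fit_slope(mask):
    """Fit income = a*age + b on a researcher-chosen subset; return slope a."""
    slope, intercept = np.polyfit(age[mask], income[mask], 1)
    return slope

# Two researchers make different subjective selections of "relevant" data...
slope_young = fit_slope(age < 45)
slope_old = fit_slope(age >= 45)

# ...and each obtains a tidy abstraction that contradicts the other's:
print(f"slope fitted on the under-45 subset:    {slope_young:+.0f} per year")
print(f"slope fitted on the 45-and-over subset: {slope_old:+.0f} per year")
```

Each fitted slope is internally valid and can be published, acted upon, and treated as real; nothing inside the procedure points beyond the selection that produced it.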

But this is not all. There is an accompanying challenge which endangers the social realm, as well. And this is why we are living in such troubled times. It is one thing to face a challenge that concerns our scientific knowledge production as a society. It is something completely different to face a situation where the modus operandi of our whole societal order is being usurped. Let us state this bluntly. With AI, we are witnessing a coup the aim of which is to take over knowledge production and to redefine its logic in line with liberal capitalism. This is being done by substituting subjective knowledge production for traditional objective production. In other words, knowledge production gets privatized while still under the pretence of being general and objective. 

With AI, we are witnessing a coup the aim of which is to take over knowledge production and to redefine its logic in line with liberal capitalism.

How have we come to this situation? What is going on here? Let us get into this.

We are witnessing the enclosure of knowledge production and, with it, an attempt to change our social order. In what follows we will take a look at what is involved in the creation of a data economy, or what amounts to the same, the enclosure of knowledge production. What we discover is a wholesale attack by liberalism on a shared reality based on shared knowledge, in favour of subjective knowledge by which some can impose upon others their own subjectively constructed realities, justified under the guise of the objectivity enjoyed by shared knowledge. What we will enquire into is how the enclosure of knowledge comes about. What kind of thinking and mechanisms bring it forth?

Importantly, the idea of enclosing knowledge production itself is a result of financialization.

These two, enclosure and financialization, are interrelated, as is discussed by Katharina Pistor in The Code of Capital. Enclosing is a legal codification of goods into recognized resources and assets that can then be distributed to individuals who are granted property rights. Financialization, in turn, follows after the legal codification of goods as ‘legal property [that is] assigned a pecuniary value in expectation of a likely future pecuniary income’ (Jonathan Levy quoted in Pistor, 2019, p. 12). This is what capital is: ‘a legal quality’ which protects an asset’s capacity to yield income (Pistor, 2019, p. 12). As Pistor argues, capital, and by extension financialization, only ‘works because states back and, if necessary, coercively enforce the legal code of capital’ (Pistor, 2019, p. 21). Legal codification creates capital, which in turn enables financialization.

capital is not just a physical or economic object but is literally created by law (Katharina Pistor)

About Financialization

So why do we say, as we do above, that the enclosure of knowledge production is a result of, that is, has been brought about by, financialization, rather than the enclosure occurring first after which financialization can take place? The reason is that legal questions are solved through adjudication between claims. Claims themselves are attempts to persuade the coercive power of the State to back up narratives driven by individuals. The use of AI is a claim made by financial capital. Financial capital wants the right to subject all people to its own hegemonic decision-making machine.

This is no different from my demanding the right to order others to adopt my point of view. Importantly, just like in the land enclosure process in England (which took place between the 16th and 19th centuries), the claim concerning AI justifies itself by appeal to productivity and increased prosperity. Financial capital is trying to force AI on others under the claim of productivity, and this attempt at enforcement proceeds by sinking so much money into AI that abandoning it becomes difficult. Here we can refer to an insightful exposition by Aline Blankertz, where she considers the social costs of technological path dependencies created by overinvestment.

This is in line with long wave economist Carlota Perez who writes about the dynamics of production capital and financial capital in free markets. These dynamics, argues Perez, are inherent to capitalist production as well as being the source of both overinvestment (i.e. bubbles) and the erratic course of technological development in the form of technological revolutions. When certain technologies attract enough investment, path dependencies (or lock-in) begin to take place as investment leads to ‘the overadaptation of the environment to the established paradigm… systematically excluding, underestimating or marginalizing the innovations that fall outside the established trajectories’ (Perez, 2005, p. 88).

Importantly, technological revolutions are not necessarily new, brilliant and radical blue-sky inventions at all. Although Perez herself remains seemingly subtle regarding this point, we will emphasize that it is, according to her, financial capital which, in its search for rent, will go on and try to initiate ‘the articulation of a new paradigm’ (Perez, 2005, p. 89). Note here the aggressiveness implied by the word articulate: to define, to give definition to. Financial capital wants to define the future of others on their behalf. On this hunt, it will look at already existing inventions and innovations and the opportunities they may provide for making more money. This attempt at articulating a new paradigm—far from being some efficacious and beneficial creative destruction, as some confused commentators would have it—is actually a process of greedy laymen looking to make money out of money: ‘Financial capital can successfully invest in a firm or a project without much knowledge of what it does or how it does it’ (Perez, 2005, p. 72). There is very little expertise or creativity here.

Financial capital wants to define the future of others on their behalf, claiming the right to subject all people to its own hegemonic decision-making machine

The question now is: what is happening currently when it comes to AI? In accordance with history, we could be at the tail end of the fifth technological revolution, namely the ‘Information Age or Knowledge Society’, revolutions appearing every 50-60 years and the current fifth one having begun in 1971 (Perez, 2005, p. 10). This would appear not to be the case, however, as AI is very much about the Information Age and Knowledge Society. Be this as it may, it is not of much interest. What is of interest is whether or not there is a technology (or cluster of technologies) with which to either construct a new paradigm or else continue the growth of an old one. Although, as Perez argues, financial capital induces technological revolutions, for one to occur there actually has to be a real and exceptional use case made for technology capable of yielding massive productive capital; that is, capital which is the result of real productivity, not of imaginary productivity such as that of financial capital. It is only on this condition that there is a “creative” process in the dynamics between financial capital and production capital, a process which creates prosperity (albeit in a very indirect and wasteful manner). And here the question is, if there is such a technology, what need is there for financial capital: why pay rent? But that is another discussion. Currently, we see that what Perez calls technological revolutions are in reality financial revolutions, although these are underwritten by technology.

The question confronting us now is whether, in the case of AI, there is a technology capable of underwriting the financial push for money to make money. It is obvious that in the case of AI there is no technology capable of underwriting the value being conferred upon it by financial capital. The definitional requirement for a technological revolution is high generalizability across the use cases which underlie all economic activity. As an example, the invention of engines automatically leads to the mechanization of basic activities (such as transportation or construction), which, in turn, leads to more productivity.

With digital information technology, gains in productivity come from speed and the opportunity to scale. The use cases here are pretty evident, as well as being of fundamental importance to economic activity in many industries. This is what high generalizability means. In the case of AI, the question is this: what are the use cases for statistical probabilistic inferences? Is this something that can be applied generally to a large part of economic activity? Evidently not. This is a “technology” (really a methodology) the use of which is to generate patterns out of a model of the world—so, crucially, these patterns do not even reflect the real world—which patterns then may or may not provide useful insight regarding some real-world phenomenon.
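A minimal, self-contained sketch (plain Python with numpy; the data and the choice of three clusters are assumptions made up for illustration) makes this point concrete: ask a standard pattern-finding method for patterns in pure noise, and it will dutifully supply them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Featureless "data": 500 points of pure uniform noise, no structure at all.
X = rng.uniform(0, 1, size=(500, 2))

def kmeans(data, k, iters=50):
    """Plain k-means clustering: returns labels and cluster centres."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=3)

# The method reports three "patterns" with tidy centres, even though the
# world being modelled contains no clusters whatsoever.
for j in range(3):
    print(f"cluster {j}: {np.sum(labels == j)} points around {centers[j].round(2)}")
```

The three clusters exist in the model, not in the world; nothing in the procedure itself can report that they correspond to nothing.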

Determining whether these machine outputs are useful or not requires a lot of work. This means that using them slows down work. However, regardless of whether or not the insights gained by the use of these machines are useful, it is clear that insight is not a generalizable economic activity. Ships do not sail by insight, cars do not run by insight. There may be a data economy, or a data industry. Such an economy or industry is, however, not a technological revolution, as the one is an economic activity of financial capital while the other is just one industry. If so, then, in the case of AI—expressed in the old terms of cycles of technological revolutions—we are talking about a desperate last-minute attempt to juice all profits from a dying mature paradigm.

there is no mature paradigm of AI

But, clearly, this is not at all what is happening. For there is no mature paradigm of AI. Rather, this is a methodology (disingenuously being called a technology) which everyone is touting as being under development. Its “potential” benefits are said to be forthcoming. Crucially, however, this attempt by financial capital at producing a new paradigm on the back of AI is not underwritten by an existing productive technology, nor can the technology it is trying to sell ever reach the level of general applicability that is being promised.

On the surface, it seems like we are witnessing a Ponzi-scheme. But is this so? The answer is yes and no. On the one hand, yes, the Ponzi-scheme is an attempt to become too big to fail (as Blankertz discusses in her piece, citing the work of the AI Now Institute). Here the strategy is to ensure that the conception of investing in AI is seen as legitimate. This is achieved by appealing to collective ignorance. This strategy was used, for example, in the 2008 housing debt crisis, as well as in the 19th-century Chicago real estate bubble which Perez makes reference to, and of which the Chicago Tribune wrote that ‘people bought at prices they knew perfectly well were fictitious… [being sure] that some still greater fool could be depended on to take the property off their hands and leave them with a profit’ (Perez, 2005, p. 101). A Ponzi-scheme which becomes too big to fail means that too many businesses, institutions, and individuals in a society have invested too much money in something to let the resulting redistribution of wealth take place in the realm of market participants, lest, lo and behold, these market participants lose their faith in markets. Instead, a societal redistribution of wealth is effected by use of the coercive powers of the state in enforcing the legal code of capital.

The losses of speculators are externalized and are to be covered societally. Acting stupid really pays off, and so it can come as no surprise that participants plead ignorance; “who could have known?”, they will say, “The investor literature said it would be a productive venture.” Further, the gravitas needed to become too big to fail requires institutional collaboration between the private and public sectors. This is why the public sectors of some nations are pushing AI as well. If, as Perez observes, one can be among the core countries from which a new paradigm spreads outward, there is, in the words of Pistor, ‘an exorbitant privilege’ awaiting in the form of the ability of the ‘respective owners to amass wealth’ (Pistor, 2019, p. 19).

Insofar as States are pushing for the adoption of AI, the plea of ignorance is not limited to private speculators. Rather, the State backs speculation. This perverse liberal capitalist arrangement is a question of dogmatic doctrine which functions on fairy tales concerning productivity. This is something Mariana Mazzucato has drawn attention to in The Value of Everything, pointing out that, unlike in the past, finance is today counted as a productive activity in national accounting (Mazzucato, 2018, p. 105). These narratives should be debunked and the financial sector culled. However, we will now leave these considerations. Suffice it to say that the Ponzi-scheme is only an insurance policy. What is really going on with the push of AI is an attempt at what Damien Williams (University of North Carolina) has recently called a ‘techno-social regime purchase’. This is the much more pernicious and inimical reality which we are facing: the attempted enclosure of knowledge production. If successful, this amounts to taking over the regime of governance of the State.

technological revolutions are in reality financial revolutions, although these are underwritten by technology

About Enclosure 

What is enclosure? The simple answer is theft.

Enclosure is the appropriation of the commons (or, in the historical context, common land). In other words, enclosure is the taking of what is shared and declaring it private property. But the very term “commons” becomes obsolete in the act of recognizing an equal right to the commons. The commons originally refers to resources and goods provided by nature. The unquestionable premise, one which must be agreed upon by anyone, is that no man owns nature. Nature is not created by any man, nor has it been given as a gift to any single man. All men enter the world with nature already in place. Therefore, we must agree with Locke in saying that if any man has a right to the commons then all men have an equal right to the commons. But that is about the only honest thing Locke had to say. If all men have equal rights to the commons then there are no commons, for the reason that man/mankind now owns nature. There is no category of unowned resources which an individual may take and make use of as his own. Once individuals have an (equal) right to nature, there is no such thing as pleading to a natural right of the individual in relation to an originally unowned nature, which, of course, Locke disingenuously and promptly proceeds to do. In summary, all property whose profit is privatized is theft. The liberal narrative of ownership is fundamentally unjust. To speak about the commons after recognizing the equal rights of all individuals is only to speak about a reappropriation mechanism by which to retrospectively deny the rights first given to all individuals.

Enclosure is the appropriation of the commons: the taking of what is shared and declaring it private property

 

The Enclosure of Knowledge Production 

What, then, is the enclosure of knowledge production? As an example, UNESCO has provided a revealing document arguing the case for the enclosure of knowledge and knowledge production. The strategy deployed in the document is precisely the use of the liberalist reappropriation mechanism, arguing that publicly produced knowledge should be considered commonly owned property: ‘The authors propose that both knowledge and education be considered common goods’ (UNESCO, 2015, p. 11).

The aim of this seemingly innocuous and benevolent-sounding aspiration is, however, anything but. The authors go on to make the deceptive statement that ‘The notion of common good allows us to go beyond the influence of an individualistic socio-economic theory inherent to the notion of ‘public good’’ (UNESCO, 2015, p. 11). As we know after the above consideration, this is precisely in reverse order: the commons (common goods) are shared goods (and therefore public goods) which, by the deceitful use of reappropriation disguised under the idea of natural right, are portrayed as once again common (regardless of the shared equal rights that make them public goods). The aim of this self-contradicting manipulation is nothing else than the deifying of an individualistic socio-economic theory. In the words of liberal scholar Eric Mack: ‘the natural right of property is to make things one’s own’.

Through the category of common goods, goods that are publicly owned and produced are illicitly appropriated in an attempt to portray them as given from an outside source and therefore as free gifts for humans to share. Predictably, then, the very next question which follows is how to share these goods. It comes as no surprise that the argument proceeds to posit that every individual and/or group has a stake in these goods in the form of “rightful” demands: ‘the demand has grown in recent years for voice in public affairs and for the involvement of non-state actors in education, at both national and global levels’ and that ‘in short, there is a growing need to reconcile the contributions and demands of the three regulators of social behaviour: society, state and market’ (UNESCO, 2015, pp. 10-11). In the words of its authors, the UNESCO document serves as ‘a call for dialogue among all stakeholders’ (UNESCO, 2015, p. 9).

In other words, the “market” wants to privatize (steal profit from) publicly produced goods, in this case knowledge and knowledge production (or the ‘creation of knowledge’, as it is called in the UNESCO document, p. 11). To be sure, the privatization of publicly produced knowledge is nothing new. Such liberal theorising as is witnessed in the UNESCO document, however, is meant to both reinvigorate and justify such privatization. This is for the benefit of market actors who wish to create a knowledge society based on a data economy, something which is very much the goal of those market actors discussed above who wish to articulate a new paradigm of knowledge production through an attempt to force the acceptance of AI by way of financial coercion. The claim that the market has demands on property (public property) over which it has no ownership right is idiotic. Persons—both natural and legal—have no private right of ownership over public property and therefore no demands on it. That they may fund public property through taxes in no way alters this fact. Through paying taxes, persons merely affirm their equal right to public ownership. But a share in public ownership is not a positive right to property allowing one to make things one’s own. Rather, public ownership means that the property in question is under reserve, protected by the equal right of each individual.

the “market” wants to privatize (steal profit from) publicly produced goods, in this case knowledge and knowledge production

 

The Role of Liberalism as the Enabler of an AI-Driven Shift of Social Governance

Now that we have gone through the mechanics through which new financial and productive paradigms are articulated—namely by financial force, injury of rights, and legal coding—we can and must look at the new paradigm which this enclosure of knowledge production seeks to usher in. While the following cannot be expected to be exhaustive, three outcomes of basing a knowledge-paradigm on AI stand out as prominent: liberal inequality concerning knowledge, strengthening of anti-democratic liberal governance, and a shift to a more exhaustive dehumanizing rationality. We will look at each in turn. 

Liberal Inequality Concerning Knowledge and its Production 

Liberal inequality concerning knowledge and its production by AI arises as a result of what was said in the beginning of this text, namely from the lack of an outside (or metaphysical) grounding for knowledge thus produced.

AI knowledge production is founded on the internal logic of statistics, probabilities, and machine learning, which logic involves the construction of abstractions from data: data which is first selected according to the subjective preferences of the experimenter. This presents science with a huge problem, as both the meaning and the universality of scientific results are called into question. The problem is that this method of knowledge production exhibits a chronic lack of external validation.

In the words of Justin Joque, ‘Knowledge founded on an external referent… functionally resists enclosure, for this ground must be shared among society to make the knowledge socially usable’ (Joque, 2022, p. 189). However, we are not currently concerned with the detrimental effects of AI on science. Instead, we are asking about its wider deleterious societal effects. The societal problem with producing knowledge with AI is, the same as with science, the lack of an external referent (note: this has nothing to do with the question of whether the AI-system can be made transparent to an external observer or not; the transparency in question here concerns the epistemological status of the statistical approach to knowledge).

The lack of an external referent means that the knowledge produced by AI is quite literally separated, or enclosed off, from common scrutiny. This is in line with the liberal ideal of humanity, which celebrates competition among unequal individuals. We may ask what, if not inequality, the liberal doctrine has produced. The liberal idea of the distribution of property is premised on each getting what they deserve in competition against one another. One need only consult Oxfam reports to put the mind at rest regarding this point (or, if one is a hard-core sceptic, Piketty’s Capital in the Twenty-First Century).

What, then, can be expected as a result of subjecting knowledge and knowledge production to enclosure? Competition encourages the creation and hoarding of knowledge in order to create asymmetrical relations between participants. As Joque argues, this competition does not serve to enrich the general intellect or common knowledge. Rather, argues Joque, it is part and parcel of the political economy of neoliberalism. This analysis can be agreed with by making one correction, that of replacing neoliberalism with liberalism: for the neoliberal project was to revive classical liberalism, and therefore there is no basis for exempting liberalism from the ongoing charges. Joque writes: ‘The political economy in which these transactions occur encourages and rewards the accumulation not just of capital, but of knowledge—and with it the ability to construct reality such that the other party in the exchange does not know’ (Joque, 2022, p. 173).

This political economy, namely the liberal political economy, incentivises an asymmetrical distribution of knowledge and therefore serves to ‘stupefy rather than to add to collective or even individual knowledge’ (Joque, 2022, p. 173). However, arguably, even worse than the hoarding of knowledge is the subjection of one group of people to the manipulation of another under the pretext of knowledge (i.e. the ability to construct reality). This is a grab of political power from and against the people. This brings us to the second outcome of the AI-knowledge paradigm. 

the liberal political economy incentivises an asymmetrical distribution of knowledge

Anti-Democratic Governance 

AI is inherently anti-democratic. Because of this, it fits perfectly in line with liberalism, which is also inherently anti-democratic. Consequently, AI is the perfect tool for anti-democratic liberal governance. Before going on with AI, we will quickly remind those who are in doubt of the anti-democratic nature of liberalism. Historically, the idea of equal individuals governing themselves was considered to be communism. This, to the liberal, is the outcome of Rousseau’s social contract: the “terror of the mob” of individuals with equal claims, leading to individuals having to yield to that which they had subscribed to (i.e. the rule of the people).

Liberals were vehemently opposed to rule by the people, where individuals had equal right. It was only when liberals realized they could not win against democracy that they decided to appropriate the term, and the uneasy and incompatible combination of liberal democracy came about.

For their part, liberals only wanted to grant voting rights to those who shared their worldview. Historically, the reason for this was that liberalism was reserved for the higher end of society, who were educated. The uneducated masses were considered by liberals to be ignorant, and therefore not capable of governance.

For liberals, nothing matters other than that their idea of order dominates over all alternatives. One of the clearest and most iconic expressions of this intention came from Hayek in his saying ‘Personally I prefer a liberal dictator to democratic government lacking liberalism.’ Needless to say, concerning this objective of domination they have unfortunately been extremely successful.

AI is inherently anti-democratic. Because of this, it fits perfectly in line with liberalism, which is also inherently anti-democratic.

AI is the perfect governance tool to circumvent democracy. And again, just as historically, the masses have so far not shown themselves able to effectively oppose those working hard to oppress them. This is no shame on the masses. It is difficult to discern the truth in a society which encourages, nay, idolises the telling of lies for private gain. To make matters worse, AI has a sufficient air of technoscientific sophistication to its name that in a society indoctrinated into the religious worship of both science and technology, the challenging of these idols can feel daunting. It is hard to lay blame on anyone in particular. Now, however, in the interest of the general good we must definitively debunk AI as a potential tool in service of democracy.

There is a widespread agreement—sometimes explicit, sometimes implicit—that AI decision-making is here to stay and that it is simply an inevitable evolution. Perversely, even the majority of critics sing this tune. With this, many are engaged in a confused errand of accepting AI as a tool for decision-making while talking about wanting to ensure it is in service of both democracy and people, that its development is opened up for democratic scrutiny, and that its outputs enjoy democratic transparency (see, e.g., Helbing, 2019; Sudmann, 2019; Bernholz et al., 2021; Simons, 2023; Coeckelbergh, 2024).

This unaware form of criticism is much more dangerous than the direct promotion of unchecked AI, for the reason that it evidently accepts that AI may be put in service of democracy. Above we have already seen how the enclosure of knowledge production directly opposes democratic control. Now we must make it clear why, even if AI were subjected to democratic control, it would still be against democracy if and when used in decision-making. In short, we must understand why AI is incompatible with democracy on any occasion it is used in decision-making, even in a supportive role.

The reason, once heard, may appear deceptively simple. Consider what philosophers Anthony Kenny and Peter Hacker have pointed out, here in Kenny’s words: ‘if having information is the same as knowing, then containing information is not the same as having information. An airline schedule does not know the time of departures of the flights’ (Kenny, 1971). Electronic systems can only ever contain information, and therefore can never know anything.

But knowledge is a prerequisite of decision-making. We may, to be sure, choose to use AI in order to reach a decision. But this can only mean the relinquishing of decision-making in favour of an abstraction, just the same as when we decide to flip a coin in order to arrive at a decision. When we flip a coin, it is obvious that no decision-making, in the sense of weighing the options presented, has been involved: we have not made a decision, nor has the coin. Even less have we come up with a shared decision by considering votes.

Any democratic use of AI in decision-making is to collectively refuse decision-making.

Any democratic use of AI in decision-making is to collectively refuse decision-making. If democracy is rule by the people in decision-making—indeed, if ‘Central to democratic theory is the ideal of collective self-determination’ (Bernholz et al., 2021, p. 7)—this is inherently against democracy for the reason that there is no decision-making involved. To democratically refuse democracy does not lead to democracy. Rather, it leads to not being democratic. AI is inherently anti-democratic; there is no use of it that can be democratic. The demand for democratic participation in the development and deployment of AI in decision-making, when coming from a supporter of democracy, is an act of profound self-harm.

Now, there are those who find it difficult to understand why AI should not be used in support of human decision-making. The answer should be clear by now, even though it is again deceptively difficult to grasp in its apparent simplicity. Again, there is no decision-making involved when AI is used to spit out an output, apart from the decision to relinquish the making of a decision. So, to be sure, yes, one may use AI in support of decision-making in precisely the same sense as one would consult an encyclopaedia, a book, a journal, one’s notes, or the Cynefin framework. Doing so is not by any means advisable, as AI only regurgitates material in such a way as to come to a median vanilla result. On top of this, the burden of checking what it has produced easily negates the supposed benefit of its use. Nevertheless, such use is admissible.

However, in many cases this is not the sense in which using AI in decision-making is being discussed, by proponents as well as critics. Not at all. What the discussion concerning the use of AI in many cases expresses an interest in is AI assisting in decision-making itself—in the sense of its being a participant in decision-making, like a fellow agent that has information and can act on the basis of it. Here the discussion concerning AI centres on easing decision-making by letting at least some, if not all, decisions be made by the machine.

In using AI in the first sense, as supporting research material, it cannot be said that it is used as support by way of helping in decision-making. Rather, it is here being used by a being capable of decision-making. For the human author is in this case making all the decisions concerning the use of the supporting material. If, on the other hand, AI is used in the second sense, that is, as a participant helping in decision-making, then the human author has relinquished the making of a decision. As there is no sense in suggesting that AI can make a decision—any more than there is sense in saying that the coin flipped is engaged in making a decision—no decision at all has been made in this case. It is not only incorrect, but a dangerous and confused muddle, to speak of AI as a participant in decision-making. To be sure, this is in no way challenged or mitigated by the confused retort of human oversight, for the precise reason that there is, by definition, no oversight happening here. Oversight is what happens when using AI in the sense of dubious supporting research material, the context and content of which require an extra layer of work to go through.

This is why we should hold people, especially politicians, to severe account over the use of AI, particularly as a participant. Recently, there was commotion in Sweden when Prime Minister Ulf Kristersson discussed his use of AI as a participant. One of the critics was Virginia Dignum (professor of responsible AI), who said: ‘The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope’ and ‘We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT.’ Indeed. If someone is so incompetent as to be tempted to use AI as a participant in their own work, as Kristersson suggests, ‘for a second opinion. What have others done? And should we think the complete opposite? Those types of questions’, they should not hold their job. More importantly, if someone who is trusted with decision-making betrays this trust by relinquishing decision-making and flipping coins instead, they are not doing what they are entrusted to do. This should lead to immediate dismissal.

if someone who is trusted with decision-making betrays this trust by relinquishing decision-making and flipping coins instead, they are not doing what they are entrusted to do

Dehumanizing Rationality 

Finally, we consider the dehumanizing rationality inherent in AI. As Wendy Brown discusses in Undoing the Demos, there are systems that come to dominate our human condition. Brown refers to Max Weber, who distinguished between value rationality and instrumental rationality. Value rationality refers to the ends that we as humans choose to value, while instrumental rationality refers to the means that we employ to reach our desired ends. One would like to think it obvious that the ends are not the same thing as the means. But, alas, no. So strong is the cynical mindset of some celebrated liberals, marginalists, and engineers that, in their busying themselves only with knowing the price of everything, they have become unable to see value in anything. It is a common challenge which some computer scientists—in their supposed rationality—think they are presenting others with: to question the relevance of the difference between ends and means.

By now, surely, everyone is familiar with this type of rhetoric: what does it matter how an end result was obtained? If to swim is to move through water, challenge computer scientists Stuart Russell and Peter Norvig (Russell & Norvig, 2022, p. 1035), what does it matter whether this is achieved by limb or propeller? In other words, the question here is: if the right thing gets done, what does it matter how it gets done? If machines spit out plausible answers, what does it matter whether they can think or have knowledge? It is perhaps easiest to understand the folly in this line of thinking by considering a painting. What is the difference between the end, that is, the picture, and the means, that is, the paint?

The difference is that the paint is not the picture. Rather, the paint is the medium through which a picture is rendered visible. No paint will by itself ever contain the picture as a quality within itself. Just the same, values are not the actions taken to reach them. Now, since machines are not and cannot in principle be capable of knowledge, they cannot make decisions that would make means (actions) meet with ends (values). In other words, no machine can, nor will ever be able to, do the right thing insofar as decision-making is concerned. There is no such question as “what does it matter how a machine reaches a decision”, because a machine can never reach a decision. All that happens in our using AI in decision-making is that we treat the output of a machine as if it were the point of view of a being capable of decision-making.

So strong is the cynical mindset of some celebrated liberals, marginalists, and engineers that in their busying themselves only with knowing the price of everything, they have become unable to see value in anything

It is by not understanding this simple fact that we cross the line separating us from the ultimate danger. As long as we use information systems, which should serve us humans, to our own ends, we are on the right side, that is, on the side of value rationality. But if we anthropomorphise an information system’s output and take that output into consideration in our own decision-making, then we are considering things from the point of view of the design of the system.

The information system, however, is only an inanimate means and it makes no sense to consider its outputs from its point of view as it does not and cannot have one. Nevertheless, if we consider information system output in our decision-making, we are by definition considering this output from the point of view of the system. Yet, the system only ever represents a model-world that is constructed from data, and in our considering system outputs we are only ever considering them ‘as if’ they were real (see Mühlhoff below). In other words, in considering system outputs we are only ever considering the world in the model, as Mary Morgan would say.

This is the solemn insight provided by Rainer Mühlhoff (professor of AI ethics) in his calling our attention to how the ideology behind building AI is one of a certain kind of rationality. Mühlhoff aptly describes this ideology as a ‘set of practices of self-optimisation’. As he says, AI works on a ‘subjective interpretation of probability’. Now, an AI-enhanced information system is an exponential intensification of the as if. For such a system enables the production of practically endless predictions, all from the point of view of the system.

AI represents the transcription of an irrational ideology (one thought of as rational by its proponents) in the form of an information system which is automated in order to perpetuate itself as long as we keep on feeding it with data. In other words, AI is a semi-automated sociotechnical assemblage which uses up both resources and humans in a self-feeding loop in service of the production of the as if. Needless to say, AI itself does nothing here. Rather, we inflict AI upon ourselves by being so foolish as to listen to a few unscrupulous actors telling us to believe in it, acting just like children whose imaginations convince them their figurines have feelings.

AI is built on the subjective measures chosen by its developers

While there is no underestimating the grave fact that AI is built on the subjective measures chosen by its developers, therefore handing them extraordinary (as well as illegitimate) power, the much more fundamental and detrimental problem lies precisely in our accepting AI as a point of view. This acceptance is nothing else than the surrender of value rationality to instrumental rationality. This surrender is the ultimate dehumanizing act because from this point onwards humans are enslaved to serve an instrument, rather than the instrument serving humans.

We must refuse this surrender in an absolute manner, not by creating an environment of negotiation and concessions as so many critics of AI do (we will elaborate on this shortly in the concluding remarks). For this surrender is to surrender ourselves to a perverse logic of governance that is being forced upon us by AI protagonists. These protagonists are effectively attempting to change the way society is governed by substituting a world according to their models for reality. In doing so, they get to project subjective predictions which pre-emptively decide on matters over and against reality. In the words of Mühlhoff, ‘This mechanism of betting on the individual and treating them as if they already manifest a certain (in reality, unknown) trait constitutes a limitation of individual autonomy’ (Mühlhoff & Ruschemeier, 2024, p. 270, my italics).

The crucial point here is not so much the limitation of individual autonomy. Rather, it is the cancelling of the real humanity (or the reality of the humanity) of the individual in favour of a representation of the humanity of an individual, by imposing on them guesses of traits that are derived from statistical inference. For individuals here do go on directing themselves just as they do in our current liberal order, namely by making their own decisions. This stands unaffected. What changes, however, is the rationality used in the governance of outcomes: these must now serve the system which produces them. Here the outcomes that matter are those which are beneficial from the point of view of the system: e.g. that a benefits scheme is running “optimally”, usually meaning that decisions are made quickly, that potential receivers of benefits are excluded, or that there is a reduction in the human labour needed to operate the system.

With the shift from value rationality to instrumental rationality, we witness a change in the ideology of governance. Unlike in value rationality, where means are meant to serve people, the goal with the use of instrumental rationality is no longer to directly enable the reaching of ends for people. Rather, in line with liberalism, the idea here is to consider the means as being the enablers of ends (autonomy, freedom, human flourishing). Advocating AI for use in the public sphere is part of the archetypal liberal attack against the State, the aim of which is to dictate to the State—and therefore impose upon others—the means it may and must provide citizens with, while at the same time minimizing its resources through claims of optimisation and efficiency. Consequently, the goal here is the more successful achievement of (liberal) ends indirectly, by way of optimising and making more efficient the means (governance, policies, regulations, etc.). In other words, AI protagonists share with liberalism the same confused idea: achieving ends by replacing them with means. This type of attempted explanation is circular in its reasoning: it takes the end, which the means are supposed to serve, to be found in the means themselves. But, as we have argued, means are not the same thing as ends.

 

Objectification, or the Role of AI as an Enabler of a Political Shift of Power: Some Concluding Remarks 

As we have seen, liberalism perfectly supports the purchase of an AI-driven knowledge and knowledge production paradigm shift. This purchase is in complete agreement with the liberal ideal of competition (inequality), as well as with the liberal hatred of democracy. However, here we will reflect on how the privatisation of knowledge and knowledge production leads to an illegitimate use of power over others.

Crucially, the privatisation of knowledge is in complete agreement with the confused liberal idea of equating means and ends. Here privatisation is supposed to lead to the achieving of ends through means (or instrumental rationality): it is the individual who is privately responsible for using means to achieve ends. The means become equated with ends, as ends are to be achieved within and from means. Such is instrumental rationality. This is in opposition to value rationality, where the means are not tied to ends, and where ends are therefore independent in relation to means.

The shift of logic to instrumental rationality, embracing the point of view of a system (the as if)—a shift greatly intensified through the use of AI—is made possible in the first place only because of our believing in the scientific reductionist view that means are the same as ends, and our therefore holding the assumption that in speaking of AI (means) we are discussing the achieving of ends by AI (it does not matter whether it is said that AI output requires human oversight, for the assumption is still that AI has delivered an output which merits attention, just as in the case of considering a statement coming from a human being). This belief is amply testified to in the discussion of AI critics who point to the fact that AI is not a neutral technology, but one which reflects the biases and values of its builders (Helbing, 2019; Sudmann, 2019; Bernholz et al., 2021; Simons, 2023; Coeckelbergh, 2024). Implicit in this critical discussion is the question “whose ends does AI serve?”.

When we mistakenly believe AI can achieve ends, we believe that AI is capable of serving its own end, even if AI cannot really achieve ends

Although this question is still important in another sense—namely in the sense of whose ends the use of AI serves—it is both misplaced and wholly secondary here, where the assumption held by proponents and critics alike is that AI itself can achieve ends. For the answer is that AI itself, as opposed to the use of AI, can serve no ends at all. When we mistakenly believe AI can achieve ends, we believe that AI is capable of serving its own end, even if its own end were in service of the end of someone else, such as its creator, or (cringe) humanity. For even if its end were not its own, the assumption is that AI can assume an end and achieve it by its own means; that it will do its master’s bidding through its own decision-making act. But, as we have seen, machines can have no ends which they would supposedly achieve. They cannot, therefore, serve any ends at all.

The question of whose ends AI serves is nonsense, that is, meaningless, precisely because AI cannot achieve ends. In other words, the idea that AI is a technology that can make decisions, and can therefore be used like a participant helping in decision-making, is nonsense. Consequently, there is no scientific validity whatsoever to such a belief, precisely because it is nonsense. Note that nonsense, per se, refers neither to error of fact nor to being stupid. Rather, nonsense is a string of words that makes no sense. The inevitable confusion in our current context is due to the idea of AI decision-making being latent (i.e. non-obvious) nonsense as opposed to patent (i.e. obvious) nonsense. Latent nonsense may have an appearance of making sense, even if after closer examination all such appearance evaporates. Here is an example by Peter Hacker of patent nonsense: ‘The number 3 fell in love with the number 2 and they got married in world three’ (Bennett & Hacker, 2003, p. 242).

Although patent nonsense is not stupid on its own, it would lead to stupidity if someone were to insist that it is intelligible that numbers can actually marry, and that they were engaged in scientific research that might prove this (and that once this is done, we will all be invited to the wedding). Here—just as with the idea of artificial intelligence making decisions—there is no scientific question to study to begin with, because the premise for a study is nonsense (i.e. there is no premise for a study). This, unfortunately, is not widely communicated and therefore not widely understood, leading to a situation such as liberalists pride themselves in exploiting: an ignorant, uneducated mass ripe for being subjected to control by those who know better.

With the idea of decision-making AI lacking scientific validity, it is not, in reality, a technological paradigm that is causing disruption and challenging us to consider how to make use of it. AI does not, and never will, deliver on the idea of it becoming an agentic tool assisting in decision-making. Therefore—contrary to the commonly accepted view—we are not facing a technological paradigm shift where AI would supposedly improve things which require knowledge, thinking, or intelligence. Rather, AI is the vehicle of a financially engineered attempt at purchasing a change of political order.

This attempt itself is not enabled by AI supposedly being a technological game-changer. Quite to the contrary, this attempt enables itself through telling a fairytale of AI supposedly being a technological game-changer. The reason such a financially engineered self-enabling is possible is that it itself is enabled by 1) a society which believes that markets are best at delivering innovation and change, and 2) theft through the reappropriation mechanism of the commons aimed at enclosure (privatisation). AI itself is merely an objectification of the narratives of markets and commons. Yet, this objectification is crucial since the economy requires objects and services that can be sold consistently.

The ability to consistently sell objects, in turn, requires both that the objects to be sold are met with demand and that they are standardised to a high degree so as to achieve a guarantee of some expected level of consistent quality. This process of developing both markets (demand) as well as objects ready for market exchange is called commodification. It is essentially a process of developing an object for sale, but the activity required to achieve this can vary significantly. In many cases, the object in question is already a recognised commodity, say, footwear or headphones. This is to say that in such cases there already exists an established market for the object to be sold. Therefore, in such cases, there is no need to create a new category or form of commodity; although the developing of an object for sale can still be called commodification, there is no need here to create a market for a new category of object. In some cases, however, commodification requires the creation not just of an object, but also that of a market. Without a market for it, AI is not an interesting object except for the few experts working in the field.

So, what has created a market for AI? Obviously not AI itself, for it has not shown itself to be capable of the things promised on its behalf. This much everyone is in agreement with: by far the most common trope heard in relation to AI—even from its most enthusiastic supporters—is that although it is not yet there, someday it will become able to fulfil its promise. The answer can be found in asking what AI is for; what its use case is.

Well, the use case for AI is the processing and analysis of data. Without data, AI is thoroughly useless. Therefore, for AI to be of any general interest, data must first be made interesting. To do so, data must be commodified. In other words, a data economy must first be created in order for AI to become interesting. But when an economy is created in a liberal society, this means the enclosure of the commons; that is, theft, under the name of privatisation, of public property. Once this enclosure is enacted, that which was once common to the public becomes fractionalised and is auctioned off for private purposes, the aim of which is to use the newly minted commodity in competition against both other private owners and the public.

The aim in creating a data economy is to privatise knowledge and knowledge production in order to allow private actors to produce unaccountable knowledge suited to their own goals.

What, then, is the idea behind creating a market for data? The aim in creating a data economy is to privatise knowledge and knowledge production in order to allow private actors to produce unaccountable knowledge suited to their own goals. When knowledge and knowledge production are public—as opposed to a common good which may be made use of by private individuals—this means that they are shared among society. This sharing is what makes knowledge robust and gives it legitimacy, as it serves to make knowledge transparent by the fact that everyone can study it, verify it, contest it, and add to it. Because knowledge is shared and transparent, no one can make up their own knowledge and claim that it must be recognised as knowledge by others. When knowledge and knowledge production are privatised through the theft-mechanism deployed in the concept of the commons, the legitimating standard for qualifying knowledge is thrown overboard. This allows private actors to construct reality by making up knowledge in light of which to place claims on how others should act, what others should take into account, or what others can be expected to agree with.

This privatised knowledge production allows some individuals, wishing to impose their opinions and power over others, to portray themselves as justified by appeal to the external referent provided by public knowledge. But here, of course, the justification of publicly acknowledged knowledge is lacking. This is an extraordinary assault on the independence of people, as what is here granted is an extraordinary power for some individuals to impose their claims over and against others in the way an absolute sovereign in a monarchy would. The creation of a data economy is fundamentally a move to autocracy and dictatorship. AI serves the role of posing as an external referent offering a veil of objectivity that those wishing to oppress others may point to as justification.

we must recognise and correct the unfortunate, yet understandable tendency of AI criticism to accept the narratives concerning the potential or possibilities of AI to do with decision-making

Finally, we must recognise and correct the unfortunate, yet understandable, tendency of AI criticism to accept the narratives concerning the potential or possibilities of AI with regard to decision-making. What follows is not a criticism aimed at critics who have accepted the narrative of AI hype. For refuting the capabilities of AI is not something everyone is prepared or equipped to do, and, in any case, there is room for a large number of various critical perspectives on the use of AI. However, we must expose propositions that do not make sense once we stumble upon them. Now, it seems that the majority of criticism recognises the potential for AI to be a participant in decision-making while, of course, also wishing to point out the risks involved in such use of AI.

What this line of criticism concerns itself with is a “good” use of AI: that AI be used democratically and in service of people. Unfortunately, this is even more dangerous than the hype coming from uncritical proponents. For it gives the concession—from people who are seen as having authority, being considered sagacious and impartial—that AI really is (or is at least very likely to be) capable of being used as a participant in decision-making.

This concession serves as verification for the idea that we really are facing some magical technology which—just like in some bad teen film where protagonists face decisions that will change the course of humanity—we must now stare in the eye and steer wisely in a bid for our continued prosperity. Such a narrative of a crossroads is promoted by many (see, for example, Helbing, 2021, or Bernholz et al., 2021). However, this narrative is fundamentally confused: it is to accept as real the science-fiction premise that it is intelligible that a machine can be made use of in making decisions by way of considering its outputs as we would consider those of human beings, who act on information and come to make decisions. But AI cannot make decisions.

Consulting AI in decision-making is the same as relinquishing decision-making in favour of flipping a coin. If democracy requires the making of decisions, then using AI as a participant in decision-making is anti-democratic. Moreover, AI cannot be made use of in service of people, for the reason that in following and adapting to its outputs we are only capitulating to the instrumental rationality of the system. In doing so, we are by definition making people serve the system. As a world in a model, a world arising from shuffling data into constructed abstractions, AI is inherently incapable of serving people, for the reason that people are not abstract entities occupying the world in a model. Since we now understand that the idea of an AI capable of participating in decision-making is pure science fiction, we must absolutely refuse to entertain it as a possibility, and thereby refuse any discussion regarding the potential for the right kind of (good) use of AI when it comes to democracy and serving people.

It is unacceptable to entertain the suggestion that people might be subjected to the arbitrary control of the instrumental rationality of actors participating in a data economy. To allow this is to go against the constitutional right of people to hold agency over their own lives by allowing some to place ‘a limitation of individual autonomy’ on others (Mühlhoff & Ruschemeier 2024, p. 270). To be sure, there can be no democratic use of AI in decision-making. The common suggestion to make the development and use of AI more democratic is fundamentally confused. We may collectively decide to relinquish decision-making, but this can never be a democratic action since it eschews decision-making while democracy requires decision-making.

Most importantly, however, the use of AI in predictive decision-making in any context where those decisions concern the opportunities and behaviour of humans is inherently against humans and life for the obvious reason that this use ‘implies the transformation of a statistical inference, which is always knowledge related to the population as a whole, into a prediction about an individual case (point prediction)’ (Mühlhoff & Ruschemeier 2024, p. 270). In other words, such use of AI reduces people into abstract entities serving a model world.
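To make the quoted mechanism concrete, here is a minimal sketch in Python (the population, the groups, and the decision threshold are all invented for illustration; none of it comes from Mühlhoff and Ruschemeier): the inferred figure is a statement about the frequency of a trait in a population, yet the decision applies it to one individual as if it were that person's own known property.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented population: 10,000 people in five arbitrary groups (say,
# postcode areas), with a binary outcome whose frequency differs by group.
group = rng.integers(0, 5, 10_000)
outcome = rng.random(10_000) < (0.05 + 0.04 * group)  # rates of 5%...21%

# Statistical inference: knowledge about each *group as a whole*.
group_rates = {g: outcome[group == g].mean() for g in range(5)}

# Point prediction: the population-level figure is stamped onto a single
# person and acted upon as if it were their personal, known trait.
applicant_group = 4
score = group_rates[applicant_group]
decision = "reject" if score > 0.15 else "accept"
print(f"group frequency {score:.1%} treated as this person's own risk -> {decision}")
```

The individual is never measured at all; a frequency belonging to a constructed grouping is simply imputed to them and acted upon.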

surrendering value rationality to instrumental AI rationality is the ultimate dehumanizing act because from this point onwards humans are enslaved to serve an instrument, rather than the instrument serving humans

Bibliography

Bennett, M.R., Hacker, P.M.S. (2003). Philosophical Foundations of Neuroscience. Oxford, UK: Blackwell Publishing. 

Bernholz, L., Landemore, H., and Reich, R. (2021). Digital technology and democratic theory. Chicago/London: University of Chicago Press. 

Brown, W. (2020[2015]). Undoing the Demos. Neoliberalism’s Stealth Revolution. New York: Zone Books. 

Bryant, M. (2025, August 5). ‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role. The Guardian. Available at: https://www.theguardian.com/technology/2025/aug/05/chat-gpt-swedish-pm-ulf-kristersson-under-fire-for-using-ai-in-role. Last accessed 7 November 2025.

Coeckelbergh, M. (2024). Why AI Undermines Democracy and What to Do about It. Cambridge: Polity. 

Helbing, D. (2019). Towards Digital Enlightenment—Essays on the Dark and Light Sides of the Digital Revolution. Switzerland: Springer International Publishing AG. 

Helbing, D. (2021). Next Civilization—Digital Democracy and Socio-ecological Finance: How to Avoid Dystopia and Upgrade Society by Digital Means. Switzerland: Springer Nature Switzerland AG. 

Joque, J. (2022). Revolutionary Mathematics—Artificial Intelligence, Statistics and the Logic of Capitalism. London: Verso. 

Kenny, A. (1971). The Homunculus Fallacy. Made available by University of Nantes, Centre Atlantique de Philosophie. Available at: https://ifac.univ-nantes.fr/IMG/pdf/Kenny-Homunculus-Fallacy.pdf.

Landemore, H. (2021). Open Democracy and Digital Technologies. In: L. Bernholz, H. Landemore & R. Reich (eds.) Digital Technology and Democratic Theory (pp. 62–89). Chicago: The University of Chicago Press.

Mazzucato, M. (2019[2018]). The Value of Everything—Making and Taking in the Global Economy. UK: Penguin Random House. 

Mühlhoff, R. & Ruschemeier, H. (2024). Predictive analytics and the collective dimensions of data protection. Law, Innovation and Technology. 16(1), https://doi.org/10.1080/17579961.2024.2313794.

Perez, Carlota. (2005[2002]). Technological Revolutions and Financial Capital—The Dynamics of Bubbles and Golden Ages. UK: Edward Elgar Publishing Limited. 

Pistor, K. (2019). The Code of Capital—How the Law Creates Wealth and Inequality. Princeton/Oxford: Princeton University Press.

Russell, S. & Norvig, P. (2022) Artificial Intelligence – A Modern Approach. 4th Ed, Global Ed. Harlow: Pearson Education Limited.  

Simons, J. (2023). Algorithms for the People. Democracy in the Age of AI. Princeton/Oxford: Princeton University Press.


 

 

StultiferaBiblio

Published 26 November 2025

Rauli Westerstrand

Rauli Westerstrand / Insight, Foresight & Strategy | Techistential | Disruptive Futures Institute