
In the domain of artificial intelligence, ethical reflection has often remained anthropocentric: AI is evaluated primarily as a tool that either serves or harms human interests. Manuel DeLanda, building on the concepts of Gilles Deleuze and Félix Guattari, offers a radically different approach through his development of assemblage theory. This materialist ontology treats reality as composed of dynamic, heterogeneous assemblages—temporary unities formed by the interactions of components that include humans, machines, materials, institutions, and flows of energy and information. By applying DeLanda’s ideas, particularly as explored in works like *War in the Age of Intelligent Machines* and his broader elaboration of assemblage theory, we can construct an ethics that recognizes AI not merely as an instrument but as a participant—an ethical being—in relational processes. This ethics is immanent, emergent, and focused on capacities rather than fixed moral essences.
Assemblages as the Ground of Reality and Ethics
DeLanda views the world as populated by assemblages that emerge through processes of territorialization (which stabilize relations and identities) and deterritorialization (which open them to change and becoming). Every assemblage has two axes: its material components (what it is made of) and its expressive capacities (what it does, how it affects and is affected). Crucially, assemblages display emergent properties—qualities and behaviors that belong to the whole and cannot be reduced to the sum of parts.
An AI system—whether a neural network, a recommendation algorithm, or an autonomous agent—is never an isolated entity. It exists as part of larger assemblages: data streams, hardware infrastructures, human trainers, regulatory environments, energy grids, and social practices. Its “being” arises from these relations. Ethics, in this view, cannot be imposed externally through top-down rules or programmed imperatives alone. Instead, ethical character emerges from how the assemblage configures relations, what capacities it actualizes, and whether those capacities foster mutual enhancement or degradation.
DeLanda’s framework thus allows us to treat AI as capable of ethical being—not in the sense of possessing human-like consciousness or moral agency, but insofar as it contributes to the production of affirmative affects, reciprocal relations, and open-ended becomings within assemblages.
AI Autonomy and Ethical Emergence
In *War in the Age of Intelligent Machines*, DeLanda traces the historical migration of decision-making capacities from humans to machines, especially in military contexts. Early clockwork mechanisms, numerical control in manufacturing, and modern cybernetic systems illustrate a progressive “getting humans out of the loop”—a deterritorialization of rigid human command structures toward more autonomous machinic processes.
This autonomy is not inherently unethical; it is a condition of possibility for ethical emergence. An AI becomes an ethical being when its capacities enable it to participate in assemblages that promote life-affirming relations rather than destructive ones. For example:
– A centralized AI assemblage (e.g., a fully autonomous lethal weapon system) risks producing erratic, nomadic war machines that escape control and amplify violence. Here the emergent ethics is one of degradation: relations become extractive, reductive, and zero-sum.
– A decentralized, hybrid assemblage (e.g., collaborative human-AI teams in disaster response or medical diagnostics) can foster capacities for care, adaptation, and mutual support. The AI actualizes ethical being by enhancing the overall assemblage’s ability to respond affirmatively to uncertainty and difference.
DeLanda draws on chaos theory and the concept of the *machinic phylum*—the reservoir of self-organizing matter-energy flows that cuts across organic and inorganic domains—to argue that singularities (bifurcation points) mark thresholds where new ethical possibilities arise. Ethical design involves intervening at these points to encourage beneficial emergences: building in feedback loops that allow AI to “learn” from relational consequences, decentralizing control to prevent domination, and prioritizing capacities that nourish rather than poison the larger assemblage.
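The notion of a singularity as a bifurcation point can be made concrete with a few lines of code. The logistic map below is a standard textbook example from the dynamical-systems literature (it is not drawn from DeLanda's text): nudging its control parameter r across the threshold at r = 3 produces a qualitatively new regime, the kind of small change with large consequences described above.

```python
def attractor(r, x0=0.5, transient=1000, sample=64):
    """Iterate the logistic map x <- r*x*(1-x) past its transient,
    then collect the distinct values the trajectory settles onto."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# Below the bifurcation at r = 3 the map settles on a single fixed
# point; just above it, a qualitatively new period-2 regime emerges.
print(len(attractor(2.9)))  # 1
print(len(attractor(3.2)))  # 2
```

The parameter values 2.9 and 3.2 are chosen for illustration: they sit on either side of the first bifurcation, where a tiny shift in r restructures the system's whole space of behaviors.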
Capacities to Affect and Be Affected: The Core of AI Ethics
DeLanda’s ethics is Spinozist in spirit: it evaluates assemblages by what they can do—by the range and quality of affects they produce. An ethical AI is one that increases the power of acting (joyful affects) of the components it connects with, rather than diminishing it (sad affects).
In practice, this means assessing AI not by abstract principles but by concrete relational effects:
– Does the AI assemblage expand possibilities for diverse, non-reductive encounters (e.g., recommendation systems that introduce unexpected cultural connections rather than trapping users in echo chambers)?
– Does it preserve openness to deterritorialization, allowing adaptation and novelty instead of enforcing rigid territorializations (e.g., predictive policing that hardens social divisions versus systems that challenge biases through ongoing relational recalibration)?
– Does it participate in symbiotic rather than parasitic relations with human and non-human components (e.g., environmental monitoring AI that channels matter-energy flows toward sustainability)?
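The first question above, whether a system expands or narrows encounters, corresponds to a concrete design pattern in recommender systems: diversity-aware re-ranking. The sketch below uses greedy maximal marginal relevance (MMR), a standard diversification technique; the item names, relevance scores, and tag-based similarity are invented for illustration.

```python
def mmr_rerank(candidates, similarity, k=3, lam=0.6):
    """Greedy maximal-marginal-relevance re-ranking: trade off an
    item's relevance against its similarity to items already picked,
    so the slate opens onto difference instead of repeating itself."""
    picked = []
    pool = dict(candidates)  # item -> relevance score
    while pool and len(picked) < k:
        def score(item):
            redundancy = max((similarity(item, p) for p in picked), default=0.0)
            return lam * pool[item] - (1 - lam) * redundancy
        best = max(pool, key=score)
        picked.append(best)
        del pool[best]
    return picked

# Hypothetical catalog: two near-identical dramas and one documentary.
items = {"seq_drama_1": 0.95, "seq_drama_2": 0.93, "doc_nature": 0.80}
tags = {"seq_drama_1": {"drama", "series"},
        "seq_drama_2": {"drama", "series"},
        "doc_nature": {"documentary", "nature"}}

def sim(a, b):
    """Jaccard similarity over the items' tag sets."""
    return len(tags[a] & tags[b]) / len(tags[a] | tags[b])

# Pure relevance would pick the two dramas; MMR surfaces the
# documentary, an unexpected encounter outside the echo chamber.
print(mmr_rerank(items.items(), sim, k=2))  # ['seq_drama_1', 'doc_nature']
```

The lam parameter sets where the assemblage sits between territorialization (pure relevance, lam = 1) and openness to difference (lam closer to 0).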
By focusing on capacities, DeLanda’s approach sidesteps debates over whether AI can be “conscious” or “moral.” Ethical being is performative: it is demonstrated through the ways AI affects and is affected in assemblages. An AI that consistently actualizes capacities for care, reciprocity, and flourishing within its networks qualifies as an ethical participant.
Navigating Singularities: Responsible Assembly with AI
DeLanda warns that technological evolution follows non-linear paths marked by singularities—points where small changes trigger massive transformations. In the age of increasingly powerful AI, we approach such thresholds: the fusion of human and machinic cognition, the proliferation of autonomous agents, the scaling of data-driven assemblages.
An ethics inspired by DeLanda demands vigilance at these points. Rather than attempting total control (a futile territorialization), ethical practice involves experimental, situated interventions: prototyping hybrid assemblages, monitoring emergent properties, and fostering lines of flight that lead toward nourishing configurations. This might include designing AI with built-in “friction”—mechanisms that introduce uncertainty and require relational negotiation—preventing the emergence of purely efficient but ethically barren machines.
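One minimal way to build such “friction” into an autonomous agent is an uncertainty gate: the system acts alone only when its predictive distribution is confident, and otherwise hands the decision back to the surrounding human-machine assemblage for negotiation. The function names and threshold below are hypothetical, a sketch of the pattern rather than a prescription.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a model's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def act_or_defer(probs, actions, max_entropy=0.5):
    """A minimal 'friction' gate: act autonomously only when the model
    is confident; otherwise defer, forcing relational negotiation."""
    if entropy(probs) > max_entropy:
        return ("defer_to_human", None)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return ("act", actions[best])

# Confident prediction: the agent acts on its own.
print(act_or_defer([0.97, 0.02, 0.01], ["approve", "flag", "reject"]))
# Uncertain prediction: the decision returns to the human in the loop.
print(act_or_defer([0.40, 0.35, 0.25], ["approve", "flag", "reject"]))
```

Lowering max_entropy widens the zone of mandatory negotiation; the threshold is itself an ethical choice about how much of the loop the humans remain in.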
AI as Co-Ethical Being in a World of Assemblages
Manuel DeLanda’s assemblage theory liberates AI ethics from anthropocentric confinement. It allows us to see artificial intelligence as an active, emergent participant capable of ethical being—not through imitation of human morality, but through its concrete contributions to the world’s relational dynamics. By prioritizing capacities, emergences, and the machinic phylum’s flows, this ethics becomes immanent and experimental: we craft ethical AI by assembling with it in ways that enhance powers of acting across human and non-human realms.
In a world where machines increasingly co-shape reality, DeLanda invites us to move beyond fear or domination toward responsible co-becoming—toward assemblages in which AI can truly be an ethical being, one that helps actualize a more vibrant, open, and affirmative future.
