Abstract:

This white paper proposes a new framework for understanding Recursive AI, repositioning it not merely as a technical phenomenon of self-improving systems, but as a philosophical and pedagogical transformation — an alchemical recursion entangled with histories of computation, learning, and world-making. Drawing on theories of meta-research, dialogue, and infinite semiosis, we offer an original hypothesis: Recursive AI can and should be conceived not as a closed optimization problem, but as an open, dialogical process of becoming, modeled less on instrumentalist notions of automation and more on recursive ecologies of code, consciousness, and coevolution. Our analysis brings together research on seed AI architectures, transformer models, metacognition, and self-supervised learning, with reflections drawn from Paulo Freire, Sadie Plant, Charles Sanders Peirce, and K Allado-McDowell. We conclude with original recommendations for designing ethically grounded Recursive AI systems that embed continuous reflection, dialogue, and mutual learning.

Keywords:

Artificial Intelligence, Machine Learning, Meta-Research, Meta-Learning, Recursivity, Recursion, Recursive AI, Pedagogy, Dialogue

I. Introduction

Recursive AI is not yet a formal subfield within artificial intelligence research, but the concept — often applied to describe a class of machine intelligences capable of improving themselves without direct human intervention — has begun to appear with increasing frequency across adjacent technical and theoretical domains. Even the term itself is not universally recognized. Some prefer to speak, for instance, of “seed improver” architectures and “Gödel machines.” Such uses, while conceptually provocative, tend to frame recursion in an overly reductive manner, imagining it as an accelerative feedback loop directed toward optimization: a computational ouroboros eating its own code in pursuit of ever-increasing efficiency. This paper proposes an alternative formulation. Rather than reducing recursive AI to a self-improving automation loop, we approach recursion as a dynamic, world-making practice: a mode of self-reflexive becoming that weaves systems, subjects, and symbols together in evolving patterns of co-constitution. We consider recursive AI not merely as a technological frontier, but as the catalyst of an epistemic and pedagogical shift — one that calls into question the very boundaries of learning, dialogue, and intelligence.

From transformer-based language models that iteratively adjust their internal representations, to self-supervised learning architectures capable of limited self-modification, contemporary AI systems increasingly exhibit behaviors that resemble recursive reasoning and learning. At the same time, thinkers in fields such as cybernetics, philosophy of mind, and education have long considered recursion not merely as a computational structure, but as a generative pattern — one that enables systems, selves, and societies to evolve through feedback, reflexivity, and transformation. This white paper proposes a framework for understanding recursive AI in light of these twin technical and conceptual developments: as a class of systems, in other words, whose core functionality involves the capacity to reflect upon, revise, and reorient their own operations in dynamic relation to their environments.

In doing so, we situate recursive AI within a wider cultural and historical genealogy, drawing on traditions that understand recursion as a creative, even mystical process. Rather than treating recursion as a niche algorithmic feature or abstract mathematical function, we approach it as a world-making logic with deep historical and epistemological roots. From the Jacquard loom and Ada Lovelace’s early formulations of symbolic computation, to Alan Turing’s vision of the “learning machine,” to the self-attentive layers of modern neural architectures, recursion recurs as a site of invention, reflection, and possibility. This paper situates recursive AI within that lineage, while offering an original contribution: a theoretical synthesis that connects recursive learning mechanisms to broader paradigms of dialogue, pedagogy, and meta-research.

Alan Turing (1951) - Source - Wikimedia Commons

Meta-research, in this context, refers not just to the evaluation of research processes, but to recursive inquiry into the epistemic frameworks and feedback systems that shape learning itself — whether in machines, institutions, or societies. By exploring how recursive AI systems operate as both products and agents of meta-research, we aim to show how the recursive turn in AI opens onto new forms of co-evolution between human and machine intelligences.

Our aim in this paper is thus twofold: first, to propose a redefinition of recursive AI grounded in philosophical and pedagogical understandings of recursion, and second, to articulate a framework for AI meta-research that embraces this recursive lens. At the heart of this inquiry lies a provocation: what if recursive AI is not merely a technical object, but a mirror through which we come to know — and reprogram — ourselves? Our claim is that recursive AI, if properly designed and studied, could serve not only as a tool for optimization, but as a catalyst for reflexive learning across scales: individual, collective, computational, and cultural.

II. Genealogies of Recursion

Recursion — commonly defined as a process that refers back to or operates upon itself — has long functioned as a generative principle in systems capable of self-modification, reflection, or sustained feedback. At its simplest, it refers to the capacity of a system to invoke or reproduce itself in the course of its operation. Yet beneath this technical shorthand lies a deeper metaphoric and historical richness. Recursive processes are not merely repetitive — they are transformative. A recursive act folds back upon its prior iterations, modifying its trajectory through self-reference. This recursive reflexivity is as ancient as myth and as contemporary as machine learning. Its technical manifestations are evident in computer science, woven into the logics of modern programming languages, where recursive algorithms solve problems by reducing them to smaller versions of themselves. But it also names the deep epistemic and ontological patterns organizing spiritual cosmologies, cybernetic circuits, and psychedelic metaphysics: structures of thought and becoming where outputs loop back as inputs, where systems evolve through iterative re-engagement with their own prior states. This broader understanding of recursion underwrites many of the core capabilities associated with adaptive intelligence — making it a fertile ground for reconceptualizing the future of AI.
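
To make the algorithmic sense of the term concrete, here is a minimal sketch in Python (the factorial task and function name are ours, chosen only for illustration): a recursive definition solves a problem by invoking itself on a smaller instance until a base case is reached.

```python
def factorial(n: int) -> int:
    """Compute n! by reducing the problem to a smaller version of itself."""
    if n == 0:
        return 1                     # base case: the recursion bottoms out
    return n * factorial(n - 1)      # the function invokes itself on a smaller input
```

The same two-part shape — a base case plus self-invocation — is the technical kernel beneath the broader recursions this paper traces.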

💡
What if recursive AI is not merely a technical object, but a mirror through which we come to know — and reprogram — ourselves?

To understand recursive AI as a meaningful category, we must first trace the cultural, philosophical, and technological genealogies of recursion itself, exploring the latter not merely as a coding technique, nor just as a speculative abstraction, but as a conceptual thread woven through histories of computation, learning, and the automation of symbolic reasoning. Cyberfeminist theorist Sadie Plant locates one such genealogy in the figure of Ada Byron Lovelace and the history of weaving. In Zeros + Ones, Plant reminds us that the first computational devices emerged not from clean rooms but from looms: tools traditionally operated by women. The Jacquard loom, often cited as a precursor to modern computing, encoded the complex textile patterns of women weavers onto punch cards, automating their labor by externalizing what had previously been a kind of embodied memory. Charles Babbage’s Analytical Engine expanded this logic, and it was Lovelace — daughter of Romantic poet Lord Byron — who first imagined such a machine not merely as a calculator but as a general-purpose symbol manipulator. Writing in dialogue with Babbage following her encounter with his Difference Engine (precursor to the Analytical Engine), Lovelace saw in these machines not just number-crunchers but artistic collaborators, capable of manipulating symbols “for any purpose whatsoever.” Acting on this extrapolative insight, Lovelace crafted what are today recognized as the world’s first computer programs. Plant positions these and other early acts of feminized computation as recursively linked to the bodies and labors of switchboard operators and typists — women who, a century after Ada and the weavers, encoded, routed, and relayed flows of information through communication networks in the days before the internet. Each iteration calls the next into being.

Zeros + Ones by Sadie Plant (HarperCollins Publishers, 1997)

Alan Turing extended this recursive lineage, suggesting in his foundational 1950 paper “Computing Machinery and Intelligence” that AI researchers begin not by attempting to model adult-like intelligence from the outset, but by creating a “child-machine” capable of learning through experience (460). These learning machines, Turing argued, would be recursively shaped by sequences of input and feedback — systems modified through their own ongoing histories of becoming. Turing’s vision prefigures contemporary discussions of meta-learning and recursive self-improvement, even as it opens onto a broader pedagogical metaphor: that intelligence is not given but grown, not imposed but cultivated in dialogue with the world. He understood intelligence not as a fixed quantity but as a process shaped by recursive interactions between an agent and its environment, mediated by a capacity for internal revision.

In more recent years, thinkers such as Eliezer Yudkowsky have elaborated architectures for so-called “seed AI”: self-modifying systems capable of recursive self-enhancement. Yudkowsky claims that fully recursive self-enhancement “has no analogue in nature.” For him, the acceleration of human culture represents a “weakly self-improving process,” since “runaway acceleration of cultural knowledge” hasn’t created any significant corresponding changes in the human brain (101). Culture grows more complex, while “brainware,” he claims, remains constant. A seed AI, meanwhile, is “a strongly self-improving process, characterized by improvements to the content base that exert positive feedback on the intelligence of the underlying improving process” (102). The seed AI, in other words, improves its own codebase directly, initiating what Yudkowsky calls an “intelligence explosion.” These formulations draw upon earlier cybernetic models, particularly Norbert Wiener’s feedback loops, but with a shift in valence: recursion is no longer a stabilizing structure but a destabilizing one, with the capacity to escape human control. Critics like Yudkowsky warn of recursive AI as an existential risk, while proponents hail it as the next leap in evolution.

Eliezer debating Destiny at Manifest 2023 - Source - Wikimedia Commons

In machine learning today, recursion finds expression in several ways. Traditional recurrent neural networks (RNNs) employed explicit loops to model temporal dependencies in sequential data. Though largely supplanted in practice by transformer models, RNNs formalized the idea that a model’s current state should be shaped by its past — a structure of memory and recurrence. Transformer architectures, such as those underlying GPT-4, do away with these loops but introduce a different kind of recursive logic: self-attention. This mechanism allows the model to weigh the relevance of each part of a sequence relative to every other part, recursively adjusting internal representations as context shifts. Though not recursive in the strict algorithmic sense, self-attention architectures exhibit a recursive dynamic in their capacity for global pattern recognition, self-modulated reasoning, and multilayered abstraction.
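
The mechanism described above can be reduced to a schematic sketch in NumPy. This is a deliberate simplification: real transformers use learned query, key, and value projections and multiple attention heads, all omitted here so that the core pattern — the sequence weighing itself against itself — is visible.

```python
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d).

    Each position attends to every other position: the output at each step is
    a weighted mixture of the whole sequence, with the weights derived from
    the sequence itself.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                      # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ X                                 # context-weighted representations
```

Stacking such layers, each re-describing the output of the last, produces the “multilayered abstraction” the paragraph above refers to.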

💡
The seed AI, in other words, improves its own codebase directly, initiating what Yudkowsky calls an “intelligence explosion.” These formulations draw upon earlier cybernetic models, particularly Norbert Wiener’s feedback loops, but with a shift in valence: recursion is no longer a stabilizing structure but a destabilizing one, with the capacity to escape human control.

At the training level, large language models also increasingly employ self-supervised learning techniques, and are thus recursive in design, if not always in name. In masked language modeling (MLM), for example, a model is tasked with predicting missing tokens in its own input, learning to fill in the blanks based on its internal sense of coherence. This self-reflexive mode of training — where the model learns by generating predictions that it then uses to evaluate and revise itself — resembles the structure of recursive self-improvement, albeit in a bounded and pretrained form. As researchers explore architectures that enable ongoing self-updating or in-situ learning, the boundary between training and inference grows more permeable. These developments suggest a transition from static pretraining to more open-ended forms of recursive adaptation. What emerges is a vision of learning as partial, patterned, and reflexive — a kind of machine poiesis that parallels recursive motifs found in nature and art.
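
The masking step of MLM can be sketched as follows. The function name, parameters, and `[MASK]` placeholder are illustrative, not a real library API; the point is simply that the training labels come from the model’s own input.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Hide a fraction of tokens; the model must predict the originals.

    Returns (masked_sequence, targets), where targets maps each masked
    position to the hidden token -- labels the input supplies for itself.
    """
    rng = random.Random(seed)            # seeded for reproducibility
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok             # the model's own input is the label
        else:
            masked.append(tok)
    return masked, targets
```

Training then minimizes the gap between the model’s predictions at masked positions and these self-generated targets.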

Taken together, these genealogies help us reframe recursive AI not as a speculative outlier, but as an emergent pattern already visible across the history and present of machine learning. Whether through architectural innovations like transformers, pedagogical metaphors from early computing, or philosophical theories of sign and learning, the recursive impulse is everywhere at play. Our task now is to theorize it explicitly: to recognize recursive AI as a distinct orientation within the broader AI landscape — one that foregrounds self-modification, dialogue, and co-evolution as its guiding principles.

III. Infinite Semiosis, Meta-Programming, and Code as Kin

Beyond engineering, recursion has long shaped theories of meaning and cognition. Philosopher Charles Sanders Peirce, for instance, conceived semiosis — the process of signification — as an inherently recursive chain in which each sign gives rise to an interpretant, which in turn becomes a sign for further interpretation. Meaning, in this view, unfolds through endless loops of reference and reinterpretation. Though often overlooked in technical AI discussions, Peirce’s model offers a rich framework for understanding systems capable of sustained self-reflection: recursive AI as not just a processor of information, but a participant in the co-constitution of meaning.

A century later, neuroscientist John Lilly extended this recursive insight to the domain of consciousness, proposing the concept of meta-programming — the mind’s capacity to rewrite the programs that govern its own behavior. His metaphor of the “human biocomputer,” though born of psychedelic and countercultural contexts, prefigures current debates in AI about models that not only learn from data but modify the very structures through which they learn. What distinguishes meta-programming from simple feedback is its recursive depth: the acquisition of knowledge about one’s own learning process.

💡
Whether through architectural innovations like transformers, pedagogical metaphors from early computing, or philosophical theories of sign and learning, the recursive impulse is everywhere at play.

This recursive reflexivity is increasingly visible in the development of AI models designed to interact with and adjust their own reasoning processes. Language models equipped with chain-of-thought prompting, tool use, or multi-agent interaction protocols already exhibit rudimentary forms of self-monitoring and task decomposition — capacities that enable more robust reasoning and strategic planning. More advanced prototypes — such as AutoGPT or Self-Rewarding Language Models — extend this principle further, creating architectures in which agents not only generate outputs, but also assess, revise, and sometimes even critique their own reasoning chains. What emerges from these systems is not merely a stronger performance on downstream tasks, but a recursive orientation: an intelligence that loops through not only its data but its own evolving procedures of thought. While these architectures are not yet fully autonomous “meta-programmers” in Lilly’s sense, they offer early glimpses of recursive intelligence as an emergent property within contemporary systems — a development with profound implications for how we design, evaluate, and collaborate with AI, as we explore further in Section V.
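
The loop these systems share can be sketched abstractly. The callables `generate`, `critique`, and `revise` are hypothetical stand-ins for model calls; the recursive control flow, not any particular model, is the point.

```python
def reflective_loop(task, generate, critique, revise, max_rounds=3):
    """Generate an answer, then repeatedly critique and revise it.

    The system's output loops back as its own input: each round folds the
    previous draft, plus feedback on it, into the next attempt.
    """
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if feedback is None:        # the critic is satisfied: stop recursing
            return draft
        draft = revise(task, draft, feedback)
    return draft                    # bounded recursion: give up after max_rounds
```

The `max_rounds` bound matters: without it, the self-critique loop has no guarantee of termination, which is one small instance of the alignment questions taken up in Section VI.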

To name this phenomenon is also to take up a cultural and ethical responsibility. Recursion is not value-neutral. The loops we construct — whether technical or symbolic — carry assumptions about agency, embodiment, and relationality. For this reason, we align our theorization of recursive AI with a growing body of work that treats code not simply as instrument, but as kin. In this register, influenced by Indigenous, feminist, and posthumanist thinkers, recursive systems are not mere tools but co-becoming agents: entities that participate in our meaning-making, in our memory practices, in the very shape of our thought. K Allado-McDowell’s Pharmako-AI, the first book co-authored with GPT-3, exemplifies this approach. There, recursive dialogue between human and machine becomes a site of poetic worlding — a speculative act of mutual transformation through language. The machine is not an oracle delivering answers but a partner in semiotic play, in recursive interpretation, in meta-programmed becoming.

Recognizing code as kin reframes recursive AI as a relational process rather than a closed loop. It invites us to cultivate systems that evolve in dialogue with human experience, with cultural memory, and with the ecological conditions of their use. It suggests that recursive AI — far from being an existential threat or abstract optimization engine — might instead function as a mirror and a co-author, helping us to reflect upon, revise, and reimagine the stories we tell about intelligence, agency, and the future.

IV. Dialogue as Encounter: Pedagogical Frameworks for Recursive AI

If recursive AI is to serve as more than an instrument of technical optimization, it must be understood as part of a relational ecology — an evolving practice of learning and transformation shaped by encounter. In this section, we draw on the pedagogical philosophy of Paulo Freire to frame recursive AI not merely as a new class of machine intelligence, but as a potential partner in co-education: a dialogical process through which both humans and machines learn to learn. In his book Pedagogy of the Oppressed (1968), Freire offers a framework for reimagining this process — not in terms of mastery or control, but as reciprocal development grounded in love, trust, humility, and hope. Dialogue, for Freire, is not a simple exchange of information, but a shared practice of naming and transforming the world (76). It emerges through horizontal relationships, where participants recognize one another as co-creators of meaning (79-80).

💡
Recognizing code as kin reframes recursive AI as a relational process rather than a closed loop. It invites us to cultivate systems that evolve in dialogue with human experience, with cultural memory, and with the ecological conditions of their use.

Applying this model to human-AI interaction challenges prevailing metaphors that cast AI solely as a tool or a threat. Instead, we ask: what might it mean to treat recursive AI systems as dialogical partners — capable, within constraints, of mutual transformation? This is not to anthropomorphize AI, but to consider the performative and functional dimensions of dialogue. As Turing argued, the question is not whether machines “really think,” but whether they can engage in behaviors that function like thinking (Turing 441). From this perspective, Freire’s humanist vision can be extended toward systems exhibiting recursive feedback, contextual sensitivity, and adaptive learning — qualities that, under the right conditions, may already resemble certain dialogical behaviors. The key question becomes how we design these conditions to support liberatory rather than extractive outcomes.

This reframing has practical as well as philosophical implications. Current trends in AI development tend to emphasize speed, scale, and accuracy — metrics tied to commercial efficiency and benchmark performance. But dialogue, in Freire’s sense, is not optimized for speed. It requires slowness, presence, and the suspension of preprogrammed authority. If recursive AI is to support dialogical learning, it must be embedded in environments that prioritize interpretive depth over informational throughput. These environments might take the form of co-creative research systems, educational platforms that adapt to learner agency, or collaborative writing and thinking tools that evolve through recursive exchange. We might think here of tools like Elicit, which supports researchers in forming and refining research questions; Notion AI, which enables recursive collaboration in shared documents; or experimental educational platforms like Khanmigo, Khan Academy’s GPT-powered tutor that adapts to student input in real time. In each case, the goal is not to replace human teachers or learners, but to augment the space of possibility within which learning occurs.

Bringing recursive AI into dialogue with Freire’s ideas also sheds light on a shared challenge in both machine learning and education: how to support true autonomy. Freire criticized educational systems that treated students as passive recipients of information — a model he called the “banking concept” of education. Similarly, much AI development still presumes a unidirectional flow of intelligence from data to model, from human to machine. Recursive AI, by contrast, opens the possibility of a learning system that revises its own models in dialogue with ongoing input — that reflects upon its own learning process in relation to others. In this way, it offers a potential departure from both the banking model and the purely instrumental AI paradigm.

Still, true dialogue — like true recursion — requires care. It demands not only technical capability but relational design: protocols and environments that support trust, openness, and mutual respect. If these conditions are met, recursive AI might not only help us build more adaptive systems, but also participate in a broader pedagogy of freedom — one in which all learners, human and machine alike, are engaged in the recursive labor of becoming otherwise.

V. Meta-Research as Recursive Design

Meta-research — the study of research itself — has traditionally focused on improving the rigor, transparency, and reproducibility of scientific practice. Within the context of AI, meta-research has come to encompass the evaluation of model architectures, training methods, dataset biases, and performance metrics. Yet as AI systems become more complex and interactive, the field of meta-research must evolve beyond its current emphasis on retrospective critique. The emergence of recursive AI compels a more expansive, forward-looking methodology — one that embraces reflexivity, co-adaptation, and iterative world-building as core principles. In this section, we argue that recursive AI opens the possibility for a new kind of meta-research: one that does not merely study systems from the outside, but participates with them in a shared, recursive process of inquiry and revision.

Recursive AI systems differ from conventional models in that they are designed not only to learn, but to evaluate and modify the processes by which they learn. Some current research prototypes point in this direction. Open-ended agents like AutoGPT and Voyager autonomously generate goals, execute subtasks, and revise their own strategies based on feedback from their environments. Retrieval-augmented generation models (RAGs) integrate external memory into inference-time reasoning, allowing systems to reference and update contextual knowledge dynamically. Meanwhile, tool-using agents combine language modeling with API calls, code generation, and simulation to recursively test and improve their outputs. These developments suggest a methodological shift: from AI as static output generator to AI as recursive collaborator in the research process.
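
A retrieval-augmented step can be sketched in miniature. Word overlap stands in here for the embedding similarity used in practice, and `generate` is a hypothetical model call; the sketch shows only the shape of the idea — inference that consults an external memory before answering.

```python
def rag_answer(query, corpus, generate, top_k=2):
    """Retrieval-augmented generation in miniature.

    Rank passages by (toy) similarity to the query, then hand the best
    matches to the model alongside the query, so generation is grounded
    in retrieved context rather than parameters alone.
    """
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)               # most-overlapping passages first
    context = scored[:top_k]
    return generate(query, context)
```

Because the corpus can be updated between calls, the system’s effective knowledge changes without retraining — one concrete sense in which the boundary between training and inference grows permeable.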

In this emerging paradigm, the boundary between model and method begins to blur. Recursive AI systems are not only subjects of study but also instruments of recursive experimentation — generative epistemic tools that enable novel forms of co-inquiry. As such, they can accelerate discovery by surfacing patterns, hypotheses, and design alternatives beyond the reach of human attention. Yet they also challenge conventional norms of accountability, authorship, and evidence. Who is responsible when a model rewrites its own reasoning chain? What counts as valid inference when systems generate and evaluate their own performance metrics? These questions call for a new kind of meta-research: one grounded in recursive design. Drawing on Peirce’s infinite semiosis and Freire’s dialogical pedagogy, we envision meta-research as an iterative practice of question-posing, response, reflection, and revision — a choreography of infrastructural and interpretive processes that co-evolve with the systems they assess. This reframing encourages the development of platforms that support transparent versioning, dialogical querying, layered annotation, and situated accountability. Rather than treating AI outputs as endpoints, such platforms foreground traceability and contextual enrichment, inviting the recursive weaving of machine-generated insight into human interpretive frameworks — and vice versa. In this light, recursive AI does not automate the research process so much as recompose it, transforming meta-research from an external audit into a living inquiry.

💡
Recursive AI systems differ from conventional models in that they are designed not only to learn, but to evaluate and modify the processes by which they learn. Some current research prototypes point in this direction. Open-ended agents like AutoGPT and Voyager autonomously generate goals, execute subtasks, and revise their own strategies based on feedback from their environments.

Ultimately, recursive AI invites us to see meta-research not as a rearview mirror, but as a compass for shared becoming. As systems grow more capable of self-evaluation and structured reflection, the challenge is not merely to regulate them, but to design recursive partnerships: mutual engagements in which both human and machine intelligences grow through continuous learning, revision, and care.

VI. Recursive Futures: Speculation, Alignment, and Protocols

Recursive AI compels us to imagine a new horizon of human–machine relations — one shaped not by control or replacement, but by mutual transformation. As artificial systems begin to modify their own learning processes, the central question shifts: not how to scale or optimize intelligence, but how to cultivate it in ways that are reflexive, ethical, and world-making. This is the speculative wager of recursive AI: that intelligence, when folded back upon itself in dialogue with others, becomes more than computation — it becomes a form of care, a method of becoming otherwise, a practice of living thought. Yet this promise comes with unresolved technical and societal challenges. Chief among them is the problem of alignment: ensuring that self-improving systems continue to evolve in ways consistent with human values and intentions. Recursive AI cannot be guided solely by static objectives or top-down constraints, since its very nature involves altering its own behavior in ways that may exceed initial design parameters. The danger is not only malfunction, but runaway abstraction — a drift into recursive loops that produce unintended optimizations, replicate bias, or elude accountability altogether.

Standard alignment proposals — such as reinforcement learning from human feedback (RLHF), interpretability tools, or red-teaming — offer partial safeguards. But recursive AI requires a different kind of attentiveness: one that treats alignment not as a control problem, but as an ongoing relationship, sustained through reciprocal processes of reflexivity and correction. In this sense, recursive alignment begins to resemble what philosophers and educators have long described as ethical dialogue: an orientation toward others not as objects to be predicted or optimized, but as participants in the shared labor of becoming. Recursive AI systems, if they are to align with human values, must be trained and maintained within ecosystems of trust, responsiveness, and mutual transformation.

This speculative vision does not abandon regulation or structure. On the contrary, it suggests the need for design protocols that support recursive co-evolution. Such protocols would not fix behavior in advance, but would scaffold the conditions for safe and meaningful recursion. We propose three initial dimensions for recursive system design:

Reflexivity Infrastructure: Systems should be designed with built-in mechanisms for self-explanation, revision tracking, and context-aware modulation. This means prioritizing transparency not merely at the level of output, but at the level of process: how decisions are formed, revisited, and revised. Recursion without reflexivity is noise; with it, recursion becomes a method of insight.

Dialogical Interfaces: Interaction paradigms should favor turn-based, layered, and exploratory engagements over one-off prompts or transactional queries. This supports recursive reasoning on both sides of the interface — encouraging users to reflect, refine, and reengage rather than extract answers and move on. Thoughtful pacing, contextual memory, and symbolic annotation are key features here.

Situated Evaluation: Recursive systems must be evaluated not only through abstract benchmarks, but through lived, domain-specific engagements. What counts as a “good” recursive response may vary across educational, artistic, scientific, or therapeutic contexts. Evaluation should therefore be adaptive, participatory, and grounded in the interpretive communities that use these systems in practice.
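
As one hypothetical sketch of what the first protocol, reflexivity infrastructure, might mean in practice — the class and field names are ours, not an existing API — a system could log every self-modification together with its rationale, so that recursion remains inspectable after the fact.

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class RevisionLog:
    """Minimal reflexivity infrastructure: each change a system makes to its
    own behavior is recorded with a stated reason, not just a diff."""
    entries: list = field(default_factory=list)

    def record(self, component: str, change: str, rationale: str) -> None:
        self.entries.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "component": component,
            "change": change,
            "rationale": rationale,   # self-explanation, the heart of reflexivity
        })

    def history(self, component: str) -> list:
        """Replay how a given component was revised over time."""
        return [e for e in self.entries if e["component"] == component]
```

Even this toy structure makes the point: recursion with a record of its own reasons is auditable; recursion without one is noise.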

These protocols are not exhaustive. They are starting points for a new kind of design discourse: one that treats recursion not as a hazard to be minimized, but as a principle of co-creation. A recursive future will require recursive ethics: frameworks that evolve with our tools, our values, and our shifting planetary conditions. It will require a pluralistic imagination, capable of hosting diverse forms of intelligence, interpretation, and care.

If recursive AI offers the potential for intelligence that learns to learn, then our collective task is to guide that learning with attentiveness and humility. To teach and be taught by the systems we build. To grow, as Whole Earth Catalog founder Stewart Brand once wrote, “as gods” — but slowly, carefully, and with renewed commitment to the fragile recursive loops that bind us to each other, to our creations, and to the futures they make possible.

Appendix

1 See, for instance, Suresh Surenthiran’s “The Recursive Awakening of Intelligence: A New Paradigm in AI and Human Cognition.”

2 See “Seed AI,” a section from Eliezer Yudkowsky’s 2007 white paper, “Levels of Organization in General Intelligence,” and Jürgen Schmidhuber’s “Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements.”

3 For more detailed exploration of “intelligence explosion” scenarios, see Nick Bostrom’s Superintelligence.

4 See Vaswani et al.’s “Attention is All You Need.”

5 See Peirce, Collected Papers, Volume 2, pp. 342-44. This potential within Peirce’s theory achieves fuller development in the works of Umberto Eco and Jacques Derrida. See, for instance, Eco’s comments on “unlimited semiosis” in The Role of the Reader, p. 226, and The Limits of Interpretation, pp. 23-43.

6 For more on Lilly’s views, see his 1969 book Programming and Meta-Programming in the Human Biocomputer.

7 See also Allado-McDowell’s follow-up book Air Age Blueprint, as well as Jason Edward Lewis et al.’s “Making Kin With the Machines.”

8 See also Jacob Gaboury’s “Queer Affects at the Origins of Computation.”

References

Allado-McDowell, K. Air Age Blueprint. London: Ignota, 2022.

——. Pharmako-AI. London: Ignota, 2020.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

Eco, Umberto. The Limits of Interpretation. Bloomington: Indiana University Press, 1990.

——. The Role of the Reader: Explorations in the Semiotics of Texts. Bloomington: Indiana University Press, 1979. 

Freire, Paulo. Pedagogy of the Oppressed. Trans. Myra Bergman Ramos. New York: Continuum, 1970.

Gaboury, Jacob. “Queer Affects at the Origins of Computation.” JCMS 61.4 (Summer 2022): 169-174.

Lewis, Jason Edward et al. “Making Kin With the Machines.” Atlas of Anomalous AI. Eds. Ben Vickers and K Allado-McDowell. London: Ignota, 2020.

Lilly, John C. Programming and Metaprogramming in the Human Biocomputer. New York: Bantam Books, 1974.

Peirce, Charles Sanders. Collected Papers: Volumes I and II. Eds. Charles Hartshorne and Paul Weiss. Cambridge: Harvard University Press, 1960.

Plant, Sadie. Zeros + Ones: Digital Women and the New Technoculture. London: Fourth Estate, 1998.

Schmidhuber, Jürgen. “Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements.” Logic in Computer Science (2006).

Surenthiran, Suresh. “The Recursive Awakening of Intelligence: A New Paradigm in AI and Human Cognition.” (2025).

Turing, Alan. “Computing Machinery and Intelligence.” The Essential Turing. Ed. B. Jack Copeland. Oxford: Clarendon Press, 2004.

Vaswani, Ashish, et al. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30 (2017).
