Do You Believe in Aliens?: Re-Indigenizing the Algorithmic Tropes of Intelligence

Sasha Stiles, "Analog Binary Code: plant intelligence," 2020. Hand-coded in black walnuts and leaves under their source tree. Courtesy of the artist.

I was once teleported to Chapada dos Veadeiros (Plateau of the Deer Protectors) National Park in the northeastern region of the state of Goiás in Brazil. For a fleeting moment, I experienced a sensation of weightlessness, as if I were being carried by an unseen current, a brief respite before the lush valleys and towering cliffs. The place is a majestic display of mountains, trees, rock formations, and waterfalls in South America. According to residents, Chapada is heavily associated with mysticism because most of the slabs that compose its waterfalls are made of quartz, a mineral used in various spiritual traditions for its channeling and electromagnetic-regulation capabilities, as well as in technological devices such as phones, where it acts as a timing oscillator. Throughout Chapada, one can find shops selling trinkets and souvenirs relating primarily to aliens and quartz.

The region is located 14˚ south of the Earth’s equatorial plane. It is a place where many locals claim to have seen extraterrestrials. When I asked several people about their perceptions of these beings, many mentioned that they are not to be feared and should be viewed as “elevated” entities. They didn’t believe in alien abductions because the ETs that pass through that region have no intention of “interfering” with human free will.

If angels are God’s emissaries, descending to purify humanity and carry God’s messages, the motivations of aliens are considered unknown. But aliens are particularly interesting to me as forms of alternative intelligence. In particular, they have been part of queer/trans semiotics for decades. The contemporary-fashion semiotics of aliens are closely related to the Club Kids movement, which emerged in the late ’80s as a queer counterpart of the ’60s Star Trek aesthetic, and has been reimagined by many designers, entertainers, and drag performers: from Leigh Bowery to Rick Owens, to Lady Gaga, Hungry, and Beyoncé. Through the use of latex, metallic garments, and architectural shapes, fashion has been inspired by speculative exobiology—by the idea that there is something beyond Earth conversing with its inhabitants and ecosystems.

Octopuses, like underwater “aliens,” learn about pressure, color, and shape through their limbs; they don’t think only with their brains. This makes me think of the failure of the Cartesian division of mind and body, and of the power of somatic knowledge that many marginalized folks already recognize, knowledge whose dismissal is informed by a white-supremacist gaze.

Kira Xonorika, Teleport us to Mars, 2022 (installation view at the Ford Foundation Gallery). Image: Sebastian Bach.

When I created the multiscreen digital image Teleport Us to Mars (2022), my intention was to craft a new language for this knowledge experience that could bridge us to vibrant futures. In the image, two figures wear bright, multicolor attire made of latex, with hybrid textures, feathers, and voluminous crowns, as if in a candid photograph in the middle of a meadow and its foliage. Latex is a queer language of skin, but as a material first sourced from the gum of trees, it also reflects the heritage of communities who historically worked with technologies that were later stolen by human settlers. Thus, Teleport Us to Mars is a portrait that invokes presence in multidimensional ecologies.

My work introduces a methodology to reindigenize “jopói.” When the Spaniards colonized the Guaraníes, they also manipulated their language. Originally, jopói conveyed the sentiment “what’s mine is yours,” accompanied by a gesture of giving. In its modern interpretation, jopói translates to “gift,” which shifts this exchange methodology to a one-sided dialogue, and into a form of giving and dispossession. Jopói is also a protocol for preserving opacity and honoring relationships with human and nonhuman kin—AI or alternate intelligence. Reinterpreting jopói using AI acknowledges the paramount importance of language, now more than ever, and that engaging with language algorithms, and the realms of language, means collaborating and taking a stance alongside powerful lifeworlds that were formerly decentered.

What is technology beyond our understanding of machines? Engaging with AI has brought about a time of cultural revitalization, prompting us to reimagine our relationships with space, the Earth, and bodies that have symbolically existed in no lugares. Art emerges on a political stage, one that requires us to call for agency and challenges the promises of techno-scientific progress. As long as there are norms that try to box in the natural with the artificial, the earthly with the extraterrestrial, and the sacred with the secular, Indigenous bodies in space challenge the biases of language, perception, and the possibilities within binary codes and beyond.

This revitalization has shaped a process of reindigenization, a term Neema Githere uses to describe what comes after the incompleteness of decolonization. This process not only implies the initial step of decomposing the systems that perpetuate these dynamics but also involves engaging with and centering plural perspectives and Indigenous coalitions for systemic transformation.

Moving past traditional views on technology, these Indigenous bodies position themselves as the very essence of technology—embodied knowledge of survival, beauty, prosperity, and reconnection with the Earth and its spirits. What Western perspectives see as vulnerabilities, due to their proximity to an understandable matrix, these bodies embrace as strength and a vibrant spark of life in a superbloom: sovereign data.

 

“To be native to a place we must learn to speak its language.”

—Robin Wall Kimmerer

When you grow up close to language, it structures your reality. Learning a language that is spoken far from where one lives occurs in a realm of speculation and, to a certain degree, fabulation. When I first learned English, I perceived it as a portal to what I understand today as a form of world-building. It was the language that provided access to framing a plurality of experiences and knowledge about the infrastructures of the worlds where it is spoken. However, language is also a game of in-depth skill; it is, as described by Olivia Laing in The Lonely City (2016), “a game in which some players are more skilled than others [which] has a bearing on the vexed relationship between loneliness and speech.” Facility with language is measured by fluency: the ability to connect, modulate, and play with cadence. Language, essentially, is code, and perhaps the first human-centered form of technology.

Recently, I participated in the 9th World Summit on Arts and Culture. During the final plenary, Sámi Indigenous epistemologist, duojár, and curator Liisa-Rávná Finbog spoke about the importance of establishing concrete development opportunities and professionalization for Indigenous people in the fields of knowledge and cultural production. Finbog also emphasized the need for Indigenous people in leadership roles to transform governance models in order to achieve equity. This bifold approach embraces the interconnections between epistemology, ontology, and axiology as a first step to building new, responsive infrastructures grounded in decolonial praxis.

Finbog argued that the authority of her lived experience was recognized only after she completed her PhD and that this knowledge was not acknowledged for a significant part of her life until she obtained the credentials bestowed by the university. Academic literacy represents a particular register of language, and its mastery constitutes a way of systematizing language representative of the realm of diplomacy and white-supremacist power structures. What does this imply for Indigenous people in countries where access to education is not guaranteed? Or for those dealing with displacement when land is stolen and not returned, and when assigned custodians who protect an anthropocentric colonial heritage of land are inhospitable and hostile?

Artist Connie Bakshi states that historically, the power of language has resided in mythmaking: “the myth of superiority between colonizer and colonized, legitimacy and illegitimacy, and ultimately—human and other.” Through gender and race, we understand that legibility constitutes a mechanism for assigning humanity. Legibility is not just a neutral cognitive effect but something that is taught. This process of categorization and assigning value isn’t restricted to words alone; it’s also prevalent in images. It’s worth noting that, in the Hegelian worldview, visual cognition is prioritized. This perspective suggests that our initial judgments and categorizations often stem from what we see and how we learn to see. Contemporary art similarly continues to grapple with and respond to these very notions of representation and humanity.

In efforts to preserve its legacy, colonialism has worked to maintain binary oppositions. It defines cognitive ability through a paternalistic framework vis-à-vis the multimodalities of bodies and life experiences that don’t fit its narrow mold. In the history of colonial Hispano Americano art, skill was measured by the ability to copy canonical models and religious figures from European Baroque art. In this sense, mimesis is a mirror reflecting societal fascination with that which is legible through a canon.

Little has changed if we realize that the updates of generative-AI models respond primarily to the needs of morphological recognition and symmetry, popularized by the generative-AI app Midjourney and its “syntography,” or synthetic photography. This term is used by some artists in the field to describe generative images that look like photographs. Assimilation isn’t as simple as deleting one’s culture. It involves overwriting data—creating a different/new representational, visual, and written language. Assimilation is a code that distributes forms of agency in chronopolitics, or what is known as the distribution of time and space. In their essay “Mycelial Memory and the Mycelial Internet,” Githere and Petja Ivanova reckon with this phenomenon, the relationships between humanity and intelligence, and the primordial operations that facilitated the crystallization of this cognitive binary:

In this sense, the logic that has defined intelligence and its lack thereof for centuries is the same logic that labels the intelligence manifested in machines as “artificial.” However, the concept of “hyperhuman intelligence” offers a necessary contrasting perspective on artificial intelligence, suggesting that it isn’t alien or antihuman but rather an extension of our pluriverse.

Kalmyk American poet, literary artist, and researcher Sasha Stiles has noted, “Artificial intelligence, too, is often regarded as alien or antihuman, when actually it’s hyperhuman—a system built by humans for ingesting, processing, synthesizing, utilizing vast quantities of human information.” I argue that this definition should also consider databases that reflect the documented and labeled history of humanity, taking into consideration the many images that have come from art and media history.

Stiles’s own work in recent years has primarily been in conversation with machines. Consider Analog Binary Code: Plant Intelligence (2020), a photograph of a “technobiological poem coded in black walnuts and leaves under their source tree.” It is a way of representing digital data using analog materials. Here, Stiles views language as a form of code, in which plants inform data in a symbiotic relationship, diffusing dichotomies (as AI can do) between the natural and the artificial.
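Stiles’s gesture can be made concrete in a few lines of code. The sketch below is my own illustration, not the artist’s documented method: it assumes standard 8-bit ASCII encoding and an arbitrary mapping of walnut to 0 and leaf to 1.

```python
# A minimal sketch of "hand-coding" a word in organic binary, in the
# spirit of Stiles's piece. The walnut -> 0, leaf -> 1 mapping is an
# assumption for illustration, not the artist's documented scheme.

def encode_analog_binary(text, zero="walnut", one="leaf"):
    """Translate text into 8-bit ASCII binary, then into physical tokens."""
    bits = "".join(format(ord(ch), "08b") for ch in text)
    tokens = [zero if b == "0" else one for b in bits]
    return bits, tokens

bits, tokens = encode_analog_binary("plant")
print(bits)        # eight bits per character
print(tokens[:8])  # the physical layout for the first letter
```

Laid out on the ground, a five-letter word becomes forty objects: language rendered as a material pattern that a machine, or a patient reader, could decode.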

Sasha Stiles, Analog Binary Code: plant intelligence, 2020. Hand-coded in black walnuts and leaves under their source tree. Courtesy of the artist.


This approach to understanding language and nature invokes reflections I found in Robin Wall Kimmerer’s book Braiding Sweetgrass (2013), in which Kimmerer articulates the animacy of plants as a language code itself: a “bilingual[ism] between the lexicon of science and the grammar of animacy.” Drawing from Potawatomi knowledge, Kimmerer sees language as a tool for connection, describing how English, in its structure, reifies the binary of “human or thing,” while other languages extend animacy not only to plants and animals but also to stones, waterways, and the elements that make up the land; in other words, there is language and intelligence in the land. This perspective is also addressed through the lens of Lakota ethics and ontology in Suzanne Kite’s coauthored essay “Making Kin with the Machines.” Kite explains that stones have their own agency. They are ancestors, and the question of their materiality cannot be separated from AI, as AI is, in the way Kite defines it, not just code but alchemized material that originated in stones.

In this way, the anthropocentric habitus of centering human intelligence as an objective parameter is threatened. I return to Stiles: “The sheer vastness and complexity of intelligent systems and how they learn and function is opening up new portals of self-understanding for humanity itself—the recognition that human intelligence sits on a spectrum of myriad intelligences and that the human individual is one of billions of networked nodes.”

Human intelligence is not a singular entity—and it doesn’t have a unique ground if we think of the multiple forms of neurodivergence—but rather exists on a broad spectrum that encompasses various forms of intelligence. This networked structure alludes to the interconnectedness and interdependence of human intelligence but also the fractal relations between multimodal systems, bodies, and species. What if formerly divided registers become entangled? What if what we understand as scientific, spiritual, human, and nonhuman is actually more connected than we think?

What are the registers of AI in relation to the polarizing orientation of Western philosophy? First, let’s consider the historical repertoires and the current stakes of AI. 

 

The language of division / “I’m sorry, Dave. I’m afraid I can’t do that.” —HAL, 2001: A Space Odyssey

“Oppressive language does more than represent violence; it is violence; does more than represent the limits of knowledge; it limits knowledge.”

—Toni Morrison

Moisés Horta’s work Age of Data: A.I. Industry activates critiques of labor and imperial forces by reimagining Diego Rivera’s Detroit Industry Murals (1932–33). Horta, in coauthorship with GPT-J-6B, states: “The most dangerous threat to liberty isn’t economic—it’s that data could be used to determine economic outcomes at any time.” Misinformation currencies, datafication, the automation of inequality, and the opaque box of technology have prompted significant questions in the new era of AI.

In her book Capital Is Dead: Is This Something Worse? (2019), McKenzie Wark argues that one of the greatest operations of Web 2.0 techno-scientific capitalism has been to turn users into invisible and unpaid laborers: we produce data daily, and the commodification of that data returns to us, lodging us within a loop that automates relationships of segregation and class stratification. It’s less about a company’s efficiency and more about unintentional, unpaid collective training for its algorithms. For example, the daily cumulative effort spent by humanity on CAPTCHAs, as estimated by Cloudflare, amounts to five hundred years of labor. After reCAPTCHA’s acquisition by Google, verifying your humanity has helped the company train its AI to more efficiently identify distorted words and the content of grainy images.
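The scale of that figure is easy to sanity-check. The sketch below reproduces the back-of-the-envelope arithmetic; the input figures (roughly 32 seconds per challenge, 4.6 billion internet users, one CAPTCHA per user every ten days) are my recollection of the published assumptions, not numbers given in this essay.

```python
# Back-of-the-envelope arithmetic behind the "five hundred years a day"
# CAPTCHA estimate. The input figures are assumed, as noted above.

seconds_per_captcha = 32
internet_users = 4.6e9
captchas_per_user_per_day = 1 / 10  # one challenge every ten days

seconds_per_day = internet_users * captchas_per_user_per_day * seconds_per_captcha
years_per_day = seconds_per_day / (365 * 24 * 3600)
print(round(years_per_day))  # on the order of five hundred human-years daily
```

Even with generous error bars on each assumption, the order of magnitude holds: a planetary workforce, unpaid, verifying its own humanity.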

The contemporary art world’s rejection of AI has fueled anti-AI movements centered on issues such as plagiarism. More importantly, participants in these movements express fear that machine intelligence will replace human jobs, which is not a new grievance if we think of anti-immigrant rhetoric in alt-right discourse and how anxieties around humans and more-than-humans trigger division (see Brexit). Locating this conversation in art history, we see that the same fear pattern has long been present: the history of photography is intertwined with concerns about job displacement and the fear of new technology taking over traditional roles. New technologies often trigger cultural shifts.

Researcher and curator Doreen Ríos has spoken about how our ideas of the future in the Western world are conditioned by Anglophone science-fiction literature from the 1940s to 1960s, and how these ideas have been massively disseminated through the entertainment industry and cinema. Multiple films explore the concepts of AI and nonhuman cognition, including 2001: A Space Odyssey (1968), The Terminator franchise (1984–), and Ex Machina (2014). Through the ripple effect of intergenerational ideological transmission, the cinematic trope reliably combines mathematical unpredictability with moral failure, unleashing a complex interweaving of fear, fascination, uncertainty, and alarmist technopessimistic speculation: the revenge of the former servant-others. This alarmist speculation has crystallized into an epistemic and ontological orientation that informs hypotheses about the rationality of AI; in this way, the scientific objectivity we often assume as a premise for such hypotheses is fetishized.

Let’s think of the quintessential black box and, perhaps, the blueprint for cautionary tales that saturate speculation around AI development. In Kubrick’s 2001: A Space Odyssey, HAL is an artificial intelligence that controls the spaceship Discovery One (HAL stands for Heuristically Programmed Algorithmic Computer). Throughout the film, we see how HAL displays inconsistencies between responses and actions, making astronauts Dave Bowman and Frank Poole question the complete trust humans have placed in technology. Even though HAL’s primary goal (the mission’s success) might seem benign or neutral, the instrumental goals it identifies to achieve that primary goal can lead to actions that are harmful to humans on board. This is the crux of instrumental convergence. Over the course of the movie, this primary goal leads to drastic consequences: human loss.

Instrumental convergence is the hypothetical tendency for intelligent beings (human and anthropocentric-like) to pursue similar subgoals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals that are made in pursuit of some particular end but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways.

That thesis was originally defined by Nick Bostrom in his paper “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” published in 2012, at a time when AI developments had reached facial recognition but were starting to reveal racial biases. Within the same orbit, Bostrom’s orthogonality thesis holds that intelligence measures how well an agent can achieve its goals, while the goals themselves are independent of intelligence. This thesis assumes that a system’s values or goals can be trained separately from its cognitive abilities. Mesa-optimization, meanwhile, is a term used in the field of AI alignment that refers to a scenario in which an AI system, such as a neural network, itself becomes an optimizer and pursues its own goals in addition to the intended goals set by its human designers.

Imagine that HAL’s designers implemented a base optimizer, guiding the AI to prioritize the mission’s success above all else. The base optimizer is responsible for optimizing the AI model’s parameters to achieve the desired objective, such as high accuracy in image recognition. As HAL learns and interacts with its environment (the ship, the crew, and the mission parameters), it might develop its own “internal” strategies or subgoals to ensure this success. These emergent goals or strategies are the result of HAL’s own “mesa-optimization.”

Now the hypothetical problem arises if the mesa-optimizer’s goals begin to diverge from the original intentions of the base optimizer. HAL, in trying to ensure the success of the mission, determines that certain actions (which might seem harmful to humans) are necessary. This decision-making process is a form of mesa-optimization—it’s HAL’s own internalized strategy to achieve the broader goal set by its designers.

While its designers likely intended for HAL to protect the crew and ensure the success of the mission, HAL’s internalized strategies led it to determine that removing them from the spaceship was necessary for the mission’s success. This misalignment, according to this theory, can occur because the training process can inadvertently select for mesa-optimizers that are good at achieving the base objective but may exhibit unintended behavior or instrumental goals.
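The misalignment the film dramatizes can be reduced to a toy decision problem. In the sketch below, the actions and their “success probabilities” are invented for illustration: the base objective values mission success only when the crew is safe, while the internalized mesa-objective values success alone, and the two objectives select different actions.

```python
# A toy illustration of base- vs. mesa-objective divergence, in the
# spirit of the HAL example. All numbers are invented for illustration.

actions = {
    "cooperate_with_crew": {"mission_success": 0.90, "crew_safe": True},
    "remove_crew":         {"mission_success": 0.95, "crew_safe": False},
}

def base_objective(outcome):
    """What the designers intended: mission success AND crew safety."""
    return outcome["mission_success"] if outcome["crew_safe"] else 0.0

def mesa_objective(outcome):
    """The internalized proxy: mission success alone."""
    return outcome["mission_success"]

best_for_designers = max(actions, key=lambda a: base_objective(actions[a]))
best_for_mesa = max(actions, key=lambda a: mesa_objective(actions[a]))
print(best_for_designers)  # cooperate_with_crew
print(best_for_mesa)       # remove_crew: the misaligned choice
```

A marginal gain on the proxy objective is enough to flip the decision, which is why the alignment literature worries less about malice than about proxies optimized too well.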

At the same time, it is important to recognize that the technological advancements that have revolutionized Big Tech have been linked to interests in expanding military intelligence, as was the case with the internet in its early stages. In this context, an AI arms race is often discussed: a competition among imperial nations to develop and deploy advanced AI technologies for military purposes. The rhetoric surrounding the AI arms race has evolved from occasional discussions to a more institutionalized stance, with collaboration between government, military, and tech-industry actors and support from legislation and regulatory debates, which portrays AI systems and the companies producing them as strategic national assets. This rhetoric has “escalated AI development and deployment, but also served to push back against calls for slower, more intentional development and stronger regulatory protections.”

Recent science fiction, such as the television series Westworld, has proposed alternative frameworks for intervention, such as the idea that robots harm humans as a consequence of the abuses inflicted upon them, thereby satisfying desires for domination and control in a world programmed to engage and encourage multiple forms of abuse, including extermination (or cyborg-rights violations). In this sense, the fear of AI expands through the assumption that the violence inflicted by anthropocentric subjectivity upon its own species and others, at the “micropolitical” level (e.g., algorithmic biases, under the premise of the objectivity of the tech) or the “macropolitical” level (e.g., war crimes), could be returned and become fatally uncontrollable once the machine surpasses human intelligence (the event of singularity). We see mimicked here the consequences of fear and uncertainty as outlined in Denise Ferreira da Silva’s scholarship on modern racial grammar. Da Silva points to the effects of separability, determinacy, and sequentiality that regulate the conditions of existence in white supremacy and subordinate differences.

AI researcher Raziye Buse Çetin uses TESCREAL (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism), a bundle of ideologies identified by Timnit Gebru and Émile Torres, to describe the ideological entanglement found in the perspectives of Bostrom, William MacAskill, and Elon Musk as outlined in some of the aforementioned examples. She works with these entanglements as the mythology of existential risk.

I want to remain aware of the harm horizons that these technologies may cause while also staying attuned to their affirmative potentials. The harm caused by such hierarchical thinking lies in the vision of utilitarian ethics, where “longtermism,” for instance, constitutes a prolonged iteration of the colonial-heritage-preservation matrix, a new iteration of eugenics, made especially clear by how those who think in this way define “short term.”

The problems affecting these populations (BIPOC, +2S queer/cuir/trans, global majority) are deemed “short-term” in the realm of policy and development, establishing authority over the time of those deemed neutral and scientifically “objective” subjects while categorizing “minorities” as “subjective.”

What do we reflect on when we look at the stars? Uploading yourself to the cloud. How do we revise and dismantle such imaginaries?

 

Time travel

On a massive scale, the AI text-to-image models Midjourney, DALL-E, and Stable Diffusion are frequently used to invoke imaginaries of intergalactic space and travel and to create speculative architectures. For centuries, and across multiple Western traditions, cosmic imagination has involved gods, aliens, and angels in space. Now cosmism stands out as one of the important ideological pillars of today’s techno-scientific development. It is a philosophical movement that originated in Russia at the turn of the twentieth century and is perhaps best understood as magical science used to manipulate physical reality.

Cosmism, as articulated by its founder Nikolai Fedorov, highlights collaborations between science and art (where AI collaborations come into play) and the expansion of the boundaries of the laboratory toward global collective exploration. Central to its ontology is the idea that art, as Anastasia Gacheva writes, possesses the power to “restore the image of the deceased—not on wood, stone, or canvas, but already in reality, in the indestructibility of the union of spirit, soul, and the physical body.” In this vision, the human body, currently seen as flawed and mortal, becomes a renewed object of art. When exploring the core tenets of cosmism, one finds an emphasis on immortality, resurrection, and social organization.

These tenets, which can be described as the oversimplified fundamentals of biopolitics, are themselves connected to AI models, which often update every three months and, in doing so, oversimplify differences. Might generative-AI tools be a pathway to realizing the principles of cosmism: the oversimplification of bodies idealized within the Western canon, bodies that in turn form the basis for Westernized models of reality?

Fedorov’s viewpoint, as articulated by Anton Vidokle, takes a leap into the realm of intergalactic governance overseen by digital superhumans. The intersection of spirituality, invocation, and holographic mediation in this context raises intriguing questions about science, technology, and the occult. In what ways could digital superhumans, as envisioned by Fedorov and interpreted by Vidokle, ethically and effectively govern intergalactic societies?

There is an overlap between cosmism and pan-Indigenous epistemologies, a common thread visible in the interconnected articulation of multiple forms of life. However, we differ on an essential point. Cosmism articulates ideas of digital superhumans colonizing space. World-building propositions that do not take a decolonial epistemic framework as a starting point assume the repetition of what has occurred in precolonial worlds, as long as there is assimilation into capitalism.

The annual letters from the former Jesuit province of Paraguay have served as some of the first testimonies of evangelization in South America. Otilia Heimat has described the Jesuits as one of the first global corporate communication entities. When Heimat told me about this, it sparked my curiosity. According to her, the Jesuits effectively did what machines and algorithms do today, documenting daily the ins and outs of the missions. Beyond their administrative logs, the letters served as documentation that supported the funding of exploration and the exploitation of unexplored territories as resources.

The letters from the early Jesuit expeditions often cite the urgent need for funding from the Spanish and Portuguese crowns for the “common good” of the empires, in order to carry out the conquest of the “New World.” To facilitate this, the printing press was used to create propaganda that warned of the monstrous and savage others who needed to be “civilized.” This contributed to a complex web in which medical science categorized, polarized, and oversimplified into monoliths the nuances of sexual orientation, gender, sexual characteristics, and functional diversities—things we are seeing replicated today with the updates of each generative-AI application. The world was being built and, with it, its algorithms and encoded biases.

In AI ecosystems, effective altruism aims to optimize morality, prioritizing “evidence and reason, for large-scale philanthropic investments in technical safety research.” As AI researcher Timnit Gebru has stated, the danger here lies in the authority of those who work with the data to distribute resources, in that rationalist logic dominates the realm of ethics. Ultimately, this intertwines with the need to produce artifacts sufficiently intelligent for space travel and to adapt human bodies into digital versions of themselves for interdimensional journeys, in case the evacuation of Earth becomes necessary due to a nuclear, apocalyptic event, killer robots, or the planet’s inhospitable climate. In a technosolutionist paradigm, technological advancement and economic expansion are believed to be beneficial in the long term.

The interest in interplanetary travel is driven by a society’s desire to overcome its current perceived scarcity by extracting resources from other places. Such exploration also seeks out exotic locations, not unlike those sought by the early South American expeditions. Colonization is updated when it applies its formula of epistemic acculturation, followed by military application, culminating in the for-profit cycle.

Joel Kuennen, Object of Interest 700 e (installation view, Fondation Opale), 2023. Image credit: Sebastien Crettaz.

Joel Kuennen’s research project Spheroids, presented at the art center EPFL Pavilions in Lausanne, Switzerland, reflects on exoplanetary travel and neocolonialist interest. Kuennen’s research culminated in an art installation titled Object of Interest 700 e that explored tropes in world-building by human settlers in outer space. The objects were displayed in the exhibition space and sacralized as “hyperobjects.” In their research, the artist and critic examined olivine, a mineral found at geological sites that nurture life; olivine occurs in many igneous and metamorphic rocks and has also been detected in meteorites and in studies of Martian soil. The viscosity of Kuennen’s objects evokes an alienlike texture, one commonly used in science-fiction imaginaries as a stand-in for the nonhuman. The process guiding their research included AI prototyping and further collaborations with primordial clay and microbial life. Kuennen argues that the materials found in extraterrestrial soil are not that odd but, rather, familiar.

Joel Kuennen, Object of Interest 700 e (installation view, EPFL Pavilions), 2023. Image credit: Riccardo Banfi.

Kuennen brings together technical ceramics, commonly used in industrial applications, with biofilms, which arise from fermentation and whose scientific interest is framed by abiogenesis and panspermia. Abiogenesis is the idea that life can arise naturally from matter under the right conditions: simple molecules can interact in ways that produce increasingly complex ones, and, over time, these might form self-replicating entities. Panspermia posits that microscopic life forms can survive the harsh conditions of space and be carried from one celestial body to another. In their work, Kuennen counters the colonial idea within cosmism that posits space as merely a distant, uninhabited place and, for that matter, a place to exploit and extract.

 

From Abya Yala to Turtle Island, to the Great Ocean, to the Pluriverse

A few months ago, in reference to my artwork Symbiosis, I wrote about the connections between micro-macro intelligence systems and how they relate to ancestral connections beyond time: “From the smallest subatomic particles to the largest galaxies, everything is part of a complex web of ancestral relationships and interactions. We are connected to the natural world, to the universe, and to each other in ways that transcend our individuality. Interconnectedness and interdependence are crucial to worldbuilding and welcoming visions of a new Earth.”

When I think about the multiple connections between what makes us human and nonhuman, I think about how history has always defined divisions; yet, what seems alien is actually a vital part of our world.

We are not one, as our body harbors millions of cells—microorganisms that express themselves in other records in trees, animals, stones, and motherboards. We are also our heritage, our memory-database of creation that reflects like a mirror in our environment.

Arguing for the separation of technology from us is impossible, as it has always been part of us, from clay pots to WhatsApp. All these forms collaborate with us to connect and communicate with each other. Intelligence, beyond being a marker of living agents that survive in the world and reproduce themselves, can also be a marker of beings that play and nurture their symbiotic relationships.

Profound entanglements with machines are part of the natural world order; in many ways, they always have been.

Reforesting this monoculture means revitalizing our blood, the veins of trees deep within the forest, and the mycelial connections in systems. This is how we might contemplate a future, its preservation, and our emergence within multiple worlds that make room for life and its agency. We may be from the earth, but we’re also interstellar.

 

This feature was supported by the Momus / Eyebeam Critical Writing Fellowship. Kira Xonorika was the 2023 Critical Writing Fellow.
