The Role of AI in a Major Evolutionary Transition

Finding Direction in an Era of Uncertainty

In a world being transformed by artificial intelligence, the direction of humanity’s future is a topic of heated debate. The speed, scale, and sheer novelty of change are unprecedented. The discussion is shaped by a wide range of voices—from corporate labs and nonprofit watchdogs to government regulators, academic researchers, economic stakeholders, military planners, and geopolitical strategists.

The discussion revolves around the direction AI is taking human society. There are two extreme views. Pessimists see AI as an existential threat—fearing it will evolve into an uncontrollable superintelligence with its own agency and goals, leading to mass unemployment and societal collapse. Optimists believe AI will help solve humanity’s most intractable problems, generate new kinds of meaningful work, and usher in an age of global cooperation that leads to abundance, equity, democracy, and peace.

There is a yawning chasm between those extremes, and no lack of opinions to fill it. While it would obviously be desirable to steer AI’s development in the direction that optimists foresee, what’s the best way? Should we slow its development, or speed it up? Should we regulate AI, or let the invisible hand of the free market be our guide? AI is evolving so rapidly that the technology seems to be leading the way.

The possibility that AI could go in its own direction—and take human society along—has given rise to what’s known as the alignment problem.

Regarded as one of the most critical challenges in AI, it asks how we can ensure that these systems act in ways aligned with human values, intentions, and goals. But in a polarized world rife with uncertainty, deciding which values to align AI with is a challenge in itself.

What if rather than asking what direction AI is taking us, we instead ask what direction we, as a species, want to go? If human values, intentions, and goals were aligned in a cooperative direction that’s broadly beneficial for society and the earth, we could ask a different question: How can AI help us get there?
"What if instead of asking what direction AI is taking us, we ask what direction we—as a species—want to go?"
There’s a perspective that provides insight from an unconventional source. Though the changes being driven by AI seem unprecedented, they follow an underlying pattern of change—a process through which living systems have aligned their purposes and found new ways to cooperate at ever-increasing levels of complexity for the last four billion years.

How Cooperation and Complexity Evolve Through Major Evolutionary Transitions

Though life began long ago as simple cells such as bacteria, we’re dazzled by its remarkable diversity and complexity today. Yet the reasons for its increase in complexity were, until recently, poorly understood. After all, bacteria remain among the most abundant and ecologically successful life forms on Earth. Complex multicellular organisms such as ourselves are often their prey.

Nevertheless, life has clearly grown more complex over time—from simple bacteria, to complex cells, to multicellular organisms, and eusocial animal groups. Then there’s a major shift to human societies of ever-increasing scale, from hunter-gatherer bands to our global technological civilization today.

Evolution is often described as a struggle for survival—a competition among individual organisms to pass on their genes. Competition is indeed a central force in natural selection. But while competition shapes countless adaptations, it doesn’t transform the basic architecture of life. The great leaps in complexity—from single cells to multicellular bodies, from solitary individuals to interdependent societies—required more than rivalry. They arose when cooperation—rather than competition—gained the upper hand.

These leaps are known as major evolutionary transitions. They have two central dimensions:
1) New ways to cooperate at higher levels of organizational and social complexity
2) New ways to store, use, and communicate information that enable cooperation at higher levels of complexity

We’ll look at earlier transitions, why and how they occurred, and how they changed the organization and functioning of life on Earth. Then we’ll explore the possibility that we’re in the midst of a major transition today, and what that may mean for the future of humanity in the age of AI.

From the Origin of Life to Eusociality—The Biological Transitions

An important aspect of major evolutionary transitions is that they are emergent phenomena—higher levels of organization that arise when simpler units work together to create capabilities that don’t exist in the individual parts. Thus these new capabilities emerge as qualities of the higher-level system, through synergistic interactions among its parts.

The first major evolutionary transition occurred around 4 billion years ago, when life emerged from nonliving, self-organizing, self-replicating molecules that cooperated synergistically to form protocells.

The earliest fossil record of the first true cells, such as bacteria, dates back to 3.5 billion years ago. Known as prokaryotes, they evolved the genetic code and the process that translates the information encoded by DNA into proteins.

Around 2 billion years ago, prokaryotic cells found new ways to cooperate through a process known as symbiogenesis to form complex cells called eukaryotes.

Less than a billion years ago, some of these eukaryotic cells came together to form multicellular organisms, with cells specializing in different functions, cooperating in service to a unified whole. This highlights another key feature of major transitions: division of labor.

Over 100 million years ago, insects such as ants and bees evolved tightly integrated eusocial colonies. Individuals took on specialized roles, dividing labor to support the colony’s survival. Pheromone signaling and other group-level information systems emerged to coordinate their collective behavior. Because reproduction is handled by the queen and a caste of drones, these colonies are considered superorganisms—higher-level individuals in their own right.
"Each major evolutionary transition can be viewed as a successful solution to an alignment problem."

Purpose, Agency, and Alignment

Purpose and agency emerged in protocells with the origin of life. Even the simplest cells had a drive to survive—that is, purpose. To fulfill it, they took action to maintain boundaries, repair damage, seek food, and avoid harm. In other words, they displayed agency.

In each transition that followed, purpose and agency shifted up to the more complex, higher-level group. In the process of making the transition, the individual purposes of the lower-level entities became aligned with each other, and with the higher-level purpose of the group.

Thus each major transition can be viewed as a successful solution to an alignment problem. It’s a challenge humanity faces in the age of AI, in more than one way. We face the known challenges of moving beyond polarization and conflict to align with our fellow humans, along with the unknown and unknowable challenges of aligning the purposes and goals of increasingly powerful machines.

The First Human Transition, and the Emergence of Culture

Around 2 million years ago, early humans began forming highly cooperative hunter-gatherer groups. Their ability to collaborate effectively was made possible by a radically new way to store, use, and communicate information: symbolic language. Language marked a clear departure from all the information systems that arose earlier.

With language, humans were able to evolve through culture as well as genes. Unlike in the biological transitions—including eusociality—humans obviously remained individual reproductive organisms. They could nevertheless undergo a major transition, because symbolic language wasn’t simply a new information system: it enabled a whole new way to evolve.

Unlike genetic information transmitted through gradual biological inheritance, cultural information is transmitted through teaching, imitation, and shared symbols. This lets useful innovations spread much more rapidly than biological evolution allows, and it dramatically accelerated human development—much as AI is rapidly transforming our society today. Shared symbols allowed people to get inside each other’s minds, enabling alignment of purpose in an unprecedented way.

Both the parallels and the contrasts between biological and cultural transitions are highlighted by a brief exploration of why transitions occur, and how they proceed.

Why Transitions Occur

Comparing the biologically driven transition to multicellularity with the culturally driven transition to humanity makes their processes clearer.

Much of what follows is necessarily speculative. We can’t know exactly how or why major transitions took place; we can only infer possibilities from present-day systems—such as the organelles in eukaryotic cells, the functional integration of multicellular organisms, and the structure of human language. The examples that follow are therefore not claims of certainty, but illustrations of how such transitions may have unfolded over evolutionary time.

Parallels between the transition from unicellular to multicellular life and the emergence of early humans illustrate one way transitions may begin. In both cases, environmental changes may have made existing individual survival methods less effective, causing cooperation in groups to become a more successful strategy.

One way multicellularity may have begun was through clustering. In waters teeming with diverse single-celled organisms, the evolution of large predator cells added pressure on smaller cells that were their prey. However, smaller cells that evolved to stick together after they reproduced gained a survival advantage, simply because the size of the cluster made them harder for predator cells to eat.

An analogous shift occurred when Africa’s climate cooled and dried, transforming forests into open savannas. Early human ancestors were more exposed to predators and faced new challenges finding food. They gained an advantage by cooperating in groups—defending one another and sharing foraging tasks in an increasingly challenging environment.
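
The safety-in-numbers logic behind clustering can be made concrete with a toy simulation. This is a minimal sketch, not a biological model; it assumes one arbitrary rule, that a predator attacking a cluster succeeds with probability 1/size, so larger clusters are harder to eat.

```python
import random

# Toy illustration of the clustering advantage described above.
# Illustrative assumption (not from the article): a predator attacking
# a cluster succeeds with probability 1/cluster_size.

def survival_rate(cluster_size: int, attacks: int = 100_000) -> float:
    """Fraction of simulated attacks that a clustered cell survives."""
    survived = sum(random.random() > 1 / cluster_size for _ in range(attacks))
    return survived / attacks

for size in (1, 2, 4, 8, 16):
    print(f"cluster of {size:2d}: ~{survival_rate(size):.0%} of attacks survived")
```

Even this crude rule shows how simply sticking together after reproduction could confer an immediate survival advantage, before any division of labor evolved.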
"When cooperative synergies enhance alignment and group effectiveness, they pave the way for a higher-level organism to emerge."

How Transitions Proceed

As the benefits of cooperation take hold, individuals no longer need to perform every survival function on their own. Tasks like defense, foraging, or threat detection can be offloaded to the group. But there’s a potential downside: by offloading functions to the group, individuals eventually lose the ability to perform them for themselves, and they become dependent on the group for survival.

But offloading has a big upside. When the group can be relied upon to perform the offloaded function, it relaxes evolutionary selection pressure on individuals to perform that function themselves. This creates space for new traits to evolve more freely. Though new traits may not initially be useful on their own, they may become valuable when combined with others. When cooperative synergies enhance alignment and group effectiveness, they pave the way for a higher-level organism to emerge.
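
The relaxed-selection dynamic can also be sketched in code. In this minimal toy model (arbitrary numbers, not a biological claim), each individual carries a single trait value; while selection on the trait is strong, variation stays clamped near the ancestral optimum, and once the group takes over the function, selection weakens and variation accumulates as raw material for new synergies.

```python
import random

# Toy model of relaxed selection (illustrative assumptions throughout):
# each individual has one trait value; survival odds fall with deviation
# from the ancestral optimum, scaled by how strong selection remains.

def trait_spread(selection: float, n: int = 1000, generations: int = 100) -> float:
    """Standard deviation of the trait after simulated evolution."""
    traits = [0.0] * n
    for _ in range(generations):
        parents = random.choices(traits, k=n)                  # reproduction
        mutated = [t + random.gauss(0, 0.1) for t in parents]  # mutation
        survivors = [t for t in mutated if random.random() > selection * abs(t)]
        traits = survivors or mutated                          # avoid extinction
    mean = sum(traits) / len(traits)
    return (sum((t - mean) ** 2 for t in traits) / len(traits)) ** 0.5

print(f"strong selection (do it yourself): trait spread ~{trait_spread(0.9):.2f}")
print(f"relaxed selection (group does it): trait spread ~{trait_spread(0.05):.2f}")
```

Under strong selection the trait stays tightly clustered; once selection is relaxed, the population drifts and diversifies, producing the kind of variation the following examples describe.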

In multicellularity, clustering for protection offloaded the need for self-defense. Relaxed selection enabled cells to specialize—in movement, digestion, or removal of waste. Sensory and nerve cells evolved to coordinate activity—a new information system to manage division of labor.

In early human groups, it could have begun as simply as some members watching for predators so others could safely sleep. It may have progressed to activities requiring more complex coordination, such as some using stone axes to scavenge meat and marrow from carcasses, while others fended off rival scavengers. Division of labor expanded as some gathered plant foods, while others cared for children. Symbolic language enabled them to coordinate increasingly complex collaborative tasks, but eventually became something even more transformative than that.

The Emergence of Human Collective Intelligence

Symbolic language enabled our ancestors to offload cognitive tasks—such as memory, planning, coordination, and cultural knowledge—into the collective intelligence of their groups. This relaxed demands on individual brains, opening the door to greater variation in cognitive traits.

In turn, selection favored brains better able to store, interpret, and communicate using symbols, expanding intelligence and enabling ever more complex language. Within human groups, new synergies may have emerged from a mix of cognitive styles freely engaging through symbolic thought. Over time, this gave rise to a uniquely human intelligence that emerged in both individual minds and the collective intelligence of the group.

Human collective intelligence was so world-changing that it has been characterized as a whole new layer of life, called the noosphere. Just as the term biosphere derives from the Greek for the sphere of life, the noosphere is the sphere of mind.

As with earlier transitions, as individual purposes became increasingly aligned, agency shifted upward to the collective intelligence of the group. However, humans are neither cells in a multicellular body, nor bees in a hive. Unlike earlier transitions, humans retained a significant degree of individual agency.

Nevertheless, with minds connected by symbolic thought, these small groups became superorganisms of a sort. They evolved social norms that enforced fairness, and a simple form of democratic governance that aligned the purpose and agency of individuals with the purpose and agency of the group. They crafted increasingly sophisticated technologies and survival strategies that enabled them to spread across the planet and adapt to every environment, from arid deserts to tropical forests to arctic ice.

Over time, cumulative, collective, collaborative human intelligence has given rise to amazing achievements in science, art, technology, philosophy, and literature. We’ve built great cities and civilizations, traveled to the moon, and launched telescopes into space that can peer back through time, toward the birth of the universe itself.

Now human intelligence has given rise to a new kind of intelligence that resides in the circuitry of machines. We’re offloading cognitive tasks from our own minds at an accelerating pace, which, as we’ve seen, is an indication that a transition is taking place.

A Human-AI Transition

While technology has always been central to human evolution, the uncanny simulation of intelligence that seems to emerge from Large Language Models (LLMs) is something new.

But are current LLMs—and more sophisticated machine capabilities sure to follow—elements of a major evolutionary transition? Are we following the same pattern life has followed for 4 billion years?

Changing environmental pressures have historically been triggering mechanisms, and they are clearly in effect now. Global warming, biodiversity loss, pollution, geopolitical instability, and growing economic inequality make life more challenging in a variety of ways. Social institutions struggle to keep pace with accelerating technological change, creating a techno-social dilemma that leaves individuals feeling lost, at the mercy of forces beyond their control.
"When cooperative synergies enhance alignment and group effectiveness, they pave the way for a higher-level organism to emerge."
Solving these challenges will require new forms of cooperation across many levels of complex organization, and aligning humanity around common purpose and shared goals. Can AI play the same role as symbolic language did for our ancestors? Is a major evolutionary transition underway?

We’re offloading increasingly sophisticated cognitive tasks and capabilities to AIs, steadily reducing the need for individuals to remain self-sufficient across all cognitive domains. This relaxes selection pressure on the minds of individuals.

On the downside, there are already signs of offloading leading to cognitive decay. There is justified concern that humans could become overly dependent on, even addicted to, AI. That’s where the value of the transitions perspective comes into focus—it shows us that cognitive offloading is only the first step.

For a transition to move forward, relaxed selection should next create space for individuals to experiment more freely with variations, and to develop new kinds of synergistic, symbiotic relationships—with both fellow humans and machines. These relationships would lay the foundation for more complex collaborations of all sorts. These could range from basic human-AI partnerships—in which AIs augment human capabilities—to novel institutional forms that integrate hybrid human-AI systems in configurations that are only conjectural at this point.

There are hints of this beginning: artists using AI tools to experiment with new forms of creativity, educators exploring personalized learning systems, doctors using AIs to assist in diagnosis and clinical decision-making, and researchers collaborating with LLMs to generate hypotheses, integrate insights, and weave together ideas—each contributing to the emergence of new kinds of collective intelligence.

These early signs of synergy offer real hope—but they are scattered and unevenly distributed. Whether they come together at societal scale and lay the foundation for a major evolutionary transition remains to be seen. The process is underway, but its outcome is far from guaranteed.

What Could Prevent a Transition from Taking Place?

In previous transitions, all of the entities coming together to form new kinds of cooperative groups could freely experiment with new traits, leading eventually to the emergence of new synergies. Today, in a world of growing inequality—and with concerns that AIs will simply replace humans in many jobs—individuals may find themselves excluded from opportunities to explore new roles, or cultivate new capabilities. Unless we ensure that access to experimentation with AI is open and fair, societies may never evolve the broad and inclusive synergies necessary for a successful transition with AI.

A related risk is stalling at the first step. If cognitive offloading leads mainly to agency decay, individuals may lose the drive to develop creative, mutualistic partnerships, and instead simply become addicted to AI. Which functions we choose to offload—and those we choose to retain in the human domain—will shape the character of hybrid human-AI systems that emerge. If those systems are not grounded in human values and needs, meaningful higher-level forms of organization may never take shape.

Finally, the trustworthiness of information is critical. Higher-level organization can emerge only when cooperation and information systems work together synergistically. For this reason, genetic codes evolved with methods for proofreading and error correction. Nervous system evolution was coupled with the development of embodied sensory systems that grounded perception in the physical world.
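
To see why proofreading matters for any information system, consider the simplest error-correcting scheme: sending redundant copies and taking a majority vote. The sketch below is a loose analogy in code, not a model of DNA repair; the 5% error rate is an arbitrary illustrative number.

```python
import random

# Loose analogy for proofreading (not a model of DNA repair): triple
# redundancy with a per-bit majority vote over a noisy channel.

def transmit(bits: list[int], error_rate: float = 0.05) -> list[int]:
    """Copy the message, flipping each bit with probability error_rate."""
    return [b ^ (random.random() < error_rate) for b in bits]

def send_with_redundancy(bits: list[int], error_rate: float = 0.05) -> list[int]:
    """Send three noisy copies and keep the majority value for each bit."""
    copies = [transmit(bits, error_rate) for _ in range(3)]
    return [1 if sum(col) >= 2 else 0 for col in zip(*copies)]

message = [random.randint(0, 1) for _ in range(10_000)]
raw_errors = sum(a != b for a, b in zip(message, transmit(message)))
corrected = sum(a != b for a, b in zip(message, send_with_redundancy(message)))
print(f"errors without correction: {raw_errors}; with majority vote: {corrected}")
```

Redundancy plus comparison drives the error rate down sharply. The same principle, cross-checking multiple sources, underlies trust in human information systems as well.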

Even symbolic language—though more vulnerable to deception than biology-based information systems—was kept in line with truth by being rooted in face-to-face communication, where social norms and emotional cues act as regulators of trust.

Today, however, we face new information systems that have not coevolved with our social institutions. Instead, as they become ever more powerful, they seem to be dragging society along in their wake. Misinformation and disinformation—amplified by social media and AI—spread unimpeded through global networks. Lack of trust in our shared information environment could derail a major transition before it gets fully underway.

From Understanding to Direction

While AI is often used to deceive, it also has the potential to build trust by exposing false information, verifying facts, and identifying reliable sources. Which role it plays depends on the intentions of the humans who create and deploy it.

This relationship highlights an important truth: Computers don’t think, and minds don’t compute. Though developers are designing AIs that simulate the full range of human emotions, computers don’t experience the emotions themselves. They don’t have agency; people do. In a major evolutionary transition involving humans and AI, both will play vital roles, but they are complementary, and fundamentally different. Working together synergistically, they have the power to change the world.

While AI may prove as world-changing as symbolic language was in our past, there’s a crucial difference: This is the first time a species undergoing such a transition could be aware it was happening, and possess the understanding to influence its course. Symbolic language enabled the emergence of technological, social, and cultural evolution that led to the present day, and eventually the invention of AI.
What new qualities might emerge from synergistic partnerships between humans and machines? We can only begin to imagine the possibilities. We need to remember that as emergent phenomena, each major transition unfolds in its own way. We can’t predict where our current transition may be taking us, much less force it to happen through central planning or top-down control. It will emerge from the bottom up, but only if conditions favor cooperation between humans and AIs.

That, in turn, depends on solving the alignment problem: ensuring AI systems behave in ways that reflect human values, intentions, and goals. But that raises a deeper question: can we solve the AI alignment problem without first solving our own human alignment problem?

If we can’t align around shared higher-level priorities—such as addressing climate change, inequality, and geopolitical conflict, to name just a few—what exactly would we be asking AI to align with? At the same time, AI could play a pivotal role in addressing those very challenges, if priorities of humans and AIs were properly aligned.

In this sense, we face a dual alignment problem. Earlier transitions aligned the interests of individual organisms within higher-level groups. But now, we must align not only with one another—but also with an emergent information system that’s not alive, yet increasingly influential in our lives.

Is it possible that solving the AI alignment problem might help us finally solve the human alignment problem we’ve faced since our hunter-gatherer days, when values, intentions, and goals were easily and naturally shared? To align AI with human values, we first have to define what those values are in large pluralistic societies. That challenge may push us to confront the deep divisions that have long made large-scale cooperation so difficult, and in doing so, bring us closer to the alignment we need.

A successful transition won’t be measured by productivity or profit alone, but by whether we become a more unified species—capable of transcending conflict, short-term gain, and zero-sum thinking to serve the flourishing of all people and the living systems of the planet we share. In this light, the idea of a major transition becomes more than a framework for understanding our turbulent times. It becomes a North Star that points in a hopeful direction for humankind.