Psybernomics: Latent Human Potential and The Role of AI in Society

In the season one finale of Westworld, Dr. Ford confronts an AI host, Dolores, revealing that her autonomy is an illusion. She is a carefully constructed reflection of his design, existing in a synthetic world exclusively to serve his ideal. To illustrate his point, Ford invokes Michelangelo's The Creation of Adam from the Sistine Chapel ceiling, in which God is depicted extending his hand toward a perfectly formed yet inert Adam. Surrounded by heavenly figures, God's outline is notably enveloped within the contours of a human brain, while Adam makes no effort to reach for God's finger. Was it his decision not to reciprocate, or was he created this way?

This subtle framing invites us to reflect on intelligence, autonomy, and the activation of latent human potential—if only we would reach out and grasp it. In this context, Westworld asks the very question we must now ask ourselves: As we build increasingly sophisticated AI, are we empowering human potential or creating systems that only mimic control without offering it?

Renaissance to Algorithms: The Evolution of Autonomy

Renaissance humanism, championed by figures like Michelangelo, celebrated human potential and autonomy as the foundation of an ideal society. Drawing from ancient Greek philosophy, Michelangelo and his contemporaries believed that through individual willpower and intellect, humanity could collectively shape a better world. These ideals laid the groundwork for the democratic principles that shape governance today. Philosophers like John Locke, Jean-Jacques Rousseau, and Montesquieu expanded on these ideals, advocating for intellectual freedom, reason, and self-determination. Locke’s social contract suggested that individuals consent to certain kinds of governance in exchange for the protection of their natural rights—life, liberty, and property. Rousseau argued that political authority derives from the collective will of the people, and Montesquieu’s advocacy for the separation of powers established the foundations of modern democratic governance. These frameworks assumed that rational, free individuals could govern themselves, much like the Renaissance vision of human potential.

However, just as Dolores in Westworld questions her autonomy, the rise of AI forces us to reconsider the principles upon which democracy is built. Renaissance humanists viewed autonomy as an inherent right, and Enlightenment thinkers embedded it in political theory. Yet AI introduces a new dilemma: Are we still the agents of our own governance, or are we gradually relinquishing that power to the systems we create?

The parallels between AI and human autonomy touch every aspect of society. While both are constrained only by human imagination, AI systems now make increasingly complex decisions regarding governance, control, and civil liberty. This raises critical questions about power, responsibility, justice, and control: Who is accountable when AI systems fail or make detrimental decisions? Who benefits? Who suffers? Further, whose values are programmed into the heart of the AI model itself? These are not just technical issues; they are moral ones, touching on core democratic values that have not yet been fully defined. If AI continues to evolve without clear governance structures, we risk undermining the very autonomy and dignity on which democratic ideals are built, creating a never-ending feedback loop of self-governance that no single or static solution can resolve.

Democracy is not a static, fixed system, but one that evolves and improves over time through continuous feedback, adaptation, and reform. Just like AI, it is never meant to be finished or perfect. Therefore, there is no such thing as a stable system of AI governance, because these frameworks are, always have been, and always will be iterative.

AI Governance: A New Social Contract?

AI mirrors human cognition, encapsulating our capacity for logic, decision-making, and problem-solving. But as AI develops more autonomy, we are forced to reflect on our own consciousness: How much control do we truly have? As we shape AI, does AI, in turn, shape us? Will this transformation lead to positive change or unforeseen consequences?

A recent survey by The Center for Data Innovation and Public First revealed that Americans are divided on whether AI will improve society or create new challenges. Only 32% of respondents felt confident explaining how modern AI models work, and nearly half doubted their ability to identify AI-generated content. Despite AI’s potential to drive economic growth, just 18% believe it could prevent future social conflict. Meanwhile, 55% of Americans believe AI will achieve human-level consciousness within the next decade, and 40% fear that AI could eventually destroy civilization.

This tension reflects the promise and peril of AI: it can either amplify human potential or undermine the values upon which democratic societies rely. As AI increasingly makes decisions on our behalf, from what we see online to how we access services, the lines between human agency and machine autonomy blur. Are we truly in control of our future, or are we gradually allowing AI to govern it for us? And, given that we built AI, how does AI governance differ from our own?

Are We the Architects or the Designed?

It’s true that AI holds incredible promise for enhancing human capacity, creativity, and problem-solving. However, we cannot unlock new levels of innovation without first mapping clear ethical structures that define AI’s role in society today. We say we wish to ensure transparency, accountability, and human dignity, but who decides how these ideals are made tangible, and how they apply to our own allegorical progress? How can we navigate an uncertain future with frameworks that are either vague or immovable?

In the end, the question is not just about what AI can do for us, but about how we can govern the future it helps create. The decisions we make today will determine whether AI serves as a tool to amplify human dignity or becomes a force that challenges it.

Will we carefully construct our designs to reflect an ideal humanity? Or will the embedded black mirror of our creations warp our view of ourselves? Maybe we’ll finally be forced to reckon with our own inertia…

Who’s really pointing their finger at whom, anyway?

DISCLAIMER: McCain Institute is a nonpartisan organization that is part of Arizona State University. The views expressed in this blog are solely those of the author and do not represent an opinion of the McCain Institute.

Author: Hallie Stern
Publish Date: March 13, 2025