
Artificial Intelligence
A Jungian Exploration
Satish Kappagantula, October 7, 2025
In 1967, Marvin Minsky, widely acknowledged as one of the fathers of Artificial Intelligence (AI), predicted that “within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved” (as cited in Mitchell, 2023). He was wrong, of course, but AI has marched on steadily since Minsky’s predictions to its present resurgent state. Ray Kurzweil, Google’s lead researcher and AI visionary and evangelist par excellence, forecast that by the end of this decade, computers will improve to the point that they attain human-level and then general artificial intelligence (TED, 2024). Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain. Artificial General Intelligence (AGI) means AI that can do everything that any human can do, but to a superior level. In a May 2024 interview with the financial news channel CNBC, Eric Schmidt, the former CEO of Google, responded to a question about whether AI was being overhyped:
I think this stuff is underhyped. Not overhyped, because the arrival of intelligence of a non-human form is really a big deal for the world. … You’re seeing the future of reasoning, the future of human interaction, the future of research, the future of planning is being invented right now. (CNBC Television, 2024)
We are clearly witnessing a period of techno-optimism with its accompanying irrational exuberance. However, aside from the technological exuberance about AI and the apparent human greed to monetize this technology, what are the effects of AI on the human psyche from the point of view of Jungian psychology? From a Jungian perspective, this examination entails an understanding of the effects of modern digital technology and of AI’s creeping ubiquity in every aspect of our lives.
In June 2024, a counterpoint to the AI techno-evangelical fervor came from the late Pope Francis in his message for the World Day of Social Communications, when he warned:
The development of systems of artificial intelligence…is radically affecting the world of information and communication, and through it, certain foundations of life in society. These changes affect everyone. … How can we remain fully human and guide this cultural transformation to serve a good purpose? (Francis LVIII, 2024)
In a conversation with Richard Evans in 1957, Jung added similar caution to this discussion:
The announcement of an important truth, even with the best of intentions, can lead to an extraordinary mess. That was the fate of Prometheus. It is therefore important to husband dangerous material very carefully so that first graders do not get hold of dynamite. (as cited in Evans, 1964/2020, p. 59)
The 2020s appear to be the defining decade of AI—a flurry of technological innovation and huge investments by leading tech firms and governments, combined with exponential improvements in the “learning capability” of AI algorithms. All this activity is aimed at eventually producing a putative AGI, leading in due course to a time of “superintelligence” whose form and function are not yet predictable, with intriguing visions of trans-humanism and post-humanism.
Examining AI with a Jungian lens led me to research which uncovered “a welter of material” (Jung used this phrase in his foreword to the 1939 Argentinian edition of Psychological Types) that I needed to sift through, covering a range of philosophical, metaphysical, psychological, and technological viewpoints. I found myself pondering the question of whether AI could ever be considered intelligent, or conscious. As I accumulated research papers, books, and articles on the topic, I realized I was seeking a deeper understanding of this question, looking for an overarching framework into which I could neatly fit the various aspects of AI.
An Archetypal View of AI: Prometheus the Technologist
The guiding spirit of AI technology appears to be Prometheus, whose story was a significant inspiration for Jung’s Psychological Types (Jung, 1921/1971). Prometheus was a Titan, a close companion of Zeus the king of the Olympian pantheon. Prometheus created humans as playthings for Zeus’ enjoyment. Eventually, out of compassion for his creation, he stole the secret of fire from the Olympians and gave the secret to humans. Zeus exacted a terrible revenge for this transgression by chaining Prometheus to a rock. Each day an eagle tore out his liver and overnight it regenerated only for the torture to continue the following day.
The story of Prometheus as a technologist illustrates both the positive and the negative implications of unleashing AI. Jung (1921/1971) in his remarks on Spitteler’s 1881 poem Prometheus and Epimetheus focused on the two-sided attributes of archetypes, in this case Prometheus and his brother Epimetheus. In Jung’s reading, Spitteler has conceived of Prometheus as the inward-facing introvert and Epimetheus as the outward-facing extravert. It’s worth taking a moment here to consider how Jung is speaking of extraversion and introversion in the persons of Epimetheus and Prometheus. He is, of course, referring to our conventional understanding of these two attitudes: extraversion involves being animated by what is happening externally in the world while introversion involves drawing energy from one’s inner world. But Jung also meant these attitudes as stand-ins for a broader understanding. Extraversion here represents the persona—how we present ourselves to the world—and introversion represents the soul—our innermost essence, our uniqueness. Similarly, the attitude of extraversion refers to our place in the collective while introversion is about who we are as individuals removed from the collective.
Epimetheus and Prometheus are thus the poles of these opposites. Jung (1921/1971) said that Prometheus “sacrifices his individual ego to the soul. …He has lost the counterweight of the persona” (¶ 278) and that “Epimetheus cannot resist the temptation to…submit to the ‘soulless’ point of view” (¶ 282). Epimetheus, his brother says, “has bartered his soul for the p’s and q’s” (¶ 283). Jung saw this opposition of poles as “essentially a struggle between the introverted and extraverted lines of development in one and the same individual” (¶ 276). This is evident in the choices of “lifelines” associated with AI technology. Jung (1921/1971) continued:
The lifeline that Prometheus chooses is unmistakably introverted. He sacrifices all connection with the present in order to create by forethought a distant future. It is very different with Epimetheus: he realizes that his aim is the world and what the world values. (¶ 282)
Jung (1921/1971) observed: “If we fuse Prometheus and Epimetheus into one personality, we should have a man who is outwardly Epimetheus and inwardly Prometheus – an individual constantly torn by both tendencies, each seeking to get the ego finally on its side” (¶ 287). The optimal attitude is to unify the opposites, to realize the “symbolic jewel,” which Steve Myers explained as follows:
The symbolic jewel is an attempt to heal the split in the psyche. … But the significance of the jewel is lost on Epimetheus because … he has no connection to soul. Prometheus is unable to do anything with the jewel because he has no connection with the people. (Myers, 2019, p. 212)
AI grandees like Google’s Ray Kurzweil or Geoff Hinton, the 2024 Nobel Laureate in physics—with their respectively optimistic and pessimistic visions of the singularity, transhumanism, and post-human life—are examples of the introverted Promethean forethought. Conversely, Sam Altman and Mark Zuckerberg play the role of Epimetheus—seeking the here and now, to produce and capture value. It is worth noting that AI-Cassandras risk considerable derision when they warn of the dangers of AI—even the most distinguished of them, like Geoff Hinton, Eliezer Yudkowsky, who founded the Machine Intelligence Research Institute, and Oxford philosopher Nick Bostrom. Jung pointed out that the propensity to go along with the collective without any effort at differentiated thought makes it necessary for Spitteler’s Prometheus to “come out of his solitude and tell men, even at the risk of his life, that they are in error, and where they err” (Jung, 1921/1971, ¶ 311).
Jung (1933) in his essay “The Spiritual Problem of Modern Man” stated:
Only the man who is modern in our meaning of the term really lives in the present; he alone has a present-day consciousness, and he alone finds that the ways of life on those earlier levels have begun to pall upon him. The values and strivings of those past worlds no longer interest him save from the historical standpoint. Thus, he has become “unhistorical” in the deepest sense and has estranged himself from the mass of men who live entirely within the bounds of tradition … . To be “unhistorical” is the Promethean sin [emphasis added], and in this sense the modern man lives in sin. A higher level of consciousness is like a burden of guilt. (Jung, 1933, pp. 197-198)
The Promethean sin of modern man according to Jung is what we would call hubris. This is a willful forgetting of the past, of mythical voices, and arrogating to ourselves the powers of the gods. The image of Prometheus, as in any archetypal symbol, is two-sided—positive as well as negative. We can see the positive Promethean side in the development of AI: ingenuity in the creation of architectures of machine intelligence, algorithms that have the ability to tackle complex problems, assistance in decision-making, the ability to digest vast amounts of data to make accurate forecasts of future events and to generate credible responses to users’ questions, and so on. Indeed, Prometheus may be called the “patron saint” of technologists and engineers. But the Promethean symbol also has a negative side.
The Promethean attitude always looks to the future with little interest in the past, nor in the current state of affairs, except as it affects the future (Neville, 1996). The attitude can lead to a sort of inflation, an overoptimistic belief in our rationality and abilities to control the world. The dark side of the Promethean archetype constitutes a weakness that the cultural anthropologist and close friend of Jung, Karl Kerényi (1946/1963) said “signifies precisely the deficiency of one who needs fire in order to achieve a more perfect form of being” (p. 125). This becomes evident in the striving of AI evangelists to develop AI which can match human intelligence—a quest that according to Kerényi is an attempt ultimately to “achieve a more perfect form of being and to compensate for a realization of our basically imperfect form of being” (p. 125). The French sociologist and philosopher Jean Baudrillard (2020), best known for his analyses and critiques of contemporary culture, media, and technological communication, has a contemporary version of Kerényi’s sentiment on the mindset behind AI endeavors:
If men create intelligent machines, or fantasize about them, it is either because they secretly despair of their own intelligence or because they are in danger of succumbing to the weight of a monstrous and useless intelligence which they seek to exorcize by transferring it to machines. (Baudrillard, 2020, p. 51)
Baudrillard’s “useless intelligence” is the tendency of AI optimists and geeks to reduce all intelligence to the realm of ultra-rational algorithms while sacrificing imagination and transcendence to the machine. Jung (1949) pointed out the effect of this imbalance in a letter to a student newspaper at the Federal Polytechnic Institute in Zurich, in response to questions the students there had posed to him: “[Technology] estranges man from his natural versatility of action and thus allows many of his instincts to lie fallow.”
In his article “Prometheus, the Technologist,” Bernie Neville (1996) wrote that the image of Prometheus looms large in modern consciousness. Prometheus is represented as “the creative mind and the hero who defied the patriarchy in the name of individual freedom” (p. 13). The nineteenth-century capitalist version of the Promethean myth—the idea that technology is the key to freedom and prosperity—has continued to persist through the twentieth and into the twenty-first centuries. In the Promethean mindset, human beings are made in the image of the machine. This mindset assumes that everything in our world must operate like a machine, including the mind (intelligence and consciousness) itself. In the context of AI, the machine is no longer at human scale but at a scale that may no longer be subject to human control.
The German Swiss philosopher and poet Jean Gebser (1905-1975) was Jung’s junior by three decades. Nonetheless, Jung was a friend and admirer. In his seminal work The Ever-Present Origin, Gebser (1986) observed the evolution of human consciousness progressing through five stages or “structures of consciousness: Archaic, Magical, Mythical, Mental, and ultimately Integral” (p. 42). In discussing the development of consciousness, Gebser argued that the shift from a culture of crafts to a culture of machinery has been a shift from an efficient mental consciousness to a deficient, narrowly rational one.
Aeschylus (n.d.), in the fifth-century BCE play Prometheus Bound, warns his Athenian compatriots about the pitfalls of a Promethean consciousness. In the play’s last scene, Prometheus is forever bound and cast into Tartarus (the underworld). Adding to this punishment, Zeus orders the creation of Pandora and her box of evils, which are duly unleashed on humankind. If AI reaches a human level of intelligence within two decades (Kurzweil, 2005, p. 35; Kurzweil, as cited in Zomordi, 2025), will this be our fate also? Will humans have to endure a repetitive torture and suffering from the object of our creation? Will humanity be supplanted and placed in the service of all-powerful machines (who effectively become the new Titans)? Will humanity have to endure more of Pandora’s evils?
A Contextual Dream
As I pondered the research on AI that I had accumulated, I realized that I was essentially seeking a definitive answer on this topic from opinions of scholars—a quest which seemed fruitless because of the variety of points of view, none of which were entirely satisfying to me. During this period, I had the following dream. The dream gave me the idea of experimenting with using AI agents for dream interpretation.
Dream Part 1
I walk into a hall where several people are gathered. In the gathering, Sri Ramakrishna (an Indian sage) is seated surrounded by others. Someone introduces me to him, and he smiles, noticing I’m wearing my earbud headphones. He says, “You must be waiting for some important news!” I realize he is making gentle fun of me, and I feel embarrassed that I’m in his presence with my modern accoutrements.
Sri Ramakrishna (1836–1886) was an Indian mystic and sage who lived and taught near Kolkata (Calcutta), the capital of Bengal State in India. He is considered the spiritual father of the Ramakrishna Order in India and of the Vedanta Societies in the West through his disciple Swami Vivekananda. Ramakrishna considered all faiths as revelations of the Divine. His teachings are presented in The Gospel of Sri Ramakrishna (1942/2015).
Dream Part 2
I walk through the house and out onto a porch or a deck. Suddenly, I’m aware of a vine growing seemingly out of the wall and moving rapidly and purposefully towards me. I’m astounded, even somewhat alarmed that this plant can grow so quickly. As it moves rapidly towards me, I grasp it and wanting to know where it is growing from, tug on the vine. A small boulder shakes loose and falls to the floor. The vine is growing from this rock. I am left wondering how this vine could germinate and grow from this seemingly inert object.
I decided to test the capabilities of Claude Sonnet 4 (Anthropic.ai) and ChatGPT 4o (OpenAI) to give me an interpretation of the second part of the dream. The following texts are my queries and the answers.
Claude Sonnet 4 Dream Interpretation
Query: Using my Myers-Briggs type of ENTP and John Beebe’s 8-function psychological types model, give me a Jungian interpretation of the following dream. I’m walking through the house and out onto a porch or a deck. Suddenly, I’m aware of a vine growing seemingly out of the wall and moving rapidly and purposefully towards me. I’m astounded, even somewhat alarmed that this plant can grow so quickly. As it moves rapidly towards me, I grasp it and wanting to know where it is growing from, tug on the vine. A small boulder shakes loose and falls to the floor. The vine is growing from this rock. I am left wondering how this vine could germinate and grow from this seemingly inert object.
Claude Sonnet 4: Your dream presents a fascinating interplay between your dominant Ne (Extraverted Intuition) and your inferior Si (Introverted Sensing), viewed through Beebe’s model of psychological dynamics. Let me analyze this for you:
1. Dominant: Extraverted Intuition (Ne) – Possibilities, patterns, connections
2. Auxiliary: Introverted Thinking (Ti) – Internal logical analysis
3. Tertiary: Extraverted Feeling (Fe) – External harmony, social values
4. Inferior: Introverted Sensing (Si) – Internal bodily awareness, past experiences
5. Opposing: Introverted Intuition (Ni) – Single future vision, conviction
6. Critical Parent: Extraverted Thinking (Te) – External organization, efficiency
7. Trickster: Introverted Feeling (Fi) – Personal values, authenticity
8. Daimon: Extraverted Sensing (Se) – Present-moment engagement
The vine emerging from the wall represents your Ne’s natural tendency to perceive emerging possibilities and connections that others might miss. The fact that it moves “rapidly and purposefully” toward you suggests new insights or opportunities actively seeking your attention. Your initial alarm at its speed reflects the sometimes overwhelming nature of Ne’s constant stream of novel perceptions – even as an ENTP, you can be startled by the intensity of your own pattern-recognition abilities.
Your instinctive response to grasp the vine and trace it to its source reveals your auxiliary Ti (Introverted Thinking) at work – you need to understand the underlying logic and structure behind what you’re experiencing. This analytical impulse is characteristic of how Ti supports Ne by providing a framework for understanding intuitive leaps. The revelation that the vine grows from a boulder is particularly significant. In Jungian terms, the stone often represents the Self or the most foundational aspects of the psyche. Your inferior Si, which governs your relationship with concrete reality, tradition, and past experiences, appears here as this “seemingly inert” rock. The dream suggests that what you may dismiss as static or lifeless in your practical world actually contains tremendous generative potential.
This points to a shadow dynamic where your least developed function (Si) may be the unexpected source of growth and vitality. Your tertiary Fe (Extraverted Feeling) might be involved in your “astounded” emotional response – recognizing that even your most underdeveloped aspects have value and creative power.
The dream seems to be highlighting how your unconscious Si, though seemingly dormant, actually grounds and nourishes your dominant Ne insights. Your wondering at the end suggests an invitation to reconsider your relationship with the concrete, practical world – not as a limitation, but as the fertile ground from which your best ideas emerge.
ChatGPT 4o Dream Interpretation
Query: Using my Myers-Briggs type of ENTP, and John Beebe’s 8-function psychological types model, give me a Jungian interpretation of the following dream. I’m walking through the house and out onto a porch or a deck. Suddenly, I’m aware of a vine growing seemingly out of the wall and moving rapidly and purposefully towards me. I’m astounded, even somewhat alarmed that this plant can grow so quickly. As it moves rapidly towards me, I grasp it and wanting to know where it is growing from, tug on the vine. A small boulder shakes loose and falls to the floor. The vine is growing from this rock. I am left wondering how this vine could germinate and grow from this seemingly inert object.
ChatGPT 4o: First, Your Type and Beebe’s Model. As an ENTP, your cognitive function stack in Beebe’s 8-function model looks like this:
1. Hero (Dominant): Extraverted Intuition (Ne)
2. Parent (Auxiliary): Introverted Thinking (Ti)
3. Child (Tertiary): Extraverted Feeling (Fe)
4. Anima/Animus (Inferior): Introverted Sensing (Si)
5. Opposing Personality: Introverted Intuition (Ni)
6. Witch/Senex: Extraverted Thinking (Te)
7. Trickster: Introverted Feeling (Fi)
8. Demonic/Daimonic: Extraverted Sensing (Se)
ChatGPT 4o: Dream Symbols and Jungian Interpretation
“I’m walking through the house and out onto a porch or a deck.” Symbolic meaning: The house often represents the psyche or the self. Moving from inside to the porch suggests a transition from the inner world to the threshold of the outer world. The porch/deck is a liminal space, between inside and outside, symbolizing a shift in consciousness or awareness.
“Suddenly, I’m aware of a vine growing seemingly out of the wall and moving rapidly and purposefully towards me.” Jungian angle: A vine is organic, instinctual, and life-seeking; it may symbolize the unconscious pushing forward, seeking integration. Its rapid, purposeful movement suggests urgency or insistence from the unconscious. That it comes from a wall, a symbol of barriers or divisions, suggests the breakdown of psychological boundaries.
“I grasp it and, wanting to know where it is growing from, tug on the vine.” Interpretation: Grasping and tugging suggests curiosity, a hallmark of your dominant Ne function: open, pattern-seeking, and exploratory. You try to understand the source, which engages Ti (introverted thinking): logical inquiry into origins or principles.
“A small boulder shakes loose and falls to the floor.” Symbolic meaning: A boulder is inert, heavy, grounded: perhaps a symbol of the past, the repressed, or an old complex. That it falls implies a shift in the psychic structure, a loosening of rigid or unconscious material.
“The vine is growing from this rock.” This is the central image: life emerging from the inert. It’s an archetypal image of transformation: life force or psychic energy (the vine) emerging from something seemingly dead or unyielding (the rock).
Suggests an integration of inferior or shadow functions-especially those that have been neglected or unrecognized.
“I am left wondering how this vine could germinate and grow from this seemingly inert object.” This wonder shows Ne’s astonishment at paradox, and also the dawning awareness of deeper unconscious potential.
ChatGPT 4o: Beebe Model Mapping
Let’s map this to your psychological functions:
Ne (Hero) – Extraverted Intuition. Dominates the dream: curiosity, openness to new meanings, seeing unexpected connections (life from rock). The vine is a symbol of Ne itself – growth, movement, connectivity, breaking through barriers.
Ti (Parent) – Introverted Thinking. Activated when you grasp the vine and try to analyze the source. You move from an intuitive reaction to a logical investigation.
Fe (Child) – Extraverted Feeling. Present more subtly, possibly in the emotional tone of being startled but not terrified – suggests receptivity and openness to connection.
Si (Inferior / Anima) – Introverted Sensing. The rock may symbolize old, fixed experiences or memories. Si can be experienced as rigid or inert to dominant Ne, but here it’s giving rise to something new. This could reflect a breakthrough in integrating your inferior function – bringing memory or stability into dynamic awareness.
Ni (Opposing Personality). The purposefulness of the vine may feel alien – Ni’s focused insight can oppose Ne’s scattered curiosity. The dream may hint at a conflict or encounter with focused unconscious material that challenges your dominant way of knowing.
Se (Daimonic) – Extraverted Sensing. The alarming speed and physical movement could come from Se in its negative form: overwhelming stimulation, unpredictability, even fear of the raw world breaking in. But the daimonic can also be transformative; it shocks you into awareness.
ChatGPT 4o: Jungian Summary
This dream can be read as a numinous encounter with the unconscious, wherein something previously inert or unconscious (the rock) has begun to generate life: an image of psychic renewal. You, as an ENTP led by extraverted intuition, are drawn to explore this paradoxical phenomenon. The dream may mark a turning point in your psyche: the integration of your inferior Si function (stability, memory, rootedness), or an awakening of repressed psychic material that now wants expression. The vine, organic and fast-growing, is an archetype of new life, growth, and transformation, but it also invites accountability: you can’t ignore its presence.
ChatGPT 4o: Final Thoughts
Your psyche seems to be presenting you with a mystery—how something alive and vital can grow from what was believed to be lifeless. It’s an invitation to go deeper, to trust in the irrational, and perhaps to reclaim a buried part of yourself.
Personal amplification
Both ChatGPT and Claude Sonnet are clearly impressive, amazing even, in their interpretations. It is important to note that when I first queried ChatGPT about an interpretation according to the Beebe model, it provided a function hierarchy for the ENTP type that was incorrect. I gave it essentially the same query a few weeks later, and it had already learned Beebe’s model by then. Now, the two AI interpretations sound eerily similar—as if they just learned the theory from the same textbooks, which is not surprising since their training databases are more or less the same. It is interesting that Claude Sonnet focused on the Ne–Si, Ti–Fe relationship, and particularly the interplay between my dominant and inferior functions. ChatGPT produced a more comprehensive breakdown, mapping elements of the dream to the eight-function Beebe model (Beebe, 2017). On the whole, I can’t help being impressed by the prowess of both AI agents. The value of these tools to psychoeducation cannot be denied. One can get a good sense of Jungian dream analysis using these agents. These AI agents are clearly good learners, and this performance surely demonstrates a degree of intelligence.
Adding some further amplification to these AI interpretations, I realized that, in fact, there are vines called lithophytes which grow on rocks, extracting nutrients from the mineral content of rocks and from moisture that condenses on the surface. Prior to the dream, I had no direct knowledge of lithophytes. The affect of wonder experienced by my dream ego as it witnessed the vine’s rapid growth is perhaps the psyche’s attempt to communicate emergence—a non-rational development better grasped by sensation than by rational thought. The desire of my dream ego to comprehend the emergent phenomenon of the vine is echoed in these lines from Tennyson (1863), from a poem titled “Flower in the Crannied Wall,” a meditation on an unnamed lithophyte growing at Waggoners Wells, an English nature reserve:
I hold you here, root and all, in my hand,
Little flower – but if I could understand
What you are, root and all, all in all,
I should know what God and man is. (3-6)
Tennyson’s little poem captures my search for a deeper understanding of the emerging phenomenon that AI represents. The poet, having no scientific knowledge of lithophytes and therefore no rational means of analysis, had to accept the phenomenon of the emergence of the plant from stone masonry. Similarly, the dream appears to be prompting me to accept the possibilities of emergence from this study of AI technology, an emergence of deeper understanding of the psychological implications of AI. The yearning for this deeper understanding is the common animating feature of both Tennyson’s meditation and my research.
Jung (1921/1971) aptly described my dominant Ne orientation:
Because extraverted intuition is oriented by the object, there is a marked dependence on external situations. …The intuitive… has a keen nose for anything new and in the making… . Neither reason nor feeling can restrain or discourage him from a new possibility, even though it may run counter to convictions hitherto unquestioned. (¶ 613)
Beebe (2017), in his unpacking of the eight-function typological attitudes, further explained the Ne orientation as a mental process operating at three levels, associated with the persona, the ego, and the Self. At the first, persona level, Ne is entertaining: it entertains others with a timely, tantalizing topic. This shades into entertaining new possibilities, which can lead to a broader envisioning at the second, ego level. Finally, at the third level, that of the Self, Ne becomes enabling (Beebe, 2017, p. 5). Whilst my dominant orientation finds the Promethean futurists intriguing, my auxiliary function, introverted thinking (Ti), kicks in, wanting to understand more about the technology and its impact, providing a balance between unbridled optimism and constricting pessimism.
Is it not possible, as many AI optimists believe, that AI is a tool for human empowerment, that it can be accessible broadly and equitably in society, and that it would work under human control, ultimately making possible a new age of creativity and innovation? Even as suffering was unleashed on humanity by Pandora opening her box in the Promethean drama, a second opening of the box released Hope. Prometheus himself was set free ultimately by Hercules, who killed the torturing eagle and broke the binding chains. Would this not be a reason to foresee a positive future much as Prometheus did when he gave the gift of fire to humans? In general, technological advances have benefited society. AI would be no exception to this working principle. Generative AI has already demonstrated its value in developing new drugs to treat various health conditions (Zhang et al., 2025). Netflix, for example, uses machine learning (a key component of AI) to analyze our viewing habits and then recommend shows and movies that are tailored to our preferences. And Grammarly uses natural language processing to analyze text, correct grammatical errors, and suggest improvements to the quality of writing. The list goes on.
However, even the most enthusiastic AI optimist must acknowledge the Promethean debt to be paid. All future gains from AI technology must be paid for with disruptions to business, loss of jobs, and inevitable societal change. This, we could argue, is the nature of creative destruction in economics, long championed by economists (Schumpeter, 2008). New occupations will undoubtedly emerge as older ones fall away. Many of the jobs held today could not have been foreseen, much less conceived of, fifty-five years ago in 1970. AI optimists see the developing learning capacity of modern AI as an extension of ourselves, a crystallization of human knowledge and values, which can be molded to behave in line with our preferences. Well-attuned AI systems can assist their human principals in a variety of tasks, boosting productivity significantly. In psychotherapy, for example, efforts are underway (Stade et al., 2024) to develop Large Language Models (LLMs) for assisting therapists with clients, collaborating on client treatment, and eventually supporting clients autonomously after therapy. From the AI optimist’s point of view, this is a vision of humanity’s continuing march towards a post-human utopia.
The Material and Psychological Costs of AI
The method of operation of generative models like ChatGPT or Gemini is mindbogglingly computation intensive. For example, to generate a single token, which is a word or a part of a word, ChatGPT performs roughly a trillion arithmetic operations (Narayanan and Kapoor, 2024). A simple haiku with a total of 17 syllables may thus take about twenty trillion operations to generate, all on the basis of next-word prediction. Next-word prediction is popular with AI researchers because, unlike image recognition or topic-specific AI, generative models can train on any available text. And the astronomical amount of training data online demands ever-larger computational capability. The computational needs of AI engines are a major factor in future energy requirements. On average, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search (Goldman Sachs, 2025).
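The arithmetic behind these figures is simple to sketch. The following back-of-the-envelope estimate in Python takes the cited figure of roughly one trillion operations per token, and assumes (my own rough assumption, not a figure from the sources) that a 17-syllable haiku comes to about 20 tokens:

```python
# Back-of-the-envelope estimate of the compute behind one short generation.
# OPS_PER_TOKEN follows the figure cited above; HAIKU_TOKENS is a rough
# illustrative assumption, since token counts vary by tokenizer.
OPS_PER_TOKEN = 1_000_000_000_000  # ~1e12 arithmetic operations per token
HAIKU_TOKENS = 20                  # assumed token count for a 17-syllable haiku

total_ops = OPS_PER_TOKEN * HAIKU_TOKENS
print(f"Operations for one haiku: about {total_ops:.0e}")
```

On these assumptions the total lands on the order of 2 × 10¹³ operations, the "about twenty trillion" cited above, for a poem a human could write on a napkin.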
Goldman Sachs Research estimates that data centers, each with a billion or more AI compute instances, will require a 165% increase in electrical energy by 2030 (Goldman Sachs, 2025). Wells Fargo research projects that by 2028, AI data centers will need about 300 terawatt-hours (TWh) (Kimball, 2024). This demand will equal the annual energy consumption of the entire State of California, with its population of 38 million people, and will require new plans for power generation. In the Bulletin of the Atomic Scientists, Stover (2024) described plans from tech giants such as Microsoft, Google, Meta, and Amazon to build dedicated nuclear power stations to meet the energy needs of AI data centers. AI data centers are thirsty beasts as well. They consume large amounts of fresh water directly for cooling towers, which are needed due to their high energy use, and indirectly through the water used by the power plants that supply them with electricity (Nicoletti et al., 2025). And finally, the carbon dioxide emissions of data centers may more than double between 2022 and 2030 (O’Donnell, 2025).
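The scale of the Wells Fargo projection can be made concrete by simple division. A minimal sketch, using only the two numbers cited above (300 TWh per year and a population of 38 million):

```python
# Express projected AI data-center demand in per-person terms.
# 300 TWh/year is the Wells Fargo projection for 2028; 38 million is
# the California population figure cited in the text.

AI_DEMAND_TWH = 300
POPULATION = 38_000_000

kwh_per_person = AI_DEMAND_TWH * 1e9 / POPULATION  # 1 TWh = 1e9 kWh
print(f"{kwh_per_person:,.0f} kWh per person per year")
```

The result, roughly 7,900 kWh per person per year, shows why this projected demand is compared to the consumption of an entire large state.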
Money is no object. Leopold Aschenbrenner, an alumnus of OpenAI, describes how in San Francisco, the talk amongst tech cognoscenti has moved over the last couple of years from 10-billion dollar to 1-trillion dollar “compute clusters,” data centers with AI computation capability that cost about a trillion dollars each (Aschenbrenner, 2024). Aschenbrenner’s optimism about the progress of the capabilities of generative AI like ChatGPT, based on the availability of ever-larger compute capacity and algorithmic improvements, envisions that by 2030, we will be well on our way to Artificial General Intelligence (AGI). And he’s not alone in his optimism.
The sheer size and scope of these projected investments, the infectious groupthink, the ferocious competition, and the underlying hubris and ambition without regard to environmental or societal costs remind us of Jung’s warning about collectives in Two Essays on Analytical Psychology (1928/1966):
Every man is, in a certain sense, unconsciously a worse man when he is in society than when acting alone; for he is carried by society and to that extent relieved of his individual responsibility. Any large company composed of wholly admirable persons has the morality and intelligence of an unwieldy, stupid, and violent animal. The bigger the organization, the more unavoidable is its immorality and blind stupidity. (¶ 240)
The corrective to this comes from Spitteler’s Prometheus (Jung, 1921/1971):
Psychologically, this means that the collective, undifferentiated attitude to the world stifles a man’s highest values and becomes a destructive force, whose influence increases until the Promethean side, the ideal and abstract attitude, places itself at the service of the soul’s jewel and, like a true Prometheus, kindles for the world a new fire (¶ 311).
In a sense AI and psychology have always been intertwined. Bryson (2020) described AI as a “schizophrenic field pursuing two different goals: an improved understanding of computer science through the use of psychological sciences; and an improved understanding of psychological sciences through the use of computer science” (p. 3). An example of this synergy is seen from the earliest days of AI with Newell and Simon’s claim that all intelligent behavior can be explained by symbol manipulation (Newell & Simon, 1976). The theory suggests that human cognition can be understood as a process of symbol manipulation—a classic case of conflating human and machine processing. Their work introduced the idea of modeling human thought processes as information processing systems within a computer and laid the foundation for the field of cognitive psychology.
Similarly, human psychology is used in the design of AI algorithms (Gigerenzer, 2024; Gigerenzer & Goldstein, 1996), particularly in situations of uncertainty and instability. For example, algorithms designed around the human tendency toward “recency” predicted the spread of flu or COVID far better than the typical big-data AI algorithms. And then there is the developing field of “artificial psychology” (Crowder & Friess, 2013), the theoretical study of the mental processes of a future AI system exhibiting capabilities of autonomous thinking, self-reprogramming, and continuous adaptation to new situations. This inevitably brings up issues of “artificial cognition,” “machine emotions,” and “artificial intuition.” When a machine becomes sufficiently intelligent, will we find that these properties are emergent markers? Interestingly, work on these topics appears to be limited to theory, with little real-world or experiential study. In the pell-mell rush to build faster and larger AI, are we missing the opportunity to understand some basic issues that may come up as we reach the threshold of AGI? Jung (1984) warned against this single-minded, one-sided push to develop machines—a warning that will no doubt be more honored in the breach than the observance:
In building a machine we are so intent upon our purpose that we forget that we are investing the machine with creative power…it can outgrow us in an invisible way… . Despite the fact that they were created by man, they are the dwelling places of divine powers that may destroy us. (p. 542)
Our continuing use of the word “machine” to refer to aspects of AI is significant and bears some examination.
The Machine Archetype
As a species, we have been invoking the machine for centuries. The machine has become an archetype, evolving in its shape and form over the generations. One only needs to think of the massive data centers already in place today with thousands of computing elements in operation for AI training and generation. Jung (1984) reminded us that these vast machines are the dragons of our day, holdovers in the collective memory of a time when “there were still saurians” (p. 538). Psychologist Jody Echegaray (2024), in his essay “Archetype of the Machine,” probed the reasons behind the enduring image of the machine as an object of awe, derision, and fear in the human psyche. The human fascination with machines goes back several millennia. Our cultural tendency to see our minds and bodies as machines is in many ways a reflection of our fascination with and admiration for machines in general. Echegaray (2024), quoting the work of Erik Davis, pointed out that the machine archetype can be separated into two distinct identities: “technologies of power” and “technologies of communication.” ChatGPT, for example, contains both elements: a massive computing and database backend, which lurks in cavernous datacenters unseen by most, and a chatty, generative front-end ready to offer a response to any query made to it.
However, we know from Jung that all archetypes are two-sided, with light and dark sides. The light Epimethean side of the AI archetype is reflected in ChatGPT’s (or Google’s Gemini, or Anthropic’s Claude) remarkable capability to generate reasonably coherent content. This is the seductive, performative side of the archetype. The trickster element becomes evident in the sometimes garbled, ambiguous responses and the outright fabrications and falsifications in responses from ChatGPT, charitably referred to in the trade as “hallucinations.”
Jungian scholar Robert Romanyshyn focused on the Promethean part of the machine archetype (Echegaray, 2024). The positive side of this archetype is, of course, reflected in the enormous advances that we have made with AI. We are witnessing the collective intellectual genius of research from OpenAI, Google, and many others, with the resulting engineering miracles that are driving AI forward. As Ray Kurzweil put it, “No matter what problem you encounter…there’s an idea out there that can overcome it. And you can find that idea” (as cited in Aamoth, 2010). The belief that there is nothing insurmountable in our march towards AGI is a quintessentially Promethean attitude.
Inevitably, there is the shadow side. Romanyshyn brings the shadow into focus by likening the dark aspect of the Promethean thrust to Frankenstein’s monster. The continuing AI enterprise will require, as we have seen, enormous resources, like the monstrous appetite of Frankenstein’s creation. Another equally disturbing aspect of the dark side of the massive unseen elements of AI is evident in what is now known as “AI bias.” The National Institute of Standards and Technology (NIST) has devoted considerable resources to developing a standard to “de-bias” AI (Schwartz et al., 2022). A rigorous examination by the NIST team has identified three categories of AI bias: computational (algorithmic and training), human (cognitive, confirmation), and systemic (historical, societal, institutional). It seems that the full weight of human history with all its discoveries, insights, omissions, and distortions is embedded into today’s AI. Virtually all the available data on the planet has been devoured by AI training databases, yet all the frailties of human attitudes are preserved (and amplified) in today’s AI.
But the march to AGI goes inexorably on. Sam Altman, the CEO of OpenAI, believes that it will take a few trillion dollars more to build AGI, dollars that he is sure are available from a worldwide network of private and public investors:
We are now confident we know how to build AGI as we have traditionally understood it. …We are beginning to turn our aim beyond that, to superintelligence. … We are here for the glorious future … [which will] massively increase abundance and prosperity. (Altman, 2025)
He offers a glorious vision indeed, at the root of which is our deep human longing to achieve immortality.
Conquest of Death?
This belief in the goal of immortality echoes Jung’s suggestion in a videotaped BBC interview (1959) that the psyche might have a continued existence beyond the death of the physical body where “it isn’t entirely confined to space and time. … To that extent, the psyche is not submitted to those laws and that means a practical continuation of life, of a sort of psychical existence beyond time and space.” The difference from Jung is that the AI post-humanist view of immortality is a non-psychic existence where the entire basis of existence is a virtual ego-based “thinking” state of being, which is hardly a recipe for psychic wholeness. But could this be a plausible scenario? Can AI lead humanity towards immortality? And will we become as the gods? Yuval Noah Harari (2017) floated the idea in his book Homo Deus: “And having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods and turn homo sapiens into homo deus” (p. 21).
Our creations are the AI algorithms. As George Zarkadakis (2024), author and AI researcher, put it:
Algorithms are the animating spirits of artificial intelligence artefacts. For some people, they are also the ultimate destination for humanity in a posthuman future … . In the absence of gods, we have become the superior beings by default. How can we possibly commit hubris against ourselves? (p. 87)
The costs and consequences of this quest for power are mostly being ignored in the present techno-prophetic climate. Harari (2017) described a Faustian bargain that seems to sum this up: “Modernity is a deal. The entire contract can be summarized in a single phrase: humans agree to give up meaning [emphasis added] in exchange for power” (p. 200). And surely, the gods are watching and waiting. Hubris must invariably be followed by nemesis.
Jung (1957/1977) in The Symbolic Life made the following statement about the technology explosion of the post-war era:
In general, it can be said that for modern man technology is an imbalance that begets dissatisfaction with work or with life. It estranges man from his natural versatility of action and thus allows many of his instincts to lie fallow. … The technologist has something of the same problem as the factory worker. Since he has to do mainly with mechanical factors, there is a danger of his other mental capacities atrophying. Just as an unbalanced diet is injurious to the body, any psychic imbalances have injurious effects in the long run and need compensating. (¶ 1405-1406)
His opinion of modern digital and AI technology would, if anything, be more pointed. The notion of mental capacities atrophying becomes a distinct possibility with increasing dependence on AI. Humans have a natural tendency to anthropomorphize machines, ascribing human qualities to them (Proudfoot, 2011). This tendency makes us more likely to overestimate the intelligence of the machine. The result of this self-deception could be a gradual receding of our capacity for critical thinking—because the prowess of the intelligent machine absolves us of the need for it. Baudrillard expressed a concern similar to Jung’s:
What such machines offer is the spectacle of thought, and in manipulating them people devote themselves more to the spectacle of thought than to thought itself… . The act of thinking itself is thus put off forever… . [I]t is to be feared that artificial intelligence and the hardware that supports it will become a mental prosthesis for a species without the capacity for thought. (Baudrillard, 2020, pp. 51–52)
Zarkadakis (2024) went further, likening the current attitude of AI technologists to a form of Neo-Platonism. Plato’s teaching is that our sensual and somatic perception of the world around us is illusory, and that a deeper reality lies undiscovered behind our senses. Ascending to this deeper reality requires the presence of a mediator—a priest or a guru. AI and virtual technologies which enhance our awareness could become the contemporary mediators. According to Zarkadakis (2024), “We are all Platonists nowadays, especially the computer engineers, the data scientists and the mathematicians that drive advances in Artificial Intelligence” (p. 88).
As to who oversees an “on-off” switch to counter the ever-increasing intertwining of our lives with AI, the question does not seem to bother the creators greatly. Will this control be the domain of the firms who create and own the AI tools like ChatGPT, or will users ever have the choice to turn off the AI extensions to their applications? Jung (1957/2020) asked us to consider this question of “who?” in his meditation on Yoga and the West:
The power of science and [technology] … is so enormous and indisputable … . One shudders at the stupendous possibilities. Quite another question begins to loom up: Who is applying this technical skill? in whose hands does this power lie? … Our technical skill has grown to be so dangerous that the most urgent question today is … how the man who is entrusted with the control of this skill should be constituted, or how to alter the mind of Western [emphasis added] man so that he would renounce his terrible skill [emphasis added]. It is infinitely more important to strip him of the illusion of his power than to strengthen him still further in the mistaken idea that he can do everything he wills. (¶ 869)
Renounce his terrible skill! Despite Jung’s cri de cœur, in the boardrooms and engineering labs of OpenAI, Google, Microsoft, and every AI startup, renunciation is probably not even in the lexicon. Indeed, the very opposite is the case. How to acquire more control is the question of the day. “AI nationalism” is on the rise—the race to control and own AI technologies is the twenty-first century version of the nuclear arms race of the post-war twentieth century. In late January 2025, a Chinese startup called DeepSeek launched a chatbot like OpenAI’s ChatGPT. It turns out that DeepSeek developed its AI model at a fraction of the cost of OpenAI’s tool, and the algorithm will be made available for free. Worldwide stock markets responded by wiping over a trillion dollars from the valuation of the key AI players. AI researchers are excited because DeepSeek will “democratize access” to advanced AI models. Marc Andreessen, a leading US venture capitalist, called this “AI’s Sputnik moment” (Hetzner, 2025).
Quo Hinc? (Where Are We Headed?)
So where is the arrow of AI’s future pointing—towards a glorious utopia of human advancement or a dystopian world of haves and have-nots? And what of my initial questions about the intelligence of AI, or whether today’s AI can be considered conscious? To the first question, based on my interaction with AI agents (ChatGPT 4o and Claude Sonnet 4), my answer is that both agents are definitely “smart.” They remind me of excellent research assistants and, as such, can be tremendously helpful, but I am reserving my judgment on their level of intelligence for now. As for AI consciousness, I do not see any evidence of it at the moment. But the field is advancing rapidly, and we will have to revisit this question regularly in the coming years.
A few quotes bring this exploration to an end. The first is from Eliezer Yudkowsky (2023), an AI pioneer recently turned AI-Cassandra:
The most likely outcome is AI that does not do what we want and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI, but we are not ready and do not currently know how …. The AI does not love you, nor does it hate you … . You are made of atoms it can use for something else. (Yudkowsky, 2023)
The second is from Nick Bostrom (2014), a keen student of AI, writing in his bestseller, Superintelligence: Paths, Dangers, Strategies: “This is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face” (p. vii). Both Yudkowsky and Bostrom are Promethean in their view (as opposed to Epimethean), filled with forethought and inward reflection. But is there room here for an Epimethean attitude as well? Does our attitude to AI need more of a balance—exploiting its usefulness, while maintaining a healthy skepticism and oversight of this technology? In his comments on the poignant ending to Spitteler’s poem, Jung reminded us:
Prometheus surrenders himself, come honor or dishonor, to his soul, that is, to the function of relation to the inner world … . With this surrender to his soul Prometheus loses all connection with the surrounding world, and hence also the very necessary corrective offered by external reality. But this loss cannot be reconciled with the nature of the real world. Therefore, an angel appears to Prometheus, evidently a representative of the powers-that-be; in psychological terms, he is the projected image of a tendency aiming at adaptation to reality. The angel accordingly says to Prometheus: ‘It shall come to pass, if you do not prevail and free yourself from your froward soul, that you shall lose the great reward of many years, and the joy of your heart, and all the fruits of your richly endowed mind… .You shall be cast out on the day of your glory on account of your soul, for she knows no god and obeys no law, and nothing is sacred to her pride, either in heaven or on earth. (1921/1971, ¶ 278)
And finally, Jung (1957/1970), in “On the Nature of the Psyche,” turned our attention back to his most important subject—the human psyche:
The psyche is the greatest of all cosmic wonders and the sine qua non of the world as an object. It is in the highest degree odd that Western man, with but very few—and ever fewer exceptions, apparently pays so little regard to this fact. Swamped by the knowledge of external objects, the subject of all knowledge has been temporarily eclipsed to the point of seeming non-existence. (¶ 357)
References
Aamoth, D. (2010, April 2). An interview with Ray Kurzweil. Time.
Aeschylus. (ca. 430 B.C.E.). Prometheus bound. The Internet Classics Archive, Web Atomics, 1999-2009, Daniel C. Stevenson.
Altman, S. (2025, January 5). Reflections. Retrieved September 21, 2025.
Aschenbrenner, L. (2024, June). Situational awareness: The decade ahead. [Online text]
Baudrillard, J. (2020). The transparency of evil: Essays on extreme phenomena. Verso.
Beebe, J. (2017). Energies and patterns in psychological type. Routledge.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
BBC. (1959). Face to face: Carl Gustav Jung. [Video]. YouTube. Retrieved September 21, 2025
Bryson, J. J. (2020). The artificial intelligence of the ethics of artificial intelligence: An introductory overview for law and regulation. In M. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI (pp. 3-25). Oxford University Press.
CNBC Television. (2024). Former Google CEO Eric Schmidt on AI potential: American businesses will change because of this [Video]. YouTube.
Crowder, J., & Friess, S. (2013). Artificial psychology: The psychology of AI. Systemics, Cybernetics and Informatics, 11(8), 64-68.
Echegaray, J. (2024). Archetype of the machine. In G. Amundson (Ed.), Psychological and philosophical studies of Jung’s teleology: The future-orientation of mind, (pp. 174–216). Routledge.
Evans, R. I. (2020). Richard I. Evans’ conversations with Carl Jung and reactions from Ernest Jones: Reprint of the 1964 edition, with foreword by Jodi Kearns. Center for the History of Psychology.
Francis. (2024). Artificial intelligence and the wisdom of the heart: Towards a fully human communication. (Message for the 58th World Day of Social Communications). Vatican.
Gebser, J. (1986). The ever-present origin. Ohio University Press.
Gigerenzer, G. (2024). Psychological AI: Designing algorithms informed by human psychology. Perspectives on Psychological Science, 19(5), 839–848.
Gigerenzer, G., & Goldstein, D. G. (1996). Mind as a computer: Birth of a metaphor. Creativity Research Journal, 9, 131–144.
Goldman Sachs (2025, February 4). AI to drive 165% increase in data center power demand by 2030. Retrieved from https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030
Harari, Y. N. (2017). Homo deus: A brief history of tomorrow. Harper Collins.
Hetzner, C. (2025, January 27). Marc Andreessen warns DeepSeek is “AI’s Sputnik moment.”
Jung, C. G. (1921/1971). Psychological types. CW6.
Jung, C. G. (1928/1966). Two essays on analytical psychology. CW7.
Jung, C. G. (1933). Modern man in search of a soul. Houghton Mifflin Harcourt.
Jung, C. G. (1949, November). Letter to the Zürcher Student. Student Newspaper of the Federal Polytechnical Institute, Zurich.
Jung, C. G. (1957/1970). On the nature of the psyche. CW8.
Jung, C. G. (1970/2020). Psychology and religion: West and East. CW11.
Jung, C. G. (1957/1977). The symbolic life: Miscellaneous writings. CW18.
Jung, C. G. (1984). Dream Analysis, Volume I: Seminars. Princeton University Press.
Kerényi, K. (1946/1963). Prometheus: Archetypal image of human existence. Pantheon.
Kimball, S. (2024, May 5). AI could drive a natural gas boom as power companies face surging electricity demand. CNBC.
Kurzweil, R. (2005). The singularity is near. Viking Penguin.
Mitchell M. (2023). How do we know how smart AI systems are? Science, 381(6654).
Myers, S. (2019). Myers-Briggs typology vs Jungian individuation: Overcoming one-sidedness in self and society. Routledge.
Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press.
Neville, B. (1996). Prometheus, the technologist. International Journal of Transpersonal Studies, 15 (2), 12–23. https://digitalcommons.ciis.edu/ijts-transpersonalstudies/vol15/iss2/3
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113-126.
Nicoletti, L., Ma, M., & Bass, D. (2025, May 8). AI is draining water from areas that need it most. Bloomberg. https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/?embedded-checkout=true
O’Donnell, J. (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review.
Proudfoot, D. (2011). Anthropomorphism and AI: Turingʼs much misunderstood imitation game. Artificial Intelligence, 175(5–6), 950–957.
Ramakrishna, & Nikhilananda. (2015). The gospel of Sri Ramakrishna. Ramakrishna-Vivekananda Center of New York.
Schumpeter, J. A. (2008). Capitalism, socialism, and democracy. Harper Collins.
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology.
Stade, E. C., Stirman, S. W., Ungar, L. H., Boland, C. L., Schwartz, H. A., Yaden, D. B., Sedoc, J., DeRubeis, R, J., Willer, R., & Eichstaedt, J. C. (2024, April 2). Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. Mental Health Research.
Stover, D. (2024, December 19). AI goes nuclear. Bulletin of the Atomic Scientists.
TED. (2024, June 27). Ray Kurzweil: The last 6 decades of AI — and what comes next [Video]. YouTube.
Yudkowsky, E. (2023). Pausing AI developments isn’t enough. We need to shut it all down. Time.
Zarkadakis, G. (2024). The goddess and her icon: Body and mind in the era of artificial intelligence. AI & Society, 39(1), 87–89.
Zhang, K., Yang, X., Wang, Y., Yu, Y., Huang, N., Li, G., Li, X., Wu, J. C., & Yang, S. (2025). Artificial intelligence in drug development. Nature Medicine, 31(1), 45–59.
Zomorodi, M. (2025, July 11). It’s only a matter of time before your mind merges with AI, says Ray Kurzweil. Retrieved from https://www.npr.org/2025/07/11/g-s1177-76551/its-only-a-matter-of-time-before-your-mind-merges-with-ai-says-ray-kurzweil
Images
Balla, G. (1913). Abstract speed.
Evans, M. (1959). A Dream, Prophets in the Air.
Kahlo, F. (1944). The broken column.
Mallo, M. (1936). Surprise of the wheat.
Nevinson, C. (1915). Pursuing a taube.
Tellez, A. (n.d.) Untitled.
–
Images courtesy of wikiart.com
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
Steve,
Thank you for the comments. First of all, my apologies for misspelling your name.
My use of Jung’s Spitteler-based Prometheus-Epimetheus comparison came from a desire to examine both sides of a representative archetype for AI. I was naturally drawn to the mythic Prometheus and tech hubris, but I felt I needed a place for AI-optimists as well as AI-Cassandras, and I found this in Jung’s commentary on Spitteler’s poem. Jung himself recognized that he may have read more into the Prometheus-Epimetheus axis than was there. In Memories, Dreams, Reflections, he humorously points out that he heard that Spitteler delivered a lecture in which he stated that his Prometheus and Epimetheus meant nothing, and that he might as well have sung “Spring is come, tra-la-la.” It’s possible I too have followed Jung’s lead in this regard.
With respect to the dream analysis, I couldn’t agree more that we should let the unconscious speak. I did several versions of the dream analysis with ChatGPT without the Beebe overlay. My interest in the Beebe model was to see how these agents respond from a type point of view – perhaps a result of my Ne orientation. As a Jungian-oriented therapist, my goal is to help clients make appropriate use of AI agents (which they will use anyway!), to help them get the best out of the technology while respecting the voice of their psyches.
I would love to continue this discussion. There’s much more that we can explore with AI from a Jungian perspective. Thank you!
I found much to appreciate in this piece — particularly the reminder that our fascination with AI has a shadow side as well as a creative one. Many of the cultural and ethical questions raised here seem deeply valid and worth continuing to explore.
At the same time, there were a couple of points that didn’t sit easily with me. The first was the simultaneous use of two quite different characters whose only commonality is the same name. The Prometheus of Greek myth embodies defiance and technological hubris, whereas Spitteler’s (unrelated) Prometheus represents an introverted turn towards soul and away from the collective world. Each image carries a very different psychological implication for how we understand or use AI. When they’re blended together, we lose much of the power and nuance in both. Spitteler’s Prometheus might indeed have rejected AI in just the same way he rejected social conscience.
Also, the use of Beebe’s model to frame the dream interpretation is certainly a clever demonstration of the eight-function model’s explanatory power. However, from a Jungian standpoint it inverts the direction of inquiry. Rather than letting the unconscious speak first, it begins with an imposition from consciousness — placing the dream inside a typological framework and thereby prescribing its meaning, even when there may be an unrelated, deeper one. Just as Jung suggested analysts should leave their well-learned theories outside the door of the therapy room, so too, for AI to assist in dream interpretation, we need to leave our conscious preconceptions about the dream’s meaning outside the question.
PS. I also noticed that my name is incorrect in the reference list.