July 25, 2025

Chris Surdak of CA Explores What AI Can’t Do (Yet): The Limits of Machine Intelligence

Artificial intelligence has become a defining force of the modern technological era. From powering chatbots and generating artwork to assisting in scientific research and analyzing massive datasets, AI appears to be advancing at breakneck speed. Headlines regularly proclaim that AI will soon revolutionize everything from healthcare to education, business to the arts. But amid the fanfare, a more measured perspective is crucial. Despite its remarkable capabilities, today’s AI still grapples with fundamental limitations—some of which may prove incredibly difficult to overcome.

Chris Surdak of CA explores what AI can’t do—at least, not yet—and why those limitations matter. We’ll examine the current boundaries of machine intelligence, highlighting areas like common sense reasoning, emotional intelligence, context retention, and long-term planning where human cognition continues to outpace algorithms.

  1. Common Sense: The Elusive Core of Human Understanding

Perhaps the most jarring deficiency in current AI systems is their lack of common sense—the implicit, often unspoken background knowledge that humans acquire over time. For example, if you drop a glass, it will likely break; if you put ice in a hot pan, it will melt. Christopher Surdak of CA notes that these truths seem self-evident to us because they’re grounded in a lifetime of embodied experience and sensory interaction with the world.

Current AI models, even the most sophisticated large language models (LLMs), do not possess this kind of intuitive grasp. While they may be trained on vast datasets containing these facts, they do not understand them in the human sense. They rely on statistical associations, not causal comprehension. Chris Surdak of CA explains that this leads to bizarre errors—for instance, confidently asserting that a person could survive underwater without breathing for an hour, or misunderstanding the spatial relationships between objects in simple scenarios.
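To make the distinction concrete, here is a minimal, hypothetical sketch in Python: a toy bigram model that "learns" only which word tends to follow which in its training text. The tiny corpus and the helper names are invented for illustration. The model can parrot an association it has seen, but it has no causal model to fall back on when asked about anything outside that text.

```python
from collections import Counter, defaultdict

# A toy bigram model: it "learns" only which word tends to follow which,
# a pure statistical association with no model of cause and effect.
corpus = "ice melts in a hot pan . glass breaks when dropped .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("ice"))    # "melts" -- learned co-occurrence, not physics
print(predict("water"))  # None -- nothing seen, so no causal fallback
```

Real language models are vastly more capable than this toy, but the underlying mechanism, learning patterns of co-occurrence rather than causes, is the same in kind.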

Efforts like OpenAI’s use of reinforcement learning from human feedback, or hybrid neuro-symbolic frameworks, try to mitigate this shortfall, but common sense remains an unsolved problem in AI. Until machines can model the world in a way that reflects physical and social realities, they will continue to make baffling, sometimes dangerous errors.

  2. Emotional Intelligence and Empathy: Still Deeply Human

AI can now mimic emotional expression fairly convincingly. A chatbot can respond with comforting language or express sympathy. Generative models can write poetry that evokes feelings. But these are simulations—emotions without affect, empathy without awareness.

True emotional intelligence involves recognizing complex emotional cues, adjusting behavior dynamically, and forming relationships based on trust and mutual understanding. This requires a deep interplay of memory, context, intention, and lived experience—all of which AI lacks. AI can recognize the pattern of grief, but it cannot feel grief. It can detect sentiment in text, but it cannot relate to the speaker in a human way.

Chris Surdak explains that this limitation is especially critical in fields like mental health care, education, and leadership, where emotional attunement is not just helpful—it is essential. Human practitioners bring nuance and moral sensitivity that no algorithm can replicate. Relying too heavily on AI in such roles risks undermining the fundamentally relational nature of these domains.

  3. Context Retention: A Leaky Memory

Context is everything when it comes to meaningful communication. Humans remember what was said ten minutes ago, what was implied yesterday, and what remains unsaid but important. AI systems, especially language models, struggle profoundly with this.

While newer models are improving at handling longer inputs, they still have a limited “context window”—the span of text they can consider at any given moment. Once that window is exceeded, earlier parts of the conversation are “forgotten,” leading to incoherence or contradictions. Even in current multimodal systems, the ability to integrate and sustain context over time is rudimentary.
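A rough illustration of how that forgetting happens follows. This is a hypothetical sketch: real systems count tokens with a proper tokenizer, whereas here token counts are approximated by splitting on whitespace, and the MAX_TOKENS budget is invented.

```python
# A minimal sketch of why context "leaks": chat turns outside a fixed
# token budget are simply dropped before the model ever sees them.
# Token counts are approximated by whitespace splitting for illustration.
MAX_TOKENS = 50  # hypothetical context-window budget

def truncate_history(turns):
    """Keep the most recent turns that fit within MAX_TOKENS."""
    kept, used = [], 0
    for turn in reversed(turns):    # newest turns are kept first
        cost = len(turn.split())
        if used + cost > MAX_TOKENS:
            break                   # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [f"turn {i}: " + "word " * 10 for i in range(12)]
print(truncate_history(history))    # only the last few turns survive
```

Anything that falls outside the budget is not summarized or archived; from the model’s perspective, it simply never happened.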

Christopher Surdak of CA explains that this has implications for any application that demands consistency across time: therapeutic conversations, legal reasoning, long-form writing, or software development. Humans operate with long-term memory, narrative coherence, and awareness of evolving relationships. AI, by contrast, has no internal memory in the way humans do. It doesn’t “know” what it just said—it only predicts what should come next based on token probabilities.
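That last point can be shown in miniature. In the sketch below, the vocabulary and the logits (per-token scores) are invented, but the mechanism is the standard one: scores become probabilities via softmax, and the "next word" is simply a draw from that distribution.

```python
import math, random

# A sketch of next-token prediction: the model produces one score (logit)
# per vocabulary entry, and its "answer" is a draw from the resulting
# probability distribution. The vocabulary and logits here are invented.
vocab  = ["break", "bounce", "melt", "float"]
logits = [2.0, 0.1, -1.0, -2.0]  # hypothetical scores for the next token

exps  = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]  # softmax: scores -> probabilities

next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, (round(p, 3) for p in probs))), "->", next_token)
```

Nothing in this loop remembers the previous draw. Whatever continuity a conversation appears to have lives entirely in the text fed back in, not in the model itself.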

  4. Long-Term Planning and Strategy: More Haste Than Vision

AI systems can excel at narrowly defined tasks like playing chess or optimizing delivery routes. But when it comes to strategic, long-term planning, they fall short. This is because such planning often requires dealing with ambiguity, adapting to changing conditions, and understanding complex goals that evolve over time—none of which current AI can truly handle independently.
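The contrast is easiest to see with a narrowly defined task. The sketch below uses hypothetical stop coordinates and a simple greedy nearest-neighbor heuristic (a textbook approximation, not any production routing algorithm). It works precisely because the objective is fixed and fully specified, which is the opposite of open-ended strategy.

```python
import math

# A narrowly defined task: order delivery stops with a greedy
# nearest-neighbor heuristic. The objective is fixed and fully
# specified, which is exactly what strategic planning is not.
stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(points, start="depot"):
    route, remaining = [start], set(points) - {start}
    while remaining:
        here = points[route[-1]]
        nxt = min(remaining, key=lambda name: dist(here, points[name]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(greedy_route(stops))  # ['depot', 'A', 'C', 'B']
```

Ask the same program to decide whether the delivery business should exist in five years, and there is nothing to optimize: no fixed objective, no stable inputs, no single right answer.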

For example, building a successful business over five years requires not just efficient decision-making but also intuition, foresight, risk management, and the ability to balance conflicting interests. These are fundamentally human skills that draw from experience, ethics, cultural knowledge, and social dynamics.

While AI can support planners by analyzing data and identifying patterns, it cannot yet serve as a strategic partner capable of weighing moral trade-offs, resolving interpersonal conflicts, or imagining the future in creative, grounded ways.

  5. True Creativity: Beyond Pattern Recognition

AI can write poems, compose symphonies, and generate stunning visual art. But is this creativity, or mimicry? Most generative AI is trained on massive datasets of human-created content and excels at remixing existing ideas in novel ways. What it lacks is intentionality—the purposeful drive to express a unique vision or respond meaningfully to a cultural moment.

Human creativity is often shaped by struggle, contradiction, paradox, and emotional depth. It’s infused with perspective and values, not just patterns. Even the most dazzling AI-generated artwork is the product of algorithms predicting statistically probable outcomes—not of a soul wrestling with the human condition.

Chris Surdak of CA explains that this isn’t to diminish AI’s usefulness as a tool for creative exploration. But it’s a reminder that creativity involves more than output. It involves motive, meaning, and impact. And that remains uniquely human territory.

Why These Limits Matter

Understanding the limits of AI is not an exercise in pessimism—it’s a necessary counterbalance to uncritical techno-optimism. Overestimating AI’s capabilities risks inappropriate deployment, unrealistic expectations, and serious harm. For instance, using AI in high-stakes domains like law enforcement, hiring, or child welfare without acknowledging its blind spots can lead to systemic bias and injustice.

Chris Surdak of CA understands that respecting the current boundaries of machine intelligence reminds us of what makes human intelligence irreplaceable. Empathy, ethical reasoning, long-term wisdom, and creativity are not just gaps in current AI—they are cornerstones of what it means to be human.

The Road Ahead

AI is an extraordinary tool, and its evolution will undoubtedly reshape society. But it is not magic. It does not think, feel, or understand in the way we do. For all its strengths, it remains an extension of human ingenuity, not a replacement.

Recognizing what AI can’t do is not a dismissal of its power. Rather, it is a call for discernment—a reminder to use these technologies wisely, ethically, and with full awareness of their limitations. Chris Surdak of CA emphasizes that as we move into a future shaped by machines, it is our humanity—not our algorithms—that must guide the way.
