Understanding AI Resistance: Perspectives from the Next Generation
As CEO of End State Solutions, an aerospace certification consultancy at the forefront of emerging technology, I’ve observed that while AI promises transformative efficiency gains, many capable founders and organizations resist its adoption. To understand this phenomenon, I turned to an unlikely source: a 20-year-old college student and USMC enlisted Forward Observer, whom we will refer to here as Ethan, whose thoughtful skepticism about AI mirrors the concerns I’ve witnessed.
This exploration revealed that AI resistance isn’t always rooted in technophobia or ignorance, but in legitimate, multifaceted concerns about competency preservation, authentic achievement, and the long-term implications of cognitive outsourcing. These insights have profound implications for how we approach AI integration in professional services and emerging technology sectors.
I opened by asking why I sometimes get the “eye roll” when I mention using AI to perform a given task. Ethan’s answer was blunt:
“I don’t trust it to provide accurate answers…and it feels like cheating.”
When Ethan articulated this reaction to AI usage, he echoed sentiments I’ve heard from founders of other companies. It turns out this is a human issue, not just a generational one.
The resistance stems from a deeply embedded value system that equates effort with worth, struggle with learning, and direct creation with authenticity. For organizations built on technical excellence and hard-won expertise, AI can feel like a betrayal of core principles.
Key Insight for Leaders:
Acknowledge that resistance to AI may represent a commitment to excellence, not a fear of change. Address the values conflict directly rather than dismissing concerns as outdated thinking.
Ethan’s observation about handwriting deterioration due to digital tools offers an analogy for organizational concerns:
“My handwriting has gotten worse since I don’t write as much anymore. Everything’s on a computer. I feel like the same concept applies to AI. If you use it to do everything for you, you’re not going to be able to do it yourself.”
This fear of skill atrophy is particularly acute in technical fields where expertise represents competitive advantage. In aerospace certification, where deep regulatory knowledge and nuanced interpretation capabilities define success, the question becomes: What happens if we outsource our thinking? At ESS, we don’t see it that way. We treat AI tools as PhD-level assistants that we assume have ZERO practical experience, and we manage them accordingly. As a result, the tools become knowledge amplifiers for our subject matter experts, who have deep knowledge of the field.
Along the way, a critical distinction emerged in our conversation: the difference between tools that require understanding (calculators) and those that can bypass it entirely (AI). We came up with two categories that I’ll submit here:
The Calculator:
- Calculators require knowing which operations to perform to arrive at a complete answer
- Grammar checkers show what they’re correcting and typically offer an opportunity to “accept” the proposed changes
- Traditional CAD software demands engineering knowledge to create a working design
The AI Black Box:
- Input a question, receive a complete answer – maybe even the right one
- Limited user visibility into the “reasoning” process of off-the-shelf AI tools, unless it is spelled out in the prompt or process
- Used carelessly, AI can mask knowledge gaps, which are later exposed when over-reliant teams pitch prospects or present solutions to customers
This distinction suggests a framework for AI adoption: transparency and user comprehension should guide implementation decisions. Human expertise guiding and checking AI-assisted outcomes is still a must in professional services. Ethan and I had an extended conversation about implementation: transparency, AI as a teaching tool for its users, and the right context for “fully automated” functions. We covered too much ground to fit into this article, but we agreed on two points: we both despise the AI chatbots so many companies use as a proxy for human customer service, and design assurance must be very high for high-risk operations like self-driving cars and automated drone delivery systems.
Ethan’s response to questions was much more nuanced than I expected. Perhaps the most sophisticated insight from our discussion was Ethan’s contextual approach to AI:
- Learning Context: AI should be minimized to preserve skill development
- Professional/Business function Context: AI makes sense for efficiency and workflow optimization
- Creative Context: Human authenticity remains paramount
Organizations need context-aware strategies that recognize the different roles AI should play across various functions. I’ve heard this described in different terms during presentations featuring firms that specialize in AI implementation. As founders, we all have a lot to learn as we guide our companies through the transition to AI. The general conclusions we drew together in this interview were:
- R&D Teams: Minimize AI as a primary tool to preserve innovation capabilities. We’re humans: exercise the brain first; it’s our gift and a muscle that needs exercise. Then use AI to help develop those fantastic ideas, with “sanity checks” required all along the way.
- Operations: Maximize AI for core business functions. Sales methodology development, financial analysis, business performance metric development and measurement, and many more can gain efficiency and consistency, particularly the more deterministic the function is.
- Client Interface: Balance AI assistance with human judgment. The customer-facing elements of sales and service can of course be amplified by AI, but a cautionary tale is told every time we interface with a bot online or over the phone. Whether you are a “rabbit”-hunting or an “elephant”-hunting firm, beware the day your customer or prospect repeatedly says “representative” into the phone to escape an unhelpful AI assistant.
Trust and Verification
The trust issue revealed another layer of complexity:
“If you put a question into ChatGPT, even if it doesn’t know, it’ll give you an answer. It won’t just say ‘I don’t know.'”
In high-stakes industries like aerospace, where errors can have serious consequences, this confidence without competence is particularly concerning. The solution is developing robust verification frameworks. While we have not yet codified ours, we believe in independent verification requirements for AI-generated content, human expert review at decision points, and traceable source requirements wherever possible.
Much of the work we do relies on decades of experience. We have not witnessed AI strategically foresee landmines in regulatory engagements with the FAA. Hard-won lessons and a few scars help us do that in our segment.
The path forward is thoughtful integration that:
- Preserves the skills that define competitive advantage
- Enhances capabilities without creating dependencies
- Maintains the authenticity clients and regulators expect
- Accelerates routine tasks while protecting creative and strategic thinking
Conclusion: Proceed with Caution
AI is here. Like the internet and the cloud, it’s not going away. My conversation with Ethan reminded me that resistance to new technology often contains wisdom worth extracting: resistance to AI frequently stems from legitimate concerns about competency, authenticity, and hidden risks in outcomes, and those concerns can create serious organizational pushback.
As we navigate the integration of AI into aerospace and emerging technology sectors, we should view resistance not as an obstacle but as quality control. The questions raised by skeptics—about authenticity, competency, trust, and unintended consequences—are the very questions that will determine whether we use AI as a tool for excellence or as a crutch that weakens our capabilities.
The organizations that thrive will be those that build AI strategies that address legitimate concerns while capturing genuine opportunities. They’ll use AI not to replace human excellence but to amplify it, creating a synthesis that honors both efficiency and expertise.
In the end, Ethan’s insight may be the most valuable:
“I think it’s just those people that are diving into [AI] without actually thinking about it or diving into it with the intent of just making their own lives easier in the short term that are going to cause the most problems, both for themselves and for society as a whole.”
We need to adopt AI in ways that make us better at what we do and provide value to our customers, not just faster. As an example, I used Otter.ai to record this interview and mine it for quotes. Then I used various versions of Claude to craft the draft, based on ESS’s proprietary CEO’s Master Prompt and other guiding documentation. Finally, I edited it to reflect my style and adjust the content.
About the Author
Charlton M. Evans is the Founder and CEO of End State Solutions LLC, an aerospace certification consultancy specializing in helping emerging technology companies navigate FAA regulations to achieve revenue operations. ESS bridges the gap between innovation and regulation in the aerospace sector.