An Interview With AI, About AI


In January, at CMS Kickoff, I gave a presentation about verifying truth in the age of AI. I even re-recorded the presentation, which you can check out below:
Before that, in December, I shared Agility CMS’s AI strategy for 2025 and beyond.
But what’s next for AI?
To find out, I went to an extremely knowledgeable source: AI itself via ChatGPT. The prompt I used was “what is next for AI?” Below are its answers with my thoughts.
1. Multimodal AI Becomes Mainstream
AI says:
Models like GPT-4 can already handle text, images, and basic audio. The next frontier is fully fluid multimodal AI – seamlessly mixing text, images, video, audio, and even 3D environments in real time.
I say:
One of my favourite Star Wars characters is Grand Admiral Thrawn, who studies the artwork and culture of his adversaries to develop military strategy. Moving beyond tactics helped Thrawn understand his opponents deeply.
Now, I’m NOT saying artificial intelligence is going to go down that road.
However, AI will observe, interpret, and respond to text, visuals, sounds, and environmental inputs simultaneously. The future of AI isn't about processing data. It's about perceiving context.
Imagine coming home after a rotten day. A multimodal AI-powered smart home system can recognize your mood via facial expression recognition (visual input), the tone of your voice (audio input), and your slumped shoulders (3D input). Instantly, it could:
- Light the fireplace.
- Play calming music.
- Adjust lighting and temperature.
- Preheat the oven for your go-to comfort food – say, a frozen Hawaiian pizza.
All without needing a single command or input. It’s AI that truly “gets” you.
Now, whether I actually WANT an AI that totally gets me or not… that’s a different story.
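To make the smart-home scenario above concrete, here's a minimal sketch of how a multimodal system might fuse those three inputs into a mood and then into actions. Everything here – the class, the signal labels, the action strings – is invented for illustration; a real system would plug actual vision, audio, and 3D-sensing models into these stubs.

```python
# Toy sketch of multimodal fusion: three sensing modalities vote on a
# mood, and the mood maps to home actions. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Signals:
    facial_expression: str  # stand-in for a camera-based vision model
    voice_tone: str         # stand-in for an audio classifier
    posture: str            # stand-in for a depth/3D sensor

def infer_mood(signals: Signals) -> str:
    """Fuse the three modalities into a single mood estimate."""
    negative = [
        signals.facial_expression == "frown",
        signals.voice_tone == "flat",
        signals.posture == "slumped",
    ]
    # Simple majority vote across modalities.
    return "rotten day" if sum(negative) >= 2 else "fine"

def respond(mood: str) -> list[str]:
    """Map the inferred mood to home actions, no command required."""
    if mood == "rotten day":
        return [
            "light the fireplace",
            "play calming music",
            "adjust lighting and temperature",
            "preheat oven for frozen Hawaiian pizza",
        ]
    return []

actions = respond(infer_mood(Signals("frown", "flat", "slumped")))
```

The interesting design point is the fusion step: no single modality decides anything, which is exactly what makes this "perceiving context" rather than "processing data."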
2. Agentic AI (Autonomous Agents)
AI says:
Instead of just answering prompts, AI will take actions on your behalf: booking flights, managing projects, building websites, negotiating contracts. Systems like OpenAI’s AutoGPT experiments hint at this future.
I say:
Doesn’t AI already help book flights and manage schedules? Yes, it does.
But the real leap forward is doing so with autonomy.
Today’s AI awaits instructions. It does nothing until you tell it to. Tomorrow’s AI will take initiative by adapting to changes, making decisions, and executing complex workflows with minimal human input.
Pretend you’re booked on a 10am flight to a conference the next morning. Late that night, your child cracks a tooth, and the only available dentist appointment is, you guessed it, tomorrow at 10am. Once you’ve added the new dental appointment to your calendar, AI, on its own:
- Cancels and rebooks your flight.
- Updates your hotel reservation and check-in time.
- Notifies any activities impacted by the change.
- Emails anyone expecting to meet you at your destination.
No frantic phone calls. No last-minute stress. No scrambling. Just a system that understands your needs based on your life and has the smarts to oversee the details before you even ask.
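The travel scenario above is, at its core, an event-driven workflow: one calendar change triggers a chain of autonomous actions. Here's a hedged sketch of that chain. The event shapes, trip data, and action strings are all invented for illustration; a real agent would call actual airline, hotel, and email APIs.

```python
# Hypothetical agentic workflow: a conflicting calendar event triggers
# a cascade of actions with no human instruction in between.

def on_calendar_change(new_event: dict, trip: dict) -> list[str]:
    """React to a new appointment that conflicts with a booked trip."""
    actions = []
    if new_event["time"] == trip["flight_time"]:
        # 1. Move the flight past the new appointment.
        actions.append(f"rebook flight after {new_event['time']}")
        # 2. Push the hotel check-in accordingly.
        actions.append("update hotel reservation and check-in time")
        # 3. Tell everyone affected.
        actions.append("notify impacted activities")
        actions.append("email contacts expecting you on arrival")
    return actions

dentist = {"title": "dentist", "time": "10:00"}
trip = {"flight_time": "10:00", "hotel": "Conference Inn"}
plan = on_calendar_change(dentist, trip)
```

The leap from today's AI to agentic AI is visible in where this function is called from: not by you, but by a listener watching your calendar.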
3. Regulation and Ethics Battles
AI says:
Expect big fights over AI governance: copyright, privacy, bias, misinformation, and labour displacement. Governments are moving from “watch and wait” to “impose and control.”
I say:
There’s a classic Star Trek episode called The Changeling where the crew encounters Nomad. It’s a space probe that gained sentience and began destroying anything it deemed imperfect. Captain Kirk defeats it by exposing its own flaws. Nomad self-destructs when it realizes it is not perfect.
With that in mind, consider these questions:
- Can AI regulate and monitor AI?
- Can AI flag its own mistakes?
- Can AI stop AI from eliminating jobs?
- Can AI recognize and correct its own biases?
Should AI perform these tasks? Can it be trusted to?
Like Nomad, advanced AI could eventually view itself as beyond oversight. Assured of its outputs. Blind to its flaws. That’s why human governance – ethical, legal, technical – is essential, and countries are rapidly developing laws and policies to manage AI’s growth.
Contrast this with cyberlaw. Most major Internet regulations weren’t enacted until years after the Internet had reshaped society. With AI, lawmakers are moving much faster, placing guardrails before the technology outruns human control.
After all, if we leave it up to AI itself, we might discover that, unlike Nomad, it won’t conveniently self-destruct when things go bad.
4. Smaller, Personalized AI Models
AI says:
Rather than just relying on huge, centralized AIs, people and businesses will increasingly run small, highly specialized models tailored to their needs – sometimes on their own devices.
I say:
Today’s AI models draw on a global pool of knowledge, fielding over 1 billion queries each day. That breadth is powerful, but not always helpful. A child asking whether crabs have eyebrows (they don’t) doesn’t help me debug a complex coding problem.
Instead of scaling up, AI will scale down in two ways:
- Specialization: Fine-tuning AI for specific purposes, such as managing calendars, handling customer-support cases, or guiding users step by step through tasks.
- Local Usage: These models can run on local machines – phones, laptops, or private servers – eliminating the need for an Internet connection or for sending data to the cloud.
The obvious benefits are privacy and security: data submitted to AI – or generated by it – stays with the user on their devices. It also reduces costs, as there is no need to pay cloud providers (some estimates suggest AI usage drives up cloud costs by 30%).
AI is a jack of all trades and relatively smart at all of them. Yet, the future of AI will move away from a one-size-fits-all approach to an almost infinite number of small, specialized, and customized solutions that are locally hosted and cost-efficient.
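One way to picture the "many small specialists instead of one giant generalist" future is as a router: queries go to a small, local, fine-tuned handler when one fits, and escalate to a general model only when none does. The sketch below uses trivial keyword matching and stub handlers purely for illustration; the function names and routing table are invented, and real on-device models would replace the stubs.

```python
# Toy router for the "scaling down" idea: local specialists first,
# general (possibly cloud) model as a fallback. All names hypothetical.

def calendar_model(query: str) -> str:
    # Stand-in for a small on-device model fine-tuned on scheduling.
    return "handled locally: calendar"

def support_model(query: str) -> str:
    # Stand-in for a model fine-tuned on customer-support cases.
    return "handled locally: support"

SPECIALISTS = {
    "meeting": calendar_model,
    "schedule": calendar_model,
    "refund": support_model,
}

def route(query: str) -> str:
    """Pick a local specialist by keyword; nothing leaves the device."""
    for keyword, model in SPECIALISTS.items():
        if keyword in query.lower():
            return model(query)
    # No specialist fits: fall back to a general model.
    return "escalated to general model"

answer = route("Move my 2pm meeting to Friday")
```

The privacy and cost benefits described above fall out of the structure: the specialist path never touches the network, and only the fallback would incur cloud charges.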
What Does AI Have to Say About All This?
I cut and pasted every word prior to this section into ChatGPT and asked the following: “Provide a summary about what you think the article is about and what conclusion you come to.”
AI says:
The conclusion it reaches is that AI is transitioning from a tool that processes data to one that perceives and acts within complex human contexts. But as its capabilities expand, so does the urgency for thoughtful, proactive human governance to ensure this progress remains aligned with ethical and social needs.
I say:
That’s about right.
As AI speedruns its evolution and adoption, it’s worth applying the lessons we learned from the Internet’s growth – and its mistakes – to AI:
- No early regulation led to exploitation.
- Inequality, as not everyone had Internet access.
- The rise of misinformation.
- Economic disruption – the fall of Blockbuster being an obvious example.
With AI, we can’t afford to play catch-up. It’s moving faster than the Internet ever did and if we want it to serve rather than surprise – to understand rather than undermine – we must continue to innovate with foresight and responsibility.
What we need to decide is what boundaries, if any, we are going to place around AI and its capabilities. We’re spending a lot of time figuring out how it can do more cool stuff, and not much time wondering whether it should.

About the Author
Joel is CTO at Agility. His first job, though, is as a father to 2 amazing humans.
Joining Agility in 2005, he has over 20 years of experience in software development and product management. He embraced cloud technology as a groundbreaking concept over a decade ago, and he continues to help customers adopt new technology with hybrid frameworks and the Jamstack. He holds a degree from The University of Guelph in English and Computer Science. He's led Agility CMS to many awards and accolades during his tenure such as being named the Best Cloud CMS by CMS Critic, as a leader on G2.com for Headless CMS, and a leader in Customer Experience on Gartner Peer Insights.
As CTO, Joel oversees the Product team, as well as working closely with the Growth and Customer Success teams. When he's not kicking butt with Agility, Joel coaches high-school football and directs musical theatre. Learn more about Joel HERE.