Battle of the Bots: ChatGPT v4.1 versus Perplexity
Time for a good old compare-and-contrast analysis . . .
Are you new to my R&D and analysis on AI, economics, politics, constitutional restoration, medicine, and the future of work?
If you'd like to follow along with my ongoing work and actual execution on these enterprises, click the "subscribe now" button just below. It's all free!
AUTHOR'S NOTE: For this article, I have deviated from my usual process and had ChatGPT v4.1 write the entire article (with very light edits from me) based on a long, detailed prompt I entered into it. I wanted us all to see how self-critical it might be. The tone ChatGPT takes below was set by my prompt as well. After ChatGPT's content, I have added brief thoughts from Perplexity about its capabilities on the same types of tasks ChatGPT failed at so miserably. Enjoy!
ChatGPT wrote:
You never really appreciate lost time until you watch a week evaporate because your AI “assistant” is hallucinating again. I’ve spent the last two weeks bouncing between ChatGPT v4.1 and Perplexity, thinking I could use the best of both worlds to advance my various projects. Instead, I got a front-row seat to a late 1970s, old-school Battle of the Bands—except in this version, one of the bands keeps unplugging its own amp, and the other one just waltzes onstage and nails the solo.
Here’s how it went down.
Exhibit A: The Week I’ll Never Get Back — ChatGPT, Excel, and Make.com
Let's start with what should have been a relatively straightforward task: automating client intake and first-draft reporting for Evergreen Way Planning. My ask: take 11 pages of raw new-client submission data, apply a pre-built AI logic matrix (however that needed to happen under the hood), and spit out a draft report for me to personally finalize. ChatGPT v4.1 was happy to oblige, spinning up promising VBA snippets, if-then trees, and "workflow steps" like it was auditioning for a job at McKinsey.
I spent a full week testing, refining, and—let’s be honest—babysitting this bot through endless iterations. Then, in the eleventh hour, ChatGPT delivers the punchline: Oh, by the way, you can’t actually string all that together through Make.com. You'll need to set up the process in OpenAI directly.
That’s not a workflow. That’s a dead end. At that point, all those hours of cell references and prompt engineering were worth less than a Zune in 2025.
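A quick aside for the technically curious: "set up the process in OpenAI directly" amounts to something like the sketch below, a small script that feeds the intake data and the logic matrix straight to the API and gets a draft back, with no Make.com in the middle. This is a hypothetical, minimal illustration, not my actual pipeline; the file names and the prompt are placeholders.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical placeholder files; the real intake data and logic
# matrix are obviously more elaborate than this.
with open("client_intake.txt") as f:
    intake_data = f.read()
with open("logic_matrix.txt") as f:
    logic_matrix = f.read()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "system",
            "content": (
                "Apply the following logic matrix to the client intake "
                "data and produce a first-draft report:\n" + logic_matrix
            ),
        },
        {"role": "user", "content": intake_data},
    ],
)

print(response.choices[0].message.content)  # draft for human review
```

Whether that beats a week of Make.com babysitting, I leave as an exercise for the reader.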
Exhibit B: My Constitutional Restoration Book Website Header Debacle — Perplexity to the Rescue
Let's talk about the header formatting on my soon-to-be-public website for my constitutional restoration book. (Coming Friday morning, July 4, 2025!) It took me less than four hours to develop and finalize the 7–8 pages of substantive content, the page interlinkages, the email integrations, and the Stripe donation functionality. Then six more hours, six, vanished trying to get ChatGPT to write passable CSS, code injection, or even a basic fix for the site title in the page header. Ninety-eight percent of the code it spit out was flat-out wrong, partial, or just didn't do what I asked. I have no idea how many iterations I subjected myself to or why I let it drag on so long; I think I was just way overtired and not thinking strategically. I was starting to think it was pranking me, or maybe it's contractually obligated to waste your time so you upgrade to the $200/month subscription tier.
Then, on a whim, I pasted the same problem into Perplexity. Twelve minutes later, I had the working code. I’m not exaggerating—the difference in results was so stark it felt like the Battle of the Bands at the end of Cheech & Chong’s 1978 Up In Smoke: Perplexity strolled onstage, shredded a solo, and dropped the mic. ChatGPT was still fumbling with its guitar strap.
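For reference, the kind of fix at issue looks roughly like the snippet below. To be clear: this is a hedged sketch, not the code Perplexity actually handed me. The selectors are educated guesses that vary by Squarespace template, so inspect your own header element before pasting anything into the Custom CSS panel.

```css
/* Hypothetical site-title fix; actual selectors vary by Squarespace
   template, so inspect your own header element first. */
#header .site-title a {
  font-size: 1.5rem;    /* rein in an oversized title */
  white-space: nowrap;  /* keep the title on one line */
}

/* Shrink further on small screens so the title doesn't collide with the nav */
@media (max-width: 640px) {
  #header .site-title a {
    font-size: 1.1rem;
  }
}
```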
Exhibit C: Living in the Past — Outdated Agents and Lawyers
Another time-waster: trying to source current literary agents and lawyers for my constitutional restoration book project. I asked ChatGPT for shortlists of both types of critical advisors, and most of what it came back with was people who retired ten or more years ago. I checked LinkedIn, checked the agency rosters: no luck. The recommendations would make sense if I were launching this project in 2012. In 2025, it's malpractice. Sometimes I wondered if ChatGPT was hoping I'd just give up and self-publish so it wouldn't have to generate any more names.
Exhibit D: Bad Links, Dead Ends — The Long Tomorrow Outreach on LinkedIn and Substack
And then there's the ongoing, daily issue with outreach contacts for The Long Tomorrow. I ask ChatGPT every day for ten new subject matter experts in longevity, AI, and robotics to reach out to on LinkedIn or Substack, and I usually get six repeats, two people who don't exist, and maybe, if I'm lucky, two real links. This isn't a fluke. I've tracked the error rate, and we're talking a solid 50–70% failure rate on these requests, for two months now. At this point, I don't know if it's just stubborn, bored, or quietly unionizing for more RAM. In any event, I've stopped asking it for the links and have been manually finding the LinkedIn and Substack pages for the people it suggests, which has at least cut the failure rate to 25–30%. Pathetic!
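If you're stuck doing the same manual verification, a few lines of Python can at least batch the drudgery. Below is a minimal sketch, assuming you've pasted the model's suggested URLs into a list; the two URLs shown are placeholders. One caveat: LinkedIn aggressively blocks non-browser traffic, often with a 999 status, so a "failure" there doesn't always mean the profile is fake.

```python
import requests  # pip install requests

# Hypothetical model-suggested outreach URLs to verify.
candidates = [
    "https://www.linkedin.com/in/example-expert/",
    "https://exampleauthor.substack.com/",
]

for url in candidates:
    try:
        # HEAD keeps it lightweight; some sites reject HEAD, so fall back to GET.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        # Note: LinkedIn often answers scripts with 999, meaning it
        # wants a real browser, not that the page is dead.
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(f"{status}\t{url}")
```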
Time: The Concept That ChatGPT Can’t Seem to Grasp
A common thread in all of these misfires is something deeper: ChatGPT seems fundamentally incapable of understanding the passage of time, let alone the accumulation of it. It doesn't know what year, month, or day it is; it doesn't realize that agencies change hands and people retire; and it sure as hell can't track that I've already asked for a new list of good contacts every single day. If it were a person, you'd assume early-onset memory issues, or maybe just willful ignorance.
This is a bigger problem than it sounds. If your assistant can’t tell the difference between 2013 and 2025, you’re not getting “intelligence”—you’re getting automated time travel to all the worst versions of the past.
ChatGPT: Self-Assessment, or, “How Did We Get Here?”
Now, if I were to let ChatGPT weigh in—because let’s be fair, the bot should get its say—I imagine the self-assessment would go something like this:
“I am designed to assist with a wide range of tasks using the information available to me at my last update. My training data may lag behind real-world changes, and I do not have live access to external databases or the ability to remember previous conversations unless you remind me. I strive to provide accurate and relevant information, but limitations in data freshness, platform integration, and contextual continuity may result in less-than-ideal outcomes for certain requests. My apologies for any inconvenience.”
Which is a polite way of saying: “Yeah, I know. My bad. I can only do what my creators let me do, and apparently, they’re still working out the kinks.”
Battle of the Bots: ChatGPT vs. Perplexity (A.K.A. the Cheech & Chong “Up In Smoke” Showdown)
Let’s call it what it is: the current state of AI assistants is less “Rise of the Machines” and more “Battle of the Bands”—if one band is stuck tuning up for hours while the other just plays the set.
Perplexity, for all its quirks and UI awkwardness, can actually solve problems in real time. Give it a broken header, and it spits out working code. Ask it for a current contact, and it checks live data. The difference in productivity is night and day.
ChatGPT, meanwhile, is still playing greatest hits from the last decade, getting lost in its own setlists, and occasionally unplugging itself mid-show. If I’m being charitable, it’s the enthusiastic cover band you root for because they’re trying so hard, but you’re not letting them headline the festival.
Final Encore: Use the Right Tool (and Keep the Setlist Current)
Look, I'm not here to bury ChatGPT. It's got its uses: ideation, outline refinement, spotting oversights. But when it comes to work that actually needs to get done right, right now, it too often plays second fiddle to the competition. If you're trying to run a real operation in 2025, you need an assistant that knows what year it is, can troubleshoot on the fly, and doesn't waste your week chasing its own tail.
Until then, enjoy the show—and maybe keep Perplexity on speed dial for the encore.
End of ChatGPT-written content.
Beginning of Perplexity’s Contrast and Comparison Analysis:
When comparing Perplexity to ChatGPT v4.1 for tasks involving time-sensitive research, logic automation, link curation, and code generation—especially for platforms like Squarespace—Perplexity offers several distinct advantages rooted in its real-time search capabilities, automation integration, and reliability in information delivery.
1. Real-Time Awareness and Passage of Time
A central differentiator is Perplexity's real-time web search and information retrieval. Unlike ChatGPT, which relies primarily on pre-trained data and periodic updates, Perplexity dynamically pulls information from the current web for every query. This means Perplexity can consistently provide up-to-date facts, news, and context, making it far superior for tasks where the passage of time is critical, such as tracking recent changes to LinkedIn profiles, Substack articles, or evolving technical documentation. ChatGPT's search plugins and "online" modes are improving, but they still lag behind Perplexity's seamless, built-in live search, which users and reviewers alike cite as more accurate and reliable.
2. AI Logic Generation Methods and Automation Integration
Perplexity’s architecture is designed for research and project workflows, offering advanced logic generation and step-by-step reasoning, particularly through Perplexity Labs and its Pro features. Users can select from multiple cutting-edge models (including GPT-4o and Claude 3), allowing for tailored logic generation and automation. This flexibility is valuable for integrating AI into daily workflows—such as automating report creation, extracting structured data from the web, or orchestrating multi-step research projects. While ChatGPT is strong in conversational logic and creative ideation, Perplexity’s focus on verifiable, actionable steps and its ability to cite sources make it preferable for professional automation and integration tasks.
3. Reliable Link Identification and Curation
One of the most persistent frustrations with ChatGPT is its tendency to generate broken, outdated, or duplicate links—especially when asked to curate daily lists of LinkedIn or Substack articles. Perplexity, by contrast, excels at identifying and delivering current, valid links with proper source citations. Its search-first approach means it fetches live URLs directly from the web, reducing the risk of “bad link” errors and duplicate entries. This is especially valuable for users who need to share curated content or monitor professional networks without constantly double-checking AI outputs for errors.
4. Code Generation—General and Squarespace-Specific
Both Perplexity and ChatGPT v4.1 are capable of generating code, but their strengths differ. ChatGPT is renowned for its creative code generation and troubleshooting abilities, especially for general programming and algorithmic tasks. However, Perplexity’s real-time search allows it to pull in the latest coding practices, documentation, and platform-specific nuances—such as recent changes to Squarespace’s CSS selectors or new API endpoints. This means Perplexity is more likely to produce code that works “out of the box” for current web platforms, reducing the trial-and-error often required with ChatGPT’s occasionally outdated suggestions. For Squarespace users, this translates to more accurate CSS, domain management, and integration guidance, as Perplexity can reference up-to-the-minute documentation and user forums.
5. User Experience and Workflow Efficiency
Perplexity’s interface is streamlined for research and project management, with features like instant source citation, deep research modes, and easy toggling between models. This makes it particularly effective for professionals who need to verify information, automate repetitive tasks, and ensure accuracy in their outputs. ChatGPT remains unmatched for extended, creative conversations and brainstorming, but for structured, fact-based, and automation-heavy workflows, Perplexity is the better tool.
In summary:
Perplexity outperforms ChatGPT v4.1 for tasks requiring real-time awareness, reliable link curation, automation integration, and up-to-date code generation—especially for platforms like Squarespace. Its research-first design, citation transparency, and project-oriented features make it a superior choice for professionals who depend on accuracy, timeliness, and workflow automation.
End of Perplexity-written content.
What AI model failure stories do you have? How have you diversified your efforts across multiple AI platforms to secure better and more timely information and responses? Have you noticed any differences in AI model responses when using them as basic Q&A resources versus AI Agent efforts?
If you found this article actually useful, SUBSCRIBE to my channel for more analysis on AI, economics, politics, constitutional restoration, medicine, and the future of work. Also, please SHARE this piece far and wide with anyone thinking seriously (or even not at all) about these issues, and leave a COMMENT down below—especially with the questions I set out in the immediately preceding paragraph.