This episode examines why low-code everyday AI is the smarter, more sustainable choice for higher education—and why the old “build-it” mentality no longer works. IT organizations are under pressure to adopt everyday AI, but custom-built solutions are expensive, insecure, and impossible to maintain at scale. We break down the real costs of DIY AI, the security risks of homegrown models, and why higher ed IT has already moved away from custom development in other areas. Tune in to explore a practical everyday AI adoption strategy that balances innovation, cost containment, and long-term viability.
Olivia Carter
Good afternoon. So, Mark, I gotta ask—why are we talking about accelerating everyday AI adoption in higher ed right now? Like, what’s the urgency here?
Mark Putnam
Well, Olivia, it’s a fair question. Everyday AI, things like chatbots, content creation, even document processing—it’s already impacting higher ed. But here’s the thing: some IT organizations are still clinging to these old-school IT strategies, you know, like building everything from scratch.
Olivia Carter
Wait, seriously?
Mark Putnam
Yeah. They see it as cheaper or more flexible in the long run—
Olivia Carter
But it’s not, right?
Mark Putnam
Exactly. It’s—well, it’s short-sighted. The build-it approach might have worked for a student records system back in the ’80s, but for everyday AI? It’s too slow, too expensive, and not remotely sustainable.
Olivia Carter
Okay, but what about those IT teams who argue they’re saving costs by just, you know, using the talent they already have?
Mark Putnam
Ah, that’s the illusion. They think, “Our programmers are on payroll, so it’s kinda... free.” But what they miss is the cost of maintaining those systems—security patches, scalability upgrades, training new staff when someone leaves. And those costs snowball, especially with how fast everyday AI is advancing.
Olivia Carter
Yeah, and isn’t scalability a huge issue, too? Like, something that works for one department isn’t cutting it for an entire campus, right?
Mark Putnam
Absolutely. Most homegrown systems, they’re designed for specific use cases. So when, say, admissions wants to use that system, or student services, it breaks down. That’s why low-code solutions are a game-changer—they’re scalable across institutions without starting from scratch each time.
Olivia Carter
Alright, so scalability is huge, but what about security? Isn’t AI a total minefield when it comes to protecting student data?
Mark Putnam
It is, and that’s part of the risk of custom-built everyday AI. Universities just don’t have the resources to match the enterprise-level security that platforms like Google or Microsoft provide. With low-code solutions, you’re getting built-in safeguards, ongoing updates, and compliance with regulations like FERPA or HIPAA—it’s just not something universities can replicate internally.
Olivia Carter
Right. So the takeaway here is—it’s not just about cost or speed, but about long-term security and sustainability, too.
Mark Putnam
Exactly. Embracing low-code everyday AI isn’t just smart—it’s necessary if higher ed wants to keep pace without overextending its resources.
Olivia Carter
Okay, Mark, so if low-code everyday AI is such a game-changer, why do some institutions still believe building everyday AI from scratch is the way to go? Isn’t it clear how costly and unsustainable that can be?
Mark Putnam
You’d think so, but it’s a perception problem. Teams often look at it on the surface level, you know? They figure, “We already have the personnel, so why not just build it internally?”
Olivia Carter
But doesn’t that just mean the costs are technically... hidden?
Mark Putnam
Bingo. The salaries for those data scientists, programmers, and DevOps specialists? Those are real costs—
Olivia Carter
Plus benefits, training—
Mark Putnam
Exactly. And then there’s maintenance. People forget everyday AI isn’t a “set it and forget it” kind of tech. You’re doing constant patching, security reviews—oh, and compliance updates. It’s a never-ending cycle.
Olivia Carter
Okay, but what about scale? I’ve heard, like, so many stories about homegrown systems completely imploding when they try to expand across campuses.
Mark Putnam
That’s spot on. Most custom everyday AI tools are designed for one narrow use case. Let’s say a department builds a chatbot for financial aid—they think they’ve struck gold. But then admissions wants to use it, or student services, and everything starts breaking. Scaling these tools institution-wide? Almost impossible without rebuilding everything.
Olivia Carter
Wait, everything? Every time?
Mark Putnam
Yep, pretty much. That’s what makes commercial low-code solutions so attractive. They’re designed with scalability in mind. You get something that works across the board—no extra coding or overhauls required.
Olivia Carter
Okay, but say a university built a custom system two years ago, one they were so proud of—how’s that working out now?
Mark Putnam
Ah, the elephant in the room. Those systems? Most of them are already outdated. I mean, look at the progression of everyday AI. Tools like ChatGPT Enterprise or Google Gemini are advancing so fast, keeping up with them through custom development is practically impossible.
Olivia Carter
So, basically, they built something that now needs a full rebuild just to catch up?
Mark Putnam
Exactly. And that’s where the real cost creeps in. It’s not just about building—it’s about keeping it current. That’s why universities have to rethink their strategies. Agility matters more than pouring resources into systems that are stuck in time.
Olivia Carter
Mark, speaking of outdated and risky strategies, let’s get into security. Isn’t it kind of alarming that some universities are putting student data at risk by building these AI tools from scratch? How does that still seem like a good idea to them?
Mark Putnam
It’s a combination of speed and ambition, Olivia. Universities often rush to deploy custom everyday AI without proper oversight. They’re so focused on getting something up and running fast, they... well, they practically ignore security reviews or change management.
Olivia Carter
Hold up—no real security reviews? Like, none?
Mark Putnam
Rarely enough to catch everything. Custom everyday AI projects typically don’t go through the rigorous testing you’d see from, say, Google or Microsoft. It’s mostly internal teams working with limited resources—no dedicated security engineers, no extensive penetration testing.
Olivia Carter
Okay, but that sounds like it would break so many rules. Isn’t, like... I don’t know, FERPA compliance a huge deal for schools?
Mark Putnam
Absolutely, and that’s where the legal risks pile on. AI governance is evolving so quickly—what’s compliant today might not be tomorrow. But custom systems aren’t built to adapt to new regulations, which is a huge problem.
Olivia Carter
Geez, and it’s not just the legal part, right? Like, AI eats up a ton of data—what happens when that data isn’t secure?
Mark Putnam
That’s the core issue. Everyday AI models require massive amounts of data to function. If those datasets aren’t properly secured, you’re exposing everything—student records, administrative systems, even research data. And we’ve already seen examples of this going horribly wrong.
Olivia Carter
Oh, do tell.
Mark Putnam
Take a public school district that partnered with a startup to build a custom chatbot for students and parents. The system ended up leaking confidential student information because it wasn’t secured properly. The startup went bankrupt shortly after, leaving the district to deal with the fallout.
Olivia Carter
Yikes. And meanwhile, universities could just, I don’t know, work with vendors who handle all this stuff instead?
Mark Putnam
Exactly. Commercial low-code platforms come with built-in encryption, continuous updates, and enterprise-grade safeguards. It’s a night-and-day difference compared to the burden of maintaining custom solutions.
Olivia Carter
So, it’s not just about being faster or cheaper—it’s literally safer, too?
Mark Putnam
Right. Universities are prime targets for cyberattacks already. The last thing they should do is introduce unsecured code into their systems. Low-code solutions reduce that liability.
Olivia Carter
Mark, all these risks and challenges with custom everyday AI projects make me wonder—why do IT organizations keep doubling down on these old habits instead of opting for the safer, low-code solutions?
Mark Putnam
That’s a great question, Olivia. About two decades ago, universities shifted from writing custom ERP and SIS systems to relying on vendor solutions like Banner and PeopleSoft. Custom-built systems became too costly and complex to maintain, so the focus shifted to integrating third-party platforms instead.
Olivia Carter
Okay, so they learned their lesson with enterprise systems, but... everyday AI’s getting treated differently?
Mark Putnam
Sometimes, yes. AI still has this allure of being cutting-edge, which makes institutions eager to build their own tools, whether it’s chatbots, analytics, or other automation. But the same issues with legacy ERP apply—custom solutions are expensive to maintain, not scalable, and they age fast.
Olivia Carter
Right, and everyday AI’s evolving way faster than ERPs ever did. I mean, what’s the point of doing all that work if the tool’s obsolete the second you finish it?
Mark Putnam
Exactly. That’s why everyday AI in higher ed should follow the same path those ERP systems took—stop building from scratch and instead use vendor-supported platforms. Microsoft, Google, Oracle, Workday—they’re already offering AI integrated into the tools universities are using every day.
Olivia Carter
But isn’t there a bit of an ego thing at play? Like, some technologists just wanna prove they can still build something impressive.
Mark Putnam
Oh, absolutely, and that’s a problem. What you’re talking about is what we call “innovation theater”: technologists who want to show off shiny new tools at conferences, even though those tools often fail to deliver lasting impact. Those resources would be better spent adopting low-code everyday AI for practical use cases, rather than chasing vanity projects.
Olivia Carter
So the takeaway here is IT organizations really need to rethink what they’re investing in and why... like not just for the sake of saying, “Hey, look at our cool AI solution.”
Mark Putnam
Exactly. Universities have already moved on from the “build-everything” model—everyday AI should follow the same approach. Instead of sinking resources into custom tools, institutions should focus on commercial platforms that can actually scale and adapt over time.
Olivia Carter
Alright, Mark, let’s cut to the chase—if universities want to make the best use of everyday AI without falling into those old traps, what’s their smartest adoption strategy?
Mark Putnam
It’s all about strategy, Olivia. First, institutions need to focus on low-code everyday AI for things like automating repetitive tasks—chatbots, document processing, predictive analytics—all without sinking resources into custom development.
Olivia Carter
Right, so keep it simple, don’t reinvent the wheel.
Mark Putnam
Exactly. And second, partner with vendors. Companies like Google, Microsoft, or OpenAI already offer AI solutions with enterprise-grade security, scalability, and compliance. Universities really don’t need to build from scratch for this kind of stuff.
Olivia Carter
But there’s still a place for custom AI, right?
Mark Putnam
There is, but it’s a narrow one. Custom AI development should be reserved for game-changing research and innovation. Faculty working on pioneering AI models or breakthroughs? That’s where “build-it” still makes sense. Not for routine administrative tasks.
Olivia Carter
So, basically, everyday AI... treat it like a tool, not a whole project?
Mark Putnam
That’s the perfect way to put it. Tools should solve problems, not create new headaches. Universities should focus on efficiency and let the private sector—AI vendors—handle the heavy lifting for everyday applications.
Olivia Carter
Alright, so adopt low-code for operations, lean on vendors for admin and student services, and save the custom-built AI for game-changing research.
Mark Putnam
That’s it. Everyday AI shouldn’t be a software development project—it’s a way to streamline operations and free up resources for what truly matters in higher education.
Olivia Carter
And on that note, I think we’ve covered some serious ground. Thanks, Mark—it’s been a fascinating deep dive.
Mark Putnam
Thanks, Olivia. Always a pleasure.
Olivia Carter
Alright, folks, that’s it for this episode of “Dispatches from an Internet Pioneer.” Thanks for listening, and we’ll see you next time!
About the podcast
Technology is reshaping higher education, leadership, and the economy—but the biggest challenges aren’t just technical, they’re cultural and structural. Created by Timothy Chester, this podcast explores the real impact of AI, automation, and digital transformation on universities, work, and society. With a sociologist’s lens and decades in higher ed IT leadership, he cuts through the hype to uncover what truly matters.