Ian Quackenbos of SUSE: Building Private, Modular Generative AI for Enterprise

Today we're meeting Ian Quackenbos, Head of the AI Innovation Hub at SUSE. The hub specialises in bridging enterprise infrastructure with cutting-edge AI solutions.
Over to you, Ian - my questions are in bold:
Who are you, and what's your background?
I'm the Head of the AI Innovation Hub at SUSE, where we bridge enterprise infrastructure with cutting-edge AI. I've spent the last decade at the intersection of open source, enterprise software, and emerging tech — helping companies bring innovation to life securely and scalably. I studied engineering and AI systems design, and have led GTM and product strategy for AI at both startups and global enterprises. At SUSE, I focus on making AI accessible, private, and powerful for every organization.
What is your job title, and what are your general responsibilities?
As Head of SUSE's AI Innovation Hub, I sit at the intersection of strategy, product development, and ecosystem partnerships for our AI initiatives. I work across teams, from engineering to marketing, to shape our AI roadmap, incubate new solutions, and ensure our customers can securely deploy GenAI wherever their data lives.
Can you give us an overview of how you're using AI today?
We've launched SUSE AI, a modular, private GenAI platform built for enterprise-grade environments. It integrates open source LLMs, secure supply chain tooling such as SUSE Security, and Kubernetes (via Rancher), and it runs across air-gapped, hybrid, and edge environments. We empower customers to deploy their own assistants and agentic workflows without vendor lock-in or data exposure.
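To give a flavour of what that looks like in practice, a common pattern for private GenAI platforms of this kind is for applications to talk to an OpenAI-compatible inference endpoint hosted inside the organisation's own cluster rather than a public API. The sketch below is purely illustrative: the in-cluster URL, credential, and model name are hypothetical placeholders, not SUSE AI specifics.

```python
# Minimal sketch: an internal app querying a privately hosted,
# OpenAI-compatible LLM endpoint. The base_url, api_key, and model name
# are hypothetical placeholders; no data leaves the organisation's environment.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm-inference.internal.svc.cluster.local/v1",  # hypothetical in-cluster service
    api_key="not-needed-for-private-endpoint",                      # placeholder credential
)

response = client.chat.completions.create(
    model="local-llama",  # hypothetical locally hosted open source model
    messages=[
        {"role": "system", "content": "You are an internal IT operations assistant."},
        {"role": "user", "content": "Summarise last night's cluster upgrade logs."},
    ],
)
print(response.choices[0].message.content)
```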
Tell us about your investment in AI. What's your approach?
We take an open, modular, and customer-first approach. We've built a dedicated AI product team, we contribute upstream to key OSS projects, and we prioritize strategic partnerships, whether with chipmakers, cloud providers, or LLM vendors. Our goal is to deliver enterprise-ready AI that puts control back in the hands of users.
What prompted you to explore AI solutions? What specific problems were you trying to solve?
Our customers were eager to adopt AI but faced major blockers—data privacy, lack of infrastructure readiness, and overwhelming complexity. We saw an opportunity to simplify that journey by delivering secure, open, and modular solutions that work with their existing tech stacks.
Who are the primary users of your AI systems, and what's your measurement of success?
IT leaders, innovation teams, and operational users are our primary audience. Success is measured by time-to-value, user adoption, and how easily teams can build or integrate AI into their workflows, without needing a PhD in ML. Unexpectedly, we've seen strong adoption in financial services and manufacturing, where edge and air-gap requirements are critical.
What has been your biggest learning or pivot moment in your AI journey?
One big shift: Companies don't want just AI—they want control. Control over models, data, and infrastructure. That insight pivoted us away from a cloud-first approach to a hybrid, customer-owned stack that's privacy-first and fully modular.
How do you address ethical considerations and responsible AI use in your organisation?
It's core to what we do. We prioritize transparency, user control, and data sovereignty. Our platform supports auditability, model explainability, and strict access controls. We also engage with open source communities to encourage responsible innovation at scale. #choicehappens
What skills or capabilities are you currently building in your team to prepare for the next phase of AI development?
We're investing in MLOps, agentic architectures, and security-focused AI engineering, as well as full-stack observability. Cross-functional fluency (bringing infra, app dev, and AI together) is key. We're also testing and validating new frameworks and libraries the moment they're released.
If you had a magic wand, what one thing would you change about current AI technology, regulation or adoption patterns?
I'd fast-track global clarity around AI regulation and data governance. Today's uncertainty slows innovation and penalizes privacy-first approaches. We need clearer rules and more interoperable standards across borders. MCP (the Model Context Protocol) is a good step in the right direction, but we need more of it.
What is your advice for other senior leaders evaluating their approach to using and implementing AI?
Start small, but start now. Choose tools that respect your data, give you flexibility, and let you scale responsibly. I wish we'd focused earlier on user experience—it's not just about the model, it's about making it usable and trusted across the business.
What AI tools or platforms do you personally use beyond your professional use cases?
I regularly use Cursor, v0 and Lovable to prototype ideas, validate hypotheses, and accelerate from concept to execution at breakneck speed. These tools have completely transformed the build-test-learn loop—what used to take weeks can now be done in hours.
Recently, I used this stack to build a quantitative trading bot that analyzes a blend of macroeconomic indicators, news sentiment, and technical patterns to forecast short-term market movements. With v0 handling front-end scaffolding, Cursor providing AI-assisted code generation and refactoring, and Lovable enabling agentic orchestration across data sources and APIs, I was able to iterate rapidly, test strategies, and deploy a functioning prototype in record time. It's a perfect case study of how modern AI tools don't just support experimentation—they supercharge it.
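For a sense of what the core blending logic in a prototype like that might look like, here is a minimal, purely illustrative sketch: the input features, weights, and thresholds are hypothetical placeholders standing in for the kind of signal combination described above, not the actual bot's strategy.

```python
# Illustrative sketch of blending three signal families into one short-term
# market score. Weights, inputs, and thresholds are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Signals:
    macro: float      # e.g. normalised surprise in rate/CPI releases, in [-1, 1]
    sentiment: float  # e.g. aggregated news sentiment score, in [-1, 1]
    technical: float  # e.g. momentum/mean-reversion composite, in [-1, 1]


def blended_score(s: Signals, w_macro: float = 0.3,
                  w_sent: float = 0.3, w_tech: float = 0.4) -> float:
    """Weighted combination of the three signal families."""
    return w_macro * s.macro + w_sent * s.sentiment + w_tech * s.technical


def position(score: float, threshold: float = 0.2) -> str:
    """Map the blended score to a simple long/flat/short decision."""
    if score > threshold:
        return "long"
    if score < -threshold:
        return "short"
    return "flat"


if __name__ == "__main__":
    today = Signals(macro=-0.1, sentiment=0.5, technical=0.3)
    score = blended_score(today)
    print(f"blended score: {score:.2f} -> {position(score)}")
```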
What's the most impressive new AI product or service you've seen recently?
Hume AI's Empathic Voice Interface is a breakthrough. It's not just listening to what you say—it's understanding how you feel. By analyzing vocal tone in real time, it enables AI systems to adapt their responses based on human emotion. That's huge. Imagine contact centers, healthcare apps, or even robotics that can adjust behavior based on stress, frustration, or enthusiasm—automatically. It's a massive leap toward truly human-centered AI.
Finally, let's talk about predictions. What trends do you think are going to define the next 12-18 months in the AI technology sector, particularly for your industry?
Private GenAI will go mainstream, especially in regulated sectors. We'll see more open model innovation, an explosion of agent-based systems, and heavy demand for infrastructure that supports sovereign, secure, and scalable AI. The battle will be won not by models, but by platforms that balance usability, control, and cost.
Thank you very much, Ian!
Read more about Ian on LinkedIn and find out more about SUSE at www.suse.com.