Training Developers to Build Defensively with AI

Today we're meeting Pieter Danhieux, CEO & Co-Founder at Secure Code Warrior. They specialise in secure coding practices and developer risk management.
Over to you, Pieter - my questions are in bold:
Can you tell me about yourself and your background in cybersecurity? What industry challenge led you to co-found Secure Code Warrior?
My name is Pieter Danhieux, and I'm the Co-Founder and CEO of Secure Code Warrior, a global security company dedicated to revolutionising secure coding practices and prioritising developer risk management. Before Secure Code Warrior was created, I spent most of my teen and adult life breaking into devices and appliances that people use every day. Taking things apart to understand them has always fascinated me. That interest led me to head a global team of ethical hackers, tracking down flaws in digital infrastructure around the world. By 2015 I had moved to Australia and was working at BAE Systems, helping the company find cracks in its cyber armour. It was during this time that I came to a realisation that changed everything: after 20 years of offensive security work in the field, why wasn't breaking in getting any harder? The same bugs and vulnerabilities were still out there in the code we were all using.
This realisation revealed a deeper issue: security experts were skilled at breaking things, but the people responsible for building our digital infrastructure were never taught how to defend it. I was showing security professionals how to break in, but no one was teaching developers how to keep us out. This gap sparked a mission to turn decades of ethical hacking experience into guidance that helps the next generation of builders write secure code, leading to the creation of Secure Code Warrior.
For those who don't know, can you explain why the development of secure code is critical to the cybersecurity industry?
The developers who create the software, applications and systems that power today's digital economy have become essential to the success of modern organisations. Most businesses would not be able to function without 24-hour access to their websites, tools and other infrastructure. Code is what makes this possible, but to keep things functioning properly, the code must be secure. Security teams struggle to keep up with the pace of modern DevSecOps, so it's no surprise that they find it difficult to track and defend against the many ways attackers exploit the constant stream of new code. That is why organisations must approach development with a Secure by Design (SBD) mindset and make security a priority from the very foundations of software creation.
Secure coding is best defined as the practice of security-skilled developers writing code that is free from common vulnerabilities from the very start of the software development lifecycle (SDLC). The push to adopt SBD practices isn't going away either; the pressing need for it in today's cyber threat environment makes the transition towards a preventive approach to security critical.
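To make that concrete, here is a minimal, hypothetical sketch (Python, standard library only) of the kind of common vulnerability secure coding aims to prevent from the start: an SQL injection flaw next to its parameterised fix. The function names and schema are illustrative, not taken from Secure Code Warrior material.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated straight into the SQL,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Secure from the start: a parameterised query keeps data separate from code,
    # so the same input is treated as a literal string and nothing more.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    print(find_user_insecure(conn, "' OR '1'='1"))  # [(1, 'alice')] - injection succeeds
    print(find_user_secure(conn, "' OR '1'='1"))    # [] - input treated as plain data
```

The point of the example is the habit, not the snippet: a developer trained to reach for the parameterised form by default never introduces the flaw in the first place.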
As generative AI tools continue to flood the market, can you walk us through the impact they are having on development teams? What are the pros and cons?
Software developers are rapidly adopting large language models (LLMs) and other forms of generative and agentic artificial intelligence (Gen AI) to help them produce code faster and more efficiently. Like most things, this comes with its perks and plenty of concerns. Let's start with the benefits: AI-powered tools can certainly provide helpful assistance by reducing development time and automating routine tasks, allowing developers to focus on other work that needs their attention.
However, with a constant influx of new tools promising improved productivity, data privacy and security tend to get overlooked and deprioritised. There is currently no standard way for developers or organisations to assess an AI tool's security posture, which is troubling given that, according to the BaxBench research, no LLM reliably produces deployment-ready code. In fact, over half of the outputs from even the best models are incorrect or vulnerable. This lack of transparency makes it hard to evaluate or compare tools, many of which are prone to generating insecure, easily exploited code, increasing the risk of compromise.
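As a hedged illustration of why "looks plausible" is not the same as "deployment-ready", consider password storage, a weakness this class of benchmark commonly flags. The Python sketch below is hypothetical, not output from any specific model: a fast, unsalted hash of the kind an assistant might plausibly suggest, next to a salted, deliberately slow key-derivation function from the standard library.

```python
import hashlib
import hmac
import os

def hash_password_weak(password: str) -> str:
    # The kind of shortcut an assistant might suggest: fast, unsalted MD5.
    # Identical passwords yield identical digests, and offline cracking is cheap.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_strong(password: str) -> tuple[bytes, bytes]:
    # Safer: a per-user random salt plus a deliberately slow key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(digest, expected)

if __name__ == "__main__":
    salt, stored = hash_password_strong("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("wrong guess", salt, stored))                   # False
```

Both versions run and "work", which is exactly why a developer without security skills is unlikely to spot that only one of them belongs in production.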
Can you offer best practices or guidance for organisations and development teams about how they can most effectively adapt to the integration of AI tools?
It's essential for developers to learn and strengthen their secure coding skills so they can make informed decisions about AI usage that protect code rather than expose it. I'd suggest that organisations invest in learning platforms where developers can practise writing secure code from the start and learn to identify the vulnerabilities they might inadvertently introduce, including those generated by AI coding assistants, so they can navigate those scenarios properly moving forward.
Security leaders and development teams should also work to address the lack of standardised measurement. Start by assessing the data sources feeding an LLM tool, and how the tool generates code. Identify the mechanisms in place that will – or will not – keep adversaries from exploiting the code. Then, compile individual scores for each of these considerations, and combine them to determine an overall security score.
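The interview does not prescribe a formula, but a rough sketch of what combining those considerations into an overall score could look like is shown below. The dimensions, weights and 0-100 scale are illustrative assumptions, not a Secure Code Warrior methodology.

```python
# Hypothetical sketch only: the dimensions, weights and scale are assumptions
# made for illustration, not a prescribed scoring methodology.
ASSESSMENT_WEIGHTS = {
    "data_sources": 0.3,        # how well the LLM's data sources are understood
    "generation_quality": 0.4,  # how reliably the tool generates non-vulnerable code
    "exploit_safeguards": 0.3,  # mechanisms that keep adversaries from exploiting output
}

def overall_security_score(scores: dict[str, float]) -> float:
    """Combine per-consideration scores (each 0-100) into one weighted overall score."""
    missing = set(ASSESSMENT_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing scores for: {', '.join(sorted(missing))}")
    return sum(scores[name] * weight for name, weight in ASSESSMENT_WEIGHTS.items())

if __name__ == "__main__":
    tool_scores = {"data_sources": 60, "generation_quality": 45, "exploit_safeguards": 70}
    print(overall_security_score(tool_scores))  # 57.0
```

Whatever the exact weighting, the value is in scoring each consideration separately first, so a tool with opaque data sources cannot hide behind strong results elsewhere.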
Currently, we're hearing a lot about AI-assisted software development, more commonly referred to as vibe coding. What are some of the risks associated with vibe coding, and how can security teams uplevel and train developers to make the most out of the potential benefits?
Vibe coding, or coding exclusively with agentic AI programming tools through prompt engineering, offers a largely automated, LLM-driven approach to development. This places full trust in models that often present answers with unwarranted confidence. As mentioned before, today's market does not offer a single LLM that reliably produces deployment-ready code. Adding to the mix, nearly 90% of developers have reported struggling to practise secure coding. While pairing security-skilled developers with AI can boost productivity, most developers lack that expertise. As a result, AI assistance means insecure code can be generated faster and pushed into production sooner, worsening already strained AppSec efforts.
While there are clear concerns with vibe coding, it is now part of our reality. Do not ignore the risks, and do not entirely ban the tools, as this can lead to the emergent problem of shadow AI. Instead, focus your efforts on readying the development cohort to leverage AI effectively and safely. It must be made abundantly clear why and how these tools introduce risk, and what an acceptable level of that risk looks like, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate it as AI becomes part of developers' day-to-day work.
Are there any other trends that you foresee in the near future impacting the developer community?
Despite the lack of an industry standard for Secure by Design (SBD), many enterprises, especially in highly regulated industries like finance and critical infrastructure, are beginning to explore and adopt its principles. Currently, no single company offers a complete SBD solution; various vendors provide fragmented pieces, and we've seen that the hunt is on to fit those pieces together into a solution that addresses the overarching issue of insecure software design. It's becoming more evident that developers lack dedicated tools to identify and manage design-stage security risks. This gap highlights a pressing need for integrated, developer-focused solutions that make SBD practical and scalable.
Thank you very much, Pieter!
Read more about Pieter on LinkedIn and find out more about Secure Code Warrior at www.securecodewarrior.com.