Alan Jones, Co-Founder & CEO, YEO Messaging

Today we're meeting Alan Jones, Co-Founder & CEO at YEO Messaging. They specialise in secure messaging with continuous facial recognition technology.

Over to you Alan - my questions are in bold:


Who are you, and what's your background?

I'm Alan Jones, Co-Founder and CEO of YEO Messaging. I've been building and selling technology companies since the '90s — Memory Technology, Shuttle Technology Group (sold to NASDAQ: SCMM) and Atrium Innovation (sold to Sumitomo Chemical), to name a few. Across those decades, one factor for success has remained constant: the need to build innovation that empowers trust.

I started YEO (Your Eye Only) because I could see a clear and growing gap between how we communicate and how we use identity to control content. Because of our reliance on apps, messaging today is often instantaneous and gratifying but fundamentally insecure. That's a dangerous mix — especially when it involves personal or sensitive organisational data, from financial info to patient records.

My background spans hardware, encryption, security, and global business leadership. I've always had a keen eye on what's next, and right now, AI is both the biggest risk and the biggest opportunity we've ever faced in digital communications.

What is your job title, and what are your general responsibilities?

As CEO, I guide our strategic direction, lead innovation, and ensure we stay ahead of the curve in AI, privacy, and identity security for our controlled messaging platform. I work closely with our R&D teams to push our patented Continuous Facial Recognition technology forward, and I spend a lot of time with investors, regulators, and fellow UK founders shaping the next wave of ethical AI infrastructure.

I'm also hands-on with partnerships — making sure our tech integrates seamlessly into sectors like finance, defence, and healthcare, where the stakes for trust and compliance couldn't be higher.

Can you give us an overview of how you're using AI today?

YEO is built around a single core question: "How do we prove a person is who they say they are — continuously — without compromising their privacy?"

Our answer is patented Continuous Facial Recognition (CFR) — an AI-driven engine that authenticates identity in real time, not just at login. It detects liveness, blocks spoofing, and constantly confirms that the intended, authorised human is present during communication. No more uninvited guests in group chats!

What makes us different is not just what our AI does — it's how it does it. All processing is local to the device. No facial data is stored centrally or shared. We don't believe AI should be built on the back of your biometric identity being scraped or sold.
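To make the "continuous, not just at login" idea concrete, here is a purely illustrative sketch of a continuous-verification loop. This is not YEO's actual implementation — the function names, thresholds, and scores are all invented for the example — but it shows the basic contract: every sampled frame must pass both a liveness check and an identity match, and all checks run locally, so no biometric data ever leaves the device.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A camera frame reduced to two local-only scores (hypothetical)."""
    liveness: float    # 0.0 (spoof) .. 1.0 (live human present)
    similarity: float  # match score against the locally enrolled face template

# Illustrative thresholds, not real product values.
LIVENESS_THRESHOLD = 0.8
SIMILARITY_THRESHOLD = 0.85

def verify_frame(frame: Frame) -> bool:
    """One verification step: the frame must be both live and a match.
    Everything happens on-device; nothing is uploaded or stored centrally."""
    return (frame.liveness >= LIVENESS_THRESHOLD
            and frame.similarity >= SIMILARITY_THRESHOLD)

def session_stays_unlocked(frames: list[Frame]) -> bool:
    """Continuous verification: every sampled frame must pass, not just the
    first. A single failure (user walks away, photo held to the camera)
    locks the session."""
    return all(verify_frame(f) for f in frames)

# The enrolled user stays present for the whole session -> stays unlocked.
live_session = [Frame(liveness=0.95, similarity=0.92) for _ in range(5)]

# A spoof introduced mid-session fails the liveness check -> locked.
spoofed_session = live_session[:3] + [Frame(liveness=0.2, similarity=0.91)]
```

The key design point the sketch captures is that authentication is a property of the whole session rather than a one-off gate at login.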

Screenshot from the yeomessaging.com homepage

Tell us about your investment in AI. What's your approach?

Our CFR technology has secured patents across the US, UK, and China. Our investment goes beyond engineering — we're building a new category of AI-powered communication that is live, verified, and under user control.

We're also investing in next-gen biometrics, a layer we're excited about for thwarting deepfakes and enhancing real-time authentication.

We've stayed lean and smart, relying on domain-specific expertise and partnerships with those who align with our privacy-first ethos. We don't chase AI trends. We build around use cases that genuinely demand AI — like identity, compliance, and fraud prevention.

What prompted you to explore AI solutions? What specific problems were you trying to solve?

Trust is breaking down in digital communication. Deepfakes, impersonation, spoofing, human errors in messaging — they've all exploded, and most organisations are still using consumer-grade messaging apps because they're instant, familiar, and foster better team connection than email or Teams chats.

But those apps aren't built for security, compliance, or data sovereignty. So the question was: Can we preserve that frictionless feel — and still guarantee security, privacy, and identity?

AI gave us that path forward with YEO Messaging. But only if used responsibly — and with precision.

Who are the primary users of your AI systems, and what's your measurement of success?

Our technology is being deployed in financial services, healthcare, defence, and government, where proof of identity is non-negotiable.

We measure success by:

  • Breach prevention: no unauthorised access, plus geo-fencing so that what is said in the White House stays in the White House.
  • Uptime of verified communication sessions: How long a human can stay verified in real time.
  • Auditability: Can we meet or exceed the strictest compliance standards?

Unexpected use cases? We've seen demand from social media and influencer platforms looking to verify creators and protect minors through new regulations like the Online Safety Act — something unexpected five years ago.

What has been your biggest learning or pivot moment in your AI journey?

That the real differentiator isn't accuracy — it's how the AI earns trust.

We realised early that users don't want a black box. They want transparency. They want to know their face isn't being stored in someone's cloud.

That drove us to decentralise our biometric processing, even though it was harder to build. But it's the reason we're seen as a trusted partner by commerce, government and military partners.

How do you address ethical considerations and responsible AI use in your organisation?

We apply a few hard rules:

  • No central storage of biometric data
  • Local-only AI processing
  • No use of facial data for product training

We built YEO to meet standards like ISO/IEC 30107-3 and NIST guidelines, and I'm a firm believer that more regulation in this space is overdue. We're already compliant with the Online Safety Act and EU Digital Identity framework because we started from a privacy-first philosophy.

For us, AI ethics isn't a compliance checkbox, or a rewrite to adhere to the latest legislation — it's the foundation of our product.

What skills or capabilities are you currently building in your team to prepare for the next phase of AI development?

We're deepening our expertise in edge AI and behavioural biometrics. We're also exploring how we can utilise heuristics and other human idiosyncrasies to verify a live human without being invasive.

Talent-wise, we're bringing in minds that understand both algorithmic performance and regulatory strategy — because future-proofing means being able to scale and comply at speed.

If you had a magic wand, what one thing would you change about current AI technology, regulation or adoption patterns?

I'd push the industry to move away from centralised biometric models and data-extractive AI. The idea that your face — your identity — is stored, shared, or sold to train systems you don't control is unethical.

If we normalised zero-knowledge AI architectures, we'd make enormous strides in building digital trust.

What is your advice for other senior leaders evaluating their approach to using and implementing AI?

Start with why, not what. If you're implementing AI just to tick a transformation box, it will fail. But if you have a specific challenge, AI becomes a powerful lever.

Also, don't be afraid to challenge your vendors. Ask where the data goes. Ask what the fallback is when the AI fails. If they can't answer, build your own — or look elsewhere.

What AI tools or platforms do you personally use beyond your professional use cases?

I'm currently a big fan of:

  • Perplexity AI — great for research without the noise
  • ChatGPT+ with trained custom GPTs — a flexible assistant for product strategy and messaging
  • Gamma — for AI-assisted presentations

What's the most impressive new AI product or service you've seen recently?

I was impressed by Humane's AI Pin — a wearable assistant designed for ambient computing. It's a glimpse at the post-screen world, where interfaces become invisible, and context-aware AI becomes our companion.

Pair that with biometric proof-of-life and you've got a serious deepfake countermeasure in the making.

Finally, let's talk predictions. What trends do you think are going to define the next 12–18 months in the AI technology sector?

  1. Out of necessity, detecting deepfakes using real-time biometric authentication will become mainstream.
  2. Decentralised edge models — driven by privacy and real-time responsiveness.
  3. Regulated identity frameworks — especially in Europe and the UK, which will reward platforms that can prove both security and compliance.

For us, we'll continue building systems that deliver messages to a human — not just a device. Because in a world of deepfakes, bots, and breached passwords, you are the only source of truth.


Thank you very much, Alan!

Read more about Alan on LinkedIn and find out more about YEO Messaging at www.yeomessaging.com.