Marc Fernandez of Neurologyca: Building Human Context AI for Empathetic Systems

Today we're meeting Marc Fernandez, Chief Strategy Officer at Neurologyca. They specialise in giving AI the missing context to move beyond raw prompts, making systems empathetic, adaptive, and truly human-aware.
Over to you, Marc - my questions are in bold:
Who are you, and what's your background?
My background is in physics, where I began developing technology for telecommunications. That experience made me curious about how people interact with digital systems, not just the systems themselves. Over time, this led me to become a tech founder and professor, and ultimately to focus on what I believe is the next wave of human and machine interaction: AI that truly understands people.
What is your job title, and what are your general responsibilities?
I am the Chief Strategy Officer at Neurologyca, where I lead our expansion into the North American market. My role involves aligning our technology with the needs of partners and customers, from cloud providers to enterprises. I spend much of my time working on product-market fit, partnerships, and fundraising, while also guiding how our platform can best serve real-world workflows.
Can you give us an overview of how you're using AI today?
At Neurologyca, we are building what we call Human Context AI. Most AI systems today are good at processing prompts or detecting single signals like tone of voice, facial expression, or heart rate, but they stop short of understanding what those signals actually mean in context. Without that layer, many applications miss the mark.
Take heart rate as an example. A high heart rate could mean excitement or anxiety. Without context, an AI system cannot tell the difference. Or think about fear: feeling fear while watching a scary movie trailer is part of the experience, but feeling fear while taking a virtual course online is a serious problem that should be flagged. These distinctions are critical if AI is going to be genuinely useful.
Human Context AI fills that gap. By combining and interpreting signals in ways that reflect real life, we help systems tell the difference between stress and excitement, or confidence and hesitation. This makes AI more trustworthy, empathetic, and measurable. With context, wellness apps can deliver meaningful guidance, customer service bots can respond with empathy, and education platforms can measure whether a course is building confidence rather than creating stress. Without it, AI misses the human element. And if AI cannot understand us, how can it truly help us?
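To make Marc's heart-rate example concrete, here is a minimal sketch in Python. It is purely illustrative, with hypothetical signal names, context labels, and thresholds rather than Neurologyca's actual API, and it simply shows how the same raw reading can be interpreted differently once context is attached:

```python
from dataclasses import dataclass

# Hypothetical, simplified example: the same elevated heart rate is read
# differently depending on the situational context it arrives with.

@dataclass
class Reading:
    heart_rate_bpm: int   # raw physiological signal
    context: str          # e.g. "movie_trailer", "online_course"

def interpret(reading: Reading) -> str:
    """Map a raw signal plus its context to a human-level interpretation."""
    elevated = reading.heart_rate_bpm > 100  # illustrative threshold only
    if not elevated:
        return "calm"
    # The signal alone is ambiguous; context is what disambiguates it.
    if reading.context == "movie_trailer":
        return "excitement (expected for the content)"
    if reading.context == "online_course":
        return "anxiety (flag for follow-up)"
    return "elevated arousal (context unknown)"

print(interpret(Reading(heart_rate_bpm=115, context="movie_trailer")))
print(interpret(Reading(heart_rate_bpm=115, context="online_course")))
```

The same 115 bpm reading yields two different answers, which is the gap Human Context AI is meant to close.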
Tell us about your investment in AI. What's your approach?
We have a dedicated team in Spain that has spent years developing our human context algorithms through hands-on work with clients. Now we are investing in making those capabilities available at scale through a cloud-based platform. Our approach is to build the core technology ourselves, while also integrating with partners such as hyperscalers, app developers, and chipmakers. This lets us stay focused on what makes us unique, while still making it easy for others to use and adopt our technology.
What prompted you to explore AI solutions? What specific problems were you trying to solve?
Neurologyca began in neuromarketing, where we learned how people truly respond to content and experiences. That work showed us the power of human signals such as expressions, tone, and reactions, and the limits of systems that could capture them but not explain them in context. A smile might signal confidence or discomfort; a raised voice could mean excitement or frustration.
We saw the opportunity to turn that expertise into a scalable platform by building our own language model and agents designed for human context. Our goal is to make AI systems that not only process prompts and signals, but actually understand intent and adapt in ways that feel trustworthy and human.
Who are the primary users of your AI systems, and what's your measurement of success?
We work with AI platforms, enterprise companies, app developers, and chipmakers in use cases where human context is essential to achieving measurable results. Success for us is always tied to outcomes. That could mean a training program showing quantifiable gains in confidence, a wellness application distinguishing between stress and excitement, or a platform proving how audiences actually respond to content. In each case, human context is what turns raw signals into insights that can be trusted, scaled, and built into the core of products and systems.
What has been your biggest learning or pivot moment in your AI journey?
We learned that capturing signals alone is not enough. It may be interesting to measure expressions or tone, but the real value comes when those signals are translated into context that matters. That realization pushed us to move from being a boutique consulting company into becoming a platform provider, where we deliver insights at scale.
How do you address ethical considerations and responsible AI use in your organisation?
We focus on transparency and user consent. People should know what signals are being captured, how they are used, and what they receive in return. We prioritize processing on device or at the edge whenever possible, which reduces unnecessary data transfer and adds a layer of privacy. Most importantly, we do not present our insights as absolute truths. Instead, we provide probabilities and context, leaving room for human judgment.
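As a rough illustration of "probabilities and context, not absolute truths", here is a small hedged sketch of what such an output might look like. The structure and field names are hypothetical, not Neurologyca's real format:

```python
# Hypothetical output shape: a distribution over states plus the context
# used to produce it, rather than a single definitive label.
assessment = {
    "signals": {"heart_rate_bpm": 112, "voice_pitch": "raised"},
    "context": "customer_support_call",
    "interpretation": {
        "frustration": 0.62,
        "excitement": 0.23,
        "neutral": 0.15,
    },
    "note": "Probabilities, not a verdict; final judgment stays with a human.",
}

# A consumer of this output applies its own threshold and policy.
top_state = max(assessment["interpretation"], key=assessment["interpretation"].get)
if assessment["interpretation"][top_state] > 0.5:
    print(f"Likely state: {top_state} "
          f"(confidence {assessment['interpretation'][top_state]:.0%})")
```

Exposing the full distribution, rather than only the top label, is one way to keep room for the human judgment Marc describes.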
What skills or capabilities are you currently building in your team to prepare for the next phase of AI development?
We are building in three areas. The first is infrastructure, making sure our platform is reliable and can scale globally. The second is developer tools, such as SDKs and sandboxes, so partners can easily experiment with our technology. The third is applied research, which keeps us at the frontier of interpreting human signals. We are also adding talent from outside pure engineering, including psychology and design, because understanding people requires expertise across disciplines.
If you had a magic wand, what one thing would you change about current AI technology, regulation or adoption patterns?
I would shift the focus away from narrow technical benchmarks and toward real human outcomes. Too often AI is judged on test scores that do not reflect everyday situations. We need ways to measure how AI actually improves lives, decisions, and workflows.
What is your advice for other senior leaders evaluating their approach to using and implementing AI?
Start by focusing on the workflow and the people who will use it, not just the technology. It is easy to be impressed by what AI can do, but unless it directly improves how people work or live, adoption will be limited. One lesson we learned is that framing AI as "emotion detection" misses the point. The real opportunity lies in delivering context that connects directly to outcomes.
What AI tools or platforms do you personally use beyond your professional use cases?
I use a mix of everyday and experimental tools. I rely on large language models for writing and analysis, and I also explore platforms like Vectara for retrieval-augmented generation. In creative spaces, I enjoy experimenting with tools like Runway for video or ElevenLabs for voice. These give me a sense of where AI is heading outside of the enterprise world.
What's the most impressive new AI product or service you've seen recently?
I have been impressed by Anthropic's work with Constitutional AI (original blog post from Anthropic). The idea of training models with an explicit set of guiding principles, rather than just trial and error, feels like a meaningful step toward making AI more aligned and accountable.
Finally, let's talk predictions. What trends do you think are going to define the next 12-18 months in the AI technology sector, particularly for your industry?
I think the next 12 to 18 months will be defined by AI systems that can move beyond just processing prompts or raw signals and start embedding human context into workflows. Agentic AI is already showing promise, but without understanding people, these systems will struggle to scale.
That is where the next wave of billion-dollar opportunities will come from: wellness apps that adapt to real emotional states, educational platforms that measure confidence rather than stress, and customer service systems that respond with empathy instead of tone-deaf scripts. The companies that get human context right will define the next chapter of AI adoption. Without it, AI may remain powerful, but it will miss the human element.
Thank you very much, Marc!
Read more about Marc Fernandez on LinkedIn and find out more about Neurologyca at neurologyca.com.