Embedding AI Across inDrive's Ecosystem

Today we're meeting Roman Atachiants, Lead of AI Transformation at inDrive. inDrive is the world's second-most downloaded ride-hailing app, providing mobility and other urban services in more than 800 cities across 48 markets.
Over to you, Roman - my questions are in bold:
**Who are you, and what's your background?**
I'm Dr. Roman Atachiants, a Software Engineer and Data & AI Architect, currently leading AI Transformation at inDrive. Over the years, I've worked across Europe, Asia, and the Middle East, contributing to high-performance systems at Grab, AirAsia, and Careem. I hold a Master's in AI from Maastricht University and a PhD in Computer Science (Human-Computer Interaction) from Trinity College Dublin. My journey reflects a blend of science, engineering, and a drive to apply technology to real-world problems.
**What is your job title, and what are your general responsibilities?**
I lead AI Transformation at inDrive, a mobility and urban services platform with a mission to challenge injustice and positively impact the lives of 1 billion people by 2030. My role is to ensure AI is embedded across every function – product, growth, logistics, safety, fraud prevention, and customer support – not just as a tool, but as a capability that defines how we work and compete. I oversee the strategy, platform foundations, capability development, and innovation roadmap, while also mentoring teams on responsible and impactful AI adoption.

**Can you give us an overview of how you're using AI today?**
We've evolved from working on isolated AI applications to kickstarting a platform-wide transformation. AI now powers hyper-personalized growth campaigns, real-time ETA corrections, safety moderation, and more. We're moving toward a diversified super app that feels intuitive, safe, and intelligent across services. It's not about AI as an overlay; it's about rethinking experience, efficiency, and trust from the ground up.
**Tell us about your investment in AI. What's your approach?**
Our investment philosophy is pragmatic. We build when differentiation matters – for example in consumer experiences and logistics – and buy when it accelerates time to value. We've also assembled cross-functional squads of data scientists, product leaders, and infrastructure engineers to scale AI responsibly.
**What prompted you to explore AI solutions? What specific problems were you trying to solve?**
We began with real, tangible pain points: long onboarding times, poor ETA predictions, and manual support workflows. AI allowed us to eliminate delays in document verification, optimize driver earnings, increase safety, and reduce internal support wait times. These weren't vanity projects; they were responses to existential challenges in high-scale, low take-rate environments.
**Who are the primary users of your AI systems, and what's your measurement of success? Have you encountered any unexpected use cases or benefits?**
AI is a horizontal capability that touches everyone in the organization: passengers, drivers, operations agents, and marketers. We measure success by speed, scale, and satisfaction: reduced cancellations, faster support resolution, higher conversion per marketing dollar. A surprising benefit? LLMs significantly improved some of our verification processes while simplifying our systems. We've also seen a substantial boost in getting product features from idea to prototype, thanks to assisted coding (vibe-coding).
**What has been your biggest learning or pivot moment in your AI journey?**
The biggest realization? AI isn't about models: nobody really cares whether you're using o4-mini-high or Gemini 2.5 Pro. It's about mindset – treating AI as a teammate. Early on, we focused too much on ML/AI uplift in operations (like safety and efficiency), and too little on how people adopt, interpret, and operationalize AI. Our pivot was shifting from isolated data science wins to embedding AI into how the company works. Once we taught people to help themselves, they began proposing new ideas – we've now identified nearly 100 places in the business where AI can make an impact, with a rich backlog of AI-powered functionality as a result.
**How do you address ethical considerations and responsible AI use in your organisation?**
We've codified responsible AI principles: fairness, transparency, safety, and respect for IP. We also created an "AI Code" — five simple principles for teams to follow:
- Learn AI – Understand how it works so you can use it confidently and responsibly.
- Build What Moves Us – Use AI to solve real problems aligned with our mission, not just to chase hype.
- Protect What Matters – Safeguard data, people, and the platform — no shortcuts, no blind uploads.
- Partner and Command – Work with AI, not for it. You're in control; it doesn't make the decisions.
- Own the Outcome – If it ships with your name on it, you're responsible — AI or not.
We only use internally approved platforms, and train teams on prompt design, data sensitivity, and feedback loops. Additionally, our legal team runs a dedicated AI governance project.
**What skills or capabilities are you currently building in your team to prepare for the next phase of AI development?**
We're training both technical and non-technical teams in AI literacy. We host weekly coaching sessions, internal workshops, and an AI Champions program across functions. On the engineering side, we're investing in platform development, "vibe-coding" skills, and MLOps to accelerate time-to-market. Our next focus: agent-based systems and autonomous workflows.
**If you had a magic wand, what one thing would you change about current AI technology, regulation or adoption patterns?**
I'd eliminate the false binary between AI optimism and skepticism. We need pragmatic acceleration – more emphasis on practical ROI, risk-adjusted deployment, and interdisciplinary collaboration. Regulation should protect rights without stifling innovation. And enterprise culture needs to move from "wait and see" to "learn fast, apply smart."
**What is your advice for other senior leaders evaluating their approach to using and implementing AI? What's one thing you wish you had known before starting your AI journey?**
You don't need to be a data scientist, but you do need to understand how AI affects your cost structure, customer experience, and competitive edge. Start with one high-impact problem, measure everything, and scale what works. And remember: AI isn't magic; it reflects your data and your culture.
**What AI tools or platforms do you personally use beyond your professional use cases?**
I use Google Gemini for Workspace, and OpenAI's ChatGPT for a range of tasks: from creating internal copilots to writing, analysis, and ideation. I've used GitHub Copilot for the last 2–3 years, and I'm now getting proficient with "vibe-coding" through tools like Windsurf and Cursor.
**What's the most impressive new AI product or service you've seen recently?**
I've tested everything from simple tools to more futuristic ones like Manus. A lot are overhyped and struggle with reliability. The standout for me recently is Google's Gemini Deep Research. It combines the power of Google Search with advanced reasoning to produce high-quality, structured outputs – a huge time saver and great starting point for deep work.
**Finally, let's talk predictions. What trends do you think are going to define the next 12-18 months in the AI technology sector, particularly for your industry?**
I'm not great at predictions, but a few trends are clear:
1. Models are getting faster and more capable. Even small local models now outperform state-of-the-art models from two years ago, and many now include vision.
2. People overestimate AI's impact on jobs but underestimate its impact on work. It's going to change how we work more than whether we work.
3. Agents are finally viable. Thanks to tool-calling capabilities (like OpenAI's o3/o4 models) and emerging standards like the Model Context Protocol (MCP), agents can now follow routine procedures with decent (though not perfect) reliability.

What does this mean for people? Skills like emotional intelligence, taste, and interpersonal communication will grow in importance.
When something really goes wrong with your ride, you don't want an AI; you want a human who understands. But that human will have a powerful AI assistant helping them work faster and smarter. This pattern will repeat across industries.
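To make the agent idea concrete, here is a minimal, illustrative sketch of the tool-calling loop Roman describes: a model proposes a tool call, the runtime executes it, and the result is fed back until the task is done or a human takes over. The tool names and the stand-in model below are hypothetical, not inDrive's implementation.

```python
# A minimal, self-contained tool-calling loop (illustrative only).
# The "model" is a stand-in for a real LLM API with tool-calling support;
# the ride-hailing tools and scenario here are hypothetical.

import json
from typing import Callable, Dict

# Tools the agent is allowed to call, registered by name.
TOOLS: Dict[str, Callable[..., str]] = {
    "lookup_ride": lambda ride_id: json.dumps({"ride_id": ride_id, "status": "completed"}),
    "refund_rider": lambda ride_id, amount: f"Refunded {amount} for ride {ride_id}",
}

def fake_model(messages: list) -> dict:
    """Stand-in for an LLM call: decide to call a tool or give a final answer.

    A real agent would send `messages` to a tool-calling model
    (e.g. over an MCP-style interface) and parse its structured reply.
    """
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_ride", "args": {"ride_id": "R-123"}}
    return {"answer": "Ride R-123 is marked completed; escalating to support for a refund review."}

def run_agent(user_request: str) -> str:
    """Run a bounded loop: the model proposes a step, the runtime executes it."""
    messages = [{"role": "user", "content": user_request}]
    for _ in range(5):  # bounded: routine procedures, not open-ended autonomy
        decision = fake_model(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # execute the tool call
        messages.append({"role": "tool", "content": result})
    return "Handing off to a human agent."  # fall back to a person

if __name__ == "__main__":
    print(run_agent("My driver never showed up for ride R-123."))
```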
Thank you very much, Roman!
Read more about Roman on LinkedIn and find out more about inDrive at indrive.com.