Building an AI-Native Business Operating System in Kizen

A Q&A with Kizen’s Director of Data Science on building AI systems that scale, adapt, and deliver real business outcomes.

Wednesday, January 28th, 2026 | Interview with Antoine Gargot | 12 Min 📖

Enterprise AI is advancing rapidly, but building production systems that operate reliably inside complex regulated businesses remains a tricky problem. To understand how Kizen approaches this challenge, we sat down with Antoine Gargot, Kizen’s Director of Data Science, to walk through the architectural decisions behind the platform.

In this conversation, we discuss what Kizen is building today, the problems it’s solving in regulated industries, and how multi-agent systems, retrieval, and model strategy combine to deliver production-grade outcomes.

What problem are we solving at Kizen?

Kizen is building an AI-native enterprise platform that makes CRMs and business workflow automation easy to use, especially in regulated industries such as healthcare, financial services, and insurance.

We’re solving one of enterprise software’s hardest problems: adoptability. In regulated industries, jargon-heavy systems and incredibly intricate workflows make software difficult to use without specialized training. Kizen's AI addresses this by translating industry-specific intent directly into system operations, allowing users to complete complex tasks through natural conversation instead of navigating inadequate interfaces.

Kizen enables:

  • Intelligent automation: Users describe the outcome they want, and Kizen generates the underlying workflow and logic needed to build the system

  • AI-assisted decision making: AI reasoning is embedded directly into business processes to support evaluations, recommendations, and next steps

  • Content generation: Business-contextualized emails, forms, and reports are created in seconds rather than hours

  • Meeting intelligence: Notes, summaries, and follow-up actions are automatically captured and tied to workflows and records

  • Knowledge Q&A: Anyone in the organization can ask questions about internal processes, industry requirements, or their own data and receive context-appropriate answers

These capabilities change how everyday work gets done. Tasks that once required technical expertise can now be handled directly by the people doing the work using natural language, resulting in faster execution, lower overhead, and less time lost to tooling friction.

In practice, this translates into concrete improvements across various industries:

  • Insurance operations: Generate compliance or performance reports in seconds, with no query builders or IT tickets, cutting turnaround from days to minutes. Over 17.5% of our generated reports are built using AI alone.

  • Healthcare administration: Create intake forms, follow-up workflows, and patient communication templates through conversation, eliminating hours of manual setup.

  • Banking and finance teams: Ask questions about pipeline health, risk exposure, or client activity and receive structured outputs instantly, enabling faster decisions.

How do Kizen AI systems, particularly our use of LLMs, multi-agent architectures, and RAG, create global impact for customers?

Kizen's impact comes from three tightly integrated systems working together.

Multi-Agent Orchestration

Rather than relying on a single AI to handle everything, we use a team of specialized agents. Planner agents interpret user intent, knowledge agents retrieve relevant context, builder agents generate assets, and critic agents validate the results. This mirrors how effective engineering teams work: specialists collaborating toward a shared goal instead of a single generalist doing everything.
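
This division of labor can be sketched as a simple pipeline. The agent names and the trivial task below are illustrative stand-ins; in production each step would wrap an LLM call.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the planner / knowledge / builder / critic roles.
# Each is a plain function here so the orchestration pattern itself is visible.

@dataclass
class Task:
    intent: str                                  # raw user request
    context: list = field(default_factory=list)  # retrieved knowledge
    artifact: str = ""                           # what the builder produced
    approved: bool = False                       # critic verdict

def planner(task: Task) -> Task:
    task.intent = task.intent.strip().lower()    # normalize the request
    return task

def knowledge(task: Task) -> Task:
    task.context.append(f"docs for: {task.intent}")
    return task

def builder(task: Task) -> Task:
    task.artifact = f"report({task.intent}) using {len(task.context)} sources"
    return task

def critic(task: Task) -> Task:
    task.approved = bool(task.artifact)          # validate against requirements
    return task

def orchestrate(intent: str) -> Task:
    task = Task(intent=intent)
    for agent in (planner, knowledge, builder, critic):
        task = agent(task)
    return task

result = orchestrate("  Quarterly Claims Report ")
print(result.approved)   # True
```

The point of the pattern is that each role can be improved, swapped, or parallelized independently, rather than retraining or re-prompting one monolithic assistant.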

RAG with Automatic Indexing

Every object in the platform (custom data models, automations, forms, activities) is automatically embedded and indexed in our knowledge base. This allows users to ask questions about their data and processes without any setup. The system delivers value immediately and becomes more capable over time as additional business context is added.
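
A minimal sketch of index-on-save, using a bag-of-words vector in place of learned embeddings and an in-memory dict in place of a vector store (both are simplifications for illustration):

```python
import math
from collections import Counter

# Toy sketch of index-on-save: every platform object is embedded the moment
# it is saved, so retrieval needs no manual setup. A real system would use
# learned embeddings and a vector database.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class AutoIndex:
    def __init__(self):
        self.vectors = {}   # object id -> embedding

    def save(self, obj_id: str, description: str):
        # Indexing happens as a side effect of saving: no separate RAG setup.
        self.vectors[obj_id] = embed(description)

    def search(self, query: str) -> str:
        qv = embed(query)
        return max(self.vectors, key=lambda k: cosine(qv, self.vectors[k]))

index = AutoIndex()
index.save("form:intake", "patient intake form with insurance fields")
index.save("automation:renewal", "policy renewal reminder automation")
print(index.search("remind clients about policy renewal"))  # automation:renewal
```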

Global Knowledge Sharing

The platform combines business-specific knowledge with general industry expertise. Users can ask questions about their own processes as well as industry best practices, with answers grounded in both contexts. Knowledge is shared across the organization, breaking down silos.

Through our catalog system, users can contribute and share knowledge using PDFs or website references. Catalogs can be shared with specific users or groups, allowing different departments to maintain distinct knowledge bases tailored to their needs. Together, these components provide full knowledge management through a unified interface designed for enterprise users.

Because the platform is built on an abstract data model rather than industry-specific assumptions, capabilities developed in one domain can be reused across others; innovation in one area strengthens the entire system.

How does Kizen’s AI change how work actually gets done?

One of the most impactful transformations with Kizen AI is in reporting and analytics, particularly for compliance, commission tracking, and operational analysis.

Generating custom reports traditionally required a deep understanding of the data model: navigating complex relationships between entities, interpreting field mappings, and building queries manually. For many users, this is prohibitively complicated and time-consuming. They either give up, wait for technical support, or work with incomplete data.

Kizen's solution: the Unified Chat

Users describe the report they need in natural language through our Unified Chat. The AI interprets the request, navigates data model relationships automatically, and generates the report immediately.
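
As a toy illustration of that flow, the sketch below resolves a phrase to records and follows one relationship automatically; the schema and the phrase matching are invented for this example, not Kizen's actual data model.

```python
# Toy request -> report flow: match a phrase to records, then navigate a
# relationship between entities without the user writing a query.

POLICIES = [
    {"id": 1, "holder": "Acme", "status": "active"},
    {"id": 2, "holder": "Birch", "status": "lapsed"},
]
CLAIMS = [
    {"policy_id": 1, "amount": 500},
    {"policy_id": 2, "amount": 900},
]

def report(request: str):
    # Minimal intent matching: keep rows whose status appears in the request.
    wanted = [p for p in POLICIES if p["status"] in request.lower()]
    # Navigate the policy -> claims relationship on the user's behalf.
    return [{"holder": p["holder"],
             "claims": [c["amount"] for c in CLAIMS
                        if c["policy_id"] == p["id"]]}
            for p in wanted]

print(report("Show claims for active policies"))
# [{'holder': 'Acme', 'claims': [500]}]
```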

Reporting goes beyond static outputs. Our Analytics Agent can dynamically query data and produce visualizations, charts, and trend analysis in response to the question being asked. Instead of relying on pre-built dashboards that may or may not be relevant, users receive exactly the analysis they need, when they need it.

The result is a fundamental shift: reporting becomes a conversation rather than a technical skill, allowing the people closest to the business question to get answers directly without handoffs or delays.

How does Kizen’s architecture turn LLMs into real, end-to-end systems?

Kizen’s architecture is designed to deliver complete outcomes, not just isolated AI features. Instead of treating planning, retrieval, generation, and validation as separate capabilities, we use a modular “LEGO-block” approach, where small, specialized AI components combine into full production workflows.

How it works

As mentioned earlier, the whole system is managed behind the scenes within our Unified Chat: our Strategic Planner gathers context from the knowledge base and asks the user questions to capture all the requirements.

From there, specialized Builder Agents execute the plan in parallel. One may generate data models, another builds forms, and another creates automations. A Critic Agent reviews the output against the original requirements and feeds back refinements. The final result is then presented to the user for review.
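
The fan-out-then-review step can be sketched as follows; the requirement set and builder stubs are hypothetical placeholders for agents that would each call an LLM.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the fan-out step: builder agents run in parallel, then a critic
# checks the combined output against the original requirements.

REQUIREMENTS = {"data_model", "form", "automation"}

def build(kind: str) -> tuple:
    # Stand-in for a Builder Agent producing one asset type.
    return kind, f"<{kind} asset>"

def critic(assets: dict) -> bool:
    # Approve only if every requirement produced an asset.
    return REQUIREMENTS.issubset(assets)

with ThreadPoolExecutor() as pool:
    assets = dict(pool.map(build, REQUIREMENTS))

print(critic(assets))   # True
```

Running the builders concurrently is what keeps end-to-end latency close to the slowest single asset rather than the sum of all of them.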

Why is this innovative?

What sets our architecture apart is the depth of integration. Because our platform owns the data model, automation engine, and business context, our AI operates with full visibility into how the business actually works. It’s not guessing or stitching together fragmented inputs; it’s reasoning over a complete system.

We also use a tiered model strategy. Larger models handle complex planning where precision matters most, while distilled models power parallel generation for speed. This balance gives us both accuracy and efficiency.
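
In code, such tiered routing can be as simple as a lookup table; the tier names and model labels below are placeholders, not Kizen's actual configuration.

```python
# Illustrative tiered routing: an expensive model for precision-critical
# planning, a cheaper distilled model for fast parallel generation.

TIERS = {
    "plan": "large-model",          # complex planning, accuracy first
    "generate": "distilled-model",  # parallel asset generation, speed first
}

def route(task_type: str) -> str:
    # Default to the cheap tier for anything unrecognized.
    return TIERS.get(task_type, TIERS["generate"])

print(route("plan"))       # large-model
print(route("generate"))   # distilled-model
```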

These components make LLMs, RAG, and multi-agent systems work as one system focused on outcomes, not raw AI responses.

How does Kizen stay ahead of rapid advances in AI?

We’ve built disciplined, repeatable processes that let us adopt new AI capabilities quickly without destabilizing production systems.

Model evaluation pipeline

When new models are released, we test them against a golden dataset: a curated set of real-world scenarios that reflect our most critical use cases. We measure performance against a strict baseline and identify regressions before any model is considered for integration.
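
A stripped-down version of such a gate might look like this; the scenarios, scoring rule, and baseline are toy stand-ins for the real evaluation suite.

```python
# Sketch of a golden-dataset gate: a candidate model is admitted only if its
# average score meets the baseline, so regressions surface before integration.

GOLDEN = [
    {"prompt": "build claims report", "expected": "report"},
    {"prompt": "create intake form", "expected": "form"},
]

def score(model, scenario) -> float:
    # Toy pass/fail check: does the output contain the expected artifact name?
    return 1.0 if scenario["expected"] in model(scenario["prompt"]) else 0.0

def evaluate(model, baseline: float = 1.0):
    scores = [score(model, s) for s in GOLDEN]
    avg = sum(scores) / len(scores)
    return avg, avg >= baseline   # regression check against a strict baseline

good_model = lambda p: f"generated {p.split()[-1]}"
avg, passed = evaluate(good_model)
print(passed)  # True
```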

Daily integration testing

We run comprehensive LLM integration tests every day. These tests go beyond response quality to evaluate the actual artifacts the system generates (data models, automations, forms, and workflows), allowing us to catch regressions early, whether they originate in our platform or in the underlying models.

Research-first culture

We invest in emerging techniques before they become mainstream. One area we’re especially excited about is reflection systems, which are critic agents that iteratively improve prompts and outputs. Research increasingly shows that strong prompting and structured feedback can outperform fine-tuning, particularly with newer model generations. Because critic agents have been part of our architecture from the start, we’re well positioned to adopt these advances immediately.
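
The reflection idea can be sketched as a loop in which a critic turns failures into prompt refinements; the generator and critique rule below are toy stand-ins for LLM calls.

```python
# Sketch of a reflection loop: a critic reviews each output and appends
# feedback to the prompt until the result passes or a round limit is hit.

def generate(prompt: str) -> str:
    # Toy generator: echoes every "+instruction" token found in the prompt.
    return " ".join(w for w in prompt.split() if w.startswith("+"))

def critique(output: str):
    if "+summary" not in output:
        return "+summary"          # missing requirement -> concrete feedback
    return None                    # passes review

def reflect(prompt: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        output = generate(prompt)
        feedback = critique(output)
        if feedback is None:
            return output
        prompt += " " + feedback   # refine the prompt, then retry
    return output

print(reflect("+report"))   # "+report +summary"
```

The loop terminates either on approval or after a bounded number of rounds, which keeps cost predictable even when the critic keeps finding issues.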

Together, these practices allow us to move quickly without sacrificing reliability, even as the AI landscape evolves.

What differentiates our AI infrastructure from others in the market?

The difference is architectural: our platform is the context.

Most AI platforms try to work around fragmented data. They rely on layers of middleware, brittle API glue code, MCPs, and disconnected RAG pipelines to compensate for systems that were never designed to work together. These approaches are expensive to maintain, prone to failure, and force the AI to guess.

At Kizen, we took a different path. We built the platform on a shared data model that natively includes automation, ETL, data management, and application logic. AI isn’t bolted on as an afterthought; it's actually embedded in the system itself.

This unlocks several key advantages:

  • No integration tax: AI works immediately across all data and processes in the platform

  • Complete context: The AI operates on a unified system, not stitched-together fragments

  • Automatic knowledge: Every object is indexed for retrieval automatically, with no manual RAG setup

  • Native extensibility: When developers add new capabilities, the AI can understand and use them instantly

The result is a well-functioning Business Operating System, where AI is native to a coherent, shared foundation.

We also prioritize correctness over raw speed in model selection. We use industry-leading models such as Google’s Gemini Pro for planning and Anthropic’s Claude for complex reasoning, because when you’re generating business-critical assets, accuracy matters more than shaving off milliseconds.

What makes Kizen a premier place for AI engineers and researchers?

Two principles shape the Kizen experience: deep ownership and real autonomy.

Ownership, not just tickets

Engineers own outcomes. If you’re responsible for a core asset type (such as data models, automations, or forms), you shape the orchestration, architectural decisions, and implementation end to end. You collaborate with the team up front to align on approach, but once work begins, the ownership is real and sustained.

Research before building

We prioritize understanding the problem deeply before writing code. Engineers are given dedicated time to experiment, explore new techniques, and share their thinking before committing to an implementation. This process is built into how we plan and scope work.

Fast feedback from real users

Research is balanced with shipping. We deliver quickly, observe how features perform in real workflows, and iterate based on actual usage rather than untested assumptions. This tight feedback loop keeps the work grounded in reality.

Continuous growth

We actively invest in professional development, such as conference attendance and learning resources to stay current. AI evolves quickly, and we make sure our team evolves with it.

Impact at scale

The systems you build power critical workflows across healthcare, insurance, and financial services. The work you do reshapes how complex, high-stakes industries operate.

Together, these elements create an environment where strong engineers and researchers can do their best work and see its impact clearly.

How do our engineering principles and culture support innovation, experimentation, and responsible AI?

Our culture is designed to make innovation sustainable, encouraging experimentation while maintaining high standards for quality and responsibility.

Deep collaboration before execution

Every project begins with thorough discussion. We align on goals, surface edge cases early, and build shared understanding across the team. This front-loaded thinking reduces ambiguity and enables faster, more confident execution.

Systematic quality gates

Quality is enforced through the system itself. All AI-generated outputs are reviewed by critic agents that validate results against defined requirements. Daily integration tests catch regressions, and evaluations against a golden dataset ensure new model updates improve performance without sacrificing reliability. Innovation is encouraged, but never at the expense of trust.

Responsibility by design

Our centralized data model gives AI complete, validated context and clear boundaries. Instead of reasoning over fragmented or ambiguous data, the system operates within a unified, well-defined architecture. This makes responsible AI a default outcome of the platform, not a separate layer added after the fact.

Research grounded in real impact

We actively explore emerging techniques, such as reflection systems, prompt optimization, and multi-agent patterns, while staying anchored in user needs. Research is validated by shipping features that customers use in real workflows, ensuring experimentation translates into measurable value.

These principles create an environment where innovation, rigor, and responsibility reinforce one another.

Where are the biggest opportunities for multi-agent and RAG systems over the next few years?

The next phase of evolution will be driven by systems that are more adaptive, more proactive, and more specialized.

Reflection and self-optimization

One of the most promising frontiers is AI that can improve itself. Critic agents that evaluate outputs and refine prompts are already demonstrating that strong prompting can outperform fine-tuning. We expect this to evolve into systems that continuously optimize their own reasoning patterns based on real-world outcomes.

Proactive intelligence

Today’s AI is largely reactive; it waits for users to ask questions. The next step is AI that actively monitors operations, identifies inefficiencies, and surfaces opportunities without being prompted. This includes analyzing metrics, detecting repetitive processes that can be automated, and uncovering insights that are easy for humans to overlook.

Deeper specialization

As models advance, agent roles will become increasingly specialized. Instead of general-purpose assistants, we’ll see domain-specific agents for areas like insurance underwriting, healthcare compliance, or financial analysis, working together through orchestration layers. This specialization will enable higher accuracy and more reliable outcomes in complex, regulated domains.

Together, these shifts point toward AI systems that don’t just respond to requests, but actively improve, anticipate needs, and collaborate with precision.

What’s next for Kizen AI?

Our north star is proactive AI. We’re not just building another chat assistant that waits for instructions; we’re building a system that understands your business deeply and takes action to improve how it operates.

In practical terms, this means:

  • Operational monitoring: AI that continuously tracks key metrics and flags issues before they become problems

  • Process optimization: Identifying repetitive workflows that can be automated and proactively surfacing those opportunities

  • Intelligent recommendations: Moving beyond answers to suggest the right questions to ask and the actions to take
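
As a tiny illustration of the monitoring idea, the sketch below flags metric values that drift beyond a relative threshold; the metric name and threshold rule are assumptions for the example, not Kizen's actual detection logic.

```python
# Illustrative proactive check: scan a metric series and flag points that
# deviate from the mean by more than a relative threshold, before anyone asks.

def flag_anomalies(series, threshold=0.3):
    """Return indices of points deviating more than `threshold` from the mean."""
    mean = sum(series) / len(series)
    return [i for i, v in enumerate(series)
            if abs(v - mean) / mean > threshold]

daily_claims = [100, 98, 102, 101, 160]   # day 4 spikes
print(flag_anomalies(daily_claims))       # [4]
```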

We’ve already laid the groundwork with our Agent Supervisor architecture, which is an orchestration layer that interprets goals, builds execution plans, coordinates specialized agents, and validates results through critic agents.

The next phase is continuous intelligence: a system that doesn’t wait for instructions, but actively monitors, adapts, and improves business operations in real time.

Interview with Antoine Gargot

Director of Data Science @ Kizen
