Bigeye Bets on AI ‘Trust Platforms’ as Mohamed K. Alimi Joins to Build Agent Oversight Tools
Bigeye unveils its AI Trust Platform to bolster AI agent monitoring and accountability, addressing compliance, governance and data oversight challenges

Most companies want to put AI at the centre of their workflows, but making sure AI systems handle sensitive data reliably remains a blind spot for IT teams. A recent survey found that whilst 98% of organisations plan to expand their use of AI agents, only 52% can actually track and audit all data interactions by these systems.
This oversight gap has become a pressing concern as AI agents move beyond simple automation to handle complex enterprise tasks. Unlike traditional software that follows predictable code paths, AI agents make decisions in real time, often accessing multiple data sources in ways that can be difficult to trace or explain later.
The New Challenge: Monitoring What You Can’t See
Chief data officers and IT heads face a fundamental problem: how do you govern systems that don’t follow predetermined scripts? Traditional data observability tools can track data pipelines, but they weren’t built for AI agents that might query databases, call APIs and make decisions based on prompts that change with each interaction.
The stakes are particularly high because 96% of technology professionals cite concerns about limited oversight of AI systems, even as these tools gain access to increasingly sensitive corporate data. Without proper monitoring, enterprises face risks from unauthorised data exposure, compliance failures and the inability to investigate when things go wrong.
Organisations are already struggling with fragmented oversight due to complex AI workflows, insufficient real-time monitoring and gaps in governance frameworks that make it difficult to establish clear accountability when AI systems make mistakes.
Bigeye’s Pivot: From Data Pipes to AI Trust
Bigeye built its reputation helping enterprises monitor their data infrastructure – tracking quality issues, mapping data lineage and identifying problems before they reached dashboards or reports. The company’s approach centres on what it calls Dependency Driven Monitoring, which maps analytics dashboards to their underlying data dependencies with column-level precision.
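Bigeye has not published the internals of that system, but the core idea is straightforward to sketch: maintain a map from each dashboard to the warehouse columns it depends on, so that a quality issue detected in one column can be traced forward to every report it could corrupt. The Python sketch below is a hypothetical illustration of that shape; the names (ColumnRef, DEPENDENCIES, impacted_dashboards) are assumptions, not Bigeye’s API.

```python
# Illustrative sketch only: Bigeye has not published its internal data model,
# so the structures and names here are hypothetical, meant to show the shape
# of dependency-driven monitoring at column-level precision.
from dataclasses import dataclass

@dataclass(frozen=True)
class ColumnRef:
    """A fully qualified column in the warehouse."""
    table: str
    column: str

# Map each dashboard to the warehouse columns it ultimately depends on.
DEPENDENCIES: dict[str, set[ColumnRef]] = {
    "revenue_dashboard": {
        ColumnRef("analytics.orders", "order_total"),
        ColumnRef("analytics.orders", "order_date"),
    },
    "churn_dashboard": {
        ColumnRef("analytics.customers", "last_login"),
    },
}

def impacted_dashboards(bad_column: ColumnRef) -> list[str]:
    """Given a column with a detected quality issue, return every
    dashboard whose numbers could now be wrong."""
    return [name for name, cols in DEPENDENCIES.items() if bad_column in cols]

# A null-rate spike in order_total flags the revenue dashboard, not churn.
print(impacted_dashboards(ColumnRef("analytics.orders", "order_total")))
```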
As AI agents become more prevalent in enterprise workflows, Bigeye realised that monitoring data pipelines alone wasn’t enough. The company is now building what it calls an ‘AI Trust Platform’ – a system designed to bring the same level of visibility to AI agent behaviour that it already provides for traditional data operations.
‘This is an extension of data observability, and a whole new layer in the AI tool stack that enterprises will need to safely scale up their use of agents,’ said Kyle Kirwan, Bigeye’s co-founder.
The platform aims to track which AI agents access what data, monitor the prompts they receive, log their responses and provide audit trails when investigations are needed. It’s essentially trying to make AI agent monitoring as routine as checking server logs or database performance metrics.
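It is easiest to see what such an audit trail might contain with a concrete record. Bigeye has not published the platform’s schema, so the sketch below is purely illustrative: one structured, append-only log line per agent data access, with the prompt stored as a hash so sensitive text need not sit in the log itself. Every name and field here is an assumption.

```python
# Hypothetical sketch of an agent audit-trail record; field names are
# illustrative assumptions, not the platform's actual schema.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentAccessEvent:
    agent_id: str          # which agent acted
    data_source: str       # table, API or file it touched
    action: str            # e.g. "query", "api_call"
    prompt_sha256: str     # hash of the prompt; raw text can stay in a vault
    response_summary: str  # truncated output, not the full payload
    timestamp: str         # UTC, for later forensic ordering

def record_access(agent_id: str, data_source: str, action: str,
                  prompt: str, response: str) -> str:
    """Append one structured log line per agent data access."""
    event = AgentAccessEvent(
        agent_id=agent_id,
        data_source=data_source,
        action=action,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        response_summary=response[:200],
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    line = json.dumps(asdict(event))
    with open("agent_audit.log", "a") as f:
        f.write(line + "\n")
    return line

record_access("billing-agent", "analytics.orders", "query",
              "Summarise refunds over $500 this week",
              "3 refunds totalling $2,140")
```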
Mohamed K. Alimi’s Mission: From Concept to Reality
To lead this technical buildout, Bigeye has hired Mohamed K. Alimi as vice president of engineering. Alimi comes with direct experience in AI monitoring – he led the team at Datadog that built LLM Observability tools, taking the product from initial research to launch in under nine months.
At Datadog, Alimi worked on monitoring tools that provided visibility into large language model performance, cost tracking and security evaluations. The experience gave him firsthand knowledge of what enterprises actually need when trying to monitor AI systems in production.
‘Enterprises are under pressure to adopt AI faster, but most don’t have the tooling to manage it reliably,’ said Alimi. ‘Bigeye is building the foundation that will make AI adoption both safe and scalable for enterprises. I’m thrilled to help lead that effort.’
Eleanor Treharne-Jones, Bigeye’s CEO, framed the hire as a shift from planning to delivery: ‘Mohamed’s experience building real-time visibility into AI systems makes him an ideal partner as we move from concept to execution.’
Defining AI Trust in Practice
Bigeye’s vision for AI trust focuses on several key areas: tracking which data sources AI agents access, logging the prompts they process, monitoring their outputs for quality and providing forensic capabilities when problems occur. The company claims this goes beyond traditional observability by accounting for the unpredictable nature of AI decision-making.
The platform builds on Bigeye’s existing capabilities in AI-powered anomaly detection and data lineage tracking. The company already monitors over 70 data quality metrics automatically and provides column-level lineage across modern and legacy data stacks. For companies grappling with AI privacy concerns and compliance requirements, such comprehensive monitoring could prove essential.
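Bigeye’s detectors are proprietary, but the general mechanic of an automated quality metric can be illustrated in a few lines: track a statistic such as daily row count and flag values that drift too far from the historical baseline. The z-score approach and threshold below are generic assumptions for illustration, not Bigeye’s method.

```python
# One automated quality metric, reduced to its essence: a z-score check
# on daily row counts, flagging values far from the historical mean.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's value if it sits more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

daily_row_counts = [10_120, 9_980, 10_240, 10_050, 10_110]
print(is_anomalous(daily_row_counts, 4_300))   # True: likely a broken load
print(is_anomalous(daily_row_counts, 10_090))  # False: within normal range
```

A production system would run checks like this against dozens of metrics per table on a schedule, which is where automated coverage across 70-plus metrics becomes meaningful.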
The company hasn’t detailed exactly what the first release will include or how it will differ from the LLM observability tools already available from Datadog, Arize AI, Langfuse and others. Those competitors already offer tracing, prompt management and metrics for debugging LLM applications.
Reality Check: What Ships When
Bigeye’s AI Trust Platform is expected to launch later this year, but details about specific features and pricing remain limited. The company currently provides comprehensive data observability, including dependency monitoring, automated quality checks and integration with notification systems.
The challenge will be differentiating from established players. Datadog is already expanding in this space with features such as AI Agent Monitoring, LLM Experiments and centralised governance consoles. Other companies, such as Arize AI and Traceloop, offer analytics platforms built specifically for ML and LLM applications.
Bigeye’s advantage may lie in its enterprise customer base and existing data infrastructure integrations. Companies already using Bigeye for traditional data observability could find it easier to extend their monitoring to AI agents through the same platform rather than adopting separate tools.
The Market’s Direction
The push for AI observability reflects a broader recognition that chief data officers need new approaches to balance AI development with governance and compliance requirements. As AI agents become more autonomous and handle more sensitive tasks, the ability to monitor and audit their behaviour will become essential for enterprise adoption.
The need for AI transparency and model accountability is driving demand for these monitoring platforms. Unlike traditional software deployments, where companies can rely on established monitoring practices, enterprise AI integration demands oversight and governance practices that are still being defined.
Bigeye’s bet is that enterprises will eventually treat AI agent monitoring with the same importance they give to monitoring databases, APIs and applications. Whether this vision materialises will depend on how well the company can deliver tools that solve real problems rather than just checking compliance boxes.
The first release of the AI Trust Platform later this year should provide clearer evidence of whether Bigeye can turn its data observability expertise into practical AI governance tools that enterprises will actually use. For businesses still working through trust issues around AI deployment, having reliable monitoring tools could prove crucial for successful adoption.