Designing the AI Agent Ecosystem for Trust: The Rise of the Net Fiduciary™ 

Net Fiduciaries


If your AI agent claims to be acting on your behalf, but its goals are optimised for someone else, whose interests is it really serving? 

As AI agents grow in autonomy, organisations are no longer just building tools. They’re building systems that act, decide, and sometimes initiate on behalf of users. That level of delegated power, especially in personal contexts, demands more than good intentions. It requires a new layer of governance, one that’s embedded directly into the design. 

And it starts with a shift in mindset: from feature builder to Net Fiduciary™. 

From Assistant to Actor: The Delegation Threshold 

In our last post, we explored the Agentic AI Autonomy Ladder – Levels A0 to A5 – and highlighted where trust becomes non-negotiable: Levels A2 to A4. 

This is where: 

  • A2 agents execute user commands 
  • A3 agents interpret context 
  • A4 agents pursue goals independently (within limits) 

These agents don’t just support users. They act for them. They manage tasks, predict needs, and interface with private systems: calendars, health data, finances, routines, contacts and intimate, real-time context. 

That’s the point where agency and power intersect. And that’s when governance becomes the product. 

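The ladder above can be made concrete in code. A minimal sketch, assuming a hypothetical `AutonomyLevel` enum (the names are illustrative, not part of any real SDK), shows how an agent platform might gate trust controls on exactly the A2–A4 band:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Agentic AI Autonomy Ladder, A0 (passive) to A5 (fully autonomous)."""
    A0 = 0  # passive tool, no delegation
    A1 = 1  # suggests actions only
    A2 = 2  # executes user commands
    A3 = 3  # interprets context
    A4 = 4  # pursues goals independently, within limits
    A5 = 5  # fully autonomous

def requires_trust_controls(level: AutonomyLevel) -> bool:
    """A2-A4 agents act on the user's behalf, so consent, transparency,
    and override paths become non-negotiable at these levels."""
    return AutonomyLevel.A2 <= level <= AutonomyLevel.A4

print(requires_trust_controls(AutonomyLevel.A1))  # False
print(requires_trust_controls(AutonomyLevel.A3))  # True
```

The point of the sketch is that the trust boundary is explicit and machine-checkable, not a matter of product copy.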

Without clear oversight, agents can misfire. Worse, they can act in ways that benefit the provider more than the user, especially when incentives are misaligned. This isn’t theoretical. It’s already playing out in the wild. 

So what’s the solution? 

We call it the Net Fiduciary™

A Net Fiduciary™ is an organisation that voluntarily takes on fiduciary-like responsibility when offering AI agents with delegated autonomy. 

It’s not a legal category (yet), but a design stance — a new form of digital stewardship for agentic systems. 

Being a Net Fiduciary™ means building agents that: 

  • Use user-first logic: decisions align with the individual’s goals, not the provider’s metrics 
  • Offer radical transparency: no black-box behaviour 
  • Include authentic consent and override mechanisms: escalation paths, human-in-the-loop control 
  • Embed privacy into infrastructure: edge-native processing, not retrofitted policies 

These aren’t optional design principles. They’re structural. They define whether the agent earns trust or erodes it. 
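The four principles above can be read as a pre-action gate. A minimal sketch, assuming hypothetical `ProposedAction` and `FiduciaryGate` types (illustrative only, not a real SDK interface), shows what "structural, not optional" could look like in practice:

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    serves_user_goal: bool     # user-first logic, not provider metrics
    explanation: str           # radical transparency: no black-box behaviour
    user_consented: bool       # authentic consent, with override paths
    processed_on_device: bool  # privacy embedded in infrastructure

@dataclass
class FiduciaryGate:
    """Checks a Net Fiduciary-style agent runs before every action."""
    audit_log: list = field(default_factory=list)

    def approve(self, action: ProposedAction) -> bool:
        checks = {
            "user_first": action.serves_user_goal,
            "transparent": bool(action.explanation),
            "consented": action.user_consented,
            "edge_native": action.processed_on_device,
        }
        self.audit_log.append((action.description, checks))
        # Any failed check blocks the action and escalates to a human.
        return all(checks.values())
```

A production system would enforce these properties at the infrastructure layer rather than as booleans, but the shape is the same: every delegated action passes through the same auditable checks, and failure escalates to a human rather than proceeding.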

Autonomous agents don’t just need better tech. They need better intentions, embedded into the governance layer. 

Why This Matters: Risk, Regulation, and Reputation 

The stakes are no longer hypothetical. Organisations that fail to govern their agents properly will face a trifecta of challenges: 

  • Reputational damage when systems overstep or misalign 
  • Regulatory scrutiny as global AI law catches up 
  • User rejection of automation they don’t understand or trust 

We’re already seeing the consequences of ungoverned autonomy. Take the Air Canada chatbot case, where a customer-service AI gave misleading information and the airline initially tried to deny liability (and blame the AI). Without governance, these failures aren’t bugs; they’re systemic risks that undermine trust at scale. 

But those who operate with fiduciary-grade intent, who design for the user’s best interest, gain real advantages: 

  • Resilient trust in a fragile, uncertain new AI market 
  • Faster paths to compliance and explainability 
  • Competitive differentiation in a sea of commoditised agent features 

Governance is no longer an ethical extra. It’s the indispensable layer that allows autonomy to scale. 

How DataSapien Enables Fiduciary-Grade Agent Design 

At DataSapien, we’ve built the Personal AI SDK to support organisations that want to act as Net Fiduciaries™, without starting from scratch. 

Our platform allows brands to: 

  1. Deploy agents on-device, preserving privacy and context 
  2. Surface operational and marketing insight without sharing raw data 
  3. Empower user control, overrides, and explainability 
  4. Share only what users intentionally choose, such as Zero-Party Data 
  5. Orchestrate a safe customer experience for better mutual outcomes 

This is not just infrastructure for intelligence. It’s infrastructure for trustable intelligence. 

Because we don’t just believe in better agents. We believe in better intentions, delivering better outcomes by design. 

Conclusion: Trust Infrastructure Powers Sapien-Centric Scale

Autonomous agents represent a leap in digital capability. But they’re also a test of digital responsibility, and responsibility flows from accountability.

As AI shifts from suggestive to agentic, organisations face a choice: extract value from users, or build systems that act in service of them for mutual benefit. 

AI autonomy isn’t a destination; it’s an ongoing process, one that requires us to design systems that know who they serve and how to stay accountable when they act.

DataSapien and the GliaNet Fiduciary Pledge

At DataSapien, we don’t just advocate for ethical autonomy, we’ve committed to it.

We’re proud to be among the first organisations to sign the GliaNet Fiduciary Pledge. It reflects our commitment to building AI systems that serve users first, respect privacy by default, and remain accountable in how they act.

We believe Agentic AI must be governed proactively, with flexible, well-designed systems built for trust, not deferred by promises of future oversight.

Join Us

If your organisation is building AI and AI Agents to serve humans, we encourage you to explore the GliaNet Pledge and consider signing it. The GliaNet Alliance team would love to hear from you at hello@glianetalliance.org.
