
Reporting basis for this article
Named public sources are linked here so readers can inspect the original trail, not just the summary.
Why this matters: MuleSoft Agent Fabric adds new ways to keep AI agents in line with deterministic routing, centralized LLM governance, and clearer multi-agent oversight.
AI tools: what to know first
The current crop of AI tools for enterprises is shifting from single chatbots to orchestrated agent platforms. MuleSoft’s Agent Fabric sits in the second camp: it was built as a shared layer to register, view, interconnect, and govern agents across a company((REF:6),(REF:18)). That framing matters because once organizations run more than a handful of agents, the hard problem stops being “Can the model answer?” and becomes “Who’s allowed to do what, and who’s paying for it?”
Steps
Assess agent inventory, ownership, and who pays for each agent
Start by cataloging every deployed agent, who owns it, and which cost center pays for its compute. Ask blunt questions: who built it, what data it touches, and whether anyone’s tracking token spend. Doing this early reduces surprise audits and makes governance practical rather than theoretical.
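A simple way to make this step concrete is a structured inventory with a basic audit pass. This is an illustrative sketch only: the `AgentRecord` fields and `audit_inventory` helper are hypothetical, not a MuleSoft or Agent Fabric schema.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; field names are illustrative.
@dataclass
class AgentRecord:
    name: str
    owner: str                 # team accountable for the agent's behavior
    cost_center: str           # budget line paying for its compute
    data_sources: list = field(default_factory=list)
    tracks_token_spend: bool = False

def audit_inventory(records):
    """Return names of agents that fail a basic governance check:
    no owner, no cost center, or no token-spend tracking."""
    flagged = []
    for r in records:
        if not r.owner or not r.cost_center or not r.tracks_token_spend:
            flagged.append(r.name)
    return flagged

inventory = [
    AgentRecord("forecasting-bot", "FP&A", "CC-114", ["warehouse"], True),
    AgentRecord("legacy-chat", "", "", []),  # orphaned agent: no owner, no budget
]
print(audit_inventory(inventory))  # → ['legacy-chat']
```

Even a spreadsheet works at first; the point is that every agent answers the same blunt questions before governance tooling is layered on.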
Codify deterministic workflows alongside probabilistic model paths for repeatable results
Map the decision points where you want strict rules versus where model reasoning can add value. Use deterministic scripts to handle routine approvals or compliance checks, and reserve LLM inference for ambiguous, high-value tasks. This lowers compute costs and yields more predictable, auditable outputs over time.
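The split described above can be sketched as a small router: fixed rules catch the routine cases, and only the leftovers reach a model. The request shape, `APPROVAL_LIMIT`, and return values are assumptions for illustration; this is not Agent Script syntax.

```python
# Illustrative hybrid router, not Agent Script: deterministic paths first,
# LLM inference reserved for ambiguous, high-value cases.
APPROVAL_LIMIT = 500  # assumed policy threshold for auto-approval

def route_request(request):
    amount = request.get("amount", 0)
    # Deterministic path: routine approvals handled by a fixed rule.
    if request["type"] == "expense" and amount <= APPROVAL_LIMIT:
        return ("rules", "approved")
    # Deterministic path: compliance checks always follow the checklist.
    if request["type"] == "compliance_check":
        return ("rules", "run_checklist")
    # Probabilistic path: anything ambiguous goes to the model.
    return ("llm", "escalate_to_model")

print(route_request({"type": "expense", "amount": 120}))  # ('rules', 'approved')
print(route_request({"type": "contract_review"}))         # ('llm', 'escalate_to_model')
```

Every request the rule paths absorb is one fewer model call, which is exactly where the cost and predictability gains come from.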
FAQ: Practical questions teams ask about Agent Fabric, Agent Script, and AI Gateway governance
Q: When was Agent Fabric first announced?
A: Agent Fabric appeared in September 2025 inside the MuleSoft Anypoint Platform as a place to register and govern agents.
Q: What is Agent Script for Agent Broker, and when will it be generally available?
A: Agent Script is a deterministic scripting feature for Agent Broker that helps codify multi-agent workflows; it was in beta and slated for general availability in June 2026.
Q: How does LLM Governance in AI Gateway help teams?
A: AI Gateway’s LLM Governance gives a central view of token usage, costs, and data flows for third‑party models, which makes auditing and cost control much easier in practice.
AI tools: the numbers that change the answer
There’s a comfortable myth that more autonomous agents automatically mean more productivity. The MuleSoft stack quietly contradicts that: Agent Fabric added deterministic scripting plus automatic scanning to discover and register new agents((REF:2),(REF:17)). That combination tells you what actually happened in real deployments: sprawl first, then the need to catalog everything, then stricter controls. The lesson is clear: without governance-centric tools, multi-agent claims don’t hold up at scale.
AI tools: where the evidence is strongest
Many AI tools pitch fully autonomous agents as the end goal. Practitioners quoted around Agent Fabric argue otherwise: pure autonomy rarely survives production because enterprises need predictable outcomes and controlled handoffs between rules and reasoning[1]. Salesforce’s answer was Agent Script for Agent Broker, which lets teams codify workflows in multi-agent systems to keep outputs consistent and reliable((REF:8),(REF:19)). The reality: the winning tools mix determinism with LLM “intuition” instead of choosing one side.
AI tools: how the decision plays out
Imagine a company that started with a few chat-style assistants and ended up with a dozen overlapping agents. Requests bounced around, costs were opaque, and no one knew which bot owned which task. They brought those agents into Agent Fabric’s registry, using the platform to interconnect and govern them as a single system((REF:6),(REF:18)). Then they layered Agent Broker’s task routing on top((REF:7),(REF:20)). Same models, different control plane; suddenly, duplication dropped and the AI tools felt like one coordinated fabric instead of a zoo.
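The registry-plus-broker pattern in that scenario can be sketched in a few lines. The `register` and `broker` functions are hypothetical stand-ins, not MuleSoft APIs; the point is that routing by declared capability makes gaps visible instead of letting requests bounce.

```python
# Minimal registry-plus-broker sketch (names are illustrative, not
# Agent Fabric's API): the broker routes tasks to agents by capability.
registry = {}

def register(name, capabilities):
    """Add an agent and the task types it declares it can handle."""
    registry[name] = set(capabilities)

def broker(task):
    """Return the first registered agent able to handle the task."""
    for name, caps in registry.items():
        if task in caps:
            return name
    return None  # no agent owns this task: a governance gap, not a model failure

register("forecast-agent", ["forecast"])
register("report-agent", ["report", "summarize"])

print(broker("summarize"))  # 'report-agent'
print(broker("invoice"))    # None — surfaces the gap instead of bouncing requests
```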
AI tools: what it looks like in practice
Consider a hypothetical finance team rolling out assistants for forecasting, compliance checks, and reporting. Each used a different LLM provider. Costs spiked, and auditors started asking where prompts and outputs were flowing. Central LLM Governance in Salesforce’s AI Gateway gave them a single place to see token usage, spending, and data paths for all third‑party models((REF:4),(REF:5)). The insight is straightforward: for serious AI tools, observability over models is as non‑negotiable as logging is for transactions.
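The core of such a governance layer is an aggregated usage ledger keyed by agent and provider. This is a toy sketch in the spirit of gateway-style LLM governance, not AI Gateway’s actual data model; the provider names and per-token prices are made up.

```python
from collections import defaultdict

# Hypothetical price table; real provider pricing varies and changes.
PRICE_PER_1K_TOKENS = {"provider_a": 0.01, "provider_b": 0.03}

# Central ledger: one entry per (agent, provider) pair.
usage = defaultdict(lambda: {"tokens": 0, "cost": 0.0})

def record_call(agent, provider, tokens):
    """Attribute a model call's tokens and cost to the calling agent."""
    entry = usage[(agent, provider)]
    entry["tokens"] += tokens
    entry["cost"] += tokens / 1000 * PRICE_PER_1K_TOKENS[provider]

record_call("forecasting", "provider_a", 12_000)
record_call("compliance", "provider_b", 4_000)
record_call("forecasting", "provider_a", 3_000)

total_cost = sum(e["cost"] for e in usage.values())
print(round(total_cost, 2))  # 0.27
```

With every call attributed at one choke point, both the auditor’s question ("where did prompts flow?") and the CFO’s question ("who spent what?") have a single answer.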
AI tools: tradeoffs that change the choice

When you compare AI tools, the big fork is “probabilistic only” versus “probabilistic plus rules.” Agent Script for Agent Broker sits firmly in the hybrid camp. It lets teams steer parts of the decision tree through predetermined rules, instead of letting the LLM improvise every step((REF:8),(REF:9)). Those rule paths also consume fewer compute resources than pushing everything through a large model[2]. So you can chase clever autonomy, or you can choose tools that trade a bit of flexibility for cheaper, more repeatable behavior.
✓ Pros
- Deterministic controls like Agent Script provide repeatable behavior, which makes it easier for compliance and security teams to sign off on agent-driven workflows in sensitive domains.
- Encoding parts of workflows as predetermined rules can meaningfully reduce the number of LLM calls, lowering compute costs and improving latency for high-volume enterprise usage.
- Combining deterministic and probabilistic paths lets teams allocate reasoning power where it offers the most value, instead of wasting model capacity on simple, rule-friendly decisions.
- A registry-and-broker model centralizes visibility into who owns each agent and what it does, improving governance and helping prevent the quiet sprawl of duplicative assistants.
- Central LLM Governance inside AI Gateway gives organizations a single vantage point on token usage, costs, and data flows, which directly supports budgeting and risk management.
✗ Cons
- Relying heavily on deterministic rules can slow experimentation, because every change might require coordination between engineering, compliance, and business stakeholders before deployment.
- Features like Agent Script’s deterministic orchestration are still in beta until June 2026, so early adopters need to manage version changes and possible behavior shifts carefully.
- Centralizing agents and LLM traffic through registries, brokers, and gateways can introduce new single points of failure if high availability and resilience are not designed in from the start.
- Integrating legacy REST and SOAP APIs through MCP Bridge still demands work on authentication, rate limits, and error handling, which may expose brittle patterns in older back-end services.
- Governance layers that monitor all LLM usage can feel intrusive to some product teams, potentially creating friction or encouraging shadow AI projects if communication and incentives aren’t handled well.
AI tools: what could change next
Future enterprise AI tools are starting to look less like standalone apps and more like infrastructure: registries, brokers, and governance planes. Agent Fabric’s role-based registry and cross-domain routing((REF:6),(REF:7)), combined with AI Gateway’s LLM Governance((REF:4),(REF:5)), already resemble a control stack. Add in Model Context Protocol (MCP) features—bridges to legacy APIs and hosted MCPs((REF:11),(REF:12))—and you get a direction of travel: agent platforms that treat models, tools, and data sources as pluggable, governable resources.
AI tools: the decision points to check
If you were evaluating enterprise AI tools today, you’d start with three checks. First, does the platform offer an agent registry with clear governance, like Agent Fabric’s ability to register and interconnect agents centrally((REF:6),(REF:18))? Second, can you orchestrate multi‑agent flows with some deterministic scripting rather than pure LLM guessing[3]? Third, is there LLM Governance giving you unified visibility into token usage, costs, and data flows across providers((REF:4),(REF:5))? If any of those are missing, that tool is a short‑term experiment, not a foundation.
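Those three checks are easy to turn into a tiny scorecard. The check names, input shape, and verdict strings below are invented for illustration; adapt them to whatever evaluation rubric your team actually uses.

```python
# Toy evaluation scorecard mirroring the three checks in the text.
CHECKS = ("agent_registry", "deterministic_orchestration", "llm_governance")

def evaluate(platform):
    """Return a verdict plus the list of missing capabilities."""
    missing = [c for c in CHECKS if not platform.get(c)]
    verdict = "foundation" if not missing else "short-term experiment"
    return verdict, missing

print(evaluate({"agent_registry": True,
                "deterministic_orchestration": True,
                "llm_governance": False}))
# ('short-term experiment', ['llm_governance'])
```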
💡 Key Takeaways
- Treat agent platforms like shared infrastructure, not isolated apps. Once your organization runs more than a few agents, you’ll need registries, brokers, and governance just like you need identity and logging today.
- Blend deterministic rules with probabilistic reasoning instead of choosing one camp. Use Agent Script-style controls to anchor mission‑critical steps, while leaving open‑ended reasoning to LLMs where nuance and creativity actually matter.
- Centralize LLM oversight early with tools like AI Gateway’s LLM Governance. Having a single choke point for token usage, costs, and data flows prevents nasty surprises when pilots quietly scale into major spend.
- Plan for legacy integration as a first‑class problem. MCP Bridge and Informatica‑hosted MCPs highlight how much value is trapped in existing REST and SOAP APIs that agents can’t reach without a thoughtful access layer.
- Design projects around value realization, not just technical feasibility. The AWS P2V framing reminds you that without clear success metrics and organizational alignment, even well‑built multi‑agent systems stall between prototype and real business impact.
AI tools: risks and mistakes to avoid
One quiet failure mode of AI tools is their inability to reach old but key systems. Many enterprises still depend on thousands of REST and SOAP APIs that agents can’t talk to cleanly. Salesforce’s Model Context Protocol additions—especially MCP Bridge—are meant to let agents access those legacy interfaces more easily((REF:11),(REF:12),(REF:15)). Informatica‑hosted MCPs aim to simplify access to enterprise data and APIs[4]. The message is blunt: unless your AI stack can plug into yesterday’s services, tomorrow’s agents will stall on integration, not intelligence.
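Much of that integration work is unglamorous plumbing: rate limits and error normalization around a legacy endpoint. The sketch below illustrates the pattern with an invented `LegacyBridge` class; it is not MCP Bridge’s actual API, and the backend callable stands in for a real REST or SOAP client.

```python
import time

# Illustrative bridge wrapper around a legacy API: simple sliding-window
# rate limiting plus error normalization so agents get a uniform response.
class LegacyBridge:
    def __init__(self, backend, max_calls, per_seconds):
        self.backend = backend          # callable wrapping the legacy service
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.calls = []                 # timestamps of recent successful admits

    def invoke(self, payload):
        now = time.monotonic()
        # Keep only timestamps inside the current window.
        self.calls = [t for t in self.calls if now - t < self.per_seconds]
        if len(self.calls) >= self.max_calls:
            return {"ok": False, "error": "rate_limited"}
        self.calls.append(now)
        try:
            return {"ok": True, "data": self.backend(payload)}
        except Exception as exc:        # normalize legacy failures for the agent
            return {"ok": False, "error": str(exc)}

# Demo backend: pretend the legacy service upper-cases order IDs.
bridge = LegacyBridge(lambda p: p.upper(), max_calls=2, per_seconds=60)
print(bridge.invoke("order-123"))  # {'ok': True, 'data': 'ORDER-123'}
print(bridge.invoke("order-124"))  # {'ok': True, 'data': 'ORDER-124'}
print(bridge.invoke("order-125"))  # {'ok': False, 'error': 'rate_limited'}
```

Authentication would slot into the same wrapper; the design choice worth copying is that the agent never sees a raw stack trace or an unbounded call budget.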
Footnotes
[1] Introducing predetermined rules via Agent Script can steer decision-making away from fully probabilistic agents, reducing unpredictability in outcomes. (infoworld.com)
[2] Agent Script enables some decision-making to follow predetermined rules that require fewer computing resources than running a large language model. (infoworld.com)
[3] Agent Script for Agent Broker expands deterministic controls so developers can codify workflows in multi-agent systems to ensure consistent and reliable outputs. (infoworld.com)
[4] Salesforce says Informatica-hosted MCPs will simplify how agents interact with enterprise data and APIs. (infoworld.com)
Sources
This article brings together the following sources so readers can review the facts in context.
- MuleSoft Agent Fabric adds new ways to keep AI agents in line (RSS)
- Navigating the generative AI journey: The Path-to-Value framework from AWS (RSS)
- MuleSoft Agent Fabric adds new ways to keep AI agents in line | InfoWorld (WEB)
- Beyond the Chatbot: How Agentic Frameworks Change Network Engineering (WEB)