A decision layer between Agent actions and onchain execution. Define boundaries. Enforce policy. Surface risk before it moves.
AI Agents can already sign transactions, call contracts, and move funds. The part that's missing is the layer that decides whether they should. Ordinel makes that layer real.
Teams rely on spending limits, whitelists, and manual approvals. Each sounds reasonable. Each fails in practice.
A system can stay inside its cap while consistently moving in the wrong direction. Limits do not capture why, when, or to whom.
Business changes. Counterparties change. Contract states change. A static list creates the illusion of control without delivering real control.
Without trigger reasons, context, or consequence analysis, humans stop being the final defense and become a bottleneck.
Turn verbal instructions and scattered approvals into executable rules. Spending caps, target trust tiers, escalation sensitivity, time conditions, action scope.
Decompose permissions beyond "can pay." What actions, what targets, what amounts, what time windows, under what preconditions an Agent is eligible to act.
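Decomposed permission along these dimensions can be pictured as a small data structure. The sketch below is illustrative only: the field names (`action_type`, `target_tier`, and so on) are assumptions for the example, not Ordinel's actual schema.

```python
from dataclasses import dataclass
from datetime import time

@dataclass(frozen=True)
class PermissionGrant:
    """A hypothetical decomposed grant: what, to whom, how much, when."""
    action_type: str          # e.g. "transfer", "contract_call"
    target_tier: str          # trust tier the counterparty must hold
    max_amount: float         # per-action cap in the settlement asset
    window_start: time        # earliest time of day the action may run
    window_end: time          # latest time of day the action may run
    preconditions: tuple = () # checks that must hold before eligibility

    def covers(self, action_type: str, target_tier: str,
               amount: float, at: time) -> bool:
        # An action is eligible only when every dimension matches;
        # "can pay" alone never appears as a single switch.
        return (
            action_type == self.action_type
            and target_tier == self.target_tier
            and amount <= self.max_amount
            and self.window_start <= at <= self.window_end
        )
```

The point of the shape is that no single field grants anything on its own: eligibility is the conjunction of all of them.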
Evaluate each action against the full combination of conditions. Amounts, targets, time, scope, upstream confidence, and historical patterns interact to produce a judgment.
Four possible states: auto-approved, escalated with full context, blocked outright, or halted across an entire permission set. Every outcome traceable.
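The four states and the order in which they are checked can be sketched as follows. This is a minimal illustration under assumed inputs (`hard_cap`, `escalation_threshold`, a precomputed risk score), not Ordinel's actual decision logic.

```python
from enum import Enum

class Outcome(Enum):
    AUTO_APPROVED = "auto_approved"
    ESCALATED = "escalated"      # routed to a human with full context
    BLOCKED = "blocked"          # violates a hard rule
    HALTED = "halted"            # entire permission set suspended

def evaluate(action: dict, policy: dict, risk_score: float,
             set_halted: bool = False):
    """Return an outcome plus the trigger reasons that make it traceable."""
    if set_halted:
        return Outcome.HALTED, ["permission set halted"]
    if action["amount"] > policy["hard_cap"]:
        return Outcome.BLOCKED, ["exceeds hard cap"]
    if risk_score >= policy["escalation_threshold"]:
        return Outcome.ESCALATED, [f"risk score {risk_score} above threshold"]
    return Outcome.AUTO_APPROVED, []
```

Note that every branch returns its reasons alongside the outcome; traceability is part of the return type, not an afterthought.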
Risk decomposed into sources: abnormal targets, stale policies, frequency spikes, context conflicts. Each source maps to a corrective action.
Every decision leaves a record: which rules matched, which layer acted, who overrode, why. The first audience is the team itself.
The real danger is not that a system cannot move. It is that it keeps moving after the boundaries have become unclear.
An action should never execute first and only afterward be explained as reasonable.
Permissions as a combination of action type, target category, time window, and preconditions.
The switching point between human and system must be designed in, not improvised.
Without records, a control layer is a black box.
When facing uncertainty, "continue execution" is not the default answer.
Control exists so that automation can keep running inside real teams.
Without a control layer, many functions can technically run but cannot be safely turned on for real customers.
Vendor settlements, budget allocations, operational payments. Finance will not accept "automate now, investigate later."
Turn what currently lives in habits and verbal agreements into rules the system can enforce.
The hardest risks are gradual loosening that looks normal. Prompt injection, address poisoning, policy drift.
Policy Builder, target categories, risk preview.
Escalation routing, operator inbox, simulation.
Agent SDK, policy governance, certification access.
Multi-wallet orchestration, certified templates.
A control layer between AI Agent actions and onchain execution. It evaluates every wallet action against organizational policy before it is allowed to execute, providing boundary enforcement, risk assessment, and escalation workflows.
A wallet holds permissions and signs transactions. A multisig distributes confirmation authority. Ordinel sits in front of both. It decides whether an action should reach the wallet at all, based on policy, risk, target trust, and time conditions.
Teams connecting AI Agents to real financial actions: Agent builders shipping execution products, treasury and operations teams automating payments, and protocol teams expressing governance as system behavior.
Actions can be auto-approved inside established boundaries, escalated to a human with full context and trigger reasons, or blocked outright if they violate hard rules. Every outcome is traceable.
ODL coordinates value, governance, and ecosystem incentives once Ordinel grows into a shared rules layer. It enters the picture after the product, not before it. Total supply: 750,000,000,000 ODL.
Phase 01 launches Q2 2026 with Policy Builder, target categories, and risk preview. Request early access now if your team is connecting Agents to real wallet permissions.
Over the past two years, more and more teams have started connecting AI Agents to real wallets and real onchain permissions. At first, the appeal was efficiency: automatic information gathering, automatic task execution, and the automatic completion of sequences that used to require constant human attention. But once an Agent starts touching money, the nature of the problem changes. The hard question is no longer whether the system can run on its own. It is who defines how far it is allowed to go, when it must stop, and who takes over when something deviates.
Ordinel is built out of this reality. It is not trying to become a flashier wallet, nor is it trying to rebrand traditional security processes onchain. What it adds is a clear control layer between an Agent's requested action and actual onchain execution. An Agent can submit a request, but a request is not automatically approved. Before execution, the system determines what type of action this is, who it is targeting, what value range it falls into, whether it exceeds the current policy, whether it has entered a risk band, and whether it needs to be escalated to a human.
If Ordinel had to be summarized in one sentence, it is not about teaching Agents how to spend money. It is about giving organizations a way to govern how Agents are allowed to spend money.
Connecting an Agent to a wallet may look like the addition of one more tool, but in practice it moves a system from the stage of "doing tasks" to the stage of "moving money." In the first stage, errors usually mean poor output or a broken workflow. In the second stage, errors become something else entirely.
What makes the problem harder is that Agent errors do not always look absurd. Often they look reasonable. The amount may be small. The destination address may even be familiar. The hardest requests are not the ones that should obviously be blocked. They are the ones in which each individual step is only slightly off, while the full sequence is already out of bounds.
Bounded autonomy is the answer to this misalignment. It does not argue that Agents should be fully locked down, because then automation loses its point. It also does not argue that humans should be removed from the loop entirely, because then loss of control is only a matter of time.
Ordinel is not designed for ordinary retail users, nor is it a safety assistant for people new to onchain wallets. It serves teams that are already using automation, or are preparing to connect automation to real financial actions.
The first core user group is Agent builders. The second is treasury, operations, and finance teams. The third is protocol security, infrastructure, and governance teams. The common feature across these cases is that automation is valuable, but only if control arrives before the action itself.
Ordinel is not aimed at an imagined future market. It is aimed at friction that is already happening now.
Ordinel is better understood as a control plane inserted between an Agent's requested action and real execution. It is taking rules that currently live across documents, group chats, approval forms, operator habit, and security common sense, and turning them into a decision layer that can run continuously.
The most expensive failures are often not caused by the complete absence of rules. They are caused by rules that exist only in fragments. Everyone knows part of the story, but the system knows none of it.
Ordinel's positioning can be stated directly: it is not here to help Agents gain more freedom. It is here to help organizations reclaim the right to define the scope of that freedom.
Policy must come before execution. An action should never execute first and only afterward be explained as reasonable. Least privilege means decomposing permission itself: action type, target type, time window, and preconditions.
Humans must be able to take over. Judgment must leave a trace. When the system faces uncertainty, "continue execution" should not be the default answer. The control plane must stay close to business reality.
None of these principles is especially flashy or novel. But together they determine whether Ordinel looks like a system capable of carrying responsibility over time.
Ordinel intercepts the request and does not rush to execute. Instead, it breaks the action back down. What type of action is it? Who is the target? What value or permission range does it fall into? Which policy version is active?
Structurally, Ordinel is best understood as a judgment chain with a defined order. The request entry layer receives the action. The policy model answers whether this type of action is eligible. The risk layer asks how close it is to the edge. Only actions that pass this chain are handed to a wallet or external executor.
Execution is never the starting point. It is the result of judgment.
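The ordering of that judgment chain can be sketched as a small pipeline. The stage interfaces here (`is_eligible`, `assess_risk`, `execute`) are hypothetical stand-ins for Ordinel's layers, shown only to make the ordering concrete.

```python
def judgment_chain(action: dict, is_eligible, assess_risk, execute) -> str:
    # Stage 1: the request entry layer receives the action (the call itself).
    # Stage 2: the policy model answers whether this type of action is eligible.
    if not is_eligible(action):
        return "blocked"
    # Stage 3: the risk layer asks how close the action is to the edge.
    if assess_risk(action) == "escalate":
        return "escalated"
    # Stage 4: only an action that passed every prior stage is handed
    # to a wallet or external executor.
    execute(action)
    return "executed"
```

The executor appears only at the end of the function, which is the structural point: execution is an output of the chain, never its entry point.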
When wallet systems talk about permission, the model is usually simple: either grant it or do not. Once the executor becomes an Agent, broad permissions quickly show their limits. Agents do not naturally understand "this is technically possible, but it should not happen now."
The real problem the wallet permission model is trying to solve is not whether an action can technically be sent. It is whether the organization can accurately express what it actually intends to allow before the action is sent.
The permission model defines what the boundary looks like. The policy engine determines how the system uses that boundary to judge the action in front of it. Ordinel is not trying to build a switchboard that looks good in a demo. It is building a mechanism that continuously performs first-line judgment on the organization's behalf.
The value of the policy engine is that the organization no longer has to reopen the same meeting every time a new action appears.
The point at which a control layer starts to show its real value is not when the rules are written. It is when the system has to produce an actual decision on a real action.
The real job of the decision mechanism is not to make the system look decisive. It is to make every decisive outcome traceable to its source. Auto-approval, blocking, escalation, and override should all emerge naturally from the same control logic under different conditions.
Risk stops being a narrow question of whether an attack exists. More often, risk comes from drift in ordinary use: an old rule that remains active, a formerly trusted target whose state has changed, or a set of actions that each look acceptable on their own but no longer look acceptable in combination.
To catch those issues, a permission model and decision flow are not enough. Something in the middle has to keep working continuously, and it has to be considered together with template governance.
If your team is connecting AI Agents to real wallet permissions, we'd like to hear from you.