
Backend Ownership And Runtime Contract

This is a reference contract for the backend boundary of the SQLite migration.

The target runtime model is:

text
data_stream -> asset_selection -> trading_prompt

Product language calls this:

text
Data Stream Block -> Asset Block -> Trading Prompt Block

The frontend configures a pipeline graph. The backend compiles that graph into strategies, enabled assets, deterministic data read plans, prompt context, trading decisions, and ledger entries. Each trading prompt defines one strategy. At execution time, the backend runs that strategy once per due asset.

The workflow canvas may also show a visual Trading Clone Block. That block is clone-level layout/runtime metadata, not a fourth executable strategy node.

Product Decision

Move trading, market data, clone configuration, clone ownership, and prop trading state into the new backend server backed by SQLite.

Supabase can remain temporarily for authentication compatibility and legacy UI reads, but it should not be the target source of truth for:

  • Clone ownership and clone configuration.
  • Market memory.
  • Data stream/context configuration.
  • Pipeline graph configuration.
  • Pipeline layout metadata.
  • Trading decisions, orders, fills, positions, and ledger entries.

The frontend should talk to the backend API. It should not directly write clone/trading state to Supabase.

Backend-Owned Domains

text
users
  -> clones
  -> clone asset selection
  -> clone pipeline graph
  -> clone execution config
  -> decision runs and actions
  -> wallets, positions, orders, fills, ledger

Global market ingestion remains shared:

text
Hyperliquid / Polymarket / DefiLlama
  -> shared SQLite market memory
  -> clone-specific pipeline compilation

Clone/user state does not duplicate market data. It only stores which assets each clone can consider and how connected data streams should be resolved into prompt context.

The clone pipeline graph is the canonical configuration surface. Any simplified Basic/Custom API should be a projection over graph nodes, not an independent asset-selection or execution-config source of truth. Canvas-only graph layout, such as the visual Trading Clone Block position and prompt-to-clone route, is stored separately from executable graph nodes and edges.

Migration Policy

Do not migrate old clones wholesale.

Carry forward only:

  • Market momentum sensitivity.
  • Analysis timeframe.
  • Custom behavior prompt.

Everything else can reset into the new defaults:

  • Asset universe.
  • Data source/channel selections.
  • Pipeline graph node/edge layout.
  • Existing generated analysis logs.
  • Old analysis subscriptions.
  • Old active asset arrays.
  • Old paper trading state, unless explicitly needed for a specific user.

Naming Rules

Use asset in product/API language. Use symbol as the stable exchange-facing identifier.

Allowed:

ts
assetSelection
supportedAssets
enabledAssets
assetOverrides
symbol

Avoid:

ts
assetOrToken
tokenSelection
tokenOverrides

External upstream names can keep their native terminology, for example Polymarket clobTokenIds and DefiLlama tokenSymbol.

A. Asset Selection Format

Asset selection answers: which assets can the clone consider trading?

The stored user intent is category/subcategory plus explicit symbol overrides.

ts
type HyperliquidHighLevelCategory =
  | 'all'
  | 'perps'
  | 'spot'
  | 'crypto'
  | 'tradfi'
  | 'trending';

type AssetSelectionRules = {
  version: 1;
  cloneId: number;
  source: 'hyperliquid';
  highLevelCategories: HyperliquidHighLevelCategory[];
  subcategories: {
    crypto?: Array<'all' | 'ai' | 'defi' | 'gaming' | 'layer1' | 'layer2' | 'meme'>;
    tradfi?: Array<'all' | 'stocks' | 'indices' | 'commodities' | 'fx' | 'preipo'>;
    spot?: Array<'all' | 'USDC' | 'USDH' | 'USDT'>;
  };
  explicitlyEnabledSymbols: string[];
  explicitlyDisabledSymbols: string[];
};

Invariants:

  • Generic HIP-3 category exposure is out of scope.
  • Tradfi is included only through curated Hyperliquid Tradfi subcategories.
  • Unclassified dex:symbol builder markets are not selectable until assigned to a supported category.
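
As an illustration, the rules above could be stored like this (the clone id, subcategories, and symbols are hypothetical example values, not defaults):

```ts
// Hypothetical example values; the shape follows AssetSelectionRules above.
const exampleRules = {
  version: 1,
  cloneId: 42,                          // hypothetical clone id
  source: 'hyperliquid',
  highLevelCategories: ['crypto'],
  subcategories: { crypto: ['layer1', 'defi'] },
  explicitlyEnabledSymbols: ['DOGE'],   // tradable even though meme is not selected
  explicitlyDisabledSymbols: ['PEPE'],  // never tradable for this clone
};
```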

The backend expands rules into an effective trade universe:

ts
type EffectiveTradeUniverseAsset = {
  cloneId: number;
  symbol: string;
  displayName: string | null;
  source: 'hyperliquid';
  enabled: boolean;
  selectionSource: 'category_rule' | 'explicit_enable' | 'explicit_disable';
  categories: Array<{
    category: HyperliquidHighLevelCategory;
    subcategory: string;
  }>;
  supportsTrading: boolean;
  supportsContext: boolean;
  sortOrder: number | null;
};
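
A minimal sketch of that expansion, assuming a flat catalog lookup; the `CatalogAsset` shape and the `expandUniverse` helper are illustrative, not the actual backend code:

```ts
// Illustrative catalog row; the real backend reads this from SQLite market memory.
type CatalogAsset = { symbol: string; category: string; subcategory: string };

// Sketch: category rules enable assets first, then explicit enables/disables win.
function expandUniverse(
  rules: {
    subcategories: { crypto?: string[] };
    explicitlyEnabledSymbols: string[];
    explicitlyDisabledSymbols: string[];
  },
  catalog: CatalogAsset[],
): Array<{ symbol: string; enabled: boolean; selectionSource: string }> {
  const bySub = new Set(rules.subcategories.crypto ?? []);
  const out = new Map<string, { symbol: string; enabled: boolean; selectionSource: string }>();
  for (const a of catalog) {
    if (a.category === 'crypto' && (bySub.has('all') || bySub.has(a.subcategory))) {
      out.set(a.symbol, { symbol: a.symbol, enabled: true, selectionSource: 'category_rule' });
    }
  }
  for (const s of rules.explicitlyEnabledSymbols) {
    out.set(s, { symbol: s, enabled: true, selectionSource: 'explicit_enable' });
  }
  for (const s of rules.explicitlyDisabledSymbols) {
    out.set(s, { symbol: s, enabled: false, selectionSource: 'explicit_disable' });
  }
  return [...out.values()];
}
```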

B. Data Stream Format

Data stream nodes answer: what structured market memory is available to a trading prompt?

Use Fast, Balanced, and Deep for everyone. Basic users get generated defaults, Custom users choose broad source/profile presets, and Power users can open advanced channel controls.

ts
type ContextProfileName = 'fast' | 'balanced' | 'deep';

type SourcePolicy = 'off' | 'auto' | 'standard' | 'required';

type RetrievalProfileDefinition = {
  maxContextTokens: number;
  channels: Array<{
    source: 'hyperliquid' | 'polymarket' | 'defillama';
    channel: string;
    enabled: boolean;
    retrieval: {
      mode: 'latest' | 'timeseries' | 'summary' | 'signal';
      granularitySec: number;
      lookbackSec: number;
      maxPoints: number;
    };
    prompt: {
      include: boolean;
      policy?: SourcePolicy;
      priority: number;
      label?: string;
    };
  }>;
};

type DataStreamNode = {
  id: string;
  cloneId: number;
  kind: 'data_stream';
  name: string;
  profile: ContextProfileName;
  sourcePolicy: Record<'hyperliquid' | 'polymarket' | 'defillama', SourcePolicy>;
  profileDefinitions?: Partial<Record<ContextProfileName, RetrievalProfileDefinition>>;
  assetOverrides?: Array<{
    symbol: string;
    profile?: ContextProfileName;
    maxContextTokens?: number;
    sourcePolicy?: Partial<Record<'hyperliquid' | 'polymarket' | 'defillama', SourcePolicy>>;
    channels?: Array<Partial<RetrievalProfileDefinition['channels'][number]> & {
      source: 'hyperliquid' | 'polymarket' | 'defillama';
      channel: string;
    }>;
  }>;
};

Override precedence:

text
per-asset configuration
> asset/group profile assignment
> data stream default profile
> built-in profile definition
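
A sketch of how that precedence could resolve the profile for one symbol; `resolveProfile` and its fallback argument are illustrative, not the actual resolver:

```ts
// Precedence sketch: per-asset override > data stream default > built-in default.
// Field names follow the DataStreamNode shape above; the helper is illustrative.
function resolveProfile(
  node: { profile: string; assetOverrides?: Array<{ symbol: string; profile?: string }> },
  symbol: string,
  builtInDefault = 'balanced',
): string {
  const perAsset = node.assetOverrides?.find(o => o.symbol === symbol)?.profile;
  return perAsset ?? node.profile ?? builtInDefault;
}
```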

Current user-facing controls expose retrieval depth, prompt inclusion, channel switches, lookback, and granularity. Max datapoints and prompt budget are derived from those settings or managed internally.

Basic users do not see per-asset configurations. Custom users use individual Asset Blocks plus source/profile presets. Power users can additionally configure retrieval details at the channel level.

Retrieval is deterministic. There is no AI prompt for data retrieval.

C. Pipeline Graph Format

Power users can create multiple branches:

text
Deep Hyperliquid Stream -> BTC Asset -> Macro Trading Prompt
Fast Hyperliquid Stream -> DOGE Asset -> Momentum Trading Prompt
Polymarket Stream -> BTC Asset -> Event Trading Prompt

Basic and Custom modes save generated versions of the same graph model.

Node format:

ts
type ClonePipelineNode =
  | DataStreamNode
  | {
      id: string;
      cloneId: number;
      kind: 'asset_selection';
      name: string;
      rules: AssetSelectionRules;
    }
  | {
      id: string;
      cloneId: number;
      kind: 'trading_prompt';
      name: string;
      customBehaviorPrompt: string;
      decisionCadenceSec: number | null;
      maxAssetsPerRun: number;
      candidatePolicy: {
        includeOpenPositions: boolean;
        includeUserUniverse: boolean;
        deterministicPrefilter: boolean;
      };
      coverageMode: 'global' | 'global_with_asset_overrides' | 'asset_specific_only';
      assetOverrides?: Array<{
        symbol: string;
        prompt: Partial<{
          customBehaviorPrompt: string;
          decisionCadenceSec: number | null;
          maxAssetsPerRun: number;
        }>;
      }>;
    };

type ClonePipelineEdge = {
  id: string;
  cloneId: number;
  fromNodeId: string;
  toNodeId: string;
  edgeKind: 'provides_context_to' | 'selects_assets_for';
  priority: number;
};

type ClonePipelineLayout = {
  strategyClone?: {
    position?: { x: number; y: number };
    connectedPromptNodeIds?: string[];
  };
};

Legacy migration fields such as traderType, analysisTimeframe, and marketMomentumSensitivity may still exist in compatibility config objects, but they are not the current primary workflow controls.

Allowed v1 paths:

text
data_stream -> asset_selection
asset_selection -> trading_prompt

There is no separate insight/analysis creator or generated insight storage in v1. The trading prompt consumes compact context assembled directly from connected data streams and selected assets.

The visual Trading Clone Block does not appear in ClonePipelineNode and the prompt-to-clone line does not appear in ClonePipelineEdge. Those UI details live in ClonePipelineLayout.
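
The v1 path restriction can be sketched as a small edge validator; `validateEdges` is an assumed helper name, not the actual backend validator:

```ts
// Only data_stream -> asset_selection and asset_selection -> trading_prompt
// edges are legal in v1. Returns human-readable errors; empty means valid.
type NodeKind = 'data_stream' | 'asset_selection' | 'trading_prompt';

function validateEdges(
  kinds: Record<string, NodeKind>,
  edges: Array<{ fromNodeId: string; toNodeId: string }>,
): string[] {
  const allowed: Array<[NodeKind, NodeKind]> = [
    ['data_stream', 'asset_selection'],
    ['asset_selection', 'trading_prompt'],
  ];
  const errors: string[] = [];
  for (const e of edges) {
    const from = kinds[e.fromNodeId];
    const to = kinds[e.toNodeId];
    if (!from || !to) {
      errors.push(`edge references unknown node: ${e.fromNodeId} -> ${e.toNodeId}`);
    } else if (!allowed.some(([a, b]) => a === from && b === to)) {
      errors.push(`illegal edge kind: ${from} -> ${to}`);
    }
  }
  return errors;
}
```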

D. Trading Execution Format

Product UI defers strategy, risk, and execution tuning to Trading Prompt Blocks. Asset Blocks only expose asset identity, enabled/paused state, latest-data review, and exchange links. Existing asset_selection execution defaults are compatibility storage until the schema migrates fully to prompt-owned controls. Clone-level model and trading on/off live on the clone and are surfaced in the workflow Trading Clone Block.

ts
type CloneExecutionConfig = {
  version: 1;
  cloneId: number;
  customBehaviorPrompt: string;
  decisionCadenceSec: number | null;
  maxAssetsPerRun: number;
  candidatePolicy: {
    includeOpenPositions: boolean;
    includeUserUniverse: boolean;
    deterministicPrefilter: boolean;
  };
};

Compatibility-only migration fields can be carried forward if needed, but new workflow controls should not introduce trader archetype or market momentum sensitivity as primary knobs.

Trading prompt coverage modes:

text
global
  One prompt applies to all enabled assets.

global_with_asset_overrides
  Global prompt applies by default.
  Specific assets can use their own prompt settings.

asset_specific_only
  No global fallback.
  Only assets explicitly listed in per-asset prompt configurations are eligible for this prompt node.

Compiler rule:

text
specific asset prompt > global prompt

If two asset-specific prompts claim the same asset inside the same compiled branch, reject the graph as invalid in v1.
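
That rejection rule can be sketched as a duplicate-claim scan; `findDuplicateClaims` is an illustrative helper, not the actual compiler:

```ts
// Returns symbols claimed by more than one asset-specific prompt node in a branch.
// A non-empty result means the graph is invalid under the v1 compiler rule.
function findDuplicateClaims(
  promptNodes: Array<{ id: string; assetOverrides?: Array<{ symbol: string }> }>,
): string[] {
  const claimedBy = new Map<string, string>();
  const duplicates: string[] = [];
  for (const node of promptNodes) {
    for (const o of node.assetOverrides ?? []) {
      if (claimedBy.has(o.symbol) && claimedBy.get(o.symbol) !== node.id) {
        duplicates.push(o.symbol);
      } else {
        claimedBy.set(o.symbol, node.id);
      }
    }
  }
  return duplicates;
}
```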

Scheduled execution defaults to every 5 minutes per enabled clone/branch/asset; a null decisionCadenceSec means the 5-minute default applies:

text
Default: every 5 minutes
Optional slower prompt cadences: 15 minutes, 30 minutes, 2 hours, 6 hours
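
The cadence-due check can be sketched as follows; `isCadenceDue` is an assumed helper name, and timestamps are epoch milliseconds as elsewhere in this document:

```ts
// An asset is due when decisionCadenceSec (or the 5-minute default) has
// elapsed since its last run; a never-run asset is due immediately.
const DEFAULT_CADENCE_SEC = 5 * 60;

function isCadenceDue(
  nowMs: number,
  lastRunMs: number | null,
  decisionCadenceSec: number | null,
): boolean {
  const cadenceSec = decisionCadenceSec ?? DEFAULT_CADENCE_SEC;
  if (lastRunMs === null) return true;            // never run: due immediately
  return nowMs - lastRunMs >= cadenceSec * 1000;  // epoch-ms comparison
}
```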

Decision run:

ts
type CloneDecisionRun = {
  id: string;
  cloneId: number;
  branchId: string;
  symbol: string;
  dataStreamNodeId?: string;
  assetSelectionNodeId?: string;
  tradingPromptNodeId?: string;
  status: 'queued' | 'running' | 'completed' | 'failed' | 'skipped';
  trigger: 'schedule' | 'manual' | 'position_event';
  scheduledFor: number;
  startedAt?: number;
  completedAt?: number;
  candidateSymbols: string[];
  model: string;
  promptR2Key?: string;
  responseR2Key?: string;
  contextR2Key?: string;
  errorMessage?: string;
  metadata?: Record<string, unknown>;
};

type CloneDecisionAction = {
  id: string;
  runId: string;
  cloneId: number;
  symbol: string;
  action:
    | 'hold'
    | 'buy'
    | 'sell'
    | 'open_long'
    | 'open_short'
    | 'close_long'
    | 'close_short'
    | 'reduce_long'
    | 'reduce_short'
    | 'cancel_orders';
  status: 'proposed' | 'validated' | 'rejected' | 'queued' | 'executed' | 'failed' | 'skipped';
  confidence: number;
  quantity?: number;
  notionalUsd?: number;
  limitPrice?: number;
  reasonSummary?: string;
  reasonR2Key?: string;
  orderId?: string;
  errorMessage?: string;
  metadata?: Record<string, unknown>;
};

The model proposes actions. Code validates and executes them. SQLite keeps compact run/action indexes and one-sentence reason summaries; full prompts, full contexts, model responses, and long-form reasons go to R2. Decision timestamps are epoch milliseconds.

Execution loop:

text
decision worker tick
  -> list active clones
  -> resolve effective context per clone
  -> expand branches into assets
  -> skip assets that are not cadence-due
  -> create clone_decision_runs row
  -> call pluggable decision engine
  -> validate proposed actions against branch candidates, asset enabled state, and configured execution controls
  -> create dry-run order / execution ledger rows
  -> replace clone_decision_actions rows
  -> mark run completed, failed, or skipped
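
The loop above can be sketched as a tick skeleton; every repository and engine interface below is an assumption for illustration, not the real worker contract:

```ts
// Simplified worker tick mirroring the steps above. Validation, dry-run order
// creation, and action persistence are elided; only the run lifecycle is shown.
type Engine = { decide(ctx: unknown): Promise<Array<{ symbol: string; action: string }>> };

async function tick(deps: {
  listActiveClones(): Promise<number[]>;
  resolveContext(cloneId: number): Promise<{ branches: Array<{ id: string; dueSymbols: string[] }> }>;
  createRun(cloneId: number, branchId: string, symbol: string): Promise<string>;
  engine: Engine;
  finishRun(runId: string, status: 'completed' | 'failed'): Promise<void>;
}): Promise<void> {
  for (const cloneId of await deps.listActiveClones()) {
    const ctx = await deps.resolveContext(cloneId);
    for (const branch of ctx.branches) {
      for (const symbol of branch.dueSymbols) {   // cadence filtering already applied
        const runId = await deps.createRun(cloneId, branch.id, symbol);
        try {
          await deps.engine.decide({ cloneId, branchId: branch.id, symbol });
          await deps.finishRun(runId, 'completed');
        } catch {
          await deps.finishRun(runId, 'failed');
        }
      }
    }
  }
}
```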

The current worker contract supports two engines:

  • DECISION_MODEL_PROVIDER=noop: safe local/dev mode that writes hold actions without model credentials.
  • DECISION_MODEL_PROVIDER=anthropic: dry-run Claude mode that assembles the effective context, calls Anthropic's Messages API, parses strict JSON actions, validates them, and stores the result without placing trades.

Order execution remains a separate handoff after safety gates and ledger tables are implemented.

The first execution layer is dry-run only:

  • clone_execution_accounts: clone execution account state, initially dry_run.
  • clone_execution_orders: validated executable actions represented as dry-run orders.
  • clone_execution_fills: future paper/real fills.
  • clone_execution_positions: future paper/real positions.
  • clone_execution_ledger: audit events for validation, rejection, orders, fills, and position updates.

Action status is the more precise execution state: a decision run can complete while an individual action is still only validated, or is rejected.

E. Effective Context Contract

Internal resolver input:

ts
type EffectiveContextRequest = {
  cloneId: number;
  userId?: string;
  asOfMs?: number;
  trigger: 'schedule' | 'manual' | 'preview' | 'position_event';
  symbols?: string[];
  pipeline?: ClonePipelineConfig;
};

Resolved packet:

ts
type EffectiveDecisionContext = {
  version: 1;
  asOfMs: number;
  trigger: 'schedule' | 'manual' | 'preview' | 'position_event';
  clone: {
    id: number;
    ownerUserId: string | null;
    model: string | null;
  };
  pipeline: {
    nodes: ClonePipelineNode[];
    edges: ClonePipelineEdge[];
    activeTradingPromptNodeIds: string[];
  };
  compiled: CompiledPipeline;
  branches: Array<{
    id: string;
    dataStreamNodeId: string;
    assetSelectionNodeId: string;
    tradingPromptNodeId: string;
    effectiveUniverse: EffectiveTradeUniverseAsset[];
    candidateSymbols: string[];
    readPlan: Array<{
      symbol: string;
      dataStreamNodeId: string;
      profile: ContextProfileName;
      source: 'hyperliquid' | 'polymarket' | 'defillama';
      channel: string;
      mode: 'latest' | 'timeseries' | 'summary' | 'signal';
      granularitySec: number;
      lookbackSec: number;
      maxPoints: number;
    }>;
    memoryReads: Array<{
      key: string; // `${symbol}:${source}:${channel}`
      recordCount: number;
      records: unknown[];
    }>;
    rawMemories: Record<string, unknown[]>;
    prompt: {
      systemSections: string[];
      userSections: string[];
      estimatedPromptTokens: number;
      contextR2Key?: string;
    };
    warnings: string[];
  }>;
  warnings: string[];
};

Resolution order:

  1. Verify user/clone ownership in backend SQLite.
  2. Load pipeline nodes and edges.
  3. Validate graph shape.
  4. Resolve asset selection rules from connected asset selection nodes.
  5. Expand rules into effective trade universes per trading prompt node.
  6. Build candidate symbols from open positions, enabled universe, and deterministic prefilters.
  7. Cap candidates by maxAssetsPerRun.
  8. Resolve connected data streams into per-symbol read plans.
  9. Fetch structured raw memories from SQLite.
  10. Assemble prompt sections for each active trading prompt node.
  11. Load wallet, positions, orders, fills, and ledger state when the execution storage tables exist.
  12. Persist large context packet to R2 if running a real decision.
  13. Return packet to the decision engine.

The scheduler evaluates cadence per clone/branch/asset. This keeps global prompt defaults simple while allowing power-user per-asset prompt overrides to run on their own cadence.

API Surface

Minimum backend API groups:

text
GET/PUT    /api/v1/clones/:cloneId
GET/PUT    /api/v1/clones/:cloneId/pipeline
POST       /api/v1/clones/:cloneId/pipeline/preview
POST       /api/v1/clones/:cloneId/assets/:symbol/latest-data
POST       /api/v1/clones/:cloneId/decision-runs
GET        /api/v1/clones/:cloneId/decision-runs
GET        /api/v1/clones/:cloneId/ledger
GET        /api/v1/ingestion/health

Auth should be checked at the backend API boundary. UI routes should not bypass it.

The current development adapter accepts a backend-owned user id through x-user-id. This is temporary: production auth should resolve Privy, Supabase, wallet, email, or other providers into users.id through user_auth_identities, then run ownership checks against clones.owner_user_id.

PUT /api/v1/clones/:cloneId is the migration bridge for the current frontend. It upserts the clone row into SQLite using the authenticated backend user id, model, and active/paused status before the workflow loads or saves pipeline graph state. Once clone creation fully moves into services/market-backend, this remains the clone profile update endpoint.

The pipeline endpoints own the canonical configuration. Simple Basic/Custom UX should read and write generated graph nodes instead of maintaining separate asset-selection or execution-config sources of truth.

GET /api/v1/clones/:cloneId/decision-runs returns the clone's stored decision history with compact action rows attached to each run. Supported query filters are limit, status, symbol, branchId, scheduledFrom, and scheduledTo. This powers the workflow Trading History panel.

POST /api/v1/clones/:cloneId/assets/:symbol/latest-data returns the effective structured memory reads for one asset using the submitted or persisted pipeline. This powers the workflow Latest Data drawer and should match the same context resolver path used by trading decisions.
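
A hypothetical client call against this endpoint; the relative base URL and the development x-user-id header follow the adapter described above, and `fetchLatestData` is an illustrative name, not an actual frontend helper:

```ts
// Sketch of calling the latest-data endpoint from the workflow UI.
// An empty body uses the persisted pipeline; an unsaved pipeline could be submitted instead.
async function fetchLatestData(cloneId: number, symbol: string, userId: string) {
  const res = await fetch(`/api/v1/clones/${cloneId}/assets/${symbol}/latest-data`, {
    method: 'POST',
    headers: { 'content-type': 'application/json', 'x-user-id': userId }, // dev-only auth header
    body: JSON.stringify({}),
  });
  if (!res.ok) throw new Error(`latest-data failed: ${res.status}`);
  return res.json();
}
```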

Implementation Order

  1. Pipeline node/edge repository and graph validator.
  2. Default pipeline generation for Basic mode.
  3. Pipeline compiler and preview endpoint.
  4. Effective context resolver with structured SQLite memory reads.
  5. Latest-data viewer endpoint.
  6. Decision run/action tables and dry-run execution audit storage.
  7. Frontend workflow migration to the new backend API.