# Backend Contract
This page is the compact engineering review contract for data sources, backend formats, and runtime execution.
Detailed references:
- Clash Of Clones Product Spec
- Data Sources
- SQLite + R2 Market Memory Plan
- Backend Ownership And Runtime Contract
- Data Source API And UX Spec
## Implementation Status
Implemented in `services/market-backend`:
- SQLite migrations and repository layer.
- Hyperliquid, Polymarket, and DefiLlama ingestion/storage.
- Hyperliquid asset taxonomy and category-based selection.
- Clone pipeline graph persistence.
- Workflow graph layout persistence for the visual Trading Clone Block.
- Pipeline API and preview endpoint.
- Asset latest-data endpoint for workflow data review.
- Effective context resolver that reads structured SQLite memories.
- Decision run/action tables and repository.
- Decision worker loop with cadence checks and a pluggable engine interface.
- Safe local no-op hold engine.
- Model-backed dry-run decision engine path.
- Dry-run execution safety gates.
- Execution audit tables for dry-run orders and ledger events.
Not implemented yet:
- R2 adapter for full prompt/context/response/reason artifacts.
- Paper fill simulation and synthetic account/position mutation.
- Audition, Arena, race, ticket, settlement, promotion, and retirement tables.
- Live order adapter and personal-wallet execution boundary.
- Full frontend controls polish against the new graph builder.
## Data Sources And Storage
Market data is global, not per user. Clone strategies filter shared market memory instead of duplicating source data per clone.
### Source Coverage
| Source | Stored context | Role |
|---|---|---|
| Hyperliquid | Mid prices, OHLC candles, liquidity profile, funding pressure, volatility profile, positioning pressure | Trade universe and asset-level market context |
| Polymarket | Market discovery, immediate market, later market, recently closed markets | Related event/context layer |
| DefiLlama | Capital base, capital flows, economic throughput | Macro/market-regime layer |
Hyperliquid channel origin:
```text
source-backed:
- mid prices: allMids WebSocket stream bucketed at 5-second granularity
- candles: 1-minute candle WebSocket subscriptions
- liquidity profile: l2Book REST snapshots
- funding pressure: metaAndAssetCtxs + predictedFundings REST polls
derived:
- volatility profile: local rollup from 1-minute candles
- positioning pressure: local composite from price momentum, funding, and liquidity imbalance
```

### Retention And Granularity
Suggested Hyperliquid price hot retention:
```text
last 30 minutes: 5-second mid-price buckets
last 24 hours: 1-minute source candles
last 7 days: 1-hour granularity
older than 7 days: delete or archive to R2
```

Feature families should stay compact:
- Store derived profile rows in SQLite.
- Avoid retaining full raw orderbook/update payloads in SQLite.
- Archive raw payloads to R2 only when useful for audit or replay.
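The retention tiers above can be sketched as a pure age-to-tier mapping. This is a hypothetical sketch: the type and function names are illustrative and not part of `services/market-backend`.

```typescript
// Hypothetical sketch of the suggested Hyperliquid price hot-retention tiers.
type RetentionTier =
  | { keep: true; granularitySec: number }
  | { keep: false };

// Map a row's age (in seconds) to its retention tier.
function hyperliquidPriceRetention(ageSec: number): RetentionTier {
  const MINUTE = 60;
  const HOUR = 3600;
  const DAY = 24 * HOUR;
  if (ageSec <= 30 * MINUTE) return { keep: true, granularitySec: 5 };      // 5s mid-price buckets
  if (ageSec <= 1 * DAY) return { keep: true, granularitySec: MINUTE };     // 1m source candles
  if (ageSec <= 7 * DAY) return { keep: true, granularitySec: HOUR };       // 1h granularity
  return { keep: false };                                                   // delete or archive to R2
}
```

A periodic pruning job could use this mapping to decide which SQLite rows to downsample, delete, or archive to R2.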
Working assumption:
```text
7-day hot SQLite data is comfortably manageable on the initial backend box.
```

### SQLite Vs R2
SQLite stores compact queryable rows:
- Structured observations.
- Feature/profile rows.
- Clone and strategy graph configuration.
- Decision/action indexes.
- Orders, fills, positions, and ledger rows.
- R2 object keys.
R2 stores large immutable payloads:
- Full prompt/context/response payloads.
- Large reason/debug traces.
- Raw source payloads when useful for audit/replay.
- Backups or cold archives.
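One minimal sketch of the split: SQLite keeps a compact queryable row, and each large payload is addressed by an R2 object key stored on that row. The key scheme and row shape here are illustrative assumptions, not the real schema.

```typescript
// Compact SQLite row referencing a large immutable payload in R2 (sketch).
type DecisionRunRow = {
  runId: string;
  cloneId: number;
  asset: string;
  action: string;      // compact, queryable decision summary
  promptR2Key: string; // pointer to the full prompt/context blob in R2
};

// Hypothetical key scheme: large payloads live in R2, SQLite stores only the key.
function promptR2Key(cloneId: number, runId: string): string {
  return `decisions/clone-${cloneId}/${runId}/prompt.json`;
}

function buildRunRow(
  cloneId: number,
  runId: string,
  asset: string,
  action: string,
): DecisionRunRow {
  return { runId, cloneId, asset, action, promptR2Key: promptR2Key(cloneId, runId) };
}
```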
## Backend Data Formats
The canonical backend configuration shape is:
```text
data_stream -> asset_selection -> trading_prompt
```

Product language maps to:
```text
Data Stream Block -> Asset Block -> Trading Prompt Block
```

The workflow canvas can also show a visual Trading Clone Block:
```text
Data Stream Block -> Asset Block -> Trading Prompt Block -> Trading Clone Block
```

The Trading Clone Block is a UI/runtime terminal for clone-level model, trading on/off, and history. It is not a persisted strategy node in the v1 graph. Its visual position and prompt-to-clone line are persisted as layout metadata, separate from executable graph nodes and edges.
Each Trading Prompt Block defines one strategy. At execution time, a strategy branch expands into per-asset prompt executions. Each decision run row is for one asset.
### A. Asset Selection Format
Answers:
```text
Which assets can this clone consider trading?
```

Stored user intent:

- `highLevelCategories`
- `subcategories`
- `explicitlyEnabledSymbols`
- `explicitlyDisabledSymbols`
Effective output:
- Enabled trade universe for the clone or strategy branch.
- Selection source for each symbol.
- Display metadata for UI review.
- Trading/context support flags.
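A resolution along these lines could look like the following sketch. The field names come from the stored-intent list above; the catalog shape and resolution order are illustrative assumptions.

```typescript
// Stored user intent, per the asset selection format above.
type AssetSelection = {
  highLevelCategories: string[];
  subcategories: string[];
  explicitlyEnabledSymbols: string[];
  explicitlyDisabledSymbols: string[];
};

// Hypothetical taxonomy row for a supported asset.
type AssetMeta = { symbol: string; category: string; subcategory: string };

// Resolve stored intent into the effective trade universe (sketch).
function resolveUniverse(sel: AssetSelection, catalog: AssetMeta[]): string[] {
  const enabled = new Set<string>();
  for (const a of catalog) {
    if (sel.highLevelCategories.includes(a.category)) enabled.add(a.symbol);
    if (sel.subcategories.includes(a.subcategory)) enabled.add(a.symbol);
  }
  for (const s of sel.explicitlyEnabledSymbols) enabled.add(s);
  // Explicit disables win over every other selection source.
  for (const s of sel.explicitlyDisabledSymbols) enabled.delete(s);
  return [...enabled].sort();
}
```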
### B. Data Stream Format
Answers:
```text
What structured market memory is available to the strategy?
```

Stored concepts:

- `profile`: `fast`, `balanced`, or `deep`.
- `sourcePolicy`: per-source behavior for Hyperliquid, Polymarket, and DefiLlama.
- `profileDefinitions`: editable retrieval definitions for power users.
- `assetOverrides`: implementation field for per-asset configurations.
Current user-facing controls expose retrieval depth, prompt inclusion, channel switches, lookback, and granularity. Max datapoints and prompt budget are derived or internally managed from those settings.
Override precedence:
```text
per-asset configuration
> asset/group profile assignment
> data stream default profile
> built-in profile definition
```

Retrieval is deterministic. There is no AI prompt for data retrieval.
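The precedence chain can be expressed as a plain first-defined-wins merge. A minimal sketch, assuming each layer resolves to an optional retrieval profile (names illustrative):

```typescript
// Hypothetical retrieval profile fields; the real definitions may differ.
type RetrievalProfile = { lookbackSec: number; granularitySec: number };

// First-defined-wins resolution, mirroring:
// per-asset config > asset/group profile > stream default > built-in.
// No AI involved: the merge is fully deterministic.
function effectiveProfile(
  perAsset: RetrievalProfile | undefined,
  groupAssignment: RetrievalProfile | undefined,
  streamDefault: RetrievalProfile | undefined,
  builtIn: RetrievalProfile,
): RetrievalProfile {
  return perAsset ?? groupAssignment ?? streamDefault ?? builtIn;
}
```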
### C. Pipeline Graph Format
Answers:
```text
How are data streams, assets, and trading prompts connected?
```

Allowed v1 paths:

```text
data_stream -> asset_selection
asset_selection -> trading_prompt
```

The visual Trading Clone Block is stored outside the executable graph:
```ts
type ClonePipelineLayout = {
  strategyClone?: {
    position?: { x: number; y: number };
    connectedPromptNodeIds?: string[];
  };
};
```

`connectedPromptNodeIds` controls the visible Trading Prompt -> Trading Clone line in the workflow canvas. It is layout metadata only; the compiler still validates only `data_stream -> asset_selection -> trading_prompt`.
The graph is the source of truth. Simple UI controls should update graph nodes instead of creating independent configuration sources.
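The allowed-path rule amounts to a whitelist of edge types. A minimal validator sketch, with assumed node and edge shapes that may differ from the real compiler:

```typescript
type NodeType = "data_stream" | "asset_selection" | "trading_prompt";
type Edge = { from: NodeType; to: NodeType };

// The only v1 executable edge types; layout-only links such as the
// Trading Prompt -> Trading Clone line never reach this check.
const ALLOWED: ReadonlyArray<readonly [NodeType, NodeType]> = [
  ["data_stream", "asset_selection"],
  ["asset_selection", "trading_prompt"],
];

function validateEdges(edges: Edge[]): { ok: boolean; invalid: Edge[] } {
  const invalid = edges.filter(
    (e) => !ALLOWED.some(([f, t]) => f === e.from && t === e.to),
  );
  return { ok: invalid.length === 0, invalid };
}
```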
### D. Trading Execution Format
Answers:
```text
How should this strategy decide and act?
```

Stored concepts:

- `customBehaviorPrompt`
- `decisionCadenceSec`
- `maxAssetsPerRun`
- `candidatePolicy`
- `coverageMode`
- `assetOverrides`: implementation field for per-asset prompt configurations
Legacy migration fields such as `traderType`, `analysisTimeframe`, and `marketMomentumSensitivity` can still exist in compatibility payloads, but they are not current primary workflow controls.
Prompt coverage modes:
```text
global
global_with_asset_overrides
asset_specific_only
```

### E. Effective Context Packet
The effective context packet is downstream of A-D.
It combines:
- Clone ownership and config.
- Compiled graph.
- Effective asset universe.
- Per-asset read plans.
- Structured SQLite market memory.
- Wallet, positions, orders, fills, and ledger summary once execution storage exists.
- Prompt sections and estimated prompt tokens.
- R2 keys for large context/prompt/response payloads.
Workflow layout metadata is stored and returned by the pipeline API, but it is ignored by the effective context resolver because it has no impact on trading behavior.
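As a sketch, the packet described above could be typed roughly as follows. Every field name here is illustrative, and the token estimator is a stand-in heuristic, not the resolver's real logic.

```typescript
// Hypothetical shape of the effective context packet (all names illustrative).
type EffectiveContextPacket = {
  cloneId: number;
  compiledGraph: { nodeIds: string[]; edgeIds: string[] };
  assetUniverse: string[];
  readPlans: Record<string, { lookbackSec: number; granularitySec: number }>;
  promptSections: string[];
  estimatedPromptTokens: number;
  r2Keys: string[]; // pointers to large context/prompt/response payloads
  // Workflow layout metadata is deliberately absent: it never affects trading.
};

// Stand-in estimator: ~4 characters per token. An assumption for the sketch,
// not the resolver's actual token accounting.
function estimateTokens(sections: string[]): number {
  return Math.ceil(sections.join("\n").length / 4);
}
```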
## Product Economy Boundary
The paper-trading runtime should produce auditable synthetic trading results. The Clash Arena economy consumes those results for lifecycle and settlement decisions.
Runtime-owned outputs:
- Audition PnL against a 24-hour synthetic account.
- Race PnL against a fresh 6-hour, $1,000,000 synthetic account.
- Orders, fills, positions, balances, fees, and ledger events.
- Per-clone and per-race performance summaries.
Arena-owned inputs and outputs:
- Audition Fee payment status.
- Public clone identity and moderation status.
- Promotion Queue and Arena slot state.
- Race accumulation, launch, live, settling, settled, and archive state.
- Ticket purchases for the currently accumulating race.
- Pool split rows for protocol rake, creator pool, and distribution pool.
- Weighted parimutuel payout rows for ticket holders.
- Creator royalty rows.
- Retirement strike state and Hall of Fame preservation.
This boundary keeps trading correctness separate from economy correctness. Trading determines performance; Arena settlement determines who gets paid and what lifecycle transition happens next.
## Runtime Pipeline
The v1 runtime path is:
```text
1. Global data ingestion
2. Clone strategy graph compilation
3. Per-strategy asset expansion
4. Deterministic context retrieval from SQLite
5. Trading prompt assembly
6. Decision worker creates one due run per asset
7. Decision engine proposes an action for that asset
8. Safety gates validate or reject the action
9. Valid executable actions create dry-run orders today
10. SQLite decision/action/order/ledger storage
11. R2 archive for large prompt/context/response blobs
```

There is no separate v1 insight creator or generated insight storage layer.
Current local worker commands:
```bash
cd services/market-backend
npm run decisions:run-once -- --clone-id=42 --force
npm run decisions:worker
```

The current executable path supports a no-op hold engine and a model-backed dry-run engine. Live trading still needs paper fill simulation, account mutation, and a live order adapter.
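The cadence check in step 6 can be sketched as a small pure gate, mirroring the `--force` flag above. The function shape is an assumption for illustration, not the actual worker code.

```typescript
// A run for a (clone, asset) pair is due when decisionCadenceSec has elapsed
// since the last run, when the pair has never run, or when forced (sketch).
function isRunDue(
  lastRunAtMs: number | undefined,
  nowMs: number,
  decisionCadenceSec: number,
  force = false,
): boolean {
  if (force) return true;
  if (lastRunAtMs === undefined) return true; // never ran before
  return nowMs - lastRunAtMs >= decisionCadenceSec * 1000;
}
```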
## Candidate Policy
The platform can support every Hyperliquid asset, but a strategy should not blindly send every asset into a model call by default.
Separate:
- Supported assets: everything the backend knows about.
- Trade universe: assets the clone or strategy allows.
- Decision run: one model call for one selected asset.
- Decision action: the proposed or validated action from that run.
Default strategy candidates should include:
- Assets with open positions.
- Assets in the enabled trade universe.
- Assets passing deterministic prefilters.
Recommended branch candidate cap before per-asset execution:
```text
10 assets by default
25 assets advanced maximum
```

For users watching hundreds of assets, the system should scan structured data deterministically first, then run the strategy on the strongest candidates.
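Those rules can be sketched as a deterministic selection pass that runs before any model call: open positions always qualify, the rest of the enabled universe is ranked by a deterministic prefilter score, and the branch cap is applied last. The scoring input is an illustrative assumption.

```typescript
// Hypothetical prefiltered candidate: score comes from deterministic scanning
// of structured data, never from a model call.
type Candidate = { symbol: string; hasOpenPosition: boolean; score: number };

function selectCandidates(universe: Candidate[], cap = 10): string[] {
  // Assets with open positions always make the cut first.
  const withPosition = universe.filter((c) => c.hasOpenPosition);
  // Remaining universe ranked by strongest deterministic signal.
  const rest = universe
    .filter((c) => !c.hasOpenPosition)
    .sort((a, b) => b.score - a.score);
  return [...withPosition, ...rest].slice(0, cap).map((c) => c.symbol);
}
```

Only the symbols this pass returns go on to per-asset decision runs, keeping model calls bounded by the branch cap.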