Claude/explore liquidity engine hdf95 #1
Open
themartinshurtleff wants to merge 96 commits into flowdrivenml:main from themartinshurtleff:claude/explore-liquidity-engine-hdf95
Conversation
New files:
- poc/ws_connectors.py: Websocket connectors for Binance, OKX, Bybit
- poc/metrics_viewer.py: Basic real-time metrics display
- poc/full_metrics_viewer.py: Complete metrics viewer showing ALL metrics
- poc/list_metrics.py: Reference script listing all available metrics
- run_viewer.py: Main entry point script
Features:
- Connects to Binance futures websocket streams (depth, trades, liquidations, mark price)
- Displays real-time: price, orderbook depth, trade volume, liquidations, funding
- Shows all 40+ BTC perp metrics as defined in btcSynth.data
- Uses rich library for terminal UI with auto-refresh
- 1-minute rolling windows matching the original library's aggregation
Usage:
python run_viewer.py              # Start basic viewer
python poc/full_metrics_viewer.py # Full metrics display
python poc/list_metrics.py        # List all available metrics
- Run websocket in separate thread with its own event loop
- Use time.sleep() instead of asyncio.sleep() in display loop
- Add screen=True for proper full-screen terminal rendering
- Increase refresh rate to 4/sec for smoother updates
- Add deepcopy on get_state() to prevent threading issues
New module: poc/liq_engine.py
- LiquidationStressEngine class predicts liquidation zones from volume patterns
- Uses leverage ladder [5, 10, 20, 50, 75, 100] to estimate liq prices
- Implements per-minute updates with decay (0.97) and persistence
- Removes zones when price crosses them (liquidations triggered)
- Configurable: steps ($20 buckets), vol_length (50min SMA), buffer (0.2%)
Math implemented (sketched below):
1. ohlc4 = (open + high + low + close) / 4
2. buyRatio = buyVol / SMA(buyVol), sellRatio = sellVol / SMA(sellVol)
3. For each leverage L: offset = (1/L) + 0.002 + (0.01/L)
   short_zone = ohlc4 * (1 + offset), long_zone = ohlc4 * (1 - offset)
4. Accumulate ratios into price buckets
5. Decay old buckets by 0.97/min, remove when price crosses
UI changes in full_metrics_viewer.py:
- Added 4th column with STRESS ZONES panel
- Shows top 5 short liq zones (resistance) with strength scores
- Shows top 5 long liq zones (support) with strength scores
- Displays distance % from current price for each zone
- Shows engine stats: minutes processed, avg volumes, zone counts
Usage: Run viewer, wait for minute rollover to see zones populate
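A minimal sketch of the zone projection math above, using the constants from this commit (the function name and return shape are illustrative, not the engine's actual API):

```python
# Illustrative sketch of the liq-zone math; names are hypothetical.
LEVERAGE_LADDER = [5, 10, 20, 50, 75, 100]

def project_zones(o: float, h: float, l: float, c: float,
                  steps: float = 20.0) -> dict:
    """Project short/long liquidation zones from ohlc4 for each leverage."""
    ohlc4 = (o + h + l + c) / 4.0
    zones = {}
    for lev in LEVERAGE_LADDER:
        # margin fraction + fee buffer + maintenance term, per this commit
        offset = (1.0 / lev) + 0.002 + (0.01 / lev)
        short_zone = ohlc4 * (1.0 + offset)  # shorts liquidate above price
        long_zone = ohlc4 * (1.0 - offset)   # longs liquidate below price
        # snap to $20 buckets so nearby zones accumulate together
        zones[lev] = (round(short_zone / steps) * steps,
                      round(long_zone / steps) * steps)
    return zones
```

Per the commit, accumulated bucket strength then decays by 0.97 per minute, and a bucket is dropped once price trades through it.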
New file: poc/leverage_config.py
- LeverageConfig dataclass with ladder and weights (auto-normalized; sketched below)
- LeverageConfigRegistry for per-symbol and future per-exchange configs
- Pre-configured ladders for BTC, ETH, SOL with Binance max leverage
Symbol configs:
BTC/ETH: [5,10,20,25,50,75,100,125] max 125x
weights: [1.0,1.0,0.9,0.7,0.35,0.18,0.10,0.07] (normalized)
SOL: [5,10,20,25,50,75] max 75x
weights: [1.0,1.0,0.9,0.7,0.35,0.18] (normalized)
Default: [5,10,20,50,75,100] for undefined symbols
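A minimal sketch of what the auto-normalizing dataclass could look like (class and field names follow the commit; normalizing in __post_init__ is an assumption):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LeverageConfig:
    ladder: List[int]
    weights: List[float]

    def __post_init__(self):
        # auto-normalize so the weights sum to 1.0
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

# BTC/ETH ladder and raw weights from this commit
BTC_CONFIG = LeverageConfig(
    ladder=[5, 10, 20, 25, 50, 75, 100, 125],
    weights=[1.0, 1.0, 0.9, 0.7, 0.35, 0.18, 0.10, 0.07],
)
```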
Updated: poc/liq_engine.py
- Import and use leverage_config for per-symbol ladders
- Apply weight when accumulating bucket strength:
strength += ratio * leverage_weight
- Added debug_enabled flag to gate verbose logging
- Enhanced debug output showing:
- Ladder + normalized weights on init
- Per-leverage projections table
- Top 5 strongest zones for both sides
- Stats now include leverage_ladder and leverage_weights
Weights represent estimated position distribution across leverage levels.
Higher weights = more traders using that leverage = stronger zone signal.
- Added setup_debug_logging() function to liq_engine.py
- Added debug_log_file parameter to LiquidationStressEngine
- Enabled debug logging by default in full_metrics_viewer.py
- Logs to liq_engine_debug.log (use tail -f to watch)
Usage: In separate terminal run: tail -f liq_engine_debug.log
To disable: set debug_enabled=False in full_metrics_viewer.py
Introduces LiquidationCalibrator, which uses real Binance forceOrder liquidation prints as feedback to adjust leverage weights online.
- New liq_calibrator.py: Calibrator with leverage attribution, rolling stats over a configurable window, bounded multiplicative weight updates with closer-level prior, JSONL logging
- leverage_config.py: Add update_weights() for runtime changes
- liq_engine.py: Add update_leverage_weights() method
- full_metrics_viewer.py: Wire calibrator hooks into liquidation events and minute snapshot updates
- Save learned weights to liq_calibrator_weights.json after each calibration
- Load persisted weights on startup and apply to liq_engine
- Log individual liquidation events with predicted zones for analysis (type: "event" entries in JSONL showing actual vs predicted)
- Each event log includes: price, side, is_hit, nearest_predicted, attributed_leverage, implied_levels_by_leverage, top_predicted_zones
- Enable buffer tuning by default (adjusts based on miss directions)
- Add hit_bucket_tolerance auto-calibration (uses p75 of miss distances)
- Both params now persist to weights file and load on restart
- Fix event logging: show nearest_implied from leverage ladder instead of stale accumulated zones, add implied_miss_usd field
- Shorten calibration window to 15 minutes for faster adaptation
- Initial tolerance set to 5 buckets ($100) instead of 2
Learns and applies a USD offset correction per side (long/short) to account for positions being opened at different prices than the current ohlc4.
- Track implied_miss_usd per side during calibration window
- Calculate mean miss and smooth-update offset (30% learning rate)
- Apply offset when calculating implied levels for hit detection
- Persist long_offset_usd and short_offset_usd to weights file
- Log both raw and corrected implied levels in event logs
- Show offsets in calibration debug output
Task 1: Replace nearest-leverage attribution with soft attribution (sketched below)
- Add SOFT_ATTRIBUTION_CONFIG with tau_usd/delta_usd per symbol (BTC, ETH, SOL)
- Compute responsibilities r(L) = exp(-d(L)/tau_usd), normalized
- Update soft stats: totals[L] += r(L), hits[L] += r(L) * exp(-d(L)/delta_usd)
Task 2: Remove per-cycle closer-level prior amplification
- Removed (lev/max_lev)^gamma boost that was causing collapse toward CMP
Task 3: Add stabilization clamps to weight update
- Per-update ratio clamp: new_w/old_w in [0.85, 1.20]
- Max weight cap: 0.35 per leverage
- Normalize after clamping
Logging: Each event now logs top 3 leverages by responsibility with d(L)
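A sketch of the Task 1 soft attribution for a single forceOrder event (names are illustrative; the implied(L) levels come from the ladder math earlier in this PR):

```python
import math

def soft_attribution(event_price: float, implied: dict,
                     tau_usd: float, delta_usd: float):
    """r(L) = exp(-d(L)/tau_usd) normalized; hit credit decays by delta_usd."""
    d = {lev: abs(event_price - p) for lev, p in implied.items()}
    raw = {lev: math.exp(-dist / tau_usd) for lev, dist in d.items()}
    z = sum(raw.values())
    r = {lev: v / z for lev, v in raw.items()}
    # per-event soft-stat update: totals[L] += r(L); hits[L] += hit_credit[L]
    hit_credit = {lev: r[lev] * math.exp(-d[lev] / delta_usd) for lev in r}
    return r, hit_credit
```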
Task 1: Distance bands (NEAR/MID/FAR)
- NEAR: dist_pct <= 1%, MID: 1-2.5%, FAR: >2.5%
- Only NEAR + MID events update soft attribution stats
- FAR events are logged but don't affect weights/buffer/offsets
Task 2: Approach gating for predicted zones
- Track window_high/window_low from event prices
- Zone is "judged" only if within approach_pct (0.75%) of price path
- Zones outside [low*(1-0.75%), high*(1+0.75%)] are "unreached"
- Track zones_judged vs zones_unreached in cycle stats
Task 3: Logging additions
- Event logs include: dist_pct, band, updates_calibration
- Cycle logs include: band_counts, zones_judged, zones_unreached, window_high/low
Task 1: Rolling per-symbol minute cache
- Add MinuteCacheEntry dataclass with src, high, low, close, volumes, OI change, depth
- Store ~60 minutes of market data in deque (MINUTE_CACHE_SIZE=60)
- on_minute_snapshot now accepts optional market data parameters
Task 2: Approach events for predicted zones
- APPROACH_LEARN_PCT = 0.35% threshold for zone approach detection
- Compute stress score S in [0,1] (sketched below) using:
  - depth_thinning (0.45): sigmoid of depth reduction near zone
  - aggression (0.35): sigmoid of net buy/sell volume ratio
  - oi_fuel (0.20): sigmoid of abs(OI change)
- Process both long and short zones each minute
Task 3: Update leverage soft stats on approach
- When zone is approached: totals[L] += r(L), hits[L] += r(L) * S
- Uses same soft attribution r(L) = exp(-d(L)/tau_usd) as forceOrder events
- forceOrder events continue to work as additional signal
Logging:
- New 'approach_event' log type with zone, stress score, top-3 r(L)
- Calibration cycle now logs approach_events count
- Debug print includes approach count
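A sketch of the Task 2 stress blend (the component weights are from the commit; how each raw signal is scaled before the sigmoid is an assumption, since the engine normalizes each signal first):

```python
import math

def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def stress_score(depth_thinning: float, aggression: float,
                 oi_change: float) -> float:
    """S in [0,1]: 0.45*depth_thinning + 0.35*aggression + 0.20*oi_fuel."""
    s = (0.45 * _sigmoid(depth_thinning)
         + 0.35 * _sigmoid(aggression)
         + 0.20 * _sigmoid(abs(oi_change)))
    return max(0.0, min(1.0, s))
```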
…imbalance
- Replace sigmoid-based aggression with 1-exp(-abs_imb/IMB_K) formula where abs_imb = |buyVol-sellVol|/(buyVol+sellVol), producing a proper 0-1 range
- Make depth_thinning zone-local: compute around zone price bucket, not CMP
- Add depth_band field to MinuteCacheEntry for ±2% around src price
- Enhance approach_event logging with full stress component breakdown
- Add new helper methods: _compute_stress_score_with_breakdown, _get_depth_around_bucket, _get_previous_cache_entry
- Remove duplicate _bucket_price method
- Add APPROACH_HIT_MULT (0.25) so approach events contribute at reduced strength vs forceOrder events: hits += r(L) * S * APPROACH_HIT_MULT
- Add weight anchoring per calibration cycle: w_new = (1-λ)*w_old + λ*w_update with ANCHOR_LAMBDA=0.10, applied after ratio clamps before normalization (sketched below)
- Add cycle summary logging with: force_events, approach_events, approach_to_force_ratio, and weight entropy (to detect collapse)
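The anchoring step as a sketch (λ is from the commit; representing weights as a plain list of floats is an assumption):

```python
ANCHOR_LAMBDA = 0.10

def anchor_weights(w_old: list, w_update: list,
                   lam: float = ANCHOR_LAMBDA) -> list:
    """w_new = (1 - λ) * w_old + λ * w_update, then renormalize."""
    blended = [(1.0 - lam) * o + lam * u for o, u in zip(w_old, w_update)]
    total = sum(blended)
    return [w / total for w in blended]
```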
…loor
Task 1 - Approach throttling + distance-decayed contribution:
- Add S_eff = S * exp(-dist_pct / D0) where D0 = 0.002 (0.2%)
- Collect candidates per minute, select top APPROACH_TOP_K=3 by S_eff
- Only selected zones contribute to learning, preventing "30 zones in 1 min"
- Log approach_candidates, approach_used, approach_skipped per minute
Task 2 - Unreached zones as weak negative signal:
- Track reached_zones set during approach events
- Apply penalty for zones judged reachable but never reached: totals[L] += r(L), hits[L] += r(L) * (-UNREACHED_PENALTY=0.05)
- Penalty applied after normal accumulation, before leverage_scores
- Log unreached_zones_count, penalty_applied_count, top_penalized_leverages
Task 3 - Mid-mass safety floor (hard guardrail, sketched below):
- After normalization, enforce sum(weights for lev<=25x) >= MIN_MASS_MID=0.25
- Proportionally drain from high leverage, add to mid/low
- Log floor_triggered, mid_mass_before/after, drained_leverages
This prevents collapse toward CMP by:
1. Throttling approach events (prevents volume overwhelming)
2. Penalizing over-prediction (zones predicted but never approached)
3. Hard floor on mid-leverage mass (structural guarantee)
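A sketch of the Task 3 floor (constants from the commit; the pro-rata drain/refill is one natural reading of "proportionally drain", assumed here):

```python
MIN_MASS_MID = 0.25

def enforce_mid_mass_floor(ladder: list, weights: list) -> list:
    """Guarantee sum(weights for lev <= 25x) >= MIN_MASS_MID by draining
    the high-leverage levels pro-rata and refilling mid/low pro-rata."""
    mid = [i for i, lev in enumerate(ladder) if lev <= 25]
    high = [i for i, lev in enumerate(ladder) if lev > 25]
    mid_mass = sum(weights[i] for i in mid)
    if mid_mass >= MIN_MASS_MID or not mid or not high:
        return list(weights)                 # floor not triggered
    deficit = MIN_MASS_MID - mid_mass
    high_mass = sum(weights[i] for i in high)
    out = list(weights)
    for i in high:                           # drain high leverage pro-rata
        out[i] -= deficit * weights[i] / high_mass
    for i in mid:                            # refill mid/low pro-rata
        share = weights[i] / mid_mass if mid_mass > 0 else 1.0 / len(mid)
        out[i] += deficit * share
    return out
```

Since the weights are normalized before this runs, high_mass always exceeds the deficit, so no weight goes negative and the total mass is preserved.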
…logging
1. Integer-bucketed zone keys:
- Added _zone_bucket_int(zone_price, steps) -> int helper (sketched after this list)
- Zone keys now (zone_bucket_int, side) instead of (zone_price, side)
- Avoids float comparison issues in set membership
2. Origin src/buffer tracking via zone_meta:
- Added ZoneMeta dataclass: origin_minute_key, origin_src, origin_buffer, zone_price
- CalibrationStats now has zone_meta: Dict[Tuple[int, str], ZoneMeta]
- zone_meta populated in on_minute_snapshot for ALL predicted zones
- _apply_unreached_penalty uses origin_src/origin_buffer for responsibility calc
3. Per-zone debug logging (top 10, behind DEBUG_UNREACHED_ZONES flag):
- Added DEBUG_UNREACHED_ZONES = True constant
- Logs zone_key, zone_price, origin_src, origin_minute_key, top3_resp
- Only top 10 penalized zones per cycle to avoid log explosion
Sample debug output:
DEBUG_UNREACHED (top 10 penalized zones):
zone=(4850,long) price=97000.0 origin_src=97500.0 origin_min=123456 top3=[25x:0.412,20x:0.298,15x:0.156]
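A sketch of the integer bucketing from item 1, matching the sample output above (97000.0 at $20 steps → bucket 4850):

```python
def _zone_bucket_int(zone_price: float, steps: float) -> int:
    """Snap a zone price to an integer bucket index so zone keys compare
    exactly in set membership, avoiding float-equality issues."""
    return int(round(zone_price / steps))

key = (_zone_bucket_int(97000.0, 20.0), "long")  # -> (4850, "long")
```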
Root cause: process_message didn't route obj_type=="oi" or "oifunding"
- Bybit sends tickers with obj="oifunding" containing openInterestValue
- OKX sends open-interest channel with obj="oi"
- Neither was being processed, so perp_total_oi stayed at default 0.0
Fix:
- Add routing for obj_type in ["oi", "oifunding"] to new _process_oi()
- _process_oi handles multiple formats (sketched below):
  - Bybit: data.data.openInterestValue (USD value)
  - OKX: data[0].oiCcy (BTC contracts, converted to USD)
  - Direct openInterestValue field
- Also extracts fundingRate if present in oifunding stream
- Display OIs_per_instrument in the Funding & OI panel
- Add per-minute log line: "BTC OI=<value> funding=<value>%"
Debug logging available (commented out):
- Raw message logging
- Per-update OI tracing with source identification
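A sketch of the multi-format extraction, based on the payload shapes listed above (the key paths are from the commit; the function signature and explicit price argument are illustrative):

```python
def _process_oi(data, btc_price: float) -> dict:
    """Extract open interest (USD) and funding from differing exchange payloads."""
    out = {}
    if isinstance(data, dict) and isinstance(data.get("data"), dict):
        inner = data["data"]                     # Bybit tickers ("oifunding")
        if "openInterestValue" in inner:
            out["oi_usd"] = float(inner["openInterestValue"])
        if "fundingRate" in inner:
            out["funding"] = float(inner["fundingRate"])
    elif isinstance(data, list) and data and "oiCcy" in data[0]:
        # OKX open-interest channel: contracts in BTC, convert to USD
        out["oi_usd"] = float(data[0]["oiCcy"]) * btc_price
    elif isinstance(data, dict) and "openInterestValue" in data:
        out["oi_usd"] = float(data["openInterestValue"])  # direct field
    return out
```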
- New rest_pollers.py with BinanceRESTPoller for async polling
- Endpoints: /fapi/v1/openInterest, topLongShortAccountRatio/PositionRatio, globalLongShortAccountRatio
- Proxy-aware with rate limiting, exponential backoff, jitter
- Self-test on startup prints success/failure per endpoint
- Integrated into full_metrics_viewer.py with price_getter callback
- Per-minute log line shows OI, funding, TTA/TTP/GTA ratios
- Remove proxy parameter from BinanceRESTPoller and BinanceRESTPollerThread
- Remove proxy= from aiohttp session.get() calls
- Update error messages: log status + endpoint + backoff, no proxy suggestions
- Keep BINANCE_FAPI_BASE as direct HTTPS URL
- Add RATIO_PERIOD_PRIMARY="1m" with auto-fallback to "5m"
- Add per-minute consolidated log: symbol, OI (USD), Δ, TTA/TTP/GTA
- Log [PRICE=0] warning if price_getter returns 0/None
- Log once when ratio period falls back to 5m
Phase 0 - Diagnostic logging:
- Added dist_pct_event using event-time src
- Added nearest_implied_dist_pct for comparison
- Log src_used and src_source in event JSONL
Phase 1 - Event-time src for implied levels:
- Priority: markPrice > mid > last > fallback_ohlc4
- Pass mark_price, mid_price, last_price from full_metrics_viewer
- Use event_src_price instead of minute ohlc4 for attribution
Phase 2 - Expanded ladder:
- BTC ladder now: [5,10,20,25,50,75,100,125,150,200,250]
- 150x offset ~0.87%, 200x ~0.71%, 250x ~0.60%
- Covers observed dist_pct ~0.4-0.5% regime
Phase 3 - Percent-based EMA bias (sketched below):
- bias_pct_long, bias_pct_short replace USD offsets
- EMA update: eta=0.05, cap=±0.30%
- Requires MIN_BIAS_SAMPLES=10 per cycle
- Uses median miss_pct as robust aggregate
Phase 4 - Verification outputs:
- Event JSONL: event_src_price, src_source, miss_pct, miss_usd, nearest_implied_raw/corrected, bias_pct_long/short, ladder
- Cycle JSONL: attributed_leverage_counts histogram, bias_update_info
- Console: BIAS line + LEV_HIST line in cycle summary
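A sketch of the Phase 3 bias update (constants from the commit; treating the update as a simple per-cycle function is an assumption):

```python
import statistics

ETA = 0.05                # EMA learning rate from Phase 3
BIAS_CAP = 0.003          # cap at ±0.30%
MIN_BIAS_SAMPLES = 10

def update_bias_pct(bias_pct: float, miss_pcts: list) -> float:
    """Nudge the per-side percent bias toward this cycle's median miss."""
    if len(miss_pcts) < MIN_BIAS_SAMPLES:
        return bias_pct                    # not enough evidence this cycle
    target = statistics.median(miss_pcts)  # robust aggregate per the commit
    new_bias = (1.0 - ETA) * bias_pct + ETA * target
    return max(-BIAS_CAP, min(BIAS_CAP, new_bias))
```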
DEBUG #1: At _process_liquidations entry - logs raw data keys and 'o' field
DEBUG #2: Before calibrator.on_liquidation - logs side, price, qty, mark
Interpretation guide:
- No DEBUG #1 output → WS not receiving forceOrder messages
- DEBUG #1 shows MISSING/empty 'o' → data format mismatch
- DEBUG #1 fires but no DEBUG #2 → symbol filter rejecting ("BTC" not in s)
- Both fire but total_events=0 → issue inside calibrator.on_liquidation
Since the Rich TUI takes over the terminal, debug prints aren't visible. Now writes to poc/liq_debug.jsonl with structured JSON entries:
- "raw_liquidation": When _process_liquidations receives any message
  Fields: exchange, data_type, keys, has_o, o_keys
- "to_calibrator": When routing to calibrator.on_liquidation
  Fields: side, price, qty, mark_price, mid_price, last_price
Monitor with: tail -f poc/liq_debug.jsonl | jq .
Logs to liq_debug.jsonl with type="event_sanity_ok" or "event_sanity_fail":
- raw_symbol (order["s"]) and raw_side (order["S"])
- computed calib_side (BUY->short, SELL->long)
- price, qty, notional
- mark_price, mid_price, last_price
- event_src_price and src_source
Assertions checked:
- price > 0
- qty > 0
- event_src_price > 0
- calib_side in {"long", "short"}
Stops logging after 50 events to avoid unbounded growth.
…pct)
Key changes:
- d_pct(L) = abs(event_price - implied(L)) / event_src_price
- r(L) ∝ exp(-d_pct / tau_pct) instead of exp(-d_usd / tau_usd)
- hit_credit ∝ exp(-d_pct / delta_pct) instead of exp(-d_usd / delta_usd)
Fixed baselines for BTC:
- TAU_PCT_BASE = 0.0012 (12 bps)
- DELTA_PCT_BASE = 0.0008 (8 bps)
Volatility scaling (sketched below):
- EWMA vol tracking with alpha=0.1
- scale = clamp(vol_ewma / 0.0015, 0.7, 1.6)
- tau_pct = TAU_PCT_BASE * scale
- delta_pct = DELTA_PCT_BASE * scale
Removed:
- SOFT_ATTRIBUTION_CONFIG dict (was USD-based per symbol)
- "tau = mean distance" logic (not applicable in percent-space)
New fields logged per event:
- tau_pct, delta_pct, vol_scale, vol_ewma
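The volatility scaling above, as a sketch (all constants are from the commit; feeding in the per-minute absolute return is an assumption about the EWMA's input):

```python
TAU_PCT_BASE = 0.0012     # 12 bps
DELTA_PCT_BASE = 0.0008   # 8 bps
VOL_REF = 0.0015          # divisor in the clamp formula above
ALPHA = 0.1               # EWMA smoothing

def scaled_widths(vol_ewma: float, minute_abs_return: float):
    """EWMA the per-minute absolute return, then scale both widths."""
    vol_ewma = (1.0 - ALPHA) * vol_ewma + ALPHA * minute_abs_return
    scale = max(0.7, min(1.6, vol_ewma / VOL_REF))
    return vol_ewma, TAU_PCT_BASE * scale, DELTA_PCT_BASE * scale
```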
Atomic writes (sketched below):
- Write to .tmp file first
- Flush + fsync before rename
- Atomic rename to final path
- Clean up .tmp on failure
Schema versioning:
- Added WEIGHTS_SCHEMA_VERSION = 2
- Stored in file as 'schema_version' field
Ladder migration:
- If stored ladder differs from current, migrate weights
- Interpolate in offset-space (1/L domain) for meaningful mapping
- Extrapolate at boundaries (new high/low leverage levels)
- Renormalize after interpolation
- Fall back to defaults if migration impossible
New method: _migrate_weights_to_new_ladder(old_ladder, old_weights, new_ladder)
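A sketch of the atomic write path described above (a JSON payload is assumed, matching the weights file; os.replace provides the atomic rename):

```python
import json
import os

def atomic_write_json(path: str, payload: dict) -> None:
    """Write to a sibling .tmp, flush + fsync, then atomically rename."""
    tmp = path + ".tmp"
    try:
        with open(tmp, "w") as f:
            json.dump(payload, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)   # atomic rename on POSIX and Windows
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)      # clean up the temp file on failure
        raise
```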
Implements LiquidationHeatmapBuffer class for persistent storage:
- Binary file format (4KB records) for efficient storage
- Loads existing frames on startup (survives restarts)
- Ring buffer with automatic compaction at max_frames (sketched below)
- 12-hour history (720 frames at 1-minute resolution)
Integration:
- SnapshotCache now accepts optional persistence_path
- V1 heatmap persists to liq_heatmap_v1.bin
- V2 heatmap persists to liq_heatmap_v2.bin
- Existing /v1/liq_heatmap_history and /v2/liq_heatmap_history endpoints work seamlessly with persisted data
On restart:
- Historical data loads from disk automatically
- Gap during downtime is expected and correct
- New frames append to existing history
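A sketch of the append-and-compact behavior (record size and frame cap are from the commit; compacting by rewriting the tail of the file is an assumption about the implementation):

```python
import os

RECORD_SIZE = 4096   # fixed 4 KB per frame, per the commit
MAX_FRAMES = 720     # 12 h of 1-minute frames

def append_frame(path: str, record: bytes) -> None:
    """Append one fixed-size frame; keep only the newest MAX_FRAMES."""
    assert len(record) == RECORD_SIZE
    with open(path, "ab") as f:
        f.write(record)
    if os.path.getsize(path) > RECORD_SIZE * MAX_FRAMES:
        with open(path, "rb") as f:
            data = f.read()
        with open(path, "wb") as f:   # compaction: rewrite the tail
            f.write(data[-RECORD_SIZE * MAX_FRAMES:])
```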
Issues addressed:
1. API was returning 503 "Orderbook heatmap not available" - added better error handling and logging to understand why module import or buffer initialization might fail.
2. Added /v2/orderbook_heatmap_30s_debug endpoint that shows internal state (HAS_OB_HEATMAP, _ob_buffer status, file sizes, recon stats) without requiring a working heatmap.
3. The reconstructor was experiencing 1378 resyncs with only 1.16 diffs per sync cycle on average - never staying synced long enough to cross a 30-second boundary and emit frames.
4. Added detailed gap tracking stats: pu_gap_count, u_gap_count, last_gap_type, max_sync_duration_s, diffs_in_current_sync.
5. Added configurable gap_tolerance parameter to OrderbookReconstructor. For heatmap visualization, small gaps are acceptable (the orderbook may be slightly off during gaps, but that is better than constant resyncs). Set to 100 update IDs by default in full_metrics_viewer.
The import was failing because the poc/ directory wasn't in sys.path when running from outside that directory. Now adds both the parent and current directory to the path.
The trigger_resync(), diff_buffer.append(), and return False were incorrectly indented outside the else block, causing them to execute even when the gap was within tolerance.
The U gap check was triggering independently of the pu check, causing constant resyncs even when pu validation passed. For Binance Futures, pu is the authoritative continuity check - the U gap check should only be a fallback for the spot API, where pu is not present.
Changes (sketched below):
- Skip U gap check when pu is present and validation passed (exact match or within tolerance)
- Increase gap_tolerance from 100 to 1000 for better network resilience
- Add pu_check_passed flag to track validation state
This should allow the orderbook to stay synced long enough to write 30-second heatmap frames.
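A sketch of the resulting continuity logic (the U/u/pu field names follow Binance's diff-depth payloads; the exact spot-style fallback condition is an assumption):

```python
def is_continuous(event: dict, last_u: int, gap_tolerance: int = 1000) -> bool:
    """Futures: pu must chain to the previous event's u (authoritative).
    The U-based check is only a fallback for spot, where pu is absent."""
    pu = event.get("pu")
    if pu is not None:
        return abs(pu - last_u) <= gap_tolerance   # 0 means an exact match
    return event["U"] == last_u + 1                # spot-style continuity
```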
Debug prints added to track why frames aren't being written to ob_heatmap_30s.bin:
- _on_reconstructor_update: logs accumulator feeding (throttled to 10s)
- _on_ob_frame_emitted: logs when frames are emitted
- OrderbookAccumulator: logs slot boundary crossings
- OrderbookHeatmapBuffer: logs add_frame and persist operations
The format string had 6 doubles before the integer, but there should only be 5:
- Old: '<ddddddIddd' (6d + I + 3d) - wrong
- New: '<dddddIdddd' (5d + I + 4d) - correct
This caused a "required argument is not an integer" error because n_prices was being packed with a 'd' format (expecting float) instead of 'I' (uint). See the sketch below.
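For illustration, the corrected layout packs cleanly while the old one fails (the placeholder field values are assumptions; only the 5d + I + 4d shape is from the commit):

```python
import struct

OLD_FMT = "<ddddddIddd"   # 6 doubles before the uint -- wrong
NEW_FMT = "<dddddIdddd"   # 5 doubles, uint, 4 doubles -- correct

fields = (1.0, 2.0, 3.0, 4.0, 5.0, 240, 6.0, 7.0, 8.0, 9.0)  # n_prices=240
header = struct.pack(NEW_FMT, *fields)     # ok: 240 lands on the 'I' slot
assert struct.calcsize(NEW_FMT) == 5 * 8 + 4 + 4 * 8

# With OLD_FMT the same tuple raises "required argument is not an integer"
# because 6.0 (a float) lands on the 'I' slot and 240 on a 'd' slot.
```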
The local build_unified_grid function was shadowing the imported one from ob_heatmap.py. The orderbook endpoints were calling build_unified_grid with keyword args (price_min, price_max) that only exist in ob_heatmap's version.
Fix: Import ob_heatmap.build_unified_grid as ob_build_unified_grid and use it in the orderbook heatmap endpoints.
Implements Option 3 fix for the orderbook heatmap sync issue where the standalone API server would load frames once at startup and never refresh.
Changes:
- Add poc/embedded_api.py: FastAPI module that accepts shared buffer refs
- Add --api flag to full_metrics_viewer.py to enable embedded API
- API reads directly from in-memory buffer, eliminating disk sync lag
- Supports --api-host and --api-port for configuration
Usage:
python poc/full_metrics_viewer.py --api
python poc/full_metrics_viewer.py --api --api-port 9000
The embedded API provides the same endpoints as liq_api.py but with zero latency for orderbook heatmap data since buffers are shared directly.
The embedded API was missing the crucial /v1/liq_heatmap_history and /v2/liq_heatmap_history endpoints that the RUST terminal relies on.
Changes:
- Add LiquidationHeatmapBuffer class for history persistence
- Add /v1/liq_heatmap_history and /v2/liq_heatmap_history endpoints
- Add /v2/liq_stats endpoint
- Add LIQ_HEATMAP_V1_FILE and LIQ_HEATMAP_V2_FILE constants
- Pass heatmap file paths to create_embedded_app()
- Background thread refreshes JSON snapshots and builds history
Author: yes