@Mc01 Mc01 commented Dec 19, 2025

Milestone 1: Add a base for further exploration.

Some thoughts from Claude:

  Scenario Critique                                                                          
                                                                                             
  What's well covered                                                                        
                                                                                             
  The FIFO/LIFO pairing is smart — it directly tests whether the protocol structurally       
  advantages early or late entrants, which is the core "commonwealth as common good"         
  question. The bank run stress-tests worst-case exit dynamics. Single user isolates the     
  curve mechanics from multi-user interactions.                                              
                                                                                             
  What's missing                                                                             
                                                                                             
  1. Hold without LP. This is the biggest gap. Every scenario has every user provide         
  liquidity. But CLAUDE.md explicitly says users can buy and hold without LP — they just     
  don't earn yield. With token inflation always on, LPs receive newly minted tokens, which   
  dilutes pure holders. If a user buys 500 USDC of tokens, doesn't LP, waits 100 days while  
  others LP and earn inflation tokens, then sells — do they lose money purely from dilution? 
  If yes, the protocol punishes passive holders, which contradicts "common good." This       
  scenario would reveal whether the protocol forces participation or merely incentivizes it. 
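
The dilution mechanism in point 1 can be sketched with made-up numbers (the `holder_share` helper and every figure below are illustrative, not taken from the repo or CLAUDE.md):

```python
# Hypothetical illustration of pure-holder dilution under constant token
# inflation: LPs receive the newly minted tokens, so a non-LP holder's
# fraction of total supply shrinks even if their token count is unchanged.
def holder_share(holder_tokens, total_supply, minted_to_lps):
    """Holder's fraction of supply before and after inflation mints."""
    before = holder_tokens / total_supply
    after = holder_tokens / (total_supply + minted_to_lps)
    return before, after

before, after = holder_share(holder_tokens=500, total_supply=2_500, minted_to_lps=250)
print(f"share before: {before:.4f}, after: {after:.4f}")  # 0.2000 -> 0.1818
```

Whether that shrinking share translates into a USDC loss at exit is exactly what the missing scenario would measure.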
                                                                                             
  2. Late entrant. All users enter at roughly similar prices (within the same batch). What   
  happens when someone enters after 180 days of compounding, when the price has already      
  appreciated (for Y→P models)? They buy at a higher price, LP at that price, compound for   
  less time, then exit. This tests whether the protocol is a "first-mover advantage" system  
  or genuinely open to newcomers.                                                            
                                                                                             
  3. Mixed strategy. Every user follows the identical path: buy → LP 100% → compound → remove
   all → sell all. Real users will LP different fractions, exit partially, or re-enter. A    
  scenario with heterogeneous behavior (e.g., Alice LPs 100%, Bob LPs 50% and holds 50%, Carl
   never LPs) would show whether the model rewards one strategy over others.                 
                                                                                             
  4. Whale entry. All buy amounts are 250-600 USDC — same order of magnitude. What happens   
  when one user buys 10,000 USDC after five users bought 500 each? For constant product, this
   creates significant slippage. For integral curves, the price impact is different. This    
  tests whether the protocol is robust to concentration.                                     
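
For the constant-product case, the slippage asymmetry between a 500 and a 10,000 USDC buy can be sketched directly (pool reserves below are hypothetical and fees are ignored; the integral curves would need their own pricing function):

```python
def buy_tokens(usdc_reserve, token_reserve, usdc_in):
    """Constant-product (x*y=k) swap with no fee: returns tokens received
    and the slippage of the average fill price vs the pre-trade spot price."""
    k = usdc_reserve * token_reserve
    tokens_out = token_reserve - k / (usdc_reserve + usdc_in)
    avg_price = usdc_in / tokens_out
    spot_price = usdc_reserve / token_reserve
    return tokens_out, avg_price / spot_price - 1

# Hypothetical 10,000 USDC / 10,000 token pool (spot price 1.0):
_, small = buy_tokens(10_000, 10_000, 500)     # ~5% slippage
_, whale = buy_tokens(10_000, 10_000, 10_000)  # 100% slippage
```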
                                                                                             
  5. Ongoing entry + exit (steady state). Every scenario is batch-enter-then-batch-exit. Real
   protocols have continuous flow: some users enter while others exit. A scenario like "2    
  users enter, compound 30 days, 1 more enters, compound 30 days, first user exits, 2 more   
  enter, etc." would test the protocol under realistic conditions rather than batch extremes.
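
One way to encode such a scenario is an explicit event schedule instead of batch phases (names, days, and the helper below are placeholders, not part of the existing scenario set):

```python
# Hypothetical steady-state schedule: (day, action, user). A simulator
# would process events in day order, compounding between them.
schedule = [
    (0,  "enter", "Alice"),
    (0,  "enter", "Bob"),
    (30, "enter", "Carl"),
    (60, "exit",  "Alice"),
    (60, "enter", "Dana"),
    (60, "enter", "Eve"),
]

def active_users(schedule):
    """Replay events in day order; return who is still in at the end.
    Same-day events here sort by action name; a real simulator should
    define intra-day ordering explicitly."""
    active = set()
    for _day, action, user in sorted(schedule):
        if action == "enter":
            active.add(user)
        else:
            active.discard(user)
    return active
```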
                                                                                             
  ---                                                                                        
  Metric Critique                                                                            
                                                                                             
  1. Profit ignores opportunity cost. If a user invests 500 USDC for 100 days and profits 14 
  USDC, that's ~10% annualized. But they could have put 500 USDC directly into the Sky vault 
  at 5% APY and earned ~6.85 USDC risk-free. The meaningful metric is alpha over benchmark:  
  protocol_return - direct_vault_return. A model showing +14 profit looks good, but only +7  
  of that is actual protocol alpha. Some models might show positive profit but negative alpha
   — meaning users would be better off skipping the protocol entirely.                       
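
The benchmark comparison is a one-liner to add (simple interest is assumed for the vault leg, which is close enough at these horizons; the numbers match the 500 USDC / 100 day example above):

```python
def vault_return(principal, apy, days):
    """What the same principal would have earned sitting in the Sky vault
    directly (simple interest approximation)."""
    return principal * apy * days / 365

def alpha(protocol_profit, principal, apy, days):
    """Protocol profit in excess of the direct-vault benchmark."""
    return protocol_profit - vault_return(principal, apy, days)

print(round(vault_return(500, 0.05, 100), 2))  # 6.85
print(round(alpha(14, 500, 0.05, 100), 2))     # 7.15
```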
                                                                                             
  2. Worst-case individual loss is hidden. The table shows sum of losses and loser count, but
   not the worst individual outcome. A model with 2 losers at -5 each is very different from 
  2 losers at -500 each. Max individual loss per scenario would help identify which models   
  create victims.                                                                            
                                                                                             
  3. No fairness metric. The core question from CLAUDE.md is "the fewest users lose money."  
  The comparison table shows loser counts but doesn't normalize by investment size. If Aaron 
  invests 500 and loses 50 (-10%) while Dennis invests 600 and loses 10 (-1.7%), they're both
   counted as "1 loser" but the severity is very different. A Gini coefficient or variance of
   returns would capture fairness better.                                                    
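
Both candidate metrics can be sketched briefly (the example returns and balances are invented; note the textbook Gini formula assumes non-negative values, so it is applied here to final balances rather than signed returns):

```python
import statistics

def gini(values):
    """Gini coefficient over non-negative values such as final balances:
    0 means perfectly equal; (n-1)/n means one user holds everything."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

# Invented per-user outcomes: Aaron -10%, Dennis -1.7%, two winners.
returns = [-0.10, -0.017, 0.05, 0.08]
balances = [450, 590, 525, 540]  # final USDC, also invented
print(round(statistics.pvariance(returns), 5))
print(round(gini(balances), 4))
```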
                                                                                             
  ---                                                                                        
  Model Critique                                                                             
                                                                                             
  1. Protocol fee = 0% across all models. CLAUDE.md says a fee will be introduced once the
  best model is identified, but fee sensitivity could change which model wins. A model that
  performs well at 0% might collapse at 10% fee because there isn't enough yield left to     
  offset slippage. Testing at least one non-zero fee point (e.g., 10%) early would avoid     
  selecting a model that can't sustain the protocol.                                         
                                                                                             
  2. Fixed 5% APY assumption. The vault APY is constant across all runs. Real vault rates    
  fluctuate. A scenario where APY drops from 5% to 2% mid-simulation would test model        
  resilience — which models degrade gracefully vs catastrophically?                          
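
A mid-simulation rate drop is easy to express as two compounding legs (daily compounding and the specific day-90 cutover are assumptions for illustration):

```python
def compound(principal, apy, days):
    """Daily-compounded vault growth at a flat APY."""
    return principal * (1 + apy / 365) ** days

# 180-day run where the vault APY falls from 5% to 2% at day 90:
steady = compound(500, 0.05, 180)
shocked = compound(compound(500, 0.05, 90), 0.02, 90)
```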
                                                                                             
  3. Token inflation = always yes is untested. It's a fixed invariant per CLAUDE.md, but     
  there's no empirical validation that it's the right choice. Running the same 16 models with
   inflation=no (32 models total, or even just the best 4 with inflation toggled) would      
  confirm whether inflation actually helps or just adds complexity.                          
                                                                                             
  ---                                                                                        
  What I'd prioritize adding                                                                 
                                                                                             
  If I had to pick three additions:                                                          
                                                                                             
  1. Hold-without-LP scenario — directly tests the "common good" thesis                      
  2. Alpha-over-benchmark metric — changes how you interpret every existing result           
  3. Late-entrant scenario — tests whether the protocol is open or extractive                
                                                                                             
  These three would fundamentally change which model looks "best" in the comparison table.   
  The current results might be misleading without them. 

My ideas:

  • focus on *YN models
  • explore deeply various bonding curves
  • explore fee mechanics

@Mc01 Mc01 changed the title Add math to visualize mechanics Milestone 1: Add math to visualize mechanics Feb 1, 2026
@Mc01 Mc01 requested a review from spaceh3ad February 1, 2026 14:52
@Mc01 Mc01 marked this pull request as ready for review February 1, 2026 14:52