I gave Claude a task: find every academic paper on prediction market inefficiencies that goes beyond the standard playbook. It came back with 40 papers. Most were theoretical noise — laboratory experiments that don't translate to a live CLOB (central limit order book — the matching engine where buy and sell orders meet) with real money at stake. But 5 had findings that map directly to tradeable strategies on Polymarket right now.
This is not a research paper. It's a public commitment to test specific strategies with $10,000 of real capital, starting this week.
How the Research Pipeline Worked
The process was straightforward. I gave Claude a prompt: scan academic databases, preprint servers (arXiv and SSRN — where researchers publish papers before peer review), and journals for every paper published since 2019 on prediction market mispricing, calibration failures (when prices consistently don't match actual probabilities), behavioral biases, and structural inefficiencies. Prioritize papers that use real trading data, not just laboratory settings.
Claude parsed the papers, extracted the key findings, estimated the magnitude of each documented edge, and flagged which ones required automation versus which could be tested manually. It produced a structured dossier — 40 papers organized by edge type, with specific numbers from the underlying data.
From those 40, I filtered out everything that doesn't work in practice: edges that professional capital already ate, edges that need infrastructure I don't have, and edges that disappear after fees.
The 5 strategies below survived that filter.
What's Already Known — The Standard Playbook
These strategies are well-documented in the prediction market community and increasingly crowded:
Single-market rebalancing arb. When the prices of all outcomes in a Polymarket market sum to less than $1.00 — YES + NO in a binary market, or every outcome in a multi-outcome one — buying the full set locks in a risk-free profit at resolution. IMDEA researchers documented $10.6M extracted this way in 12 months (April 2024–April 2025). Average window: 2.7 seconds. Fully bot-dominated.
Temporal lag arb. Polymarket leads price discovery by 2–10 minutes (Ng, Peng et al., SSRN 2025). The 2.7-second average arb window compressed from 12.3 seconds since 2024 — 73% of profits captured by sub-100ms bots.
Combinatorial / LLM-detected arb. AI pipelines identify semantically related markets at inconsistent joint probabilities. Backtests show 47.5% monthly ROI peaks, but 62% of theoretically valid opportunities fail in execution due to resolution criteria mismatches.
Bond / near-certainty contracts. Buying $0.95–$0.99 contracts and collecting at resolution. Theoretical annualized yield exceeds 1,800% with compounding — but one oracle attack undoes months of gains. The Ukraine mineral deal attack (March 2025) cost NO holders approximately $7 million.
Favorite-longshot bias. Contracts below 10 cents lose 60%+ on average; NO longshots outperform YES longshots by up to 64 percentage points across 300,000+ contracts. Still exploitable, but it's been public knowledge for years. The easy money has been extracted.
Market making / liquidity provision. One documented maker started with $10,000 and earned $700–$800/day at peak during the 2024 election cycle. Post-election, Polymarket cut rewards significantly. Current realistic return: 5–15% monthly on allocated capital.
Speed trading / news alpha. The 30-second to 5-minute human-accessible window is real but narrowing. The extreme case: a bot turned $313 into $438,000 in roughly one month monitoring Bitcoin spot prices on Binance and trading Polymarket crypto resolution markets before they updated. That specific edge closed within weeks of public documentation.
These aren't secrets. The question is: what comes next? What's in the research that practitioners haven't systematically traded yet?
Worth reading alongside this: prediction market pricing gaps and what creates them, and why thinking in probabilities is the actual foundation.
The 5 New Edges
Strategy 1: Near-Expiry Overreaction Fade ("Prediction Market 0DTE")
The edge in one sentence: In the final 24 hours before resolution, liquidity-driven price moves of 10+ percentage points revert 60–70% of the way back within 30–120 minutes — when the move has no identifiable news catalyst.
Academic backing:
Sung et al. (2019, European Journal of Operational Research) is the anchor paper: 8.4 million price points from 6,058 individual markets. Systematic finding: prediction market prices over-react to price movements throughout the market lifecycle. Increasing odds lead to underestimation; decreasing odds lead to overestimation. The statistical bias is measurable and persistent.
Clinton and Huang (Vanderbilt, 2025–2026) analyzed 2,500+ political markets with $2.4 billion in 2024 election volume. They found negative serial correlation in 58% of Polymarket's national presidential markets — prices spiked one day and reversed the next at rates far above what genuine information arrival predicts. This is noise, not information.
Dalen (arXiv 2510.15205, October 2025) adapts an options pricing formula for prediction markets and documents "event vega" — in plain terms, how much a contract's price jumps from uncertainty itself, not from actual news. Before market close, prices get especially jittery: people panic or get greedy, and contracts systematically cost more or less than they should. This is the same thing that happens with options expiring today (0DTE) — at the last moment emotions beat logic, and prices behave irrationally.
Why it's not widely traded yet:
The idea is simple: before market close, people get nervous and the price jumps for no reason — buy cheap, sell when it calms down. But in practice you need to watch three things simultaneously: when the market closes, whether there's real news (because then the jump is deserved), and whether there are enough buyers/sellers to get in and out. Most either try to catch this manually (too slow) or run bots that can't tell panic from real news — and lose money when the jump turns out to be justified.
On Polymarket specifically:
Polymarket's liquidity is highly concentrated. 63% of active short-term markets have zero 24-hour real volume (Dune Analytics). In thin markets, a single whale trade or small retail cluster can move prices 10–15pp with zero informational content. That's the entry signal. The T-2h exit rule — closing every position at least two hours before resolution — prevents holding through resolution, the single largest source of catastrophic loss in this type of trade.
Testing plan:
- Week 1–2: Build Polymarket CLOB WebSocket monitor with rolling 30-minute price-change detector. Flag any move ≥10pp in a market within 24h of resolution, priced between 30–70% YES before the move.
- Week 3: Apply news exclusion filter via EventRegistry or Perplexity API. Only flag signals where no major news event appears in the past 2 hours.
- Week 3–4: Paper trade 20 qualifying signals. Did the price revert ≥50% of the initial move within 120 minutes?
- Month 2: Deploy $1,500 real capital (15% of the $10K allocation for this strategy). Max risk per trade: 2% of total capital ($200). Hard stop at 5pp adverse move. T-2h exit rule enforced without exception.
Kill criteria: Win rate below 45% after 75 trades, or Sharpe below 0.5 after 100 trades.
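The detection rules above can be sketched in a few lines. This is a minimal illustration of the signal logic only — the class name, fields, and thresholds mirror the plan, but wiring it to the actual Polymarket CLOB WebSocket feed and the news-exclusion filter is separate work:

```python
from collections import deque
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(minutes=30)          # rolling look-back for the move detector
MOVE_THRESHOLD = 0.10                   # >= 10pp move triggers a signal
PRICE_ZONE = (0.30, 0.70)               # pre-move YES price must sit in this band
RESOLUTION_CUTOFF = timedelta(hours=24) # only markets within 24h of resolution

class MoveDetector:
    """Flags >=10pp moves within 30 minutes for markets inside 24h of resolution."""

    def __init__(self, resolves_at):
        self.resolves_at = resolves_at
        self.history = deque()  # (timestamp, yes_price) ticks inside the window

    def on_price(self, ts, yes_price):
        # Drop ticks older than the rolling 30-minute window.
        while self.history and ts - self.history[0][0] > WINDOW:
            self.history.popleft()
        signal = None
        if self.history:
            base_ts, base_price = self.history[0]
            move = yes_price - base_price
            in_zone = PRICE_ZONE[0] <= base_price <= PRICE_ZONE[1]
            near_expiry = self.resolves_at - ts <= RESOLUTION_CUTOFF
            if abs(move) >= MOVE_THRESHOLD and in_zone and near_expiry:
                signal = {"move_pp": round(move * 100, 1), "base": base_price}
        self.history.append((ts, yes_price))
        return signal
```

A flagged signal still has to pass the news-exclusion check before it becomes a trade; the detector only catches the mechanical pattern.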
Strategy 2: Political Calibration Compression + Limit Order Execution
The edge in one sentence: Political prediction markets systematically understate high-probability outcomes — a 70% Polymarket price in a political market at 1-week horizon reflects approximately 83% true probability — and entering via limit orders captures a structural execution advantage on top of the calibration edge.
Academic backing:
Le (arXiv 2602.19520, February 2026) is the most comprehensive calibration study in the dataset: 292 million trades across 327,000 binary contracts. A Bayesian hierarchical model explains 87.3% of calibration variance. The dominant finding for political markets: a persistent +0.15 intercept — prices are chronically compressed toward 50%. The mechanism: opposing partisan traders partially cancel each other's positions regardless of the underlying probability. More volume brings more politically motivated noise traders, not better calibration.
Translation into trading terms: a political market priced at 70% YES reflects approximately 83% true probability at a 1-week horizon. That's a 13 percentage point systematic underpricing — not in one market, but structurally, across political markets as a category.
The execution side comes from Becker (2024) — 72.1 million trades, $18.26 billion in volume. Makers (limit orders) outperform takers (market orders) by approximately 22 percentage points structurally. At 1-cent contracts, makers win at 1.57x the implied probability while takers win at 0.43x. Limit orders capture a structural advantage before any information edge is applied.
Bürgi, Deng, and Whelan (CEPR, 2025) confirm across 300,000+ contracts: takers lose 32% on average. The maker/taker asymmetry is robust across platforms.
Entry criteria (exact), Polymarket political markets only:
- Market domain: Political (elections, policy decisions, regulatory votes)
- Price zone: 55–80% YES — the compression zone where the +0.15 calibration intercept creates systematic underpricing
- Consensus check: Metaculus or Manifold shows the same event at 65%+ (confirming the compression is real, not just my belief)
- Time to resolution: 3–14 days
- Open interest: >$20,000 (exit liquidity requirement)
- Order type: Limit orders only — post buy 2pp below current best ask
- Direction: Buy YES at the compressed price
Exit: Sell limit at 85% YES (capturing 50% of the calibration correction), or hold to resolution if probability exceeds 90% at T-24h. Stop: exit if price drops below 48% (calibration was wrong, genuine uncertainty exists).
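The entry checklist above is mechanical enough to encode directly. A minimal sketch — the dict fields are hypothetical placeholders for whatever the scanner actually pulls from the Polymarket and Metaculus/Manifold APIs:

```python
def qualifies_strategy2(market, reference_prob):
    """Strategy 2 entry filter for a political market.

    `market` is a dict with hypothetical field names; `reference_prob` is the
    Metaculus/Manifold consensus probability on the same event (0-1).
    """
    return (
        market["domain"] == "political"
        and 0.55 <= market["yes_price"] <= 0.80       # calibration compression zone
        and reference_prob >= 0.65                    # external consensus check
        and 3 <= market["days_to_resolution"] <= 14
        and market["open_interest"] >= 20_000         # exit liquidity requirement
    )

def limit_buy_price(best_ask):
    """Post the buy limit 2pp below the current best ask (limit orders only)."""
    return round(best_ask - 0.02, 2)
```

Every criterion is a hard AND; a market missing any single check is skipped rather than weighted.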
My take:
This is the strategy I'm most confident in. The academic backing is the strongest of any strategy here — 292 million trades, Bayesian model, 87.3% variance explained, published February 2026. The edge doesn't require being clever; it requires patience in a category where retail emotion generates systematic underpricing of high-probability outcomes. My job is to identify the compression, verify it with an external reference price, and wait with a limit order.
Testing plan:
- Week 1: Build political market scanner flagging Polymarket markets in 55–80% YES zone with Metaculus/Manifold comparison
- Week 2: Paper trade 15 qualifying signals
- Month 2: Deploy $2,500 real capital, maximum 4 concurrent positions, 2% risk per trade
Kill criteria: Limit order fill rate below 35%, or win rate below 52% after 80 trades.
Strategy 3: Economic Data Release Anchoring Contrarian
The edge in one sentence: Expert consensus forecasts for major economic data releases are systematically anchored to prior-month values, Polymarket prices inherit this anchoring, and a leading-indicator model that detects the anchor correctly predicts the surprise direction with exploitable regularity.
Academic backing:
Campbell and Sharpe (JFQA, 2007, updated SSRN 2021) documented the core mechanism: Bloomberg consensus forecasts for monthly economic releases (CPI, NFP, PMI) are significantly anchored to prior-month values. The critical finding: this anchoring is not fully arbitraged even in the bond market — a vastly more efficient market than Polymarket.
New validation from the Federal Reserve itself: Diercks, Katz, and Wright (Federal Reserve FEDS Paper, February 2026) analyzed economic prediction market data and found: markets had a "perfect forecast record" on Fed rate decisions by the day before FOMC meetings; statistically significant improvement over Bloomberg consensus for headline CPI; and an asymmetric response pattern — upside CPI surprises drive significantly larger market reactions than equivalent downside surprises.
That asymmetry is the specific directional signal: when leading indicators suggest an upside CPI surprise relative to the anchored Bloomberg consensus, the market reaction will be disproportionately large. A Fed research paper is now providing theoretical backing for this.
Additional confirmation: Royal Society Open Science (2025) found prediction markets outperform survey-based consensus (like Bloomberg) by approximately 7 percentage points in accuracy. When Polymarket prices diverge from Bloomberg consensus by more than 7pp, the prediction market is more likely correct — which means when they agree, the prediction market has inherited the anchoring error from the consensus.
Entry criteria:
- Event: Major economic data release with active Polymarket markets (Fed rate decision, CPI direction, NFP level)
- Timing: Enter 3–7 days before the release — before the consensus lock-in period
- Anchor test: Bloomberg consensus for the upcoming release is within 1% of prior month's actual (confirming anchoring is likely present)
- Leading indicator divergence: My nowcast-based forecast (a real-time estimate built from the latest data, unlike traditional forecasts that update monthly) diverges from Bloomberg consensus by ≥5pp on the Polymarket binary question
- Confirmation: At least 2 independent leading indicators support the direction
Free leading indicator sources:
- CME FedWatch for rate decisions (fed funds futures)
- Cleveland Fed Inflation Nowcasting for CPI
- JOLTS data for NFP (leads by 4–6 weeks)
- ADP employment report for NFP (leads by 2–4 weeks)
- Atlanta Fed GDPNow for GDP
Exit: If the data release aligns with the forecast → sell at >90% YES. Pre-release stop: if 2 leading indicators flip direction → exit immediately.
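The anchor test and divergence rule combine into a single signal function. This is an illustrative sketch under stated assumptions — the divergence is measured against the market price as the consensus-implied probability, and all names are mine, not from the papers:

```python
def anchoring_signal(consensus, prior_actual, nowcast_prob, market_prob,
                     indicators_agreeing):
    """Return a trade direction ('YES'/'NO') or None per the entry criteria.

    `consensus` and `prior_actual` are raw economic readings (e.g. CPI m/m);
    `nowcast_prob` and `market_prob` are 0-1 probabilities on the Polymarket
    binary question; `indicators_agreeing` counts independent leading
    indicators supporting the nowcast direction.
    """
    # Anchor test: consensus within 1% of prior month's actual.
    anchored = (prior_actual != 0
                and abs(consensus - prior_actual) / abs(prior_actual) <= 0.01)
    # Nowcast must diverge from the market-implied consensus by >= 5pp,
    # with at least 2 independent indicators agreeing.
    divergence = nowcast_prob - market_prob
    if anchored and abs(divergence) >= 0.05 and indicators_agreeing >= 2:
        return "YES" if divergence > 0 else "NO"
    return None
```

When the consensus is not anchored, the function returns None even on a large divergence — in that case the market disagreement may reflect genuine information rather than the anchoring error this strategy targets.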
My take:
This requires the most preparation. Building a rough CPI nowcast from public data takes 4–6 hours of one-time setup. But the edge is theoretically the cleanest: anchoring persists in bond markets where billions of dollars try to exploit it, and still survives. Polymarket is far less efficient than the bond market. The payoff per trade when correct is also the highest of any strategy here — a market at 35% that resolves YES pays 186% on capital deployed.
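The 186% figure is just the standard binary-contract payoff arithmetic, worth making explicit:

```python
def payoff_pct(entry_price):
    """Return on capital if a YES contract bought at `entry_price` resolves YES.

    A winning contract pays $1.00, so profit per contract is (1 - entry_price).
    """
    return (1.0 - entry_price) / entry_price * 100

# A market bought at 35 cents that resolves YES:
# (1.00 - 0.35) / 0.35 = ~186% on capital deployed.
```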
Testing plan:
- Month 1: Build leading indicator models for 3–4 economic variables (CPI, NFP, Fed decision)
- Month 2: Test against 12 months of historical Polymarket economic market data; identify past anchor divergence signals
- Month 3: Deploy $1,500 real capital. Max risk 3% of this allocation per trade. Face value: approximately $560 per trade at 8pp stop.
- Expected signal frequency: 2–4 qualifying events per month.
Kill criteria: Forecast model accuracy below 55% after 30 signals, or average market divergence when correct below 20pp.
Strategy 4: Sentiment Cycle Fade (Wash-Filtered)
The edge in one sentence: Social media-driven retail floods into prediction markets in predictable cycles; fading the hype peak on Polymarket markets with confirmed low wash-trading contamination generates consistent positive expectation.
Academic backing:
Tai et al. (Journal of Prediction Markets, 2022–2023) analyzed 1,943 individual traders and found herding is far more prevalent than estimated — less-informed traders follow more-informed ones, amplifying price movements beyond their informational content by 5–15 percentage points.
Clinton and Huang (Vanderbilt, 2025–2026) documented negative serial correlation in 58% of Polymarket's presidential markets — daily spikes reversed the following day. The pattern was strongest in the final two weeks before Election Day, when social media coverage peaked. More noise, not more information.
The practitioner validation: Controverity's 2025 Polymarket Playbook documents 70%+ win rates for traders following the Rumor→Hype→Peak fade framework across tracked cohorts.
The critical addition — what makes this distinct from generic contrarianism — is the wash-trading filter. Columbia University's network-based analysis (SSRN 5714122, November 2025) found approximately 25% of all Polymarket volume is wash trading, rising to 45%+ in sports markets and 60% during the December 2024 airdrop period. Fading a "sentiment spike" in a sports market where 45% of volume is fake is not sentiment trading — it's noise. The filter matters.
Entry criteria (exact):
- Market category wash-trading score: estimated <30% fake volume. Practical filter: avoid Sports category entirely for directional positions; avoid any market that showed volume spikes during December 2024
- Social signal: a viral post directly about this specific market on Twitter/X (>3,000 engagements) or Reddit r/polymarket (>50 upvotes) within the past 3 hours
- Price jump: YES price increased ≥8pp in the last 4 hours, driven by multiple small trades — not a single wallet
- Volume spike: Volume in last 2 hours is ≥3x the 7-day daily average
- Emotional premium: Current price is ≥8pp above Metaculus or Manifold reference probability
- No fundamental justification: No major news event explaining the move in the last 4 hours (10-minute Google News check)
- Cycle timing: Viral post is 3–8 hours old (herding has peaked, reversion is imminent)
- Entry: Buy NO at current market price — urgency justifies taker order here
What makes or breaks this strategy:
The false-positive rate is the main problem. A social media spike driven by a whale taking a large position looks identical to a retail sentiment spike — until you check on-chain wallet concentration. If a single wallet accounts for more than 40% of the volume driving the spike, pass on the trade. That's whale-driven, not sentiment-driven, and the reversion mechanics are different (and covered by Strategy 5 below).
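Combining the entry criteria with the whale check gives one pass/fail filter. A minimal sketch — the field names are hypothetical and assume the social monitor and on-chain scanner have already computed each value:

```python
def qualifies_strategy4(signal):
    """Sentiment-fade entry filter, including the whale-vs-retail check.

    `signal` is a dict of pre-computed fields (hypothetical names); every
    check must pass for a trade.
    """
    checks = [
        signal["wash_volume_share"] < 0.30,          # wash-trading filter
        signal["category"] != "sports",              # category exclusion
        signal["social_engagements"] >= 3000
            or signal["reddit_upvotes"] >= 50,       # viral-post threshold
        signal["price_jump_pp"] >= 8,                # YES jump in last 4 hours
        signal["volume_2h"] >= 3 * signal["avg_daily_volume_7d"],
        signal["premium_vs_reference_pp"] >= 8,      # vs Metaculus/Manifold
        not signal["news_in_last_4h"],               # no fundamental justification
        3 <= signal["viral_post_age_hours"] <= 8,    # cycle timing
        signal["top_wallet_volume_share"] <= 0.40,   # whale check: pass on the trade otherwise
    ]
    return all(checks)
```

The last check encodes the paragraph above: one wallet driving more than 40% of the spike's volume disqualifies the signal as whale-driven rather than sentiment-driven.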
Testing plan:
- Week 1–2: Set up Twitter API monitoring for Polymarket market URLs and keywords. Set up Reddit RSS for r/polymarket.
- Week 3: Build cross-reference scanner: social signals + Polymarket price data + Metaculus reference prices.
- Week 4: Paper trade 15 signals.
- Month 2: Deploy $2,000 real capital. Max 3 concurrent positions. $200 max loss per trade (2% of $10K total capital, 8pp stop on $2,500 face value).
Kill criteria: Win rate below 52% after 80 trades, or average reversion below 4pp after transaction costs.
Strategy 5: On-Chain Insider Signal Radar
The edge in one sentence: Genuine insiders leave a detectably distinctive on-chain footprint before major announcements — fresh wallet, single market, outsized position, specific event categories — and following these signals is legal (public blockchain data), has documented 6–48 hour lead windows, and in the strongest historical cases paid 1,000%+.
Documented cases establishing the pattern:
Nobel Peace Prize, October 2025. Account "dirtycup" placed $68,340 on María Corina Machado at 3.6% odds at 3:41 AM UTC. The wallet was created weeks before the market opened. Only this single market was ever traded. The contract resolved at $1.00. Return: approximately 2,700%. The on-chain footprint — fresh wallet, single market, geopolitics category, timing 12 hours before announcement — is textbook.
AlphaRaccoon wallet. 22 of 23 Google-related predictions correct. Made $150,000 on the exact release date of Google Gemini 3.0. Win rates statistically impossible by chance. Domain-specialist insider profile.
Israel criminal indictment, February 2026. The first criminal prosecution for prediction market insider trading in the world — establishing that on-chain patterns are legally recognized as evidence of insider activity. The flip side: following such patterns (observing public blockchain data, not originating the trades) is legal.
Maduro removal, $400,000+. Positions placed 12 hours before the announcement "suggesting White House/DoD-level advance knowledge." The 12-hour lead window is consistent across multiple documented cases.
The academic foundation comes from the Columbia wash-trading study — the network-clustering methodology that establishes what authentic versus fake trading activity looks like. That baseline is what makes insider pattern detection possible.
Entry criteria — ALL of the following required:
Suspicious wallet profile (need ≥2 of 4):
- Wallet age < 21 days
- Total markets ever traded ≤ 5
- Position size >$3,000 face value (outsized relative to wallet history)
- Event category: geopolitics, science awards (Nobel, Pulitzer), FDA approvals, corporate M&A, legal rulings
Confirmation requirements (ALL required):
- ≥ 2 distinct suspicious wallets (different age/history profiles, not the same coordinated cluster) entering the same direction in the same market within 4 hours
- Combined face value of suspicious positions >3% of the market's open interest
- No major public news event in the last 6 hours explaining the direction
- Market currently priced 10–50% YES (suspicious wallets moving a longshot, not confirming an already high-probability outcome)
- Timing: 8–48 hours before potential resolution
Entry: Take the same-direction position immediately after all criteria confirm. Polywhaler's free tier covers the $10,000+ trade alerts needed.
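The wallet-profile scoring ("need ≥2 of 4") can be sketched directly. Class and category names here are my own shorthand, and real data would come from Polygon RPC plus the Polymarket API:

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    age_days: int
    markets_traded: int          # total markets ever traded
    position_face_value: float   # face value of the flagged position, USD
    category: str                # event category of the market

INSIDER_CATEGORIES = {"geopolitics", "science_awards", "fda", "m_and_a", "legal"}

def suspicion_score(w: Wallet) -> int:
    """Count how many of the four insider-profile criteria a wallet meets."""
    return sum([
        w.age_days < 21,
        w.markets_traded <= 5,
        w.position_face_value > 3000,
        w.category in INSIDER_CATEGORIES,
    ])

def is_suspicious(w: Wallet) -> bool:
    return suspicion_score(w) >= 2   # entry criteria require >= 2 of 4
```

This scorer only handles the per-wallet profile; the cross-wallet confirmation requirements (≥2 distinct wallets, >3% of open interest, no news, price zone, timing) sit on top of it.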
The math I'm not hiding from:
If only 1 in 5 full-criteria signals is genuine, the EV at my profit/loss targets is negative. At 1 in 3 genuine signals, the math works — a 2,700% winner (Nobel case) offsets many 8pp stop losses. The edge only activates above a 35–40% genuine signal rate. That's why this is Strategy 5, not Strategy 1. I'm allocating the smallest capital here while running signal tracking in parallel with the other four strategies.
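To make the break-even arithmetic concrete, here is the EV per unit of face value under illustrative targets I'm choosing for the sketch — a 20pp profit target on genuine signals against the 8pp stop (not the 2,700% outlier, which is rare):

```python
def signal_ev(p_genuine, win=0.20, loss=0.08):
    """Expected value per unit of face value.

    Assumes genuine signals hit an illustrative 20pp profit target and false
    signals lose the 8pp stop. Break-even rate: p* = loss / (win + loss).
    """
    return p_genuine * win - (1 - p_genuine) * loss

# Break-even genuine-signal rate = 0.08 / 0.28 ~= 28.6%:
# at 1-in-5 (20%) the EV is negative; at 1-in-3 (~33%) it turns positive.
```

Larger assumed wins push the break-even rate down; the 35–40% activation threshold in the text reflects more conservative targets plus fees and slippage.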
Testing plan:
- Week 1–2: Build wallet monitoring script (Polygon RPC via Alchemy, Polymarket API). Wallet scorer: age, trade history, market concentration, position size relative to wallet history.
- Week 3–4: Paper-trade mode. Catalog historical signals going 3–6 months back and compare to known resolution events.
- Month 2: Run live signal tracking with $0 capital deployed. Measure signal quality.
- Month 3–4: Deploy $1,500 real capital only after confirming genuine signal rate exceeds 30% on 20+ paper-trade signals.
Kill criteria: Genuine signal rate below 20% after 40 triggered full-criteria alerts.
Capital Allocation: How the $10,000 Breaks Down
| Strategy | Allocated | Max Concurrent | Max Risk/Trade | Start Date |
|---|---|---|---|---|
| #1 Near-Expiry Fade | $1,500 | 2 positions | $200 (2% of $10K) | Week 2 (after WebSocket built) |
| #2 Political Calibration | $2,500 | 4 positions | $200 (2% of $10K) | Day 1 — start here |
| #3 Economic Anchoring | $1,500 | 2 positions | $300 (3% of $10K) | Month 2 (after model built) |
| #4 Sentiment Cycle Fade | $2,000 | 3 positions | $200 (2% of $10K) | Week 3 (after social monitor built) |
| #5 On-Chain Insider Radar | $1,500 | 2 positions | $250 (2.5% of $10K) | Month 3 (after paper-trade calibration) |
| Cash Reserve | $1,000 | — | — | Liquidity buffer, opportunistic |
| Total | $10,000 | ≤8 at any time | ≤3%/trade | Never >90% deployed |
Sequencing matters. Strategy 2 starts immediately — it requires a Metaculus API call and a Polymarket account. Strategy 1 starts after the WebSocket feed is built (week 2). Strategy 4 runs after the social monitoring is live (week 3). Strategy 3 starts after the leading indicator model is built (month 2). Strategy 5 stays in paper-trade mode for 6 weeks while I collect and calibrate signal data.
Timeline
| Period | Actions |
|---|---|
| Week 1 | Strategy 2 scanner live. Paper trade political calibration signals. Begin Strategy 5 paper-trade signal collection. |
| Week 2 | Polymarket CLOB WebSocket feed built. Paper trade Strategy 1 (first 10 signals). Metaculus/Manifold API integrated. |
| Week 3 | Deploy $2,500 real capital to Strategy 2. Twitter/Reddit social monitor live. Paper trade Strategy 4 (first 15 signals). |
| Week 4 | Deploy $1,500 to Strategy 1 if paper trades show >50% win rate. |
| Month 2 | Build economic leading indicator model. Deploy $2,000 to Strategy 4 if paper trades validate. First Strategy 3 live event. |
| Month 2–3 | Deploy $1,500 to Strategy 3. Continue Strategy 5 signal calibration. |
| Month 3 | First comprehensive P&L review. Reallocate capital from underperformers to outperformers. |
| Month 4–6 | Full portfolio active. Monthly rebalancing. Results published weekly. |
Strategy Comparison
| Strategy | Edge Type | Est. EV/Trade | Infrastructure | Confidence |
|---|---|---|---|---|
| #1 Near-Expiry Fade | Behavioral + Timing | +2.2pp | Medium (WebSocket) | HIGH |
| #2 Political Calibration | Calibration + Structural | +4.1pp | Low (API) | HIGHEST |
| #3 Economic Anchoring | Cognitive Bias + Model | +15–25pp (rare) | Medium (data model) | HIGH |
| #4 Sentiment Cycle Fade | Behavioral + Social | +3.1pp | Medium (social monitor) | HIGH |
| #5 Insider Radar | Information Flow | +1.2pp (est.) | Low–Medium | MEDIUM |
Strategy 2 has the highest confidence: 292 million trades, Bayesian model, 87.3% variance explained, February 2026 publication. The edge mechanism is fully explained — partisan emotional participation isn't going away, and the calibration compression is structural.
Strategy 3 has the highest per-trade EV but fires infrequently — 2–4 qualifying signals per month. The Federal Reserve paper validating Polymarket's accuracy advantage over Bloomberg consensus is the most recent confirmation.
Strategy 5 has the most asymmetric payout. Most signals will fail calibration; the ones that don't can return 1,000%+.
The Risks I'm Taking Seriously
Oracle attacks. The Ukraine mineral deal (March 2025) established that qualitative resolution criteria are attack surfaces. Before any position above $500, I check: is the resolution criteria quantitative and unambiguous? "Did BTC close above $X on date Y?" is safe. "Did X agree to Y?" is not. Large positions in quantitatively-resolved markets only.
Wash trading signal pollution. 25% of Polymarket volume all-time is fake, rising to 45% in sports markets. I never use raw volume as a market quality signal. Open interest only.
Edge compression. Bid-ask spreads (the gap between what buyers offer and sellers ask — a smaller gap means a healthier market) on Polymarket compressed from 4.5% (2023) to 1.2% (2025). Professional capital entering post-2024-election is closing mechanical edges. Every strategy has defined kill criteria. If the edge disappears, I stop trading it and publish that result.
The base rate of losing. Only 7.6% of Polymarket wallets are net profitable. Only 0.51% have made more than $1,000 total. The academic backing gives me more confidence than most retail participants have — but academic edges require disciplined execution to survive contact with a live market. The test is whether the edge is real in aggregate, not on individual trades.
What I'm Not Testing
Speed trading / news alpha. 73% of arbitrage profits go to sub-100ms bots. The human-accessible window (30 seconds to 5 minutes) requires being in front of screens with positions pre-sized at all times. That's a job, not a strategy for this experiment.
Favorite-longshot bias as a standalone strategy. This is the standard playbook. Contracts below 10 cents lose 60%+ on average. I apply this as a position-sizing discipline — no YES positions below 10 cents — but it's not a separate strategy to test.
Market making. The 2024 election peak earnings of $700–$800/day are not reproducible in 2026 without significant capital and custom infrastructure. The CLOB WebSocket built for Strategy 1 enables this in month 3 — I'll assess at that point.
How I'll Know If This Works
Weekly updates on PredictionTalk. Wins and losses. If a strategy produces 40+ trades with a win rate below kill criteria, I'll say it doesn't work and reallocate capital.
The 3-month checkpoint: Strategies 1, 2, and 4 should hit 30–50 resolved trades within 8–10 weeks — enough for a preliminary assessment. Strategy 3 needs 3–6 months (economic data releases are monthly). Strategy 5 needs 3–4 months of signal calibration data before any real capital goes in.
The trade I'm most curious about: Strategy 2, the political calibration compression. Le's 292-million-trade study is from February 2026 — not stale research from 2020. The +0.15 calibration intercept is structural, driven by partisan emotional participation that isn't going anywhere. Whether that edge survives at $10K scale over a 6-month test is the real question.
Which of these strategies do you think has the most edge in current market conditions? What am I missing? If you've traded any of these patterns, I want to know what you found.
Reply below, or continue this discussion in the prediction markets forum thread. First weekly update posts next Friday — win or lose.