Kevin Davey—award-winning algorithmic trader and author—sits down for a straight-talk interview on what it actually takes to build and run profitable automated strategies. If you’ve seen his work, you know Kevin’s not about hype; he’s about process, testing, and staying power. In this conversation, he explains why more tools and Python coders haven’t made algo trading “easy,” how markets keep shifting under your feet, and why discipline beats clever code.
You’ll learn the unglamorous reality that trading costs can flip “working” ideas to losers, why robustness matters more than universality, and how Kevin uses walk-forward optimization, Monte Carlo, and live incubation to separate durable edges from curve-fit mirages. He also lays out the trader’s mindset for algos—setting turn-off rules, accepting early losses without panic, and favoring consistency you can actually stick with. If you’re new to systematic trading or refining your playbook, this piece gives you a practical blueprint to build strategies you can trust when real money is on the line.
Kevin Davey Playbook & Strategy: How He Actually Trades
Core Philosophy: Robust, Not Universal
Kevin Davey builds strategies that actually survive contact with live markets. He doesn’t chase a single “works-everywhere” holy grail; he hunts for edges that are robust enough to trade where they truly work, and he keeps adapting as markets change.
- Build strategies to trade specific markets and bar sizes where they prove themselves, rather than forcing a “one-size-fits-all” template.
- Judge strategies by risk-adjusted real-time performance, not by how pretty the backtest looks.
- Expect market behavior to shift; plan for continual improvement and retirement of systems rather than permanence.
Idea to Prototype: Model Costs First
Most backtests “win” until you add reality. Kevin insists on modeling slippage and commissions up front so the test reflects the fills you’ll actually get.
- Hard-code slippage and commission assumptions before any optimization; never run “free” fills (see the cost sketch after this list).
- Re-test with pessimistic costs to see if the edge survives when execution gets worse.
- Prefer ideas that still work after you round entries/exits (e.g., nearest tick) to avoid micro-curve fits.
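Here is a minimal sketch of cost-aware trade accounting, assuming hypothetical tick size, point value, commission, and slippage figures. None of these numbers are Kevin's actual settings; they only show where the costs get charged and how a "winning" trade shrinks once reality is applied.

```python
# Minimal sketch: applying commission and slippage to raw backtest fills.
# All numbers (tick size, point value, costs) are illustrative assumptions.

TICK_SIZE = 0.25              # e.g., an index-futures tick (assumption)
POINT_VALUE = 50.0            # dollars per point (assumption)
COMMISSION_PER_SIDE = 2.50    # dollars, charged on entry and exit (assumption)
SLIPPAGE_TICKS = 1            # pessimistic: lose one tick per side (assumption)

def round_to_tick(price: float) -> float:
    """Round a theoretical price to the nearest tradable tick."""
    return round(price / TICK_SIZE) * TICK_SIZE

def net_trade_pnl(entry: float, exit_: float, direction: int, contracts: int = 1) -> float:
    """Gross P&L minus slippage and commissions for one round-turn trade.

    direction: +1 for long, -1 for short.
    """
    entry = round_to_tick(entry)
    exit_ = round_to_tick(exit_)
    gross = direction * (exit_ - entry) * POINT_VALUE * contracts
    slippage = 2 * SLIPPAGE_TICKS * TICK_SIZE * POINT_VALUE * contracts  # both sides
    commission = 2 * COMMISSION_PER_SIDE * contracts
    return gross - slippage - commission

# A "winning" 2-point scalp: $100 gross becomes $70 after $25 slippage and $5 commissions.
print(net_trade_pnl(entry=5000.00, exit_=5002.00, direction=+1))  # 70.0
```

Re-running the same test with larger slippage assumptions is the quickest way to see whether an edge survives when execution gets worse.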
Acceptance Testing: Walk-Forward + Monte Carlo
Before a strategy gets near live trading, it must survive a gauntlet. Kevin’s staples are walk-forward optimization to generate out-of-sample results and Monte Carlo to pressure-test the equity curve.
- Run walk-forward optimization and require out-of-sample results to be in line with in-sample; if walk-forward fails, discard the strategy.
- Use Monte Carlo resampling of trade sequences to estimate probable drawdowns (sketched after this list); don’t deploy if projected drawdowns break your tolerance.
- Perform component tests (entries vs. exits) to confirm there’s an actual edge beyond luck or exits alone.
- Delete the top 2–3% of winning trades from the test and confirm the strategy still holds up; avoid systems that rely on rare jackpots.
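Below is a minimal sketch of the last two checks, assuming you already have a list of per-trade P&L figures from a backtest: a reshuffle-based Monte Carlo drawdown estimate and a "strip the jackpots" test. The sample trades, run count, and percentile are illustrative, not prescriptive.

```python
# Minimal sketch: Monte Carlo reshuffling of a trade P&L list to estimate a
# probable worst-case drawdown, plus a check that the edge survives without
# its biggest winners. The trade list is made-up sample data.
import random

def max_drawdown(trade_pnls: list[float]) -> float:
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = peak = worst = 0.0
    for pnl in trade_pnls:
        equity += pnl
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

def monte_carlo_drawdown(trade_pnls: list[float], runs: int = 5000,
                         percentile: float = 0.95) -> float:
    """Reshuffle the trade order many times; return the drawdown at `percentile`."""
    draws = []
    for _ in range(runs):
        shuffled = random.sample(trade_pnls, len(trade_pnls))  # reorder, same trades
        draws.append(max_drawdown(shuffled))
    draws.sort()
    return draws[int(percentile * (len(draws) - 1))]

def survives_without_jackpots(trade_pnls: list[float], strip_pct: float = 0.03) -> bool:
    """Drop the biggest winners (top ~3%) and check the remainder is still profitable."""
    n_strip = max(1, int(len(trade_pnls) * strip_pct))
    trimmed = sorted(trade_pnls)[:-n_strip]
    return sum(trimmed) > 0

backtest_trades = [120, -80, 200, -50, 75, -130, 60, 90, -40, 150]  # illustrative only
print(f"95th-percentile drawdown estimate: {monte_carlo_drawdown(backtest_trades):.0f}")
print("holds up without jackpots:", survives_without_jackpots(backtest_trades))
```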
Go/No-Go Rules: Clear Pass/Fail Criteria
Subjective “feels” are banned. Each test has a binary outcome, so only a small fraction of candidates graduate.
- Pre-write pass/fail gates (e.g., minimum return-to-drawdown, maximum time under water, walk-forward pass required); see the sketch after this list.
- If any gate fails—walk-forward, Monte Carlo, edge tests, or cost realism—the strategy is out with no edits.
- Keep a build log: idea, settings, gates, and final disposition (pass/fail) to avoid memory bias.
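A minimal sketch of binary gates, assuming a hypothetical `TestSummary` record and illustrative thresholds; the exact numbers are not Kevin's, but the point stands: every gate returns pass or fail, with no room for "feels".

```python
# Minimal sketch: binary pass/fail gates evaluated from a strategy's test summary.
# The gate thresholds and metric names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestSummary:
    walk_forward_passed: bool
    return_to_drawdown: float      # net profit / max drawdown
    max_months_underwater: int
    mc_worst_drawdown: float       # Monte Carlo drawdown estimate, dollars

GATES = {
    "walk-forward pass": lambda s: s.walk_forward_passed,
    "return/drawdown >= 2.0": lambda s: s.return_to_drawdown >= 2.0,
    "time under water <= 12 months": lambda s: s.max_months_underwater <= 12,
    "MC drawdown <= $10,000": lambda s: s.mc_worst_drawdown <= 10_000,
}

def go_no_go(summary: TestSummary) -> bool:
    """All gates must pass; a single failure means the strategy is discarded, not edited."""
    results = {name: check(summary) for name, check in GATES.items()}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}  {name}")
    return all(results.values())

candidate = TestSummary(True, 2.4, 9, 8_500)   # illustrative numbers for one candidate
print("GO" if go_no_go(candidate) else "NO-GO")
```

The same gate table doubles as the build log entry: idea, settings, each gate's result, and the final disposition.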
Live Incubation: Prove It Without Capital
Even winners can crumble. Kevin lets approved strategies run live (no or minimal capital) to experience different regimes before full deployment.
- Incubate ~6–9 months to observe multiple market conditions; don’t judge after a week or two.
- Track live vs. backtest trade-by-trade to confirm the rules are executable exactly as designed.
- Promote only if live performance is consistent with the test envelope (profit factor, drawdown, win/loss rhythm), as sketched below.
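A minimal sketch of an end-of-incubation envelope check, assuming an illustrative profit-factor tolerance and a simple drawdown comparison; Kevin's exact promotion criteria may differ.

```python
# Minimal sketch: compare incubation results to the tested envelope before promotion.
# Tolerances, metrics, and sample trades are illustrative assumptions.

def profit_factor(trade_pnls: list[float]) -> float:
    """Gross wins divided by gross losses."""
    wins = sum(p for p in trade_pnls if p > 0)
    losses = -sum(p for p in trade_pnls if p < 0)
    return wins / losses if losses else float("inf")

def max_drawdown(trade_pnls: list[float]) -> float:
    """Largest peak-to-trough drop in the trade-by-trade equity curve."""
    equity = peak = worst = 0.0
    for pnl in trade_pnls:
        equity += pnl
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

def promote(live_trades: list[float], tested_pf: float, tested_max_dd: float,
            pf_tolerance: float = 0.7) -> bool:
    """Promote only if live results sit inside the tested envelope."""
    return (profit_factor(live_trades) >= pf_tolerance * tested_pf
            and max_drawdown(live_trades) <= tested_max_dd)

live = [80, -60, 150, -45, 70, -120, 55]       # illustrative incubation trades
print("promote" if promote(live, tested_pf=1.8, tested_max_dd=600) else "keep incubating")
```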
Deployment: Add, Replace, Evolve
Kevin rotates strategies over time—turning on new ones and retiring laggards—so the portfolio stays fresh without relying on any single edge.
- Turn on new strategies regularly (e.g., monthly cadence) once they graduate from incubation.
- Replace or pause strategies that deviate materially from their tested behavior or break your drawdown limits.
- Maintain execution discipline: take every signal exactly; never skip trades that were counted in the backtest.
Market Selection: Narrow Where It Works
Strategies rarely work everywhere. Kevin tests across symbols/timeframes, but he’s happy to trade a strategy only where it genuinely earns its keep.
- Test promising ideas across several markets and bar sizes, but deploy where metrics are strongest.
- Don’t reject a strategy just because it doesn’t generalize to all markets; trade it where it’s robust.
- Monitor for regime shifts (e.g., unusual commodity behavior) and be ready to downsize or shelve a strategy when conditions change.
Psychology & Process: Expect Early Pain
Even great systems can start cold. Kevin plans for the emotional hit so it doesn’t derail the plan.
- Assume the first few trades can lose; size small and stick to the rules anyway.
- Eliminate discretionary overrides like “turning it off for the weekend” if your backtest included weekend behavior; trade what you tested.
- Treat algo trading as a marathon: consistency and execution beat impatience and tinkering.
Risk Management: Size for Survival
Edges are fragile; lost capital is hard to replace. Kevin prioritizes staying power over maximum return.
- Size positions so that worst-case Monte Carlo drawdowns remain within your personal limits.
- Use uniform sizing rules for strategies of similar volatility so one system can’t dominate risk.
- Keep capital in reserve to add/replace strategies without over-concentrating.
Maintenance: Tight Feedback Loops
Edges fade or break. The job is to catch decay early and keep building the next set of candidates.
- Run a weekly dashboard: live stats vs. expected ranges, slippage drift, error logs (a slippage-drift sketch follows this list).
- Re-validate with rolling walk-forward or periodic checkups; if reality diverges, investigate and downshift size.
- Maintain a pipeline: ideation → cost-aware prototype → test gauntlet → incubation → deploy → monitor → retire.
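As a small illustration of the slippage-drift item above, here is a sketch that compares live fills against a budgeted per-side slippage figure. The prices, point value, and budget are assumptions, not real log data.

```python
# Minimal sketch: a weekly slippage-drift check, comparing actual fills to the
# slippage budgeted in the backtest. All figures are illustrative assumptions.

BUDGETED_SLIPPAGE_PER_SIDE = 12.50   # dollars per contract per side (assumption)
POINT_VALUE = 50.0                   # dollars per point (assumption)

def slippage_per_side(signal_price: float, fill_price: float, direction: int) -> float:
    """Positive when the fill was worse than the theoretical signal price."""
    return direction * (fill_price - signal_price) * POINT_VALUE

fills = [  # (signal_price, fill_price, direction) from the live log; illustrative
    (5000.00, 5000.25, +1),
    (5010.50, 5010.50, -1),
    (4995.75, 4996.25, +1),
]
observed = [slippage_per_side(s, f, d) for s, f, d in fills]
avg = sum(observed) / len(observed)
print(f"avg slippage/side: ${avg:.2f} vs budget ${BUDGETED_SLIPPAGE_PER_SIDE:.2f}")
if avg > BUDGETED_SLIPPAGE_PER_SIDE:
    print("slippage drift: investigate execution and consider downshifting size")
```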
Size trades for survival-first results, not maximum monthly returns
Kevin Davey sizes positions with one goal: survive the next drawdown. Instead of chasing big months, he caps risk per trade so a cold streak can’t threaten the account. His benchmark is the worst-case path, not the best backtest line. That mindset keeps him in the game long enough for edges to compound.
In practice, Kevin anchors size to volatility and a precomputed max portfolio drawdown limit. If Monte Carlo shows a 20% worst-case, he’ll size so the live version lands well inside that boundary. When a strategy heats up, he resists the urge to double; he scales only after time-under-water and loss clusters remain within plan. If losses breach tripwires, he cuts size or turns the system off until the data says it’s safe. Survival sizing may feel slow in bull runs, but Kevin Davey treats consistency as a tradable edge. Your job is not to win the month; it’s to still be capitalized when the next good month arrives.
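A minimal sketch of that sizing logic, assuming a Monte Carlo worst-case drawdown per contract and an illustrative 20% account-level drawdown limit; the numbers are examples, not recommendations.

```python
# Minimal sketch: cap contracts so the scaled worst-case Monte Carlo drawdown
# stays inside a personal account-level limit. All numbers are illustrative.

def survival_contracts(mc_worst_dd_per_contract: float,
                       account_equity: float,
                       max_dd_fraction: float = 0.20) -> int:
    """Largest whole number of contracts whose projected worst-case drawdown
    stays within `max_dd_fraction` of the account."""
    budget = account_equity * max_dd_fraction
    return max(0, int(budget // mc_worst_dd_per_contract))

# Monte Carlo says ~$7,500 worst-case drawdown per contract; a $50k account
# capped at a 20% drawdown can carry one contract, not two.
print(survival_contracts(7_500, 50_000))   # -> 1
```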
Allocate by volatility so each strategy pulls an equal risk weight
Kevin Davey doesn’t let noisy strategies bully the portfolio just because they swing more. He scales position size so each system contributes similar expected risk, not similar nominal dollars. That way, a gentle mean-reverter and a punchy breakout model earn a fair seat at the table. Vol-adjusted sizing keeps equity curves smoother and prevents one wild child from dictating your mood.
In practice, Kevin starts with the target risk per trade, then adjusts contracts or shares by recent ATR or standard deviation. If volatility doubles, size halves; if volatility compresses, size steps up within pre-set limits. He also caps correlation clusters, so two “different” systems that spike together don’t secretly double your risk. The goal is simple: every strategy should matter, and none should matter too much. That balance lets Kevin Davey grow breadth without inviting blowups.
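A minimal sketch of volatility-anchored sizing under those assumptions; the risk budget, ATR values, point value, and contract cap are all illustrative.

```python
# Minimal sketch: volatility-normalized sizing so each strategy carries a similar
# dollar risk per trade. Inputs are illustrative assumptions.

def contracts_for_equal_risk(risk_per_trade: float, atr_points: float,
                             point_value: float, max_contracts: int = 10) -> int:
    """Risk budget divided by the dollar value of one ATR, capped by a preset limit."""
    dollar_vol_per_contract = atr_points * point_value
    raw = risk_per_trade / dollar_vol_per_contract
    return max(1, min(max_contracts, int(raw)))

# If volatility doubles, size roughly halves:
print(contracts_for_equal_risk(2_000, atr_points=10, point_value=50))   # -> 4
print(contracts_for_equal_risk(2_000, atr_points=20, point_value=50))   # -> 2
```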
Diversify across markets, timeframes, and entry types to reduce whipsaws
Kevin Davey spreads risk so no single market mood can wreck his week. He trades uncorrelated symbols—indexes, bonds, metals, energies, currencies—then mixes bar lengths so the same instrument can be approached with different speeds. That way, a choppy day on the five-minute doesn’t cancel a clean trend on the daily. Diversification here isn’t just “own more stuff”; it’s designing edges that win in different weather.
He also diversifies entry logic so the portfolio isn’t one big coin flip on a single setup. A breakout engine, a pullback model, and a mean-reversion system can all coexist if they’re independently profitable. When one style whipsaws, another often steps up and smooths the curve. That’s the quiet superpower in Kevin Davey’s approach: varied markets, varied timeframes, and varied entries working together so no single noise pattern runs the show.
Codify simple rules, test out-of-sample, then incubate before real money
Kevin Davey keeps strategies simple enough to execute flawlessly and measure honestly. He limits the number of inputs and parameters, writes the rules in plain language, and removes any step that requires “feel.” If a rule can’t be followed by code every time, it doesn’t belong. Simple rules reduce curve-fit traps and make it obvious when a system is actually broken versus just unlucky.
After coding, Kevin demands out-of-sample proof before risking real capital. He runs walk-forward tests, sanity-checks drawdowns, and only then moves the strategy into a live incubation phase with zero or tiny size. Incubation lets him observe fills, slippage, and regime behavior without account-threatening risk. If the live stats stay inside the tested envelope, the system gets promoted; if they drift, Kevin Davey either fixes it fast or shelves it without emotion.
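A minimal sketch of how rolling walk-forward windows can be generated, assuming illustrative in-sample and out-of-sample lengths; the optimization and scoring inside each window are left as comments since they depend on the strategy being tested.

```python
# Minimal sketch: generating rolling walk-forward windows (optimize in-sample,
# then record results on the untouched out-of-sample slice). Window lengths
# are illustrative assumptions.
from datetime import date, timedelta

def walk_forward_windows(start: date, end: date,
                         in_sample_days: int = 365,
                         out_sample_days: int = 90):
    """Yield (in-sample start, in-sample end, out-of-sample end) tuples that
    roll forward by the out-of-sample length each step."""
    cursor = start
    while cursor + timedelta(days=in_sample_days + out_sample_days) <= end:
        is_end = cursor + timedelta(days=in_sample_days)
        oos_end = is_end + timedelta(days=out_sample_days)
        yield cursor, is_end, oos_end
        cursor += timedelta(days=out_sample_days)

for is_start, is_end, oos_end in walk_forward_windows(date(2019, 1, 1), date(2024, 1, 1)):
    # optimize parameters on [is_start, is_end), then test on [is_end, oos_end)
    print(is_start, "->", is_end, "| OOS ->", oos_end)
```

Stitching the out-of-sample slices together gives the equity curve that actually matters; the in-sample results are only scaffolding.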
Trust mechanics over prediction; predefine off-switches and drawdown safeguards
Kevin Davey treats prediction as a nice-to-have and mechanics as the job. He follows rule-based entries and exits, logs every outcome, and lets the stats—not his gut—decide what happens next. When a trade triggers, he executes without second-guessing macro headlines or “feel.” That mechanical consistency is how Kevin Davey converts a small edge into repeatable results.
Before anything goes live, he writes the off-switch in plain English: what loss, what time-under-water, what volatility spike means “pause” or “retire.” He defines max daily damage, per-strategy drawdown limits, and a portfolio-wide circuit breaker so a bad morning can’t become a bad month. If the curve breaks its tested envelope, he cuts size or shuts it down first and asks questions later. The edge is the process—predict less, obey more.
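A minimal sketch of pre-written off-switches, assuming illustrative daily, per-strategy, and portfolio tripwire levels; the limits are examples of the kind of thresholds to write down before going live, not specific recommendations.

```python
# Minimal sketch: pre-defined off-switches evaluated before each new signal.
# Limit values are illustrative assumptions written down before going live.

DAILY_LOSS_LIMIT = 1_500          # dollars, per strategy (assumption)
STRATEGY_DD_LIMIT = 8_000         # dollars, per strategy (assumption)
PORTFOLIO_DD_LIMIT = 20_000       # dollars, account-wide circuit breaker (assumption)

def allowed_to_trade(day_pnl: float, strategy_drawdown: float,
                     portfolio_drawdown: float) -> bool:
    """Return False the moment any pre-defined tripwire is hit."""
    if day_pnl <= -DAILY_LOSS_LIMIT:
        return False                      # pause for the rest of the day
    if strategy_drawdown >= STRATEGY_DD_LIMIT:
        return False                      # retire or re-validate this strategy
    if portfolio_drawdown >= PORTFOLIO_DD_LIMIT:
        return False                      # portfolio-wide circuit breaker
    return True

print(allowed_to_trade(day_pnl=-400, strategy_drawdown=2_100, portfolio_drawdown=5_300))  # True
```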
Kevin Davey’s core lesson is to trade what actually works where it works, not to chase a mythical system that wins everywhere. He tests ideas across markets and bar sizes, but if a strategy is only robust on one instrument, he trades it there—so long as the live, risk-adjusted results hold up. That instrument-by-instrument pragmatism beats forcing universality and keeps him focused on the only metric that matters: real-time performance. He also stresses that algo trading hasn’t magically gotten easier—tools improved, but competition increased, regimes shift fast (think cocoa), and the edge still comes from disciplined rules and steady adaptation, not shiny code. And before any backtest looks “great,” he insists you model slippage and commissions up front; without realistic costs, paper profits are an illusion.
Davey’s build gauntlet is simple but unforgiving: walk-forward optimization to generate out-of-sample results, Monte Carlo to map probable drawdowns, and targeted tests to confirm there’s an edge in entries and exits. If a system relies on a few rare jackpots, he’ll even strip out the top winners to see if the strategy collapses—because lumpy equity is hard to trade with real money. After a pass, he incubates live with tiny or no size to verify fills and behavior before promotion. Psychologically, he rejects the myth that automation removes emotion; money at risk always stirs feelings. The antidote is consistency—execute every signal exactly as tested, don’t flip switches for headlines or weekend worries, and define in advance the specific conditions that trigger a pause or retirement. Finally, he improves in one-degree turns: collect small tidbits, make incremental upgrades, and anchor everything to a solid, real-time-proven process—because great backtest builders are common, but great real-money traders are rare.