Sign up now for free trade alerts, other features

Sign up now before the month’s end: free monthly trade email alerts, with both position entry and position exit reminders, have been implemented. The backend sends the emails automatically a day before entry or exit. Positions are exited on the last trading day of every month; new positions are entered on the first trading day of the new month.

Note: the historical performance results on adaptivwealth are based on using market-on-close orders, an order type that lets you buy or sell stocks right as the market closes.

Other features include the ability to view the adaptive Minimum Variance Portfolio’s historical allocations. One can see the benefits of being dynamic (vs. static), such as during the last few months of 2007, going into 2008: the MVP during this time period was around 80% in US intermediate-term bonds (IEF), which largely protected the portfolio from the precipitous losses the stock market experienced the next year.

The MVP vs. VTI performance table (shown below, or when you mouse over the performance time series chart on the main adaptivwealth page) also shows the benefits of an adaptive/dynamic allocation model.


The Minimum Variance Portfolio has a comparable compound annualized growth rate (since June 2006, when the ETFs it uses came online) to that of VTI, the Vanguard Total Stock Market ETF, a proxy for the overall US stock market. The MVP has a much lower maximum drawdown (-16.5% compared to VTI’s -55%), and almost double the Sharpe Ratio (0.62 vs. 0.35): in essence, it seems that the adaptive Minimum Variance Portfolio achieves stock-market like returns over the long-run with much lower volatility than the stock market.
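For reference, the three statistics quoted above (CAGR, max drawdown, Sharpe Ratio) can be computed from a monthly return series like this. The returns below are made-up placeholders for illustration, not the MVP’s actual track record:

```python
import numpy as np
import pandas as pd

def performance_stats(returns, periods_per_year=12):
    """CAGR, max drawdown, and annualized Sharpe from periodic returns."""
    equity = (1 + returns).cumprod()          # growth of $1
    years = len(returns) / periods_per_year
    cagr = equity.iloc[-1] ** (1 / years) - 1
    drawdown = equity / equity.cummax() - 1   # % below the running peak
    sharpe = returns.mean() / returns.std() * np.sqrt(periods_per_year)
    return cagr, drawdown.min(), sharpe

# illustrative monthly returns, NOT the actual MVP data
rets = pd.Series([0.01, -0.02, 0.015, 0.03, -0.01, 0.02] * 4)
cagr, max_dd, sharpe = performance_stats(rets)
```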

adaptivwealth: the new web app that I made to bring adaptive asset allocation to the masses



I recently finished the beta version of a web app I’ve been building: a web app that brings adaptive asset allocation to the masses.

What is adaptive asset allocation?

I’ve written about it in several previous posts. Essentially, it’s the idea that traditional Markowitz mean-variance asset allocation can be improved–generating portfolios that have better risk-adjusted performance–by making the models more adaptive to market changes.

What’s the point of the web app?

adaptivwealth’s goal is to make models that try to improve upon the weaknesses of traditional asset allocation more accessible to individual investors.

Asset allocation–allocating one’s money to different asset classes such as equities, bonds, and commodities–often produces more diversified portfolios than, for example, just picking stocks. Portfolios constructed using asset allocation can have decreased risk and increased returns (see the above screen shot of the performance of the Minimum Variance Portfolio vs. the performance of the S&P 500 for an example). A portfolio’s holdings can be optimized such that return is maximized given a level of risk. Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation is responsible for 91.5% of pension funds’ returns. Not stock selection, not market timing.

Asset allocation is traditionally not very accessible to individual investors. Individual investors have data, computation, knowledge, and/or time constraints that prevent them from running asset allocation algorithms to optimize their portfolios; asset allocation services are usually performed by financial advisers for individual investors, and large institutions like pension funds and hedge funds obviously have the resources to do it themselves. Companies like wealthfront are closing this gap, cutting out the middleman (financial advisers) and lowering the costs of implementing asset allocation for the individual investor.

Companies like wealthfront implement traditional asset allocation algorithms. adaptivwealth differentiates itself by using models that try to improve upon the weaknesses of traditional asset allocation, and by making these models more accessible to individual investors. One approach to addressing the weakness of traditional asset allocation is by making the models more adaptive to market changes.

A call for help

adaptivwealth is still very rough around the edges, and I have a whole list of features that I want to implement, ideas for growth, etc. But I wanted to get a minimum viable product out there and collect feedback as quickly as possible. Let me know your thoughts! Questions, suggestions for features, advice, criticisms, anything and everything helps. Thank you.

Adaptive Asset Allocation: update to reflect investor data constraints

I realized that the portfolios presented so far would be pretty difficult for the individual retail investor to implement due to data constraints.

The problem

Say today is January 31, and the market has just closed. The adaptive asset allocation portfolios I constructed assume that the investor exits at the close of the last day of the month. That’s definitely reasonable assuming a brokerage account at a place like Interactive Brokers with market-on-close orders. However, the algorithms also assumed that we would enter the new positions on January 31. This would be possible if live streaming quotes were used, the weights were calculated seconds before the actual close, and the positions were entered right before the close, but it’s definitely not possible for a normal retail investor.

The solution

So I decided to test the effect on returns of delaying the entry by one day; specifically, entering the new positions on the close of February 1 in the example above (and still exiting the positions on January 31). Again, this is a reasonable simulation for what a retail investor would actually do: he would exit his old positions on January 31, calculate new portfolio weights on February 1, and enter the new positions on the market close of February 1.
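In a backtest, the one-day delay amounts to shifting the target weights forward by a day before applying them to returns. A minimal sketch (the constant returns and 50/50 weights are placeholders, not my actual data):

```python
import pandas as pd

# hypothetical daily returns and daily target weights for two assets
dates = pd.bdate_range("2024-01-01", periods=60)
returns = pd.DataFrame(0.001, index=dates, columns=["IEF", "VTI"])
weights = pd.DataFrame(0.5, index=dates, columns=["IEF", "VTI"])

# weights are decided at one close, but a retail investor can only
# act on them at the NEXT close: shift the weights by one day
delayed = weights.shift(1)

# portfolio returns under same-day vs. next-day execution
same_day = (weights * returns).sum(axis=1)
next_day = (delayed * returns).sum(axis=1)
```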


Sharpe Ratio drops from around 0.8 to 0.62, and CAGR is only 7.6%. It’s interesting that performance deteriorates so much from delaying entry by just one day. Perhaps the performance decrease represents the (unrealistic) benefit of both entering and exiting positions on the same day.

An interesting caveat

I wanted to test the next logical variation: what if we, instead of entering the new positions one day later, exited the old positions one day earlier? Using our example, the investor would exit his positions on January 30, calculate the new portfolio weights on January 31 (using data looking back from January 30, not 31), and then enter the new positions on February 1. Below are the results.



Both CAGR and Sharpe Ratio are higher than if we entered the new positions one day late: CAGR is 2% higher, and the Sharpe Ratio is 0.77 compared to 0.62. It seems we miss out on a lot more of the returns if we skip the first day of each month than if we skip the last day of each month. Is this evidence of the end-of-month/first-of-month effect (basically, that returns on the first day of a month are significantly higher than average)? Maybe, but for now, I need to move forward with my project. Creating the adaptive asset allocation algorithms is only the first part… more to come.

Adaptive Asset Allocation: minimum variance portfolios

This is a continuation of my previous post on adaptive asset allocation.

Introduction to Mean-Variance Optimization

Mean-variance optimization (the implementation of Markowitz’s modern portfolio theory) basically allows one to find the optimal weights of assets in a portfolio that maximizes expected return given a level of risk/variance, or equivalently, minimize risk/variance given a level of expected return. The biggest ingredient in mean-variance optimization is the covariance matrix of the assets’ returns. The covariance matrix contains information on not only how volatile the assets are (their variance) but also how they move with each other (covariance). Covariance adds a piece to the adaptive asset allocation puzzle that we did not have before: how the asset classes move with each other, how correlated they are, if they’re good hedges for each other. The minimum variance portfolio is the set of asset class weights that minimizes the variance of the portfolio (regardless of our expectations of future returns).

This is a step up from risk parity, which assigns each asset class a weight such that all asset classes in the portfolio contribute the same amount of variance to the portfolio variance. The overall portfolio variance could still be relatively high. Now our portfolios are being optimized to have the smallest variance possible.

I won’t get too deep into the details of the math, but there is a closed form solution to finding the set of optimal portfolio weights. It’s minimizing the quadratic function w^T Σ w – q·R^T w, where

  • w is a vector of holding weights such that sum_i w_i = 1
  • Σ is the covariance matrix of the returns of the assets
  • q ≥ 0 is the “risk tolerance”: q = 0 works to minimize portfolio variance and q = ∞ works to maximize portfolio return
  • R is the vector of expected returns
  • w^T Σ w is the variance of portfolio returns
  • R^T w is the expected return on the portfolio

For the minimum variance portfolio, q = 0, so we’re actually just minimizing w^T Σ w. Actual implementation was done with python’s cvxopt (convex optimization) library.
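The unconstrained version even has a one-line closed-form answer, which is a nice sanity check on the solver output. Below is a minimal numpy sketch; the covariance matrix is a made-up toy example, and note that without a long-only constraint (which a quadratic programming solver like cvxopt can add) the closed-form weights can go negative:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1).
    Only constraint is sum(w) = 1, so weights may be negative (shorts);
    in practice a solver like cvxopt is used to also enforce w >= 0."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# toy covariance matrix for three assets (hypothetical, not the ETF universe)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```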


(like last time, all portfolios are rebalanced monthly)

Minimum Variance Portfolio

tl;dr: CAGR lower, max drawdown less severe, Sharpe Ratio slightly higher than that of the momentum + risk parity portfolio.

Where all ETFs are traded, and are weighted in the portfolio such that the variance of the portfolio is minimized.


Momentum and Minimum Variance Portfolio

tl;dr: compared to the momentum + risk parity portfolio (highest Sharpe Ratio so far, talked about in the previous post): CAGR about the same, max drawdown less severe (what’s even more impressive is that the max drawdown during the recent financial crisis was only -13%), Sharpe Ratio slightly higher

Where only the top five ETFs are selected based on their momentum, and are weighted in the portfolio such that the variance of the portfolio is minimized.



The last portfolio dominates all the other portfolios tested: it has the highest CAGR, smallest max drawdown, and highest Sharpe Ratio. This is because it includes all three pieces of the asset allocation puzzle: returns, variance, and correlation/covariance. We’ve made asset allocation more adaptive by filtering assets by momentum (with the expectation that high momentum assets will continue to perform well in the near term) and using shorter timeframes for variance and correlation/covariance–through this, the portfolios are more responsive to more recent asset price action.

Adaptive Asset Allocation: momentum and risk parity

Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation is responsible for 91.5% of pension funds’ returns. Not stock selection, not market timing.

Also, I need to put my money to work. I don’t have time for frequent trading. I don’t trust my fundamental analysis, and I know that if I don’t have a quantitative, rule based system my emotions will get the best of me and I will make bad decisions.

Asset allocation should be easy these days, with low-cost, liquid ETFs tracking everything from gold to international REITs.

The million-dollar question is, as always: how do we determine how much of our money to allocate to which asset classes?

Adaptive Asset Allocation

I decided to implement what’s known as Adaptive Asset Allocation, an intuitive extension of the traditional Markowitz mean-variance model. Essentially, it makes traditional portfolio optimization more “adaptive” by using shorter term metrics as inputs instead of long run averages/standard deviations.

The portfolios are rebalanced monthly. There is only a universe of 10 ETFs (gold, bonds, REITs, equities, the usual). So trading and actually implementing these portfolios should be easy.

A strategy’s ease of use is worthless if it doesn’t make money. So how does it perform? To help answer that question, I tested several portfolio construction methods to use as comparison. Here are the (incomplete) results:

Equal Weighted Portfolio

Where all 10 ETFs are given an equal weight.

Equal Weighted Asset Allocation

Momentum Portfolio

tl;dr: compared to equal weighted there is a higher CAGR, slightly higher Sharpe, much worse max draw down.

Where only the top 5 ETFs ranked by momentum are selected to be traded (equal weighted). The momentum effect has been shown to exist across asset classes and countries.
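To make the selection rule concrete, here’s a small pandas sketch of ranking by momentum. The tickers, the simulated prices, and the 6-month lookback are illustrative stand-ins, not my exact universe or lookback:

```python
import numpy as np
import pandas as pd

# hypothetical month-end prices for a small ETF universe
prices = pd.DataFrame(
    np.cumprod(1 + 0.01 * np.random.RandomState(0).randn(24, 6), axis=0),
    columns=["VTI", "IEF", "GLD", "VNQ", "DBC", "EFA"],
)

# momentum = total return over the lookback window (6 months here)
lookback = 6
momentum = prices.iloc[-1] / prices.iloc[-1 - lookback] - 1

# select the top 5 by momentum, equal weighted
top5 = momentum.nlargest(5).index
weights = pd.Series(1 / 5, index=top5)
```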


Risk Parity Portfolio

tl;dr: compared to equal weighted, CAGR is slightly higher, max draw down is smaller, Sharpe Ratio is higher

Where all 10 ETFs are included in the universe, but are weighted such that each position contributes the same amount of volatility to the portfolio (the entire portfolio has 100% exposure, i.e. the sum of the position weights equals one).
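A simple way to approximate this weighting is inverse-volatility weighting, which ignores cross-correlations but makes each position’s standalone volatility contribution (weight × vol) identical. The volatilities below are made up for illustration:

```python
import numpy as np
import pandas as pd

# hypothetical annualized volatilities for a 10-ETF universe (illustrative)
vols = pd.Series(
    [0.16, 0.07, 0.20, 0.25, 0.18, 0.15, 0.10, 0.22, 0.12, 0.30],
    index=[f"ETF{i}" for i in range(10)],
)

# naive risk parity: weight each asset inversely to its volatility,
# then normalize so weights sum to one (100% exposure)
weights = (1 / vols) / (1 / vols).sum()
```

Each position then contributes the same standalone volatility, since weight × vol is constant across assets.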


Momentum and Risk Parity Portfolio

tl;dr: compared to equal weighted, there is a much higher CAGR, smaller max draw down, much better Sharpe Ratio

Where only the top five ETFs are selected every rebalance based on their momentum, and then weighted according to risk parity.


Momentum and Minimum Variance Portfolio

Where the top five ETFs are selected by momentum, then weighted with a minimum variance optimization (weights that minimize the variance of the portfolio).

To be continued

Statistical Pair Trading on International ETFs

This is sort of a follow up to my post on the research paper on pair trading international ETFs that claimed spectacular results.

For our STAT 434 Financial Time Series final project, my partner and I decided to try and shed some statistical light on the strategy of pair trading international ETFs. We used the above mentioned research paper as our jumping off point.

Basically, the original research paper claimed to find an international ETF pair trading strategy that has a 20% annualized growth rate since 1996 and an almost unbelievably smooth equity curve, essentially printing money no matter the market conditions. Our goal was to statistically test the validity of trading these international ETF pairs, and to develop a more statistically sound international ETF pair trading strategy.

The details are in our paper, but basically we used the Engle-Granger two-step method to select the most cointegrated ETF pairs to trade (betting on pair convergence). Below is our equity curve:
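For intuition, here’s a stripped-down numpy sketch of the two steps (no lag augmentation and no critical-value tables, so it’s not the full test from our paper): regress one series on the other to get the hedge ratio and spread, then run a Dickey-Fuller-style regression on the spread. The series here are simulated:

```python
import numpy as np

def engle_granger(y, x):
    """Two-step Engle-Granger sketch. Step 1: OLS of y on x gives the
    hedge ratio and the spread (residuals). Step 2: regress the spread's
    first difference on its lag; a strongly negative t-statistic suggests
    the spread mean-reverts, i.e. the pair is cointegrated."""
    beta, alpha = np.polyfit(x, y, 1)
    e = y - (alpha + beta * x)                  # the spread
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)              # Dickey-Fuller coefficient
    resid = de - rho * lag
    se = np.sqrt(resid @ resid / (len(de) - 1) / (lag @ lag))
    return beta, e, rho / se                    # hedge ratio, spread, t-stat

# two cointegrated series: a random walk plus stationary noise
rng = np.random.RandomState(1)
x = np.cumsum(rng.randn(500))
y = 1.5 * x + rng.randn(500)
beta, spread, tstat = engle_granger(y, x)
```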


Our strategy lost money consistently from 2005 to 2008. This means that instead of betting on the convergence of our ETF pairs, we should have bet on their divergence: there seemed to be momentum in international ETFs during this period, as the prices of even cointegrated pairs diverged even more in the short/medium term. During the financial crisis and Greece/EU panic in the few years during and after 2008, our strategy’s returns improved, which suggests that these international ETF pairs started converging again. This makes intuitive sense: equities do tend to be very correlated with each other in times of economic distress.

We figured we could capture these “regime shifts” with a moving average filter (we tried a 200 day average) on the trading strategy’s equity curve; this entailed “shorting the strategy” (or betting on pair divergence) when the strategy underperformed its moving average, and “going long the strategy” (or betting on pair convergence) when the strategy overperformed its moving average. The resulting strategy had a compound annualized growth rate of about 23% with a Sharpe Ratio close to 1.
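Here’s roughly how such a filter can be coded up in pandas. The return series is simulated, and the “stay short during the MA warm-up” behavior is a simplification (a real implementation would probably stay flat until 200 days of history exist):

```python
import numpy as np
import pandas as pd

# hypothetical daily strategy returns (illustrative, not our actual backtest)
rng = np.random.RandomState(2)
strategy = pd.Series(0.0002 + 0.01 * rng.randn(1000),
                     index=pd.bdate_range("2005-01-03", periods=1000))
equity = (1 + strategy).cumprod()

# 200-day moving average regime filter: above the MA -> go long the
# strategy (bet on convergence), below -> short it (bet on divergence)
ma = equity.rolling(200).mean()
signal = np.where(equity.shift(1) > ma.shift(1), 1, -1)  # yesterday's state
filtered = strategy * signal
```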


The strategy development and backtesting was done in python, with heavy use of the data structures provided by the pandas library. Exploratory data analysis was done in S-plus.

Full paper below:

STAT 434 Rebecca Wu Troy Shu Final Report

Quantifying the opening range day trading strategy

It’s Thanksgiving Break, which means no classes or homework. For the past year, I’ve been using most of my free time during breaks like this to build things (web apps, trading algs, etc.). Before that, I used to have the old habit of playing online poker–not really building anything, but playing games of calculated risks and personal discipline in order to win more money than I lost. I decided to satisfy my old gambling addiction (ahem, affinity for games of wit) by day trading today.

I day traded AAPL, using a strategy called opening range breakout. I made about $70, which is a decent amount considering the money allocated to day trading was very small, and I only spent a total of a few hours actually doing stuff.

Whenever there’s money on the line, I always look for a quantifiable edge. So I decided to throw some statistics at the opening range day trading strategy.

The opening range day trading strategy

Here are the basic rules to the opening range strategy (caveats at the very end of this post):

  1. Use 5 min bars. Wait 30 minutes from open
  2. At 10am, determine the highest and lowest points the stock reached in the past 30 min. This is the opening range
  3. If price breaks above the opening range later in the day, enter a long trade. If price breaks below the opening range, enter a short trade. I like to wait for a confirmation (e.g. a second up bar on strong volume after the first break above the opening range) before entering.
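The three rules above can be sketched in a few lines of pandas. The AAPL-like prices here are simulated, and this skips the “confirmation” step in rule 3:

```python
import numpy as np
import pandas as pd

# hypothetical 5-minute bars for one trading day (09:30-15:55)
idx = pd.date_range("2012-11-21 09:30", "2012-11-21 15:55", freq="5min")
rng = np.random.RandomState(3)
bars = pd.DataFrame({"close": 580 + np.cumsum(0.2 * rng.randn(len(idx)))},
                    index=idx)
bars["high"] = bars["close"] + 0.1
bars["low"] = bars["close"] - 0.1

# rule 1+2: the opening range is the high/low of the first 30 minutes
opening = bars.between_time("09:30", "09:55")
range_high, range_low = opening["high"].max(), opening["low"].min()

# rule 3: after 10am, a close above the range is a long signal, below a short
rest = bars.between_time("10:00", "16:00")
long_signal = rest["close"] > range_high
short_signal = rest["close"] < range_low
```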

Python programming

Next was a lot of python programming; as usual, pandas was infinitely helpful in dealing with time series and data tables, as was ipython and pdb for interaction with variables and debugging. I wrote up scripts to transform the data (all I had access to was consolidated trade data, so I had to transform it into 5 minute data), codify the trading rule, and generate statistics and graphs for the ‘event study’-style tests I ran.
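The tick-to-bar transformation step looks roughly like this; the trade data below is simulated, not the actual consolidated tape I used:

```python
import numpy as np
import pandas as pd

# hypothetical consolidated trade ticks: irregular timestamps, price, size
rng = np.random.RandomState(4)
times = pd.to_datetime("2012-11-21 09:30") + pd.to_timedelta(
    np.sort(rng.uniform(0, 6.5 * 3600, 5000)), unit="s")
ticks = pd.DataFrame({"price": 580 + np.cumsum(0.01 * rng.randn(5000)),
                      "size": rng.randint(1, 500, 5000)}, index=times)

# roll the ticks up into 5-minute OHLCV bars
bars = ticks["price"].resample("5min").ohlc()
bars["volume"] = ticks["size"].resample("5min").sum()
```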

I’ve uploaded my code and data files to my github. Here’s a description of the important files:

  • a script that generates the data tables needed to conduct the event study tests on the strategy
  • the csvs labeled with “odr”: the data tables created by the script (consolidated into aapl_odr_all.csv)
  • a script that calculates the stats, does the regressions, and generates the graphs from the data in aapl_odr_all.csv

The quantifiable edge

Now for the part that matters. The results are based on concepts I call “max and min profit”. Max profit is the maximum amount of money we would’ve made that day after a breakout signal, assuming we got out at the highest price that AAPL reached that day (or the lowest price if we’re shorting). Min profit is the maximum amount of money we would’ve lost that day after entering as the breakout signal told us to (e.g. in the long case, we get out at the lowest price that AAPL reaches that day after the entry signal).

The graph above shows pairs of box plots for short and long trades. The upper box is for max profit, the lower box is for min profit. We can see that the return distributions for short and long trades are roughly the same. More importantly, the average max profit (~ +0.008% for both long and short) is much higher than the average min profit (-0.0024% for shorts, -0.0019% for longs). This is the quantifiable edge.

How has the quantifiable edge changed over time?

Answer: not much.

The above graph plots the time series of max and min profit over time. Note that in both series there does not seem to be a trend, so our edge seems to be pretty consistent (at least for AAPL, over a 7 month period).

Are there any factors that can improve our edge?

Yes, there are many. The immediately quantifiable ones were the range width in percentage (range high/range low – 1) and the “signal delay”, or the number of 5 min bars that had elapsed between the formation of the range and the breakout signal to buy or short. Other important factors to the opening range strategy are discussed in the caveats section below.
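The quartile grouping itself is nearly a one-liner with pandas’ qcut; the per-trade numbers below are simulated placeholders, not my actual AAPL results:

```python
import numpy as np
import pandas as pd

# hypothetical per-trade results: range width and max/min profit
rng = np.random.RandomState(5)
trades = pd.DataFrame({
    "range_width": rng.uniform(0.001, 0.02, 200),  # range high/range low - 1
    "max_profit": rng.uniform(0, 0.01, 200),
    "min_profit": -rng.uniform(0, 0.005, 200),
})

# bucket trades into range-width quartiles, then average profits per bucket
trades["quartile"] = pd.qcut(trades["range_width"], 4, labels=[1, 2, 3, 4])
by_quartile = (trades.groupby("quartile", observed=True)
               [["max_profit", "min_profit"]].mean())
```

The same grouping applied to the signal-delay factor just swaps `range_width` for the delay column.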

The graph above shows max profit (blue) and min profit (red) when trades are grouped into range width by quartile. The top quartile (quartile 4) contains the top 25% of trades with the largest opening ranges. Max profits are highest and min profits are lowest in this quartile. This is to be expected, as a larger opening range signifies more volatile price action, and so potential profits are higher and losses are larger. No guts, no sausage. That’s supposed to mean “no risk, no reward”, by the way. Whatever.

This one shows max and min profit when trades are grouped into signal delay by quartile. So the top quartile (quartile 4) contains the top 25% of trades with the longest delay (in minutes) between the actual formation of the opening range (at 10am) and the breakout entry signal (which could happen as late as 10 or 5 minutes before the 4pm close). The trades that have the highest max profit are those that occur soon after the opening range is formed–this signifies that prices are moving rapidly (a breakout of the range occurs just a few minutes after it is formed) and so are more decisive when they move in a certain direction. Interestingly, the max profit for the 4th quartile is also relatively high, and the min profit is positive. This could have something to do with the tendency for institutions to load/dump their positions toward the end of the day, thus driving the prices in apparently the same direction as the original opening range breakout.


Using the opening range strategy to day trade AAPL stock has a quantifiable edge. Things to do: other stocks, other time periods (what happened during the crisis?).


There are so many caveats and so many more things that could have been quantified and tested, but I will boil it down to one notion: the success of the opening range strategy still depends heavily on the trader using it.

The trader decides how he manages risk–his discipline and skill determine how he cuts his losses and takes his profits. This could be based on technical analysis/chart reading experience, financial goals, even intuition. So things like using support and resistance levels for stop losses and profit targets, engaging in volume analysis, and interpreting candlestick patterns are not accounted for: they all contain subjectivity and are hard to quantify. That means this is an area ripe with asymmetric information, an area of opportunity. How else do human traders (and even human investors like Buffett) compete in a market overrun by algorithms? At least now we humans can nurture an edge that has been quantitatively shown to exist in the first place.

Is seeing believing? Pairs trading on International ETFs

I recently came across a paper titled “Pairs Trading on International ETFs” on quantpedia. A Sharpe Ratio of 1.66 and an “indicative performance” of 20.6%, as reported by quantpedia, seemed amazing. So, I decided to explore the strategy and accompanying research paper myself.


[Wow, I’ve never seen such a good looking equity curve before… except maybe Madoff’s… ]

The basics of the strategy are simple to understand. The research paper uses a metric that gauges the historical average distance between the cumulative returns of two ETFs, and picks the pairs with the highest historical average distance to trade. Once the current “return difference” of two ETFs exceeds the pair’s historical average distance by a certain threshold, we enter the pair trade and exit in 20 days or when the pair converges, whichever comes first. 22 country ETFs are used, including countries like Italy, Germany, Australia, and Canada.
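My reading of the paper’s distance metric can be sketched in a few lines of pandas. Everything here is illustrative: the tickers, the simulated returns, and the 2.0 entry multiplier are my own stand-ins, not values from the paper:

```python
import numpy as np
import pandas as pd

# hypothetical daily returns for a few country ETFs (illustrative)
rng = np.random.RandomState(6)
rets = pd.DataFrame(0.0005 + 0.01 * rng.randn(500, 4),
                    columns=["EWI", "EWG", "EWA", "EWC"])
cum = (1 + rets).cumprod()  # cumulative return paths

# for each pair: the historical average absolute distance between
# the two cumulative-return series
pairs = {}
cols = cum.columns
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        a, b = cols[i], cols[j]
        pairs[(a, b)] = (cum[a] - cum[b]).abs().mean()
distances = pd.Series(pairs).sort_values(ascending=False)

# enter when the current gap exceeds the historical average by a threshold
a, b = distances.index[0]
threshold = 2.0  # hypothetical multiplier
enter = abs(cum[a].iloc[-1] - cum[b].iloc[-1]) > threshold * distances.iloc[0]
```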

The biggest concern I had when reading the paper was that it just assumes convergence for country ETFs–there is no measure of price co-movement between a pair of country ETFs. The only way it “measures” past price movement between a pair is with the historical average described above. Especially with the recent events in Europe, assuming that a pair with a high historical average distance will converge in the near future seems dangerous.

A backtester (using Python and the immensely helpful pandas library) coded with the entry and exit conditions specified in the paper produced very different results: a 1.5% CAGR, with a -20% max drawdown. Granted, I used data from June 2001 to today, and the paper used data from April 1996 (before some of the 22 ETFs in the basket even existed…) through 2009, but the vastly different slopes of the equity curves speak for themselves.


[What happened to the pair trading strategy’s “returns”?]

Something seems off: maybe there’s a bug in my code (in the process of checking). Nonetheless, through this exercise, I’ve learned that verifying a strategy’s returns for yourself is always a wise idea.

Happy new year by the way. Remember this, whenever you’re starting something new:


Looking at the turn of the month effect in equity indices

After reading papers about the consistency of the turn of the month effect (Lakonishok and Schmidt (1988), Xu and McConnell (2006)) and hearing about its success from several trader friends (it’s been covered several times in the blogosphere too, e.g. at MarketSci) I decided to explore it a bit.

Basically, returns at the turn of every month (the last trading day of the current month through the 3rd trading day of the next month) are positive and significantly higher than average daily market returns.
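Flagging turn-of-the-month days is straightforward with pandas. This sketch uses business days as a proxy for trading days (it ignores market holidays):

```python
import pandas as pd

# flag turn-of-month days: the last trading day of a month through
# the 3rd trading day of the next month
dates = pd.bdate_range("2010-01-01", "2010-12-31")
s = pd.Series(dates.month, index=dates)

month_end = s != s.shift(-1)                        # last trading day of month
day_of_month = s.groupby(s.values).cumcount() + 1   # nth trading day of month

turn_of_month = month_end | (day_of_month <= 3)
```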

Below is a PDF detailing my findings/research replication, tables and graphs included. All the analysis was done during my free time over the summer and I just got around to writing it up. I decided to do the writeup in LaTeX because I think LaTeX documents are pretty. LyX, the LaTeX WYSIWYG processor, was used.

To test the profitability of the turn of the month effect in real time, four months ago I started trading a strategy that I developed based on the effect. The strategy enters two new positions at the beginning of every month and gets out a few days later. I have only entered six positions so far (five have been profitable); two are missing because I decided not to enter any positions at the end of July due to the debt ceiling debate (a good thing too, because both positions would’ve been losers).

Here’s the writeup:

An exploration: the turn of the month effect in equities from 1926 through 2010

ETFRot performance update

I posted backtested performance figures for an ETF rotational trading system back in November 2010. Here’s the forward tested performance (orange representing the start of the forward testing period):


Ouch. For a refresher on how the system works, see the old post. It looks like momentum failed to anticipate the precipitous drop in the markets during August 2011. Which makes sense, as the market truly seemed to be in panic mode during that time.

Lesson: don’t put all your eggs in one basket (in this case, in one ETF), no matter how good you think your “timing” is. Because anything can happen that throws your model out the window.

August 2011 was just another example of asset class correlations all going toward 1 during a panic selloff; historical correlations were not a good predictor of future correlations. It makes me wonder whether historical correlations conditional on whether the market is in crisis mode are predictive (e.g. if two asset classes weren’t correlated during past market crises, do they have a good chance of remaining uncorrelated in future market crises?). Perhaps in “most cases”, but then again, if you have a chance of being completely wrong and losing all your money, do “most cases” even matter?

So the search for uncorrelated returns continues… reminds me of AQR’s new reinsurance group.