Adaptive Asset Allocation: minimum variance portfolios

This is a continuation of my previous post on adaptive asset allocation.

Introduction to Mean-Variance Optimization

Mean-variance optimization (the implementation of Markowitz’s modern portfolio theory) allows one to find the weights of assets in a portfolio that maximize expected return for a given level of risk/variance, or equivalently, minimize risk/variance for a given level of expected return. The biggest ingredient in mean-variance optimization is the covariance matrix of the assets’ returns. The covariance matrix contains information not only on how volatile the assets are (their variance) but also on how they move with each other (covariance). Covariance adds a piece to the adaptive asset allocation puzzle that we did not have before: how the asset classes move together, how correlated they are, and whether they make good hedges for each other. The minimum variance portfolio is the set of asset class weights that minimizes the variance of the portfolio, regardless of our expectations of future returns.

This is a step up from risk parity, which assigns each asset class a weight such that all asset classes in the portfolio contribute the same amount of variance to the portfolio variance. The overall portfolio variance could still be relatively high. Now our portfolios are being optimized to have the smallest variance possible.

I won’t get too deep into the details of the math, but there is a closed form solution to finding the set of optimal portfolio weights. It’s minimizing the quadratic function w^T Σ w − q R^T w, where

  • w is a vector of holding weights such that ∑ w_i = 1
  • Σ is the covariance matrix of the returns of the assets
  • q ≥ 0 is the “risk tolerance”: q = 0 works to minimize portfolio variance and q = ∞ works to maximize portfolio return
  • R is the vector of expected returns
  • w^T Σ w is the variance of portfolio returns
  • R^T w is the expected return on the portfolio

For the minimum variance portfolio, q = 0, so we’re actually just minimizing w^T Σ w. Actual implementation was done with Python’s cvxopt (convex optimization) library.
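For intuition, here’s a minimal NumPy sketch of the unconstrained closed-form solution, w = Σ⁻¹1 / (1ᵀΣ⁻¹1). It permits negative (short) weights, which is exactly why a QP solver like cvxopt is useful for the long-only case:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum variance weights: w = inv(Sigma) @ 1, normalized.
    Note: allows negative (short) weights; enforcing w_i >= 0 requires a
    quadratic programming solver such as cvxopt."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    raw = inv @ ones
    return raw / raw.sum()

# Two uncorrelated assets: the lower-variance asset gets the larger weight.
cov = np.diag([0.04, 0.01])
w = min_variance_weights(cov)  # -> [0.2, 0.8]
```

Intuitively, with zero correlation the weights are proportional to inverse variance, so the 1% variance asset gets four times the weight of the 4% variance asset.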


(like last time, all portfolios are rebalanced monthly)

Minimum Variance Portfolio

tl;dr: CAGR lower, max drawdown less severe, Sharpe Ratio slightly higher than that of the momentum + risk parity portfolio.

Where all ETFs are traded, and are weighted in the portfolio such that the variance of the portfolio is minimized.


Momentum and Minimum Variance Portfolio

tl;dr: compared to the momentum + risk parity portfolio (the highest Sharpe Ratio so far, discussed in the previous post): CAGR is about the same, the max drawdown is less severe (even more impressive, the max drawdown during the recent financial crisis was only -13%), and the Sharpe Ratio is slightly higher.

Where only the top five ETFs are selected based on their momentum, and are weighted in the portfolio such that the variance of the portfolio is minimized.
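A rough sketch of how the two steps compose; the 126-day momentum lookback and the unconstrained closed-form weighting are illustrative assumptions, not necessarily the exact parameters used here:

```python
import numpy as np

def momentum_min_variance(prices, lookback=126, top_n=5):
    """Rank assets by trailing total return over `lookback` days (an assumed
    window), keep the top N, then weight the survivors with the closed-form
    minimum variance solution (no long-only constraint)."""
    returns = prices[1:] / prices[:-1] - 1.0
    momentum = prices[-1] / prices[-lookback - 1] - 1.0
    picks = np.argsort(momentum)[::-1][:top_n]   # indices of top-N assets
    cov = np.cov(returns[:, picks], rowvar=False)
    inv = np.linalg.inv(cov)
    raw = inv @ np.ones(len(picks))
    return picks, raw / raw.sum()
```

At each monthly rebalance this would be re-run on the trailing price window, which is what makes the allocation adaptive.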



The last portfolio dominates all the other portfolios tested: it has the highest CAGR, smallest max drawdown, and highest Sharpe Ratio. This is because it includes all three pieces of the asset allocation puzzle: returns, variance, and correlation/covariance. We’ve made asset allocation more adaptive by filtering assets by momentum (with the expectation that high momentum assets will continue to perform well in the near term) and by using shorter timeframes for variance and correlation/covariance; this makes the portfolios more responsive to recent asset price action.

Adaptive Asset Allocation: momentum and risk parity

Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation policy explains 91.5% of the variability of pension funds’ returns. Not stock selection, not market timing.

Also, I need to put my money to work. I don’t have time for frequent trading. I don’t trust my fundamental analysis, and I know that if I don’t have a quantitative, rule based system my emotions will get the best of me and I will make bad decisions.

Asset allocation should be easy these days, with low-cost, liquid ETFs tracking everything from gold to international REITs.

The million dollar question is, as always: how do we determine how much of our money to allocate to which asset classes?

Adaptive Asset Allocation

I decided to implement what’s known as Adaptive Asset Allocation, an intuitive extension of the traditional Markowitz mean-variance model. Essentially, it makes traditional portfolio optimization more “adaptive” by using shorter term metrics as inputs instead of long run averages/standard deviations.

The portfolios are rebalanced monthly. There is only a universe of 10 ETFs (gold, bonds, REITs, equities, the usual). So trading and actually implementing these portfolios should be easy.

A strategy’s ease of use is worthless if it doesn’t make money. So how does it perform? To help answer that question, I tested several portfolio construction methods to use as comparison. Here are the (incomplete) results:

Equal Weighted Portfolio

Where all 10 ETFs are given an equal weight.

Equal Weighted Asset Allocation

Momentum Portfolio

tl;dr: compared to equal weighted, there is a higher CAGR, a slightly higher Sharpe Ratio, and a much worse max drawdown.

Where only the top 5 ETFs ranked by momentum are selected to be traded (equal weighted). The momentum effect has been shown to exist across asset classes and countries.
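As a sketch of the ranking step, assuming momentum is measured as trailing total return over roughly six months (126 trading days); the exact lookback isn’t specified here:

```python
import numpy as np

def top_momentum(prices, lookback=126, top_n=5):
    """Return indices of the top-N assets by trailing total return.
    The 126-day (~6 month) lookback is an assumed parameter."""
    momentum = prices[-1] / prices[-lookback - 1] - 1.0
    return np.argsort(momentum)[::-1][:top_n]
```

The selected assets would then be held equal-weighted until the next monthly rebalance, when the ranking is refreshed.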


Risk Parity Portfolio

tl;dr: compared to equal weighted, CAGR is slightly higher, the max drawdown is smaller, and the Sharpe Ratio is higher.

Where all 10 ETFs are included in the universe, but are weighted such that each position contributes the same amount of volatility to the portfolio (the entire portfolio has 100% exposure, i.e. the sum of the position weights equals one).
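A common simplification of this idea is inverse-volatility weighting, which equalizes each position’s standalone volatility contribution while ignoring correlations; a sketch (full equal-risk-contribution weighting would use the covariance matrix and an iterative solver):

```python
import numpy as np

def inverse_vol_weights(returns):
    """Naive risk parity: weight each asset inversely to its volatility,
    normalized so the weights sum to one (100% exposure)."""
    vol = returns.std(axis=0)
    w = 1.0 / vol
    return w / w.sum()
```

An asset with twice the volatility of another ends up with half the weight, so each position swings the portfolio by roughly the same amount.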


Momentum and Risk Parity Portfolio

tl;dr: compared to equal weighted, there is a much higher CAGR, a smaller max drawdown, and a much better Sharpe Ratio.

Where only the top five ETFs are selected every rebalance based on their momentum, and then weighted according to risk parity.


Momentum and Minimum Variance Portfolio

Where the top five ETFs are selected by momentum, then weighted with a minimum variance optimization (weights that minimize the variance of the portfolio).

To be continued

Naive Bayes classification and Livingsocial deals

Naive Bayes

Problem: I was planning my trip to Florida and looking for fun things (“adventure” activities like jet ski rentals, kayaking, and go karting) to do in Orlando and Miami. I like saving money, so I subscribed to Groupon, Livingsocial, and Google Offers for those cities. Those sites then promptly flooded my inbox with deals for gym membership, in-ear headphones, and anti-cellulite treatment. Not useful. Going to each site and specifying my deal preferences took a while. Plus, if I found a deal that I liked, I had to copy-paste the link to that deal in another document so that I had it for future reference (in case I wanted to buy it later). Too many steps, too much hassle, unhappy email inbox.

Solution: So I wanted to build a site that scraped the fun/adventure deals automatically from these deal sites. Example use case: if a person plans to visit a new city (e.g. Los Angeles), he or she could just visit the site and see at a glance a list of the currently active adventure deals (e.g. scuba diving) in that city. Sure, it seems that aggregator sites like Yipit solve this. But almost all aggregation sites like Yipit require users to give them their email address before showing them any deals (and most are also difficult to navigate). More unnecessary steps for the user. Plus, I found that the Yipit deals weren’t the same as the ones displayed on the actual Groupon/Livingsocial/Google Offers sites.

“pre” minimum viable product: I gathered feedback for my idea to see if other people besides me would actually use it. This time, I just made a few quick posts on reddit (in the city subreddits), and got many comments. People said they would use it. Next.

MVP: The site I built scrapes Livingsocial. Groupon generates its pages dynamically with AJAX, which can’t be scraped without a JavaScript engine (a big pain to set up), and Google Offers didn’t have very many quality deals, so I simplified by making the MVP Livingsocial-only for now.

Applying the Naive Bayes classifier

After scraping all the deals, they need to be classified as “adventure” or not. Obviously, doing this by hand is not scalable if I wanted to scrape deals for more than a couple cities. So I implemented the Naive Bayes classifier. Naive Bayes is often used in author text identification, e.g. finding out if Madison or Hamilton wrote certain unidentified essays in the Federalist Papers.

At a high level, Naive Bayes treats each “document” or block of text as a “bag of words”, meaning that it doesn’t care about the order of the words. When given a new “document” to classify, Naive Bayes asks and answers the question, “given each classification/category, what is the probability that this new document belongs to that classification/category?” The category with the highest probability is then the category that Naive Bayes has “predicted” the new document should belong to.
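The idea above can be sketched as a minimal bag-of-words Naive Bayes with add-one (Laplace) smoothing; the labels and corpus here are hypothetical, and the site’s actual implementation may differ:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Learns word counts per label
    and how many documents each label has."""
    word_counts, label_counts = {}, Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Pick the label maximizing log P(label) + sum of log P(word | label),
    with add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    n_docs = sum(label_counts.values())
    best_label, best_lp = None, -math.inf
    for label, counts in word_counts.items():
        total = sum(counts.values())
        lp = math.log(label_counts[label] / n_docs)   # prior
        for w in text.lower().split():
            lp += math.log((counts[w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label
```

Working in log space avoids underflow from multiplying many small probabilities, and the smoothing keeps a single unseen word from zeroing out a class.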

The site currently uses the deal “headline” (e.g. “Five Women’s Fitness Classes” or “Chimney Flue Sweep”) as the document text that Naive Bayes uses. I also tried using the actual deal description (i.e. the paragraph or two of text that Livingsocial writes to describe the deal), and from eyeballing the predictions, it looked like both gave similar prediction accuracy. Using the deal headline is a lot faster though.

Prediction accuracy is still pretty bad. I didn’t want Naive Bayes to automatically assign its predicted categories to the deals, so I decided to keep categorizing the deals manually, but with the help of Naive Bayes’s recommendations. I also decided to make its binary classification decisions more “fuzzy”. Here’s a screenshot of the admin page that tells me the predicted deal type of the scraped deals, with a column called “prediction confidence”, which is a score derived from the Naive Bayes output that signifies how strong its prediction is.
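One plausible way to derive such a confidence score (the post doesn’t spell out the formula, so this is an assumption) is a softmax over the two class log-scores, which turns the gap between them into a probability-like number for the winning class:

```python
import math

def confidence(lp_adventure, lp_other):
    """Softmax over the two class log-scores; returns the winning class's
    share of probability mass. Subtracting the max avoids overflow."""
    m = max(lp_adventure, lp_other)
    pa = math.exp(lp_adventure - m)
    po = math.exp(lp_other - m)
    return max(pa, po) / (pa + po)
```

Equal scores give 0.5 (a coin flip), while a large gap pushes the score toward 1, which is the behavior a manual reviewer would want from a “prediction confidence” column.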


No better way to learn than to do

Doing is the best way to learn, because working on your own projects forces you to engage in deliberate practice (Cal Newport’s key to living a remarkable life). Not only do you practice your skills, but you also learn about learning: when faced with an obstacle while working on a personally initiated project, you have only yourself and your own resourcefulness, with no boss telling you what to do or professor giving guidelines. For example, this time I encountered the issue of my requests timing out in production on Heroku, since Heroku has a max request time of 30 seconds and some of my requests were taking up to a few minutes (back when my Naive Bayes implementation was inefficient). I googled my problem, found a stackoverflow post, and learned about worker queues and the Ruby library delayed_job, which fixed my problem by allowing time-intensive requests to run in the background.

The site is at

Sand castles and the art of Now

The past is history, the future is a mystery, but today is a gift. That is why it is called the present. ~ Master Oogway, Kung Fu Panda

My brother and I just got back from a week-long vacation in Florida, driving from Orlando to Miami to Ft. Myers to Tampa and then back to Orlando.

It was the first time I had seen my father in a while (he has lived and worked in China for the past 7 years). After having been reminded of the art–and power–of the now by this Psychology Today article, I wanted to make a conscious effort to live in the present as much as possible and enjoy every moment with my father and brother.

our sand castles

I remember the beautiful, warm, cloudless afternoon the three of us spent on Sunny Isles Beach. We each built a sand castle, digging with our hands like dogs, scooping handfuls of soaked sand to use as structural support, and carving our castles with a small plastic shovel.

I was completely in the moment, in a state of flow, focusing only on the task at hand–digging the moat, carving the sides. When I wasn’t building, I would take breaks by sitting on the sand at the edge of the water, letting the waves flow over my feet and legs while I stared at the horizon, just appreciating the sun’s warmth on my body, the muffled talking and excited shouts of the other beach-goers around me, the crashing of the waves–the present moment–and thinking of nothing else.

The Now, gratitude, and happiness

Through this experience and many others, I’ve learned that living in the moment begets gratitude. And the attitude of gratitude is a powerful one. On the beach, I felt gratitude for being able to spend time with my father and brother, for the luxury of being able to travel to somewhere so beautiful, for the warm water, sand, sun, air–for being alive.

Fears and worries are predominantly rooted in the past or future. None of that is happening in the present moment, and the present moment is reality; it is all that one has.

I remember being nervous about going to my first few days of work two summers ago (it was my first real “office” job). Every morning, before I left for work, I meditated. I focused on nothing but my breath, the only thing that was actually happening in the present moment, the thing that reminded me of the life that I have; however, my mind would wander to worrying thoughts of the future and I would get that “butterflies in my stomach” feeling. Then I did one thing: acknowledge it. The simple act of acknowledging that physical feeling, that I was feeling it right now in the present, and labeling it as anxiety, removed it. The feeling melted away.

Neither the past nor the future exists right now. There is nothing to fear or worry about, because those things are derivatives of thoughts about the past or future. Regardless of what situation one might be in, we all have a ton to be grateful for, especially what’s happening in the present (even if it’s something as simple as being alive). What’s something about the current moment that you can be grateful for?