
Beating the Market with K-Means Clustering

This article describes a trading strategy that has delivered exceptional results over a 10-year period, outperforming the market by 53% by timing the market's returns using k-means clustering on historical macroeconomic sentiment data. The strategy achieved a better Sharpe Ratio than the market and a statistically significant Treynor-Mazuy market-timing coefficient of 1.2040 (p-value = 0.02).

Returns are ultimately driven by exposure to the underlying macroeconomic risk factors that drive business cycles and long-term growth. Linking the returns of an asset or asset class to those underlying factors provides a more robust framework for asset allocation, one that can account for changing economic conditions better than historical regression or mean-variance analysis alone.

Macroeconomic expectations can have a major impact on asset returns. Macroeconomic variables, such as inflation, interest rates, and GDP growth, are closely monitored by investors and analysts because they provide insight into the general health of the economy and the potential performance of various asset classes. When these variables are expected to change in the future, investors may adjust their expectations about the future prospects of the market.

The strategy under consideration applies k-means clustering, an unsupervised machine learning algorithm, to historical macroeconomic sentiment data in order to identify the historical period most similar to the present day. The weight assigned to the market portfolio is then determined by analyzing how the market performed during similar periods in the past. The objective of the strategy is to achieve a positive alpha by timing the best moments to overweight or underweight exposure to the market portfolio, without any stock selection process. The number of clusters for k-means is set to 2, with the aim of identifying risk-on and risk-off scenarios. The only security traded is the SPDR S&P 500 ETF Trust (NYSE: SPY), and the temporal window spans 20 years, from January 2003 to January 2023.

To evaluate the performance of the trading strategy, three primary measures are considered: the Sharpe Ratio, defined as the excess return generated by an investment or portfolio per unit of risk taken; the Treynor-Mazuy market-timing coefficient, obtained by regressing the excess returns of the strategy on the excess market returns and the squared excess market returns; and the alpha of the Capital Asset Pricing Model (CAPM), defined as the return that is not explained by market risk. Python functions were developed for all three metrics to monitor their evolution throughout the backtesting period and determine when the trading strategy outperformed the market.
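The article does not include the author's implementation, so here is a minimal sketch of how the three metrics could be computed with pandas and statsmodels; the function names and the use of monthly excess-return series as inputs are my own assumptions:

```python
import pandas as pd
import statsmodels.api as sm


def sharpe_ratio(returns: pd.Series, risk_free: pd.Series) -> float:
    """Monthly Sharpe Ratio: mean excess return per unit of volatility."""
    excess = returns - risk_free
    return excess.mean() / excess.std()


def capm_alpha(strategy_excess: pd.Series, market_excess: pd.Series):
    """CAPM alpha: intercept of the regression of strategy excess returns
    on market excess returns, returned together with its p-value."""
    exog = sm.add_constant(market_excess)
    fit = sm.OLS(strategy_excess, exog).fit()
    return fit.params["const"], fit.pvalues["const"]


def treynor_mazuy(strategy_excess: pd.Series, market_excess: pd.Series):
    """Treynor-Mazuy timing coefficient: the loading on the squared market
    excess return in r_p = alpha + beta * r_m + gamma * r_m**2 + eps."""
    exog = sm.add_constant(pd.DataFrame({
        "mkt": market_excess,
        "mkt_sq": market_excess ** 2,
    }))
    fit = sm.OLS(strategy_excess, exog).fit()
    return fit.params["mkt_sq"], fit.pvalues["mkt_sq"]
```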

The algorithm's strength lies in its ability to cluster past periods' data using a set of sentiment variables, so it is crucial to identify the most significant indicators for this purpose. The variables are extracted from the market itself to avoid any publishing lag. They include the following (a hedged code sketch of their construction follows the list):

• Expected inflation: Inflation expectations are calculated by subtracting the 5-year TIPS rate from the 5-year Treasury bond yield. A time series of inflation expectations and a 60-month trailing average are obtained, and the percentage difference between each expectation and the trailing average is calculated. The other variable is the 60-month trailing standard deviation of expected inflation. The purpose of these two measures is to capture how quickly the market changes its expectations, not just their level.

• Expected volatility: As with expected inflation, it is important to measure how quickly the market changes its expectations about volatility, not just the level of volatility at the beginning of each month.

• Yield curve slope: There is a strong correlation between the yield curve slope and market performance, as demonstrated by Professor Campbell Harvey. The slope is calculated as the simple difference between the 10-year and 3-month US Treasury yields.

• Recession expectations: To gauge how the market perceives the likelihood of a recession, a beta-neutral portfolio is created. It is long consumer discretionary stocks and short consumer staples stocks, because consumer staples firms should outperform consumer discretionary firms during recessions. The Consumer Discretionary Select Sector SPDR Fund (NYSE: XLY) is used for consumer discretionary, while the Consumer Staples Select Sector SPDR Fund (NYSE: XLP) is used for consumer staples.

• Emerging markets sentiment: To determine whether investors would shift their allocation geographically, a time series of idiosyncratic returns is extracted from the iShares MSCI Emerging Markets ETF (NYSE: EEM).
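The article does not show how the raw data are collected, so the sketch below rests on my own assumptions: the FRED series codes (DGS5, DFII5, DGS10, DGS3MO, VIXCLS), the use of pandas-datareader and yfinance as data sources, and an equal-weight XLY-minus-XLP spread in place of the strictly beta-neutral portfolio described above.

```python
import pandas_datareader.data as web
import statsmodels.api as sm
import yfinance as yf

start, end = "2003-01-01", "2023-01-31"  # the article's 20-year window

# Rates and VIX from FRED, sampled at the first observation of each month.
fred = web.DataReader(["DGS5", "DFII5", "DGS10", "DGS3MO", "VIXCLS"],
                      "fred", start, end)
monthly = fred.resample("MS").first()

# Expected inflation = 5-year nominal yield minus 5-year TIPS (real) yield,
# then the percentage deviation from its 60-month trailing average and the
# 60-month trailing standard deviation.
exp_infl = monthly["DGS5"] - monthly["DFII5"]
infl_dev = (exp_infl - exp_infl.rolling(60).mean()) / exp_infl.rolling(60).mean()
infl_std = exp_infl.rolling(60).std()

# Expected volatility: the same trailing-deviation treatment applied to the
# month-start VIX level (VIXCLS is my assumed proxy).
vix = monthly["VIXCLS"]
vix_dev = (vix - vix.rolling(60).mean()) / vix.rolling(60).mean()

# Yield curve slope = 10-year yield minus 3-month yield.
slope = monthly["DGS10"] - monthly["DGS3MO"]

# Sector ETFs and EEM for the remaining two variables.
prices = yf.download(["XLY", "XLP", "EEM", "SPY"], start=start, end=end,
                     auto_adjust=True)["Close"].resample("MS").first()
rets = prices.pct_change().dropna()

# Simplified consumer-discretionary-minus-staples spread (the article uses a
# beta-neutral long/short portfolio; an equal-weight spread is shown here).
consumer_spread = rets["XLY"] - rets["XLP"]

# Idiosyncratic EEM returns: residuals of EEM regressed on the market (SPY).
eem_idio = sm.OLS(rets["EEM"], sm.add_constant(rets["SPY"])).fit().resid
```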

The program works as follows. First, the sentiment variables that will be used to identify risk-on and risk-off scenarios are selected, and a pandas DataFrame is created by merging the corresponding data series. A logarithmic excess-returns time series is then generated, shifted by one month relative to the index of the DataFrame. This is essential because the values of the DataFrame relate to the start of the month, while the market returns refer to the month that concludes on the date of the index.
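A minimal sketch of this alignment step, assuming the sentiment variables above have been merged into a single month-start DataFrame and that `spy_prices` and `risk_free` are monthly series sampled on the same dates (all names are mine):

```python
import numpy as np
import pandas as pd


def build_dataset(sentiment: pd.DataFrame, spy_prices: pd.Series,
                  risk_free: pd.Series) -> pd.DataFrame:
    """Join month-start sentiment variables with the log excess return of the
    month that starts on the same date (realised one month later)."""
    # Log return of the month ending at each index date.
    log_ret = np.log(spy_prices / spy_prices.shift(1))
    # Shift by one month so that row t holds the return earned from t to t+1.
    fwd_excess = (log_ret - risk_free).shift(-1)
    return sentiment.join(fwd_excess.rename("fwd_excess_ret")).dropna()
```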

Next, a trailing window is defined to determine the amount of data considered during the clustering process. In my view, 60 months of data is sufficient to conduct robust statistical analyses without relying only on the most recent market behavior. Once the window length is chosen, backtesting begins. The program takes the 61st month of the DataFrame and runs k-means on the previous 60 months. Then, the excess returns of the months that follow the past months belonging to the same cluster as the date under consideration are selected. To avoid using data from the future, the return of the month following the date in question is not taken into consideration.
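A hedged sketch of this rolling clustering step with scikit-learn; the feature standardization and the use of `predict` to assign the current month to one of the two fitted clusters are my assumptions about details the article does not spell out:

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

WINDOW = 60  # trailing months used for clustering


def cluster_peer_returns(data, t):
    """For month index t, cluster the previous WINDOW months into two groups
    (risk-on / risk-off) and return the forward excess returns of the past
    months sharing the current month's cluster, together with all returns in
    the window for the hypothesis test of the next step."""
    features = data.drop(columns="fwd_excess_ret")
    past_X = features.iloc[t - WINDOW:t]
    past_ret = data["fwd_excess_ret"].iloc[t - WINDOW:t]

    scaler = StandardScaler().fit(past_X)
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    past_labels = km.fit_predict(scaler.transform(past_X))
    current_label = km.predict(scaler.transform(features.iloc[[t]]))[0]

    # Only past forward returns enter the decision; the return of the month
    # following t itself is never used, so there is no lookahead.
    peers = past_ret[past_labels == current_label]
    return peers, past_ret
```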

To determine the weight to allocate to the portfolio, I assumed that returns are lognormally distributed and compared the market returns achieved within the date's cluster with the returns of the entire 60-month sample. To do so, I conducted a hypothesis test that takes the difference between the mean return of the cluster and the mean return of the entire sample and divides it by the standard error of the cluster's returns. If the p-value of the test is lower than 0.1, the weight on the market portfolio will differ from 1. Specifically, the left tail of the distribution is found and used to set the weight proportionally. If the cluster's performance is in the top 5% of returns, the weight assigned will be double the left tail of returns. Conversely, if the market's performance during the cluster's months falls in the bottom 5% of the entire sample period's months, the weight assigned for the following month will be half the left tail. This process is repeated for every month after the 61st. Once the weights for each analyzed month are obtained, the return of the strategy is computed by multiplying each weight by the return of the following month.
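The rule above is described in words only, so the mapping from the test statistic to a weight in the sketch below is my own reading, not the author's code; it uses the 0.05 barrier p-value discussed in the next paragraph:

```python
import numpy as np
from scipy import stats

P_THRESHOLD = 0.05  # barrier p-value (see below)


def allocate_weight(peer_returns, window_returns):
    """Compare the cluster's mean forward return with the full-window mean,
    scaled by the cluster's standard error, and map the left-tail probability
    of that statistic to a portfolio weight."""
    if len(peer_returns) < 2:
        return 1.0
    std_err = peer_returns.std(ddof=1) / np.sqrt(len(peer_returns))
    z = (peer_returns.mean() - window_returns.mean()) / std_err
    left_tail = stats.norm.cdf(z)                    # P(statistic <= z)
    p_two_sided = 2 * min(left_tail, 1 - left_tail)
    if p_two_sided >= P_THRESHOLD:
        return 1.0                  # no significant difference: hold the market
    if left_tail >= 0.95:           # cluster among the best periods
        return 2 * left_tail
    if left_tail <= 0.05:           # cluster among the worst periods
        return 0.5 * left_tail
    return left_tail
```

Under these assumptions, the backtest reduces to calling `cluster_peer_returns` and `allocate_weight` for every month from the 61st onward and multiplying each weight by the excess return of the following month.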

Selecting the barrier p-value is crucial because it determines when the strategy's returns will deviate from the market. In my case, I selected a value of 0.05. Changing the p-value after reviewing the results of the strategy could be considered overfitting; therefore, I also devised two additional methods, based on Monte Carlo simulations and backward testing, to obtain a p-value optimizer while maintaining statistically sound results, which I will discuss in a subsequent article.

Regarding cumulative returns, the strategy began outperforming the market in August 2020 and maintained this superior performance throughout the year.

As you can see in the graph above, the strategy chose to allocate a greater proportion of its portfolio to the market in July and October, leading to returns of 13% and 20% in August and November respectively, compared with the market's returns of 6.7% and 10%. This outperformance persisted in 2021, with the strategy delivering a total annual outperformance of around 7% while the market returned 19%, despite missing 3 predictions. In 2022, the strategy exactly replicated the market's returns.

Examining the five-year trailing Sharpe Ratio gives a more comprehensive picture of how the strategy performed relative to its level of risk. As anticipated, the outperformance began during the same timeframe as the returns and persisted throughout 2021, 2022, and 2023, even after accounting for volatility.
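The five-year trailing Sharpe Ratio can be obtained with a simple rolling computation; the 60-month window matches the article, while the function name is mine:

```python
import pandas as pd


def trailing_sharpe(excess_returns: pd.Series, window: int = 60) -> pd.Series:
    """Rolling monthly Sharpe Ratio over a five-year (60-month) window."""
    rolling = excess_returns.rolling(window)
    return rolling.mean() / rolling.std()
```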

Despite the outperformance, the alpha is not statistically significant. The trailing alpha graph shows a positive alpha in the regression of the strategy's excess returns on the market's excess returns since August 2020, but without statistical significance.

By contrast, the trailing beta-timing coefficient graph displays significant market-timing ability, as evidenced by the positive and statistically significant Treynor-Mazuy market-timing coefficient since August 2020.

Furthermore, the monthly Sharpe Ratio for the entire series of strategy returns is 0.194, compared with the market's Sharpe Ratio of 0.185, only around 5% higher. However, analyzing the performance of the strategy over different time periods makes it evident that the strategy replicated the market for seven years and only began making active decisions in the last three years of data. This explains the relatively small increase in the Sharpe Ratio compared with the market.
