
I am looking for the fastest / most performant way to implement Modified Sharpe Ratio optimization in R.

Modified Sharpe Ratio (MSR) as I define it for this problem is

MSR = r/(sd^f)

(where r is the return of a particular asset, sd is the standard deviation of that asset's returns, and f, a scalar, is the volatility factor or volatility attenuator).
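For concreteness, here is a minimal sketch of this definition for a single asset (the function name modified_sharpe is my own, not from any package):

modified_sharpe <- function(r, s, f) {
    # r: mean return, s: standard deviation of returns, f: volatility factor
    r / (s^f)
}

# f = 0 ignores volatility, f = 1 gives the classic Sharpe ratio (zero risk-free rate),
# f > 1 penalizes volatility more heavily.
modified_sharpe(0.05, 0.10, 1)   # 0.5
modified_sharpe(0.05, 0.10, 2)   # 5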

I can see that PortfolioAnalytics has a risk_aversion argument in the add.objective function which seems to do the same or a similar thing to what I want: for f=0 the algorithm will choose the composition with the highest return regardless of volatility, for f=1 it will choose the composition that maximizes the classic Sharpe ratio, and for f > 1 it will choose a low-volatility composition, with the extreme case being the minimum-variance portfolio.

This appears to be exactly what the risk_aversion parameter does in this demo: https://github.com/R-Finance/PortfolioAnalytics/blob/master/demo/demo_max_quadratic_utility.R
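For reference, the quadratic utility formulation from that demo looks roughly like the sketch below (assuming the ROI, ROI.plugin.quadprog and quadprog packages are installed; risk_aversion=4 is just an illustrative value):

# Sketch of the maximum quadratic utility setup from the linked demo.
library(PortfolioAnalytics)
library(ROI)
library(ROI.plugin.quadprog)

data(edhec)
R <- edhec[, 1:8]
funds <- colnames(R)

qu.portf <- portfolio.spec(assets=funds)
qu.portf <- add.constraint(portfolio=qu.portf, type="full_investment")
qu.portf <- add.constraint(portfolio=qu.portf, type="long_only")
qu.portf <- add.objective(portfolio=qu.portf, type="return", name="mean")
# risk_aversion scales the variance penalty in the quadratic utility objective.
qu.portf <- add.objective(portfolio=qu.portf, type="risk", name="var", risk_aversion=4)

opt.qu <- optimize.portfolio(R=R, portfolio=qu.portf, optimize_method="ROI", trace=TRUE)
opt.qu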

I can solve this optimization problem in PortfolioAnalytics using the random portfolios approach or DEoptim (see the code below), but perhaps a faster (and exact) solution is possible using ROI.

if (!require("PortfolioAnalytics")) {
    install.packages("PortfolioAnalytics", dependencies=TRUE)
}

library(PortfolioAnalytics)
data(edhec)
R <- edhec[, 1:8]
funds <- colnames(R)

# Custom risk function for PortfolioAnalytics: portfolio standard deviation raised
# to the power f, so f attenuates (f < 1) or amplifies (f > 1) the volatility penalty.
modifiedStdDev <- function(R, f, ...,
        clean=c("none", "boudt", "geltner"), portfolio_method=c("single", "component"),
        weights=NULL, mu=NULL, sigma=NULL, use="everything",
        method=c("pearson", "kendall", "spearman")) {
    modStdDev <- StdDev(R, ..., clean=clean, portfolio_method=portfolio_method,
            weights=weights, mu=mu, sigma=sigma, use=use, method=method)^f

    return(modStdDev)
}

# Construct initial portfolio with basic constraints.
init.portf.MaxModifiedSharpe <- portfolio.spec(assets=funds)
init.portf.MaxModifiedSharpe <- add.constraint(portfolio=init.portf.MaxModifiedSharpe, type="long_only", enabled=TRUE)
init.portf.MaxModifiedSharpe <- add.constraint(portfolio=init.portf.MaxModifiedSharpe, type="weight_sum", min_sum=0.99, max_sum=1.01, enabled=TRUE)
init.portf.MaxModifiedSharpe <- add.objective(portfolio=init.portf.MaxModifiedSharpe, type="return", name="mean", enabled=TRUE, multiplier=-1)
init.portf.MaxModifiedSharpe <- add.objective(portfolio=init.portf.MaxModifiedSharpe, type="risk", name="modifiedStdDev", enabled=TRUE, multiplier=1, arguments=list(f=1.0))

# Use DEoptim
maxModifiedSR.lo.DEoptim <- optimize.portfolio(R=R, 
        portfolio=init.portf.MaxModifiedSharpe, 
        optimize_method="DEoptim",
        search_size=2000,
        trace=TRUE)

maxModifiedSR.lo.DEoptim

chart.RiskReward(maxModifiedSR.lo.DEoptim, risk.col="modifiedStdDev", return.col="mean")

# Use random portfolios to run the optimization.
maxModifiedSR.lo.RP <- optimize.portfolio(R=R, 
        portfolio=init.portf.MaxModifiedSharpe, 
        optimize_method="random",
        search_size=2000,
        trace=TRUE)

maxModifiedSR.lo.RP

chart.RiskReward(maxModifiedSR.lo.RP, risk.col="modifiedStdDev", return.col="mean")
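As a baseline, for the special case f = 1 (the classic Sharpe ratio) an exact ROI-based solve seems possible. Here is a hedged sketch, assuming the ROI quadprog plugin is installed and that the maxSR option of optimize.portfolio applies to a plain mean/StdDev objective set:

# Sketch only: f = 1 reduces to the classic maximum Sharpe ratio portfolio.
library(ROI)
library(ROI.plugin.quadprog)

maxSR.portf <- portfolio.spec(assets=funds)
maxSR.portf <- add.constraint(portfolio=maxSR.portf, type="full_investment")
maxSR.portf <- add.constraint(portfolio=maxSR.portf, type="long_only")
maxSR.portf <- add.objective(portfolio=maxSR.portf, type="return", name="mean")
maxSR.portf <- add.objective(portfolio=maxSR.portf, type="risk", name="StdDev")

maxSR.lo.ROI <- optimize.portfolio(R=R,
        portfolio=maxSR.portf,
        optimize_method="ROI",
        maxSR=TRUE,
        trace=TRUE)

maxSR.lo.ROI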
Samo
  • I believe this problem is non-convex. That would explain the choice for DEoptim. The problem with f=1 (i.e. standard Sharpe Ratio) is still non-convex but can be reformulated as a convex QP. For f<>1 this is not possible. – Erwin Kalvelagen Jan 23 '20 at 23:06
