This guest article was written by author and consultant Tristan Yates (see his bio below). It emphasizes R’s data object manipulation and scoring capabilities via a detailed financial analysis example.
Scoring and ranking systems are extremely valuable management tools. They can be used to predict the future, make decisions, and improve behavior – sometimes all of the above. Think about how the simple grade point average is used to motivate students and make admissions decisions.
R is a great tool for building scoring and ranking systems: it’s a programming language designed for analytical applications, with statistical functions and the ability to store and manipulate data in list and table form built right into the core language.
But there’s also some validity to the criticism that R provides too many choices and not enough guidance. The best solution is to share your work with others, so in this article we show a basic design workflow for one such scoring and ranking system that can be applied to many different types of projects.
The Approach
For a recent article in Active Trader, we analyzed the risk of different market sectors over time with the objective of building less volatile investment portfolios. Every month, we scored each sector by its risk, using its individual ranking within the overall population, and used these rankings to predict future risk.
Here’s the workflow we used, and it can be applied to any scoring and ranking system that must perform over time (which most do):
- Load in the historical data for every month and ticker symbol.
- Load in the performance data for every month and ticker symbol.
- Generate scores and rankings for every month and ticker symbol based upon their relative position in the population on various indicators.
- Review the summary and look for trends.
In these steps, we used four data frames, as shown below:
Name | Contents |
---|---|
my.history | historical data |
my.scores | scoring components, total scores, rankings |
my.perf | performance data |
my.summary | summary or aggregate data |
One of my habits is to prefix my variables – it helps prevent collisions in the R namespace.
Some people put all of their data in the same data.frame, but keeping it separate reinforces good work habits. First, the historical data and performance data should never be manipulated, so it makes sense to keep them away from the more volatile scoring data.
Second, it helps draw a clear distinction between what we know at one point in time – which is historical data – and what we will know later – which is the performance data. That’s absolutely necessary for the integrity of the scoring system.
The my.history, my.scores, and my.perf data frames are organized like this:
yrmo | ticker | var1 | var2 | etc… |
---|---|---|---|---|
200401 | XLF | | | |
200401 | XLB | | | |
etc… | | | | |
yrmo is the year and month, and ticker is the item to be scored. We maintain our own lists of dates (in yrmo format) and items in my.dates and my.items. Both of these lists are called drivers, as they help us iterate through the data.frames, and we also keep a useful data.frame called my.driver that holds only the yrmo and ticker columns.
One trick – we keep the row order the same for all of these data.frames. That way we can use an index built on one to query another. For example, pulling one ticker’s history with an index built from my.driver works just fine:

vol.spy <- my.history$vol.1[my.driver$ticker=="SPY"]
Loading Data
First, we get our driver lists and my.driver data.frame set up. We select our date range and items from our population, and then build a data.frame using the rbind command.
#this is based on previous analysis
my.dates <- c(200401, 200402, ...)   #the yrmo study period (full list elided)
my.items <- c("XLB","XLE","XLF","XLI","XLK","XLP","XLU","XLV","XLY")   #the nine sector tickers

#now the driver
my.driver <- data.frame()
for (z.date in my.dates) {
  my.driver <- rbind(my.driver, data.frame(ticker=my.items, yrmo=z.date))
}
Next, let’s get our historical and performance data. We can make a function that can be called once for each row in my.driver that then loads any data needed.
my.seq <- 1:nrow(my.driver)
my.history <- data.frame(my.driver,
  vol.1=sapply(my.seq,calc.sd.fn,-1,-1))
Each variable is loaded by a function called through the sapply command. The calc.sd.fn function first looks up the ticker and yrmo from my.driver using the index provided, and then returns the data. You would have one such function for each indicator you want to load. The my.perf data.frame, which holds the performance data, is built in exactly the same way.
Unfortunately, the rbind command is slow, but the historical and performance data only need to be loaded once.
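The article doesn’t show calc.sd.fn itself, so here is a minimal sketch of what such a loader might look like. Everything in it is an assumption for illustration: my.returns is an invented data.frame of daily returns (ticker, yrmo, ret), and the two offset arguments are taken to select a window of months relative to the row’s yrmo.

```r
# Hypothetical sketch of an indicator loader such as calc.sd.fn.
# Assumes my.driver, my.dates from above, plus an invented
# my.returns data.frame of daily returns (ticker, yrmo, ret).
calc.sd.fn <- function(z.i, z.from, z.to) {
  z.ticker <- my.driver$ticker[z.i]       # look up the row's ticker...
  z.yrmo   <- my.driver$yrmo[z.i]         # ...and its year-month
  z.idx <- which(my.dates == z.yrmo)
  z.lo  <- z.idx + z.from                 # offsets pick a window of
  z.hi  <- z.idx + z.to                   # months relative to z.yrmo
  if (z.lo < 1) return(NA)                # not enough history yet
  z.ret <- my.returns$ret[my.returns$ticker == z.ticker &
                          my.returns$yrmo %in% my.dates[z.lo:z.hi]]
  sd(z.ret)                               # volatility over the window
}
```

Called as sapply(my.seq, calc.sd.fn, -1, -1), this would return each row’s prior-month volatility, with NA for the first month where no history exists.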
Scoring The Data
This is where R really shines. Let’s look at the highest level first.
my.scores <- data.frame()
for (z.yrmo in my.dates) {
  my.scores <- rbind(my.scores, calc.scores.fn(z.yrmo))
}
my.scores$p.tot <- my.scores$p.vol.1
Every indicator gets its own score, and the scores can then be combined in any conceivable way to create a total score. In this very simple case, we’re scoring only one indicator, so we just use that score as the total score.
For more complex applications, the ideal strategy is to use multiple indicators from multiple data sources to tell the same story. Ignore those who advocate reducing the number of variables and eliminating cross-correlations. Instead, think like a doctor who wants to run just one more test and get that independent confirmation.
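With multiple indicators, the total score can be as simple as a weighted sum of the per-indicator scores. Here is an invented sketch: the indicator names (p.vol.3, p.beta), the values, and the weights are all made up for illustration, not taken from the article.

```r
# Invented example: three 0-100 indicator scores per item,
# combined into one weighted total score.
my.scores <- data.frame(p.vol.1 = c(80, 20, 50),
                        p.vol.3 = c(70, 30, 50),
                        p.beta  = c(90, 10, 40))
z.w <- c(0.5, 0.3, 0.2)                  # weights, summing to 1
my.scores$p.tot <- as.numeric(as.matrix(my.scores) %*% z.w)
my.scores$p.tot                          # roughly 79 21 48
```

Because every component is already on the same 0-to-100 scale, the weights directly express how much each indicator contributes to the story.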
Now the calc functions:
scaled.score.fn <- function(z.raw)
  {pnorm(z.raw,mean(z.raw),sd(z.raw))*100}

scaled.rank.fn <- function(z.raw)
  {rank(z.raw)}

calc.scores.fn <- function(z.yrmo) {
  z.df <- subset(my.history, yrmo==z.yrmo)
  z.scores <- data.frame(ticker=z.df$ticker, yrmo=z.yrmo,
    p.vol.1=scaled.score.fn(z.df$vol.1),r.vol.1=scaled.rank.fn(z.df$vol.1))
  z.scores
}
The calc.scores.fn function queries the data.frame to pull the population data for a single point in time. Then each indicator is passed to the scaled.score.fn and scaled.rank.fn functions, returning a list of scores and ranks.
Here, we use the pnorm function to convert each value’s Z-score into a percentile between 0 and 100, a good practice for ensuring that the scoring system isn’t dominated by a single indicator.
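To see what that scaling does, here is a quick illustration with made-up volatility numbers (the definition is repeated so the snippet stands alone). Each value is placed relative to the population mean and standard deviation, so an extreme outlier scores high but is compressed rather than swamping everything else:

```r
# pnorm maps each value's Z-score to a 0-100 percentile under a
# normal assumption; the input volatilities here are made up.
scaled.score.fn <- function(z.raw)
  {pnorm(z.raw,mean(z.raw),sd(z.raw))*100}

z.vol <- c(10, 15, 20, 25, 50)     # one month's raw volatilities
round(scaled.score.fn(z.vol))      # outlier 50 scores high, but < 100
```

The middle values land near 50 and the scores stay strictly inside the 0-to-100 range, which is exactly the property that keeps one indicator from dominating a combined total.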
Checking the Scores
At this point, we create a new data.frame for summary analysis. We use the always-useful and always-confusing aggregate function to combine by rank. Notice how easily we can combine data from my.history, my.scores, and my.perf.
my.summary <- data.frame(rank=1:9,
  p.tot=aggregate(my.scores$p.tot, list(rank=my.scores$r.vol.1), mean)$x,
  ret.1=aggregate(my.perf$ret.1, list(rank=my.scores$r.vol.1), mean)$x,
  sd.1=aggregate(my.perf$ret.1, list(rank=my.scores$r.vol.1), sd)$x,
  vol.1=aggregate(my.perf$vol.1, list(rank=my.scores$r.vol.1), mean)$x,   #next month's volatility
  vol.p1=aggregate(my.history$vol.1, list(rank=my.scores$r.vol.1), mean)$x)   #past volatility
Here’s the result. We could check plots or correlations, but the trend – higher relative volatility in the past (vol.p1, p.tot) is more likely to mean higher relative volatility in the future (vol.1, sd.1) – is crystal clear.
rank | p.tot | ret.1 | sd.1 | vol.1 | vol.p1 |
---|---|---|---|---|---|
1 | 12.1 | 0.131 | 4.03 | 16.5 | 13.8 |
2 | 19.4 | 0.0872 | 4.82 | 16.6 | 16.1 |
3 | 27.1 | 0.2474 | 4.96 | 20.1 | 18 |
4 | 35.6 | 0.4247 | 5.31 | 20.9 | 19.9 |
5 | 44.9 | 0.6865 | 5.98 | 22.1 | 21.7 |
6 | 53 | 0.3235 | 5.84 | 21.5 | 23.2 |
7 | 65.1 | 1.019 | 5.86 | 24.6 | 25.4 |
8 | 78 | 0.7276 | 6.04 | 26.9 | 28.4 |
9 | 96.4 | 0.0837 | 9.34 | 35.2 | 38.3 |
In the case of our analysis, the scores aren’t really necessary – we’re only ranking nine items every month. If we did have a larger population, we could use code like this to create subgroups (six groups shown here), and then use the above aggregate function with the new my.scores$group variable.
my.scores$group <- cut(my.scores$p.tot,
  breaks=quantile(my.scores$p.tot,(0:6)/6),include.lowest=TRUE,labels=1:6)
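As a usage sketch with simulated scores (the data below are random, purely for illustration), cut plus quantile splits the population into equal-sized buckets that can then be fed to the aggregate call above:

```r
# Simulated total scores split into six equal-sized groups
# using quantile breaks.
set.seed(42)
p.tot <- runif(300, 0, 100)
group <- cut(p.tot, breaks = quantile(p.tot, (0:6)/6),
             include.lowest = TRUE, labels = 1:6)
table(group)   # 50 items in each of the six groups
```

Because the breaks come from the data’s own quantiles, every group gets the same number of items regardless of how the scores are distributed.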
Wrap-up
We ultimately only ended up scoring one variable, but it’s pretty easy to see how this framework could be expanded to dozens or more. Even so, it’s an easy system to describe – we grade each item by its ranking within the population. People don’t trust scoring systems that can’t be easily explained, and with good reason.
There’s not a lot of code here, and that’s a testament to R’s capabilities. A lot of the housekeeping work is done for you, and the list operations eliminate confusing nested loops. It can be a real luxury to program in R after dealing with some other “higher level” language.
We hope you find this useful and encourage you to share your own solutions as well.
Tristan Yates is the Executive Director of Yates Management, a management and analytical consulting firm serving financial and military clients. He is also the author of Enhanced Indexing Strategies and his writing and research have appeared in publications including the Wall Street Journal and Forbes/Investopedia.
Some speedup is possible in the rbind part. Instead of
my.driver <- data.frame()
for (z.date in my.dates) {
my.driver <- rbind(my.driver,data.frame(ticker=my.items,yrmo=z.date))
}
one could write
my.driver <- do.call(rbind, lapply(my.dates, function(z.date) data.frame(ticker=my.items,yrmo=z.date)))
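Another option, not from the article or the comment above but a further suggestion, is expand.grid, which builds the full ticker-by-month cross in a single call and preserves the same row order as the original loop:

```r
# expand.grid varies its first argument fastest, so this matches the
# loop's ordering: all tickers for the first month, then the next.
# The lists here are shortened examples.
my.items <- c("XLF", "XLB", "XLE")
my.dates <- c(200401, 200402)
my.driver <- expand.grid(ticker = my.items, yrmo = my.dates)
nrow(my.driver)   # 3 tickers x 2 months = 6 rows
```

This avoids both the loop and the repeated rbind copying entirely.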