What: Fast calculation on R data structures
How: Use data tables instead of data frames
A few days ago I struggled with performance issues in R. The task was the analysis of a larger text corpus (~600 MB of English text). The preprocessing was ultimately done in Java, which made the difference between unusable (R) and quite fast (Java). The analysis itself was feasible in R after switching from data frames to data tables.
The following snippet shows an example of calculating the mean on subgroups of the data:
library(data.table)
n  <- 1e7
df <- data.frame(col1 = rnorm(n), fac = sample(1:5, n, replace = TRUE))
dt <- as.data.table(df)
aggregate(col1 ~ fac, df, mean)   # data frame: mean of col1 per group
dt[, mean(col1), by = fac]        # data table: same result, much faster
The results are:
| Data structure | Time [s] |
|----------------|----------|
Essentially, the data table version is roughly a factor of 80 faster on my machine (Windows 8.1, 8 GB RAM, i5).
Where to go next: read this interesting blog post on data tables and performance. Also have a look at the fread function for reading in data, and at other nice features of data tables such as keys and assignment of columns by reference.
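To illustrate those last features, here is a minimal sketch (the toy data is made up for illustration): fread parsing input, setkey enabling fast keyed subsetting, and := adding a column by reference.

```r
library(data.table)

# fread() parses files (or literal text, as here) much faster than read.csv()
dt <- fread("fac,col1\na,1\nb,2\na,3\n")

# setkey() sorts the table by fac and enables fast binary-search subsetting
setkey(dt, fac)
dt["a"]                   # all rows where fac == "a", found via the key

# := assigns a new column by reference, without copying the whole table
dt[, col2 := col1 * 2]
```

Note that := modifies dt in place; no copy of the table is made, which matters a lot at the 600 MB scale mentioned above.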