In the following example, why should we favour using f1 over f2? Is it more efficient in some sense? For someone used to base R, it seems more natural to use the "substitute + eval" option.
library(dplyr)

d = data.frame(x = 1:5,
               y = rnorm(5))

# using enquo + !!
f1 = function(mydata, myvar) {
  m = enquo(myvar)
  mydata %>%
    mutate(two_y = 2 * !!m)
}

# using substitute + eval
f2 = function(mydata, myvar) {
  m = substitute(myvar)
  mydata %>%
    mutate(two_y = 2 * eval(m))
}
all.equal(d %>% f1(y), d %>% f2(y)) # TRUE
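For what it's worth, here is a minimal sketch of one situation where I would expect the two to diverge (dm is just a hypothetical data frame I made up, whose column m happens to share the name of the helper variable inside the functions): the eval version seems to silently pick up the column from the data mask instead of the captured expression, while the enquo version does not.

# hypothetical data frame with a column named m, colliding with the helper variable
dm = data.frame(x = 1:5, y = rnorm(5), m = 0)
dm %>% f1(y) # two_y = 2 * y, as intended
dm %>% f2(y) # two_y = 0: the column m shadows the captured expression m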
In other words, and beyond this particular example, my question is: can I get away with programming dplyr's NSE functions using good ol' base R tools like substitute + eval, or do I really need to learn to love all those rlang functions because there is a real benefit to it (speed, clarity, compositionality, ...)?
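For completeness: I have also seen the {{ }} "embrace" shorthand in newer rlang versions (>= 0.4.0, if I recall correctly), which appears to collapse enquo + !! into a single step. A minimal sketch of a hypothetical f3 along those lines would be:

# hypothetical f3, same as f1 but with the {{ }} shorthand (assumes rlang >= 0.4.0)
f3 = function(mydata, myvar) {
  mydata %>%
    mutate(two_y = 2 * {{ myvar }})
}
all.equal(d %>% f1(y), d %>% f3(y)) # should also be TRUE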