Computer Languages (R, Python, Julia, C/C++)

Biostat/Biomath M257

Author

Dr. Hua Zhou @ UCLA

Published

April 11, 2023

System information (for reproducibility):

versioninfo()
Julia Version 1.8.5
Commit 17cfb8e65ea (2023-01-08 06:45 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin21.5.0)
  CPU: 12 × Apple M2 Max
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, apple-m1)
  Threads: 8 on 8 virtual cores
Environment:
  JULIA_NUM_THREADS = 8
  JULIA_EDITOR = code

Load packages:

using Pkg
Pkg.activate(pwd())
Pkg.instantiate()
Pkg.status()
  Activating project at `~/Documents/github.com/ucla-biostat-257/2023spring/slides/02-langs`
Status `~/Documents/github.com/ucla-biostat-257/2023spring/slides/02-langs/Project.toml`
  [6e4b80f9] BenchmarkTools v1.3.2
  [31c24e10] Distributions v0.25.87
  [bdcacae8] LoopVectorization v0.12.157
  [438e738f] PyCall v1.95.1
  [6f49c342] RCall v0.13.14
  [8f399da3] Libdl
  [9a3f8284] Random
  [10745b16] Statistics

In this lecture, we review some popular computer languages commonly used in computational statistics. We want to understand why some languages are faster than others and what remedies exist for the slower ones.

1 Types of computer languages

  • Compiled languages (low-level languages): C/C++, Fortran, …
    • Directly compiled to machine code that is executed by CPU
    • Pros: fast, memory efficient
    • Cons: longer development time, hard to debug
  • Interpreted languages (high-level languages): R, Matlab, Python, SAS IML, JavaScript, …
    • Source code is executed line by line by an interpreter
    • Pros: fast prototyping
    • Cons: excruciatingly slow for loops
  • Mixed (dynamic) languages: Matlab (JIT), R (compiler package), Julia, Cython, Java, …
    • Pros and cons: between the compiled and interpreted languages
  • Scripting languages: Linux shell scripts, Perl, …
    • Extremely useful for some data preprocessing and manipulation
  • Database languages: SQL, Hive (Hadoop).
    • Data analysis never happens if we do not know how to retrieve data from databases.
    203B (Introduction to Data Science) covers some basics of scripting and databases.

2 How do high-level languages work?

Comics: The Life of a Bytecode Language

  • Typical execution of a high-level language such as R, Python, and Matlab.

  • To improve the efficiency of interpreted languages such as R or Matlab, the conventional wisdom is to avoid loops as much as possible, i.e., to vectorize code (a small R illustration appears at the end of this list).
    The only loop you are allowed to have is that for an iterative algorithm.

  • When looping is unavoidable, we need to code the loops in C, C++, or Fortran. This creates the notorious two language problem.
    Success stories: the popular glmnet package in R is coded in Fortran; the tidyverse and data.table packages use a lot of Rcpp/C++.

  • High-level languages have made many efforts to bring themselves closer to the performance of low-level languages such as C, C++, or Fortran, with varying levels of success.

    • Matlab has employed JIT (just-in-time compilation) technology since 2002.
    • Since R 3.4.0 (Apr 2017), the JIT bytecode compiler is enabled by default at its level 3.
    • Cython compiles a superset of Python into C extensions.
  • Modern languages such as Julia try to solve the two language problem, that is, to achieve efficiency without having to vectorize code.

  • Julia execution.
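
To make the vectorization advice concrete, here is a minimal sketch using the RCall.jl package (introduced in the next section) that times an explicit R loop against the equivalent vectorized expression. The names loop_sq and z are made up for this illustration, and the bench package is assumed to be installed in the R library used by RCall.

using RCall

R"""
library(bench)
z <- runif(1e5)
# explicit loop: each iteration is interpreted
loop_sq <- function(v) {
  out <- numeric(length(v))
  for (i in seq_along(v)) out[i] <- v[i]^2
  out
}
# vectorized version: the loop runs inside compiled C code
bench::mark(loop_sq(z), z^2)
"""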

3 Gibbs sampler example by Doug Bates

  • Doug Bates (member of R-Core, author of popular R packages Matrix, lme4, RcppEigen, etc)

    As some of you may know, I have had a (rather late) mid-life crisis and run off with another language called Julia.

    Doug Bates (on the knitr Google Group)

  • An example from Dr. Doug Bates’s slides Julia for R Programmers.

  • The task is to create a Gibbs sampler for the density
    \[
    f(x, y) = k x^2 \exp(- x y^2 - y^2 + 2y - 4x), \quad x > 0,
    \]
    using the conditional distributions
    \[
    \begin{eqnarray*}
    X \mid Y &\sim& \Gamma \left( 3, \frac{1}{y^2 + 4} \right) \\
    Y \mid X &\sim& N \left(\frac{1}{1+x}, \frac{1}{2(1+x)} \right).
    \end{eqnarray*}
    \]

  • R solution. The RCall.jl package allows us to execute R code without leaving the Julia environment. We first define an R function Rgibbs().

using RCall

# show R information
R"""
sessionInfo()
"""
RObject{VecSxp}
R version 4.2.2 (2022-10-31)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.3.1

Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
[1] compiler_4.2.2
# define a function for Gibbs sampling

R"""
library(Matrix)

Rgibbs <- function(N, thin) {
  mat <- matrix(0, nrow=N, ncol=2)
  x <- y <- 0
  for (i in 1:N) {
    for (j in 1:thin) {
      x <- rgamma(1, 3, y * y + 4) # 3rd arg is rate
      y <- rnorm(1, 1 / (x + 1), 1 / sqrt(2 * (x + 1)))
    }
    mat[i,] <- c(x, y)
  }
  mat
}
"""
RObject{ClosSxp}
function (N, thin) 
{
    mat <- matrix(0, nrow = N, ncol = 2)
    x <- y <- 0
    for (i in 1:N) {
        for (j in 1:thin) {
            x <- rgamma(1, 3, y * y + 4)
            y <- rnorm(1, 1/(x + 1), 1/sqrt(2 * (x + 1)))
        }
        mat[i, ] <- c(x, y)
    }
    mat
}

Let’s generate a sample of size 10,000 with a thinning of 500. How long does it take?

R"""
system.time(Rgibbs(10000, 500))
"""
RObject{RealSxp}
   user  system elapsed 
  9.605   0.574  10.180 
  • This is a Julia function for the same Gibbs sampler:
using Distributions

function jgibbs(N, thin)
    mat = zeros(N, 2)
    x = y = 0.0
    for i in 1:N
        for j in 1:thin
            x = rand(Gamma(3, 1 / (y * y + 4)))
            y = rand(Normal(1 / (x + 1), 1 / sqrt(2(x + 1))))
        end
        mat[i, 1] = x
        mat[i, 2] = y
    end
    mat
end
jgibbs (generic function with 1 method)

Generate the same number of samples. How long does it take?

jgibbs(100, 5); # warm-up
@elapsed jgibbs(10000, 500)
0.107618

We see a nearly 100-fold speedup of Julia over R on this example, with similar coding effort!

4 Comparing C, C++, R, Python, and Julia

To better understand how these languages work, we consider a simple task: summing a vector.

Let’s first generate the data: 1 million double-precision numbers from the uniform distribution on [0, 1).

using Random

Random.seed!(257) # seed
x = rand(1_000_000) # 1 million random numbers in [0, 1)
sum(x)
500156.3977288406

In this class, we extensively use the BenchmarkTools.jl package for robust benchmarking. It’s the analog of the microbenchmark and bench packages in R.

using BenchmarkTools
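
As a quick illustration (a minimal sketch; the sections below use the fuller @benchmark macro), @btime runs an expression many times and prints the minimum time and the allocations. The dollar sign interpolates the global x so that the cost of accessing an untyped global is not included in the timing.

# quick benchmark; $x interpolates the global into the benchmarked expression
@btime sum($x);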

4.1 C

We use the low-level C code as the baseline for comparison. In Julia, we can easily call compiled C code using the ccall function.

using Libdl

C_code = """
#include <stddef.h>

double c_sum(size_t n, double *X) {
    double s = 0.0;
    for (size_t i = 0; i < n; ++i) {
        s += X[i];
    }
    return s;
}
"""

const Clib = tempname()   # make a temporary file

# compile to a shared library by piping C_code to gcc
# (works only if you have gcc installed):
open(`gcc -std=c99 -fPIC -O3 -msse3 -xc -shared -o $(Clib * "." * Libdl.dlext) -`, "w") do f
    print(f, C_code) 
end

# define a Julia function that calls the C function:
c_sum(X :: Array{Float64}) = ccall(("c_sum", Clib), Float64, (Csize_t, Ptr{Float64}), length(X), X)
clang: warning: argument unused during compilation: '-msse3' [-Wunused-command-line-argument]
c_sum (generic function with 1 method)
# make sure it gives same answer
c_sum(x)
500156.39772885485
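
The answer agrees with Julia’s sum(x) only up to roundoff: the C loop accumulates left to right, while Julia’s built-in sum uses pairwise (and SIMD) summation, so the two differ in the last few digits. A tolerance-based check is therefore more appropriate than exact equality; a minimal sketch:

# compare up to floating-point roundoff rather than by exact equality
c_sum(x) ≈ sum(x)       # isapprox with the default relative tolerance
abs(c_sum(x) - sum(x))  # tiny difference due to different accumulation orders
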
# dollar sign to interpolate array x into local scope for benchmarking
bm = @benchmark c_sum($x)
BenchmarkTools.Trial: 6129 samples with 1 evaluation.
 Range (minmax):  811.583 μs960.708 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     811.750 μs                GC (median):    0.00%
 Time  (mean ± σ):   815.030 μs ±   9.003 μs   GC (mean ± σ):  0.00% ± 0.00%
   ▄▁   ▁                                                 ▁
  ▅█▆█▇██▅███▆▇██▆▇▆▆▆█▇▇▇▆▆▆▆▇▅▆▅▄▄▄▄▄▅▄▄▅▅▅▁▆▄▄▄▄▄▄▃▃▃▃▅▁▃▄ █
  812 μs        Histogram: log(frequency) by time        853 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
# a dictionary to store median runtime (in milliseconds)
benchmark_result = Dict() 

# store median runtime (in milliseconds)
benchmark_result["C"] = median(bm.times) / 1e6
0.81175

4.2 R, builtin sum

Next we compare to the built-in sum function in R, which is implemented in C.

R"""
library(bench)
library(tidyverse)

# interpolate x into R workspace
y <- $x
rbm_builtin <- bench::mark(sum(y)) %>%
  print(width = Inf)
""";
┌ Warning: RCall.jl: ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
│ ✔ dplyr     1.1.0     ✔ readr     2.1.4
│ ✔ forcats   1.0.0     ✔ stringr   1.5.0
│ ✔ ggplot2   3.4.1     ✔ tibble    3.2.0
│ ✔ lubridate 1.9.2     ✔ tidyr     1.3.0
│ ✔ purrr     1.0.1     
│ ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
│ ✖ tidyr::expand() masks Matrix::expand()
│ ✖ dplyr::filter() masks stats::filter()
│ ✖ dplyr::lag()    masks stats::lag()
│ ✖ tidyr::pack()   masks Matrix::pack()
│ ✖ tidyr::unpack() masks Matrix::unpack()
│ ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
└ @ RCall ~/.julia/packages/RCall/Wyd74/src/io.jl:172
# A tibble: 1 × 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 sum(y)       1.33ms   1.33ms      746.        0B        0   373     0
  total_time result    memory             time             gc                
    <bch:tm> <list>    <list>             <list>           <list>            
1      500ms <dbl [1]> <Rprofmem [0 × 3]> <bench_tm [373]> <tibble [373 × 3]>
# store median runtime (in milliseconds)
@rget rbm_builtin # dataframe
benchmark_result["R builtin"] = median(rbm_builtin[!, :median]) * 1000
1.3345909828785807

4.3 R, handwritten loop

A handwritten loop in R is much slower.

R"""
sum_r <- function(x) {
  s <- 0
  for (xi in x) {
    s <- s + xi
  }
  s
}
library(bench)
y <- $x
rbm_loop <- bench::mark(sum_r(y)) %>%
  print(width = Inf)
""";
# A tibble: 1 × 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 sum_r(y)     7.65ms   7.75ms      128.    13.1KB        0    65     0
  total_time result    memory              time            gc               
    <bch:tm> <list>    <list>              <list>          <list>           
1      506ms <dbl [1]> <Rprofmem [36 × 3]> <bench_tm [65]> <tibble [65 × 3]>
# store median runtime (in milliseconds)
@rget rbm_loop # dataframe
benchmark_result["R loop"] = median(rbm_loop[!, :median]) * 1000
7.746868010144681

4.4 R, Rcpp

The Rcpp package provides an easy way to incorporate C++ code in R.

R"""
library(Rcpp)

cppFunction('double rcpp_sum(NumericVector x) {
  int n = x.size();
  double total = 0;
  for(int i = 0; i < n; ++i) {
    total += x[i];
  }
  return total;
}')

rcpp_sum
"""
RObject{ClosSxp}
function (x) 
.Call(<pointer: 0x2c81fcd00>, x)
R"""
rbm_rcpp <- bench::mark(rcpp_sum(y)) %>%
  print(width = Inf)
""";
# A tibble: 1 × 13
  expression       min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr>  <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 rcpp_sum(y)    799µs    803µs     1234.    2.49KB        0   617     0
  total_time result    memory             time             gc                
    <bch:tm> <list>    <list>             <list>           <list>            
1      500ms <dbl [1]> <Rprofmem [1 × 3]> <bench_tm [617]> <tibble [617 × 3]>
# store median runtime (in milliseconds)
@rget rbm_rcpp # dataframe
benchmark_result["R Rcpp"] = median(rbm_rcpp[!, :median]) * 1000
0.8029030286706984

4.5 Python, builtin sum

The built-in sum function in Python.

using PyCall

PyCall.pyversion
v"3.10.9"
# get the Python built-in "sum" function:
pysum = pybuiltin("sum")
bm = @benchmark $pysum($x)
BenchmarkTools.Trial: 135 samples with 1 evaluation.
 Range (minmax):  36.486 ms 37.890 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     37.058 ms                GC (median):    0.00%
 Time  (mean ± σ):   37.095 ms ± 163.792 μs   GC (mean ± σ):  0.00% ± 0.00%
                         ▃▂▄█▇                                
  ▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▆▆█████▆▆▃▁▄▅▃▃▁▁▆▃▄▃▁▄▃▃▁▃▁▁▁▁▁▁▁▁▃ ▃
  36.5 ms         Histogram: frequency by time         37.7 ms <
 Memory estimate: 240 bytes, allocs estimate: 6.
# store median runtime (in milliseconds)
benchmark_result["Python builtin"] = median(bm.times) / 1e6
37.058458

4.6 Python, handwritten loop

py"""
def py_sum(A):
    s = 0.0
    for a in A:
        s += a
    return s
"""

sum_py = py"py_sum"

bm = @benchmark $sum_py($x)
BenchmarkTools.Trial: 110 samples with 1 evaluation.
 Range (minmax):  45.192 ms 46.912 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     45.812 ms                GC (median):    0.00%
 Time  (mean ± σ):   45.784 ms ± 287.684 μs   GC (mean ± σ):  0.00% ± 0.00%
                       ▂ ▇█ ▃                                 
  ▃▁▁▃▅▅▆▆▆▆▅▅▅▃▅▃▁▁▆▃▅████▆█▅▃▁▁▁▃▃▃▃▅▁▃▃▁▁▁▃▃▃▁▁▁▁▁▁▁▁▁▁▁▃ ▃
  45.2 ms         Histogram: frequency by time         46.7 ms <
 Memory estimate: 240 bytes, allocs estimate: 6.
# store median runtime (in milliseconds)
benchmark_result["Python loop"] = median(bm.times) / 1e6
45.812208

4.7 Python, numpy

NumPy is the high-performance scientific computing library for Python.

# bring in sum function from Numpy 
numpy_sum = pyimport("numpy")."sum"
PyObject <function sum at 0x2c96bd3f0>
bm = @benchmark $numpy_sum($x)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  159.708 μs313.833 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     174.000 μs                GC (median):    0.00%
 Time  (mean ± σ):   172.386 μs ±   5.816 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▂▅▅▃▁     ▁                      ▅█▇▄▁▁  ▃▃▂                 ▂
  █████▇▇▇▇███▇█▇▇▇▇▆▆▇▆▆▇▇▆▆▇▆▆▆███████▇█████████▇▇▇▆▅▅▅▅▄▅▆ █
  160 μs        Histogram: log(frequency) by time        185 μs <
 Memory estimate: 240 bytes, allocs estimate: 6.
# store median runtime (in milliseconds)
benchmark_result["Python numpy"] = median(bm.times) / 1e6
0.174

NumPy’s performance is on par with Julia’s built-in sum function. Both are roughly 5 times faster than the naive C loop, most likely because of SIMD.

4.8 Julia, builtin sum

The @time, @elapsed, and @allocated macros in Julia report run times and memory allocation.

@time sum(x) # no compilation time after first run
  0.000174 seconds (1 allocation: 16 bytes)
500156.3977288406
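
@elapsed and @allocated are handy when we want the measurement as a value rather than printed output (as with @elapsed in the Gibbs example above); a minimal sketch:

@elapsed sum(x)    # wall-clock time in seconds, returned as a Float64
@allocated sum(x)  # number of bytes allocated by the call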

For more robust benchmarking, we use the BenchmarkTools.jl package.

bm = @benchmark sum($x)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  159.291 μs203.583 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     159.459 μs                GC (median):    0.00%
 Time  (mean ± σ):   160.519 μs ±   2.727 μs   GC (mean ± σ):  0.00% ± 0.00%
  █         ▂▅▁      ▁▁                                     ▁
  ██▆▆▅▅▆▆▁▁▁▁███▇▆▇▇▇▇██▇▇▆▅▁▅▅▅▆▆▇▆▇▇▇▇▇▆▆▇▆▆▅▅▄▅▆▆▆▅▆▅▅▅▅▅ █
  159 μs        Histogram: log(frequency) by time        173 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
benchmark_result["Julia builtin"] = median(bm.times) / 1e6
0.159459

4.9 Julia, handwritten loop

Let’s also write the loop ourselves and benchmark it.

function jl_sum(A)
    s = zero(eltype(A))
    for a in A
        s += a
    end
    s
end

bm = @benchmark jl_sum($x)
BenchmarkTools.Trial: 6130 samples with 1 evaluation.
 Range (minmax):  811.583 μs940.250 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     811.792 μs                GC (median):    0.00%
 Time  (mean ± σ):   814.884 μs ±   7.989 μs   GC (mean ± σ):  0.00% ± 0.00%
    ▄  ▁ ▁                                                ▁
  ██▇▄███▅▇█▇█▆▇█▇█▇▆▆▅█▇▇▆▆▅▅▆▆▆▄▆▄▆▄▅▅▅▄▅▄▃▅▅▄▃▅▁▁▃▃▃▁▃▅▁▁▇ █
  812 μs        Histogram: log(frequency) by time        851 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
benchmark_result["Julia loop"] = median(bm.times) / 1e6
0.811792

Exercise: annotate the loop with @simd or LoopVectorization.@turbo and benchmark again.
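
A minimal sketch of the @simd variant (jl_sum_simd is a name chosen here only for illustration); @simd, together with @inbounds, allows the compiler to reorder the floating-point accumulation and emit SIMD instructions:

function jl_sum_simd(A)
    s = zero(eltype(A))
    @simd for i in eachindex(A)
        @inbounds s += A[i]  # @inbounds removes bounds checks so the loop can vectorize
    end
    s
end

@benchmark jl_sum_simd($x)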

using LoopVectorization

function jl_sum_turbo(A)
    s = zero(eltype(A))
    @turbo for i in eachindex(A)
      s += A[i]
    end
    s
end

bm = @benchmark jl_sum_turbo($x)
bm
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  82.166 μs142.542 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     89.250 μs                GC (median):    0.00%
 Time  (mean ± σ):   87.171 μs ±   3.736 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▇▆▃▁            ▁        ▄▁            ▂▂          ▂
  ████▇▇▆▇▇▆▆▆▇▇▆▆██▇▇▇▆▆▄▇▇▆▆▅▅▆▅███▇▅▃▄▄▃▄▄▁▁▁▄██▇▆▄▅▅▄▅▅█ █
  82.2 μs       Histogram: log(frequency) by time      94.7 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
benchmark_result["Julia turbo"] = median(bm.times) / 1e6
0.08925

The @tturbo macro turns on multi-threading in addition to SIMD. Note that for this to work, Julia needs to be started with multiple threads.
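
A minimal check: if the count below is 1, restart Julia with, e.g., julia --threads 8, or set the JULIA_NUM_THREADS environment variable (as in the session information at the top of this page).

# number of threads available to this Julia session
Threads.nthreads()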

function jl_sum_tturbo(A)
    s = zero(eltype(A))
    @tturbo for i in eachindex(A)
      s += A[i]
    end
    s
end

bm = @benchmark jl_sum_tturbo($x)
bm
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  31.500 μs148.459 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     32.541 μs                GC (median):    0.00%
 Time  (mean ± σ):   32.772 μs ±   2.423 μs   GC (mean ± σ):  0.00% ± 0.00%
       ▄▄▃▅█   ▁ ▁▂▂                                          ▁
  ▃▃▁▁▁█████▆█▇███▅▇▆▅▁▃▁▁▄▄▁▃▄▅▆█▃▅▄▅█▇▆▆▇▅▆▆█▅▅▃▄▄▃▄▃▅▃▄▇ █
  31.5 μs       Histogram: log(frequency) by time      37.9 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
benchmark_result["Julia tturbo"] = median(bm.times) / 1e6
0.032541

4.10 Summary

sort(collect(benchmark_result), by = x -> x[2])
11-element Vector{Pair{Any, Any}}:
   "Julia tturbo" => 0.032541
    "Julia turbo" => 0.08925
  "Julia builtin" => 0.159459
   "Python numpy" => 0.174
         "R Rcpp" => 0.8029030286706984
              "C" => 0.81175
     "Julia loop" => 0.811792
      "R builtin" => 1.3345909828785807
         "R loop" => 7.746868010144681
 "Python builtin" => 37.058458
    "Python loop" => 45.812208
  • C, R Rcpp, and R builtin represent the baseline C/C++ performance (gold standard).

  • Python builtin and Python loop are roughly 50-fold slower than C because the loop is interpreted (the fold ratios relative to C are tabulated in the sketch after this list).

  • R loop is about 10 times slower than C; this reflects the performance of the JIT bytecode generated by R's compiler package (turned on by default since R v3.4.0, Apr 2017).

  • Julia loop is close to C performance, because Julia code is JIT compiled.

  • Julia builtin and Python numpy are roughly 5-fold faster than C, and Julia turbo is nearly 10-fold faster, because of SIMD.

  • @turbo (SIMD) and @tturbo (multi-threading + SIMD), offered by the LoopVectorization.jl package, yield the top performance on this machine.
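
As referenced above, a minimal sketch that tabulates the fold differences relative to the C baseline; base is a throwaway name introduced only for this computation.

# express each median runtime as a multiple of the C baseline
base = benchmark_result["C"]
sort([lang => round(t / base, digits = 2) for (lang, t) in benchmark_result], by = x -> x[2])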

5 Take home message for computational scientists

  • High-level language (R, Python, Matlab) programmers should be familiar with existing high-performance packages. Don’t reinvent the wheel.
    • R: RcppEigen, tidyverse, …
    • Python: numpy, scipy, …
  • In most research projects, looping is unavoidable. Then we need to use a low-level language.
    • R: Rcpp, …
    • Python: Cython, …
  • In this course, we use Julia, which circumvents the two language problem. So we can spend more time on algorithms, not on juggling \(\ge 2\) languages.