Numerical Linear Algebra

Biostat/Biomath M257

Author

Dr. Hua Zhou @ UCLA

Published

April 18, 2023

System information (for reproducibility):

versioninfo()
Julia Version 1.8.5
Commit 17cfb8e65ea (2023-01-08 06:45 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin21.5.0)
  CPU: 12 × Apple M2 Max
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, apple-m1)
  Threads: 8 on 8 virtual cores
Environment:
  JULIA_NUM_THREADS = 8
  JULIA_EDITOR = code

Load packages:

using Pkg

Pkg.activate(pwd())
Pkg.instantiate()
Pkg.status()
  Activating project at `~/Documents/github.com/ucla-biostat-257/2023spring/slides/08-numalgintro`
Status `~/Documents/github.com/ucla-biostat-257/2023spring/slides/08-numalgintro/Project.toml`
  [6e4b80f9] BenchmarkTools v1.3.2
  [0e44f5e4] Hwloc v2.2.0
  [bdcacae8] LoopVectorization v0.12.157
  [6f49c342] RCall v0.13.15
  [37e2e46d] LinearAlgebra
  [9a3f8284] Random

1 Introduction

  • Topics in numerical algebra:

    • BLAS
    • solve linear equations \(\mathbf{A} \mathbf{x} = \mathbf{b}\)
    • regression computations \(\mathbf{X}^T \mathbf{X} \beta = \mathbf{X}^T \mathbf{y}\)
    • eigen-problems \(\mathbf{A} \mathbf{x} = \lambda \mathbf{x}\)
    • generalized eigen-problems \(\mathbf{A} \mathbf{x} = \lambda \mathbf{B} \mathbf{x}\)
    • singular value decompositions \(\mathbf{A} = \mathbf{U} \Sigma \mathbf{V}^T\)
    • iterative methods for numerical linear algebra
  • Except for the iterative methods, most of these numerical linear algebra tasks are implemented in the BLAS and LAPACK libraries. They form the building blocks of most statistical computing tasks (optimization, MCMC).

  • Our major goals (or learning objectives) are to

    1. know the complexity (flop count) of each task
    2. be familiar with the BLAS and LAPACK functions (what they do)
    3. avoid re-inventing the wheel by implementing these dense linear algebra subroutines yourself
    4. understand the need for iterative methods
    5. apply appropriate numerical algebra tools to various statistical problems
  • All high-level languages (Julia, Matlab, Python, R) call BLAS and LAPACK for numerical linear algebra.

    • Julia offers more flexibility by exposing interfaces to many BLAS/LAPACK subroutines directly. See documentation.

2 BLAS

  • BLAS stands for basic linear algebra subprograms.

  • See netlib for a complete list of standardized BLAS functions.

  • There are many implementations of BLAS.

    • Netlib provides a reference implementation.
    • Matlab uses Intel's MKL (Math Kernel Library). The MKL implementation is the gold standard on the market. It is not open source, but the compiled library is free for Linux and macOS. However, not surprisingly, it only runs on x86 CPUs and is tuned for Intel's.
    • Julia uses OpenBLAS, the best cross-platform, open-source implementation. With the MKL.jl package, it is also very easy to use MKL in Julia.
  • There are 3 levels of BLAS functions.

| Level | Example Operation | Name | Dimension | Flops |
|-------|-------------------|------|-----------|-------|
| 1 | \(\alpha \gets \mathbf{x}^T \mathbf{y}\) | dot product | \(\mathbf{x}, \mathbf{y} \in \mathbb{R}^n\) | \(2n\) |
| 1 | \(\mathbf{y} \gets \mathbf{y} + \alpha \mathbf{x}\) | axpy | \(\alpha \in \mathbb{R}\), \(\mathbf{x}, \mathbf{y} \in \mathbb{R}^n\) | \(2n\) |
| 2 | \(\mathbf{y} \gets \mathbf{y} + \mathbf{A} \mathbf{x}\) | gaxpy | \(\mathbf{A} \in \mathbb{R}^{m \times n}\), \(\mathbf{x} \in \mathbb{R}^n\), \(\mathbf{y} \in \mathbb{R}^m\) | \(2mn\) |
| 2 | \(\mathbf{A} \gets \mathbf{A} + \mathbf{y} \mathbf{x}^T\) | rank one update | \(\mathbf{A} \in \mathbb{R}^{m \times n}\), \(\mathbf{x} \in \mathbb{R}^n\), \(\mathbf{y} \in \mathbb{R}^m\) | \(2mn\) |
| 3 | \(\mathbf{C} \gets \mathbf{C} + \mathbf{A} \mathbf{B}\) | matrix multiplication | \(\mathbf{A} \in \mathbb{R}^{m \times p}\), \(\mathbf{B} \in \mathbb{R}^{p \times n}\), \(\mathbf{C} \in \mathbb{R}^{m \times n}\) | \(2mnp\) |
  • Typical BLAS functions support single precision (S), double precision (D), complex (C), and double complex (Z).
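
For instance, Julia exposes many of these routines directly through the LinearAlgebra.BLAS module. Below is a minimal sketch calling one double-precision (D) routine from each level; the small arrays are made up purely for illustration.

using LinearAlgebra

n = 5
x, y = rand(n), rand(n)
A, B, C = rand(n, n), rand(n, n), zeros(n, n)

BLAS.dot(x, y)                          # level 1: dot product x'y
BLAS.axpy!(2.0, x, y)                   # level 1: y <- y + 2.0 * x
BLAS.gemv!('N', 1.0, A, x, 1.0, y)      # level 2: y <- A * x + y
BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C) # level 3: C <- A * B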

3 Examples

The form of a mathematical expression and the way the expression should be evaluated in actual practice may be quite different.

Some operations appear to be level-3 but are actually level-2.

Example 1. A common operation in statistics is column scaling or row scaling \[ \begin{eqnarray*} \mathbf{A} &=& \mathbf{A} \mathbf{D} \quad \text{(column scaling)} \\ \mathbf{A} &=& \mathbf{D} \mathbf{A} \quad \text{(row scaling)}, \end{eqnarray*} \] where \(\mathbf{D}\) is diagonal. For example, in generalized linear models (GLMs), the Fisher information matrix takes the form
\[ \mathbf{X}^T \mathbf{W} \mathbf{X}, \] where \(\mathbf{W}\) is a diagonal matrix with observation weights on diagonal.
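
As a quick sketch (the names X and w below are hypothetical), the weighted cross-product \(\mathbf{X}^T \mathbf{W} \mathbf{X}\) can be formed without ever materializing \(\mathbf{W}\) as a full matrix:

using LinearAlgebra

n, p = 1000, 10
X = randn(n, p) # hypothetical design matrix
w = rand(n)     # hypothetical observation weights

# row-scale X by w (level-2 work), then a single level-3 multiply
info1 = transpose(X) * (Diagonal(w) * X)
info2 = transpose(X) * (w .* X) # equivalent, via broadcasting
info1 ≈ info2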

Column and row scalings are essentially level-2 operations!

using BenchmarkTools, LinearAlgebra, Random

Random.seed!(257) # seed
n = 2000
A = rand(n, n) # n-by-n matrix
d = rand(n)  # n vector
D = Diagonal(d) # diagonal matrix with d as diagonal
2000×2000 Diagonal{Float64, Vector{Float64}}:
 0.0416032   ⋅         ⋅         ⋅       …   ⋅         ⋅         ⋅ 
  ⋅         0.887679   ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅        0.102233   ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅        0.08407      ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅       …   ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅       …   ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
 ⋮                                       ⋱                      
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅       …   ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅       …   ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅          0.213471   ⋅         ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅        0.870533   ⋅ 
  ⋅          ⋅         ⋅         ⋅           ⋅         ⋅        0.318876
Dfull = diagm(d) # convert to full matrix
2000×2000 Matrix{Float64}:
 0.0416032  0.0       0.0       0.0      …  0.0       0.0       0.0
 0.0        0.887679  0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.102233  0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.08407     0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0      …  0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0      …  0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 ⋮                                       ⋱                      
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0      …  0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0      …  0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.0
 0.0        0.0       0.0       0.0         0.213471  0.0       0.0
 0.0        0.0       0.0       0.0         0.0       0.870533  0.0
 0.0        0.0       0.0       0.0         0.0       0.0       0.318876
# this is calling BLAS routine for matrix multiplication: O(n^3) flops
# this is SLOW!
@benchmark $A * $Dfull
BenchmarkTools.Trial: 103 samples with 1 evaluation.
 Range (minmax):  46.682 ms91.738 ms   GC (min … max): 0.00% … 2.04%
 Time  (median):     47.301 ms               GC (median):    0.00%
 Time  (mean ± σ):   48.864 ms ±  5.360 ms   GC (mean ± σ):  0.85% ± 1.47%
  █▇   ▄▄                                                    
  ██▁▅▇██▅▅▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▅▁▁▁▁▅▁▁▁▁▁▁▅▁▁▁▁▁▁▇▁▅ ▅
  46.7 ms      Histogram: log(frequency) by time      62.7 ms <
 Memory estimate: 30.52 MiB, allocs estimate: 2.
# dispatch to special method for diagonal matrix multiplication.
# columnwise scaling: O(n^2) flops
@benchmark $A * $D
BenchmarkTools.Trial: 1900 samples with 1 evaluation.
 Range (minmax):  2.159 ms  6.428 ms   GC (min … max):  0.00% … 24.64%
 Time  (median):     2.356 ms                GC (median):     0.00%
 Time  (mean ± σ):   2.629 ms ± 527.545 μs   GC (mean ± σ):  10.29% ± 13.59%
    ▇█▇▇                                                      
  ▄██████▆▅▄▄▄▃▃▂▂▃▂▂▂▂▁▁▁▁▁▁▂▁▁▁▂▃▄▅▅▆▅▆▅▄▃▃▃▃▂▃▂▂▂▂▂▂▂▂▂▂ ▃
  2.16 ms         Histogram: frequency by time           4 ms <
 Memory estimate: 30.52 MiB, allocs estimate: 2.
# Or we can use broadcasting (with recycling)
@benchmark $A .* transpose($d)
BenchmarkTools.Trial: 1798 samples with 1 evaluation.
 Range (minmax):  2.169 ms  7.500 ms   GC (min … max):  0.00% … 25.93%
 Time  (median):     2.452 ms                GC (median):     0.00%
 Time  (mean ± σ):   2.778 ms ± 684.447 μs   GC (mean ± σ):  10.85% ± 14.18%
   ▃█▆▂                                                        
  ▄█████▆▅▄▄▃▃▂▂▂▂▂▁▂▃▄▄▅▄▃▃▃▃▃▃▃▃▃▃▃▃▃▂▂▂▂▂▂▂▁▂▂▁▂▂▂▂▂▁▂▂▂ ▃
  2.17 ms         Histogram: frequency by time        5.18 ms <
 Memory estimate: 30.52 MiB, allocs estimate: 2.
# in-place: avoids allocating space for the result
# rmul!(A, D): compute the matrix product A * D in place, overwriting A, and return the result.
@benchmark rmul!($A, $D)
BenchmarkTools.Trial: 8666 samples with 1 evaluation.
 Range (minmax):  523.417 μs709.209 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     569.541 μs                GC (median):    0.00%
 Time  (mean ± σ):   573.723 μs ±  12.789 μs   GC (mean ± σ):  0.00% ± 0.00%
                          ▆█▄▁                                   
  ▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▆████▆▆▅▅▄▄▄▃▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ ▃
  523 μs           Histogram: frequency by time          624 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
# In-place broadcasting 
@benchmark $A .= $A .* transpose($d)
BenchmarkTools.Trial: 3177 samples with 1 evaluation.
 Range (minmax):  1.460 ms  7.036 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.563 ms                GC (median):    0.00%
 Time  (mean ± σ):   1.570 ms ± 100.481 μs   GC (mean ± σ):  0.00% ± 0.00%
                             ▃▇▇▇█▇▇▂▁▁  ▁                  
  ▂▂▁▁▁▁▁▁▁▁▂▁▁▁▁▁▁▂▁▂▂▁▂▄▅▆████████████▇█▆▆▅▄▅▄▄▄▄▃▃▃▃▂▃▃ ▄
  1.46 ms         Histogram: frequency by time        1.64 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.

Exercise: Try @turbo (SIMD) and @tturbo (SIMD + multi-threading) from the LoopVectorization.jl package.
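
A possible starting point, as a sketch only (colscale_turbo! is a made-up name; replacing @turbo with @tturbo adds multi-threading on top of SIMD):

using LoopVectorization

# in-place column scaling: A[i, j] <- A[i, j] * d[j]
function colscale_turbo!(A, d)
    @turbo for j in axes(A, 2), i in axes(A, 1)
        A[i, j] = A[i, j] * d[j]
    end
    A
end

# @benchmark colscale_turbo!($A, $d)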

Note: In R or Matlab, diag(d) will create a full matrix. Be cautious when using the diag function: do we really need a full diagonal matrix?

using RCall

R"""
d <- runif(5)
diag(d)
"""
RObject{RealSxp}
          [,1]      [,2]      [,3]      [,4]       [,5]
[1,] 0.9887967 0.0000000 0.0000000 0.0000000 0.00000000
[2,] 0.0000000 0.7087668 0.0000000 0.0000000 0.00000000
[3,] 0.0000000 0.0000000 0.8747552 0.0000000 0.00000000
[4,] 0.0000000 0.0000000 0.0000000 0.4828469 0.00000000
[5,] 0.0000000 0.0000000 0.0000000 0.0000000 0.04755812
# This works only when Matlab is installed
using MATLAB

mat"""
d = rand(5, 1)
diag(d)
"""
LoadError: ArgumentError: Package MATLAB not found in current path.
- Run `import Pkg; Pkg.add("MATLAB")` to install the MATLAB package.

Example 2. The inner product between two matrices \(\mathbf{A}, \mathbf{B} \in \mathbb{R}^{m \times n}\) is often written as \[ \text{trace}(\mathbf{A}^T \mathbf{B}), \text{trace}(\mathbf{B} \mathbf{A}^T), \text{trace}(\mathbf{A} \mathbf{B}^T), \text{ or } \text{trace}(\mathbf{B}^T \mathbf{A}). \] These appear to be level-3 operations (matrix multiplication with \(O(m^2n)\) or \(O(mn^2)\) flops).

Random.seed!(123)
n = 2000
A, B = randn(n, n), randn(n, n)

# slow way to evaluate tr(A'B): 2mn^2 flops
@benchmark tr(transpose($A) * $B)
BenchmarkTools.Trial: 105 samples with 1 evaluation.
 Range (minmax):  46.778 ms67.629 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     46.993 ms               GC (median):    0.00%
 Time  (mean ± σ):   47.919 ms ±  2.842 ms   GC (mean ± σ):  0.99% ± 1.70%
  █      ▄▁                                                   
  █▄▁▄▄▆██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▄ ▄
  46.8 ms      Histogram: log(frequency) by time      61.4 ms <
 Memory estimate: 30.52 MiB, allocs estimate: 2.

But \(\text{trace}(\mathbf{A}^T \mathbf{B}) = \langle \text{vec}(\mathbf{A}), \text{vec}(\mathbf{B}) \rangle\). The latter is a level-1 BLAS operation with \(O(mn)\) flops.

# smarter way to evaluate tr(A'B): 2mn flops
@benchmark dot($A, $B)
BenchmarkTools.Trial: 2750 samples with 1 evaluation.
 Range (minmax):  1.765 ms 2.040 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.812 ms               GC (median):    0.00%
 Time  (mean ± σ):   1.815 ms ± 26.006 μs   GC (mean ± σ):  0.00% ± 0.00%
             ▃▂▂▆▄▂▇▅▇▄▇█▄▄▅▁▃▂                             
  ▂▄▅▄▅▅▆▆▇▇▇█████████████████████▆▆▆▆▆▆▆▅▄▄▄▄▃▃▃▂▂▃▂▂▂▂▂▂ ▅
  1.77 ms        Histogram: frequency by time        1.89 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.

Example 3. Similarly \(\text{diag}(\mathbf{A}^T \mathbf{B})\) can be calculated in \(O(mn)\) flops.

# slow way to evaluate diag(A'B): O(n^3)
@benchmark diag(transpose($A) * $B)
BenchmarkTools.Trial: 104 samples with 1 evaluation.
 Range (minmax):  46.959 ms55.136 ms   GC (min … max): 0.00% … 3.46%
 Time  (median):     47.519 ms               GC (median):    0.00%
 Time  (mean ± σ):   48.120 ms ±  1.222 ms   GC (mean ± σ):  0.99% ± 1.68%
      █▅                                                      
  ▃▁▁▆██▆▅▄▁▃▁▁▁▁▁▁▁▁▃▃▃▁▄▄▆▅▅▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▃ ▃
  47 ms           Histogram: frequency by time        52.1 ms <
 Memory estimate: 30.53 MiB, allocs estimate: 3.
# smarter way to evaluate diag(A'B): O(n^2)
@benchmark Diagonal(vec(sum($A .* $B, dims = 1)))
BenchmarkTools.Trial: 1350 samples with 1 evaluation.
 Range (minmax):  3.265 ms  5.263 ms   GC (min … max): 0.00% … 31.35%
 Time  (median):     3.395 ms                GC (median):    0.00%
 Time  (mean ± σ):   3.699 ms ± 550.626 μs   GC (mean ± σ):  8.57% ± 11.84%
    ▆█▅                                                       
  ▂▆███▇▅▄▃▃▃▂▂▂▂▂▂▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▃▅▆▅▅▄▄▃▂▃ ▃
  3.27 ms         Histogram: frequency by time        4.79 ms <
 Memory estimate: 30.53 MiB, allocs estimate: 5.

To avoid allocating intermediate arrays altogether, we can just write a double loop or use the dot function.

# compute Diagonal(diag(A' * B)) in O(mn) flops, without forming A' * B
function diag_matmul!(d, A, B)
    m, n = size(A)
    @assert size(B) == (m, n) "A and B should have same size"
    fill!(d, 0)
    # accumulate d[j] = <A[:, j], B[:, j]> column by column
    for j in 1:n, i in 1:m
        d[j] += A[i, j] * B[i, j]
    end
    Diagonal(d)
end

d = zeros(eltype(A), size(A, 2))
@benchmark diag_matmul!($d, $A, $B)
BenchmarkTools.Trial: 1412 samples with 1 evaluation.
 Range (minmax):  3.463 ms 3.693 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     3.536 ms               GC (median):    0.00%
 Time  (mean ± σ):   3.539 ms ± 36.346 μs   GC (mean ± σ):  0.00% ± 0.00%
                ▁ ▃▂▃▃▆▆▅▃▅ ▆▁▁▃▁    ▁                       
  ▄▅▄▅▄▅▃▅▃▃▂▅▇▇████████████████▆▇█▆█▇▅▇▅▃▅▅▅▃▄▃▃▂▂▂▂▂▂▂▂▂ ▄
  3.46 ms        Histogram: frequency by time        3.63 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.

Exercise: Try @turbo (SIMD) and @tturbo (SIMD + multi-threading) from the LoopVectorization.jl package.
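
One way to attempt this exercise, as a sketch (diag_matmul_turbo! is a hypothetical name; swap @turbo for @tturbo to add multi-threading):

using LinearAlgebra, LoopVectorization

# same double loop as diag_matmul!, annotated for SIMD
function diag_matmul_turbo!(d, A, B)
    m, n = size(A)
    @assert size(B) == (m, n) "A and B should have same size"
    fill!(d, 0)
    @turbo for j in 1:n, i in 1:m
        d[j] += A[i, j] * B[i, j]
    end
    Diagonal(d)
end

# @benchmark diag_matmul_turbo!($d, $A, $B)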

4 Memory hierarchy and level-3 fraction

The key to high performance is effective use of the memory hierarchy. This is true on all architectures.

  • Flop count is not the sole determinant of algorithm efficiency. Another important factor is data movement through the memory hierarchy.

Source: https://cs.brown.edu/courses/csci1310/2020/assign/labs/lab4.html

  • In Julia, we can query the CPU topology with the Hwloc.jl package. For example, this laptop runs an Apple M2 Max chip with 4 efficiency cores and 8 performance cores.
using Hwloc

topology_graphical()
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Machine (1975MB total)                                                                                                                     │
│                                                                                                                                            │
│ ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │
│ │ Package L#0                                                                                                                            │ │
│ │                                                                                                                                        │ │
│ │ ┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ │
│ │ │ NUMANode L#0 P#0 (1975MB)                                                                                                          │ │ │
│ │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ │ │
│ │                                                                                                                                        │ │
│ │ ┌────────────────────────────────────────────────────────────────┐  ┌────────────────────────────────────────────────────────────────┐ │ │
│ │ │ L2 (4096KB)                                                    │  │ L2 (16MB)                                                      │ │ │
│ │ └────────────────────────────────────────────────────────────────┘  └────────────────────────────────────────────────────────────────┘ │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐ │ │
│ │ │ L1d (64KB)  │  │ L1d (64KB)  │  │ L1d (64KB)  │  │ L1d (64KB)  │  │ L1d (128KB) │  │ L1d (128KB) │  │ L1d (128KB) │  │ L1d (128KB) │ │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘ │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐ │ │
│ │ │ L1i (128KB) │  │ L1i (128KB) │  │ L1i (128KB) │  │ L1i (128KB) │  │ L1i (192KB) │  │ L1i (192KB) │  │ L1i (192KB) │  │ L1i (192KB) │ │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘ │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐ │ │
│ │ │ Core L#0    │  │ Core L#1    │  │ Core L#2    │  │ Core L#3    │  │ Core L#4    │  │ Core L#5    │  │ Core L#6    │  │ Core L#7    │ │ │
│ │ │             │  │             │  │             │  │             │  │             │  │             │  │             │  │             │ │ │
│ │ │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │ │ │
│ │ │ │ PU L#0  │ │  │ │ PU L#1  │ │  │ │ PU L#2  │ │  │ │ PU L#3  │ │  │ │ PU L#4  │ │  │ │ PU L#5  │ │  │ │ PU L#6  │ │  │ │ PU L#7  │ │ │ │
│ │ │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │ │ │
│ │ │ │   P#0   │ │  │ │   P#1   │ │  │ │   P#2   │ │  │ │   P#3   │ │  │ │   P#4   │ │  │ │   P#5   │ │  │ │   P#6   │ │  │ │   P#7   │ │ │ │
│ │ │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │ │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘ │ │
│ │                                                                                                                                        │ │
│ │ ┌────────────────────────────────────────────────────────────────┐                                                                     │ │
│ │ │ L2 (16MB)                                                      │                                                                     │ │
│ │ └────────────────────────────────────────────────────────────────┘                                                                     │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐                                                                     │ │
│ │ │ L1d (128KB) │  │ L1d (128KB) │  │ L1d (128KB) │  │ L1d (128KB) │                                                                     │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘                                                                     │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐                                                                     │ │
│ │ │ L1i (192KB) │  │ L1i (192KB) │  │ L1i (192KB) │  │ L1i (192KB) │                                                                     │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘                                                                     │ │
│ │                                                                                                                                        │ │
│ │ ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐                                                                     │ │
│ │ │ Core L#8    │  │ Core L#9    │  │ Core L#10   │  │ Core L#11   │                                                                     │ │
│ │ │             │  │             │  │             │  │             │                                                                     │ │
│ │ │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │                                                                     │ │
│ │ │ │ PU L#8  │ │  │ │ PU L#9  │ │  │ │ PU L#10 │ │  │ │ PU L#11 │ │                                                                     │ │
│ │ │ │         │ │  │ │         │ │  │ │         │ │  │ │         │ │                                                                     │ │
│ │ │ │   P#8   │ │  │ │   P#9   │ │  │ │  P#10   │ │  │ │  P#11   │ │                                                                     │ │
│ │ │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │  │ └─────────┘ │                                                                     │ │
│ │ └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘                                                                     │ │
│ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
  • For example, the Xeon X5650 CPU has a theoretical throughput of 128 DP GFLOPS but a maximum memory bandwidth of 32 GB/s.

  • Can we keep CPU cores busy with enough deliveries of matrix data and ship the results to memory fast enough to avoid backlog?
    Answer: use high-level BLAS as much as possible.

| BLAS | Dimension | Mem. Refs. | Flops | Ratio |
|------|-----------|------------|-------|-------|
| Level 1: \(\mathbf{y} \gets \mathbf{y} + \alpha \mathbf{x}\) | \(\mathbf{x}, \mathbf{y} \in \mathbb{R}^n\) | \(3n\) | \(2n\) | 3:2 |
| Level 2: \(\mathbf{y} \gets \mathbf{y} + \mathbf{A} \mathbf{x}\) | \(\mathbf{x}, \mathbf{y} \in \mathbb{R}^n\), \(\mathbf{A} \in \mathbb{R}^{n \times n}\) | \(n^2\) | \(2n^2\) | 1:2 |
| Level 3: \(\mathbf{C} \gets \mathbf{C} + \mathbf{A} \mathbf{B}\) | \(\mathbf{A}, \mathbf{B}, \mathbf{C} \in \mathbb{R}^{n \times n}\) | \(4n^2\) | \(2n^3\) | 2:n |
  • Higher-level BLAS (level 3 or 2) makes more effective use of the arithmetic logic units (ALUs) by keeping them busy: the surface-to-volume effect.

Source: Jack Dongarra’s slides.

  • A distinction between LAPACK and LINPACK (older versions of R use LINPACK) is that LAPACK makes use of higher-level BLAS as much as possible (usually by smart partitioning) to increase the so-called level-3 fraction.

  • To appreciate the effort that goes into an optimized BLAS implementation such as OpenBLAS (evolved from GotoBLAS), see the Quora question, especially the video. The bottom line is:

Get familiar with (good implementations of) BLAS/LAPACK and use them as much as possible.

5 Effect of data layout

  • Data layout in memory affects algorithmic efficiency too. It is much faster to move contiguous chunks of data in memory than to retrieve/write scattered data.

  • Storage mode: column-major (Fortran, Matlab, R, Julia) vs row-major (C/C++).

  • A cache line is the minimum amount of data that can be transferred between the cache and memory.

    • x86 CPUs: 64 bytes
    • ARM CPUs: 32 bytes

  • In Julia, we can query the cache line size by Hwloc.jl.
# Apple Silicon (M1/M2 chips) don't have L3 cache
Hwloc.cachelinesize()
LoadError: Your system doesn't seem to have an L3 cache.
  • Accessing a column-major stored matrix by rows (\(ij\) looping) causes lots of cache misses.

  • Take matrix multiplication as an example \[ \mathbf{C} \gets \mathbf{C} + \mathbf{A} \mathbf{B}, \quad \mathbf{A} \in \mathbb{R}^{m \times p}, \mathbf{B} \in \mathbb{R}^{p \times n}, \mathbf{C} \in \mathbb{R}^{m \times n}. \] Assume the storage is column-major, such as in Julia. There are 6 variants of the algorithms according to the order in the triple loops.

    • jki or kji looping:
# inner most loop
for i in 1:m
    C[i, j] = C[i, j] + A[i, k] * B[k, j]
end
  • ikj or kij looping:
# inner most loop        
for j in 1:n
    C[i, j] = C[i, j] + A[i, k] * B[k, j]
end
  • ijk or jik looping:
# inner most loop        
for k in 1:p
    C[i, j] = C[i, j] + A[i, k] * B[k, j]
end
  • We pay attention to the innermost loop, where the vector calculation occurs. The associated stride when accessing the three matrices in memory (assuming column-major storage) is
| Variant | A Stride | B Stride | C Stride |
|---------|----------|----------|----------|
| \(jki\) or \(kji\) | Unit | 0 | Unit |
| \(ikj\) or \(kij\) | 0 | Non-Unit | Non-Unit |
| \(ijk\) or \(jik\) | Non-Unit | Unit | 0 |

Apparently the variants \(jki\) or \(kji\) are preferred.

"""
    matmul_by_loop!(A, B, C, order)

Overwrite `C` by `A * B`. `order` indicates the looping order for triple loop.
"""
function matmul_by_loop!(A::Matrix, B::Matrix, C::Matrix, order::String)
    
    m = size(A, 1)
    p = size(A, 2)
    n = size(B, 2)
    fill!(C, 0)
    
    if order == "jki"
        @inbounds for j = 1:n, k = 1:p, i = 1:m
            C[i, j] += A[i, k] * B[k, j]
        end
    end

    if order == "kji"
        @inbounds for k = 1:p, j = 1:n, i = 1:m
            C[i, j] += A[i, k] * B[k, j]
        end
    end
    
    if order == "ikj"
        @inbounds for i = 1:m, k = 1:p, j = 1:n
            C[i, j] += A[i, k] * B[k, j]
        end
    end

    if order == "kij"
        @inbounds for k = 1:p, i = 1:m, j = 1:n
            C[i, j] += A[i, k] * B[k, j]
        end
    end
    
    if order == "ijk"
        @inbounds for i = 1:m, j = 1:n, k = 1:p
            C[i, j] += A[i, k] * B[k, j]
        end
    end
    
    if order == "jik"
        @inbounds for j = 1:n, i = 1:m, k = 1:p
            C[i, j] += A[i, k] * B[k, j]
        end
    end
    
end

using Random

Random.seed!(123)
m, p, n = 2000, 100, 2000
A = rand(m, p)
B = rand(p, n)
C = zeros(m, n);
  • \(jki\) and \(kji\) looping:
using BenchmarkTools

@benchmark matmul_by_loop!($A, $B, $C, "jki")
BenchmarkTools.Trial: 86 samples with 1 evaluation.
 Range (minmax):  57.729 ms 58.987 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     57.935 ms                GC (median):    0.00%
 Time  (mean ± σ):   58.163 ms ± 433.233 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▂▄▄█                                                          
  ████▅▆▃▃▅█▃▃▃▃▁▅▁▃▁▁▁▃▃▃▃▁▃▁▁▁▃▁▁▁▁▃▅▃▁▁▅▁▅▃▁▅▃▁▅▅▅▃▁▁▆▁▆▃ ▁
  57.7 ms         Histogram: frequency by time           59 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
@benchmark matmul_by_loop!($A, $B, $C, "kji")
BenchmarkTools.Trial: 27 samples with 1 evaluation.
 Range (minmax):  184.051 ms191.960 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     187.742 ms                GC (median):    0.00%
 Time  (mean ± σ):   187.688 ms ±   2.181 ms   GC (mean ± σ):  0.00% ± 0.00%
  ▁▁  ▁▁        ▁█▁▁ ▁▁ █      ▁▁       █▁▁▁  █     ▁   ▁    ▁  
  ██▁▁██▁▁▁▁▁▁▁▁████▁██▁█▁▁▁▁▁██▁▁▁▁▁▁▁████▁▁█▁▁▁▁▁█▁▁▁█▁▁▁▁█ ▁
  184 ms           Histogram: frequency by time          192 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
  • \(ikj\) and \(kij\) looping:
@benchmark matmul_by_loop!($A, $B, $C, "ikj")
BenchmarkTools.Trial: 10 samples with 1 evaluation.
 Range (minmax):  510.386 ms520.613 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     514.270 ms                GC (median):    0.00%
 Time  (mean ± σ):   514.856 ms ±   3.683 ms   GC (mean ± σ):  0.00% ± 0.00%
  █  ██          █ █          █    █              █   █       █  
  █▁▁██▁▁▁▁▁▁▁▁▁▁█▁█▁▁▁▁▁▁▁▁█▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁█▁▁▁▁▁▁▁█ ▁
  510 ms           Histogram: frequency by time          521 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
@benchmark matmul_by_loop!($A, $B, $C, "kij")
BenchmarkTools.Trial: 10 samples with 1 evaluation.
 Range (minmax):  505.004 ms521.187 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     506.261 ms                GC (median):    0.00%
 Time  (mean ± σ):   508.999 ms ±   5.690 ms   GC (mean ± σ):  0.00% ± 0.00%
  █ ▁█ ▁   ▁                                    ▁            ▁  
  █▁███▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  505 ms           Histogram: frequency by time          521 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
  • \(ijk\) and \(jik\) looping:
@benchmark matmul_by_loop!($A, $B, $C, "ijk")
BenchmarkTools.Trial: 22 samples with 1 evaluation.
 Range (minmax):  234.204 ms243.705 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     234.779 ms                GC (median):    0.00%
 Time  (mean ± σ):   235.851 ms ±   2.308 ms   GC (mean ± σ):  0.00% ± 0.00%
  ▂█                                                       
  ████▁▁▁█▁▁▁▁▁▁▁▁▅▁▁▁▅▁▁▅▁▁▁▁▁▁▁▁▁▁▅▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ ▁
  234 ms           Histogram: frequency by time          244 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
@benchmark matmul_by_loop!($A, $B, $C, "jik")
BenchmarkTools.Trial: 22 samples with 1 evaluation.
 Range (minmax):  234.338 ms244.125 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     235.966 ms                GC (median):    0.00%
 Time  (mean ± σ):   237.434 ms ±   3.474 ms   GC (mean ± σ):  0.00% ± 0.00%
  █ █                                                         ▁  
  █▆█▁▁▁▁▆▁▆▁▆▆▆▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▆▁▆▁▁▁▁▆▆▁▆▁▁▁▁▁▁▁▁▁▁▁█ ▁
  234 ms           Histogram: frequency by time          244 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
  • Question: Can our loop beat BLAS? Julia wraps the BLAS library for matrix multiplication. We see the BLAS library wins hands down (multi-threading, SIMD, and a higher level-3 fraction via blocked outer products).
@benchmark mul!($C, $A, $B)
BenchmarkTools.Trial: 1820 samples with 1 evaluation.
 Range (minmax):  2.635 ms  8.466 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     2.694 ms                GC (median):    0.00%
 Time  (mean ± σ):   2.746 ms ± 439.602 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▅ ▂▂▁▁▁▁▁▁▂▁▁▁▂▁▁▁▂▁▁▁▁▂▁▁▂▂▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▁▂▁▁▁▁▁▂▁▁▂ ▂
  2.64 ms         Histogram: frequency by time        5.84 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.
# direct call of BLAS wrapper function
@benchmark LinearAlgebra.BLAS.gemm!('N', 'N', 1.0, $A, $B, 0.0, $C)
BenchmarkTools.Trial: 1864 samples with 1 evaluation.
 Range (minmax):  2.636 ms 5.764 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     2.668 ms               GC (median):    0.00%
 Time  (mean ± σ):   2.681 ms ± 81.102 μs   GC (mean ± σ):  0.00% ± 0.00%
           ▄█▅                                               
  ▁▁▁▁▁▁▁▃▇███▇▅▄▃▂▂▂▃▃▃▃▃▃▂▂▂▂▂▂▁▁▂▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂
  2.64 ms        Histogram: frequency by time        2.78 ms <
 Memory estimate: 0 bytes, allocs estimate: 0.

Question (again): Can our loop beat BLAS?

Exercise: Annotate the loop in matmul_by_loop! with @turbo and @tturbo (multi-threading) and benchmark again.

6 BLAS in R

  • Tip for R users: the standard R distribution from CRAN uses a very outdated BLAS/LAPACK library.
using RCall

R"""
sessionInfo()
"""
RObject{VecSxp}
R version 4.2.2 (2022-10-31)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.3.1

Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
[1] compiler_4.2.2
R"""
library(dplyr)
library(bench)
A <- $A
B <- $B
bench::mark(A %*% B) %>%
  print(width = Inf)
""";
┌ Warning: RCall.jl: 
│ Attaching package: ‘dplyr’
│ 
│ The following objects are masked from ‘package:stats’:
│ 
│     filter, lag
│ 
│ The following objects are masked from ‘package:base’:
│ 
│     intersect, setdiff, setequal, union
│ 
└ @ RCall ~/.julia/packages/RCall/LWzAQ/src/io.jl:172
# A tibble: 1 × 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 A %*% B       125ms    125ms      7.67    30.5MB     7.67     4     4
  total_time result                memory             time          
    <bch:tm> <list>                <list>             <list>        
1      522ms <dbl [2,000 × 2,000]> <Rprofmem [1 × 3]> <bench_tm [4]>
  gc              
  <list>          
1 <tibble [4 × 3]>
┌ Warning: RCall.jl: Warning: Some expressions had a GC in every iteration; so filtering is disabled.
└ @ RCall ~/.julia/packages/RCall/LWzAQ/src/io.jl:172
  • Re-building R from source with OpenBLAS or MKL will immediately boost linear algebra performance in R. Google "build R using MKL" to get started. Similarly, we can build Julia with MKL or simply load the MKL.jl package (see the sketch after the Matlab example below).

  • Matlab uses MKL. Usually it’s very hard to beat Matlab in terms of linear algebra.

using MATLAB

mat"""
f = @() $A * $B;
timeit(f)
"""
LoadError: ArgumentError: Package MATLAB not found in current path.
- Run `import Pkg; Pkg.add("MATLAB")` to install the MATLAB package.
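
Back to the MKL point above: a minimal sketch of switching Julia's BLAS backend via the MKL.jl package (not part of this project's environment; run Pkg.add("MKL") first):

using LinearAlgebra

BLAS.get_config()   # shows the active BLAS backend (OpenBLAS by default)

# using MKL         # loading MKL.jl swaps in MKL via libblastrampoline (Julia ≥ 1.7)
# BLAS.get_config() # would then report MKL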

7 Avoid memory allocation: some examples

7.1 Transposing a matrix is an expensive memory operation

In R, the command

t(A) %*% x

will first transpose A and then perform the matrix multiplication, causing unnecessary memory allocation.

using Random, LinearAlgebra, BenchmarkTools
Random.seed!(123)

n = 1000
A = rand(n, n)
x = rand(n);
R"""
A <- $A
x <- $x
bench::mark(t(A) %*% x) %>%
  print(width = Inf)
""";
# A tibble: 1 × 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>
1 t(A) %*% x   1.77ms    2.1ms      475.    7.64MB     96.1   178    36
  total_time result            memory             time            
    <bch:tm> <list>            <list>             <list>          
1      375ms <dbl [1,000 × 1]> <Rprofmem [2 × 3]> <bench_tm [214]>
  gc                
  <list>            
1 <tibble [214 × 3]>

Julia avoids transposing the matrix whenever possible. The Transpose type only creates a view of the transposed matrix data.

typeof(transpose(A))
Transpose{Float64, Matrix{Float64}}
fieldnames(typeof(transpose(A)))
(:parent,)
# same data in transpose(A) and the original matrix A
pointer(transpose(A).parent), pointer(A)
(Ptr{Float64} @0x0000000131de0000, Ptr{Float64} @0x0000000131de0000)
# dispatch to BLAS
# does *not* actually transpose the matrix
@benchmark transpose($A) * $x
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  22.666 μs65.000 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     23.750 μs               GC (median):    0.00%
 Time  (mean ± σ):   23.975 μs ±  1.843 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▃█▆▄▁ ▄▅▃▂               ▁ ▁                             ▂
  █████▆█████▆▆▆▅▆▆▅▆▇▇███████████▇██▆▇▆▅▆▅▄▅▆▅▄▅▃▅▆▅▄▅▅▄▆ █
  22.7 μs      Histogram: log(frequency) by time      31.7 μs <
 Memory estimate: 7.94 KiB, allocs estimate: 1.
# pre-allocate result
out = zeros(size(A, 2))
@benchmark mul!($out, transpose($A), $x)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  22.708 μs67.042 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     23.000 μs               GC (median):    0.00%
 Time  (mean ± σ):   23.248 μs ±  1.350 μs   GC (mean ± σ):  0.00% ± 0.00%
   ▆                                                       ▁
  ▆██▇▅▃▄▁▅▄▅▅▅▅▄▅▄▃▅▅▅▅▄▅▅▅▆▆▇▇▆▇▇▇█▇▆▆▇▇▇▆▆▆▅▅▅▃▄▄▁▃▄▄▃▄▅ █
  22.7 μs      Histogram: log(frequency) by time      29.7 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.
# or call BLAS wrapper directly
@benchmark LinearAlgebra.BLAS.gemv!('T', 1.0, $A, $x, 0.0, $out)
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  22.583 μs70.083 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     22.791 μs               GC (median):    0.00%
 Time  (mean ± σ):   23.098 μs ±  1.443 μs   GC (mean ± σ):  0.00% ± 0.00%
  ▆█    ▂▃▂                                                  ▁
  ██▄▅████▅▄▆▅▆▄▄▄▃▃▃▄▅▆▄▅▆▆▆▅▆▆▆▅▆▅▄▅▅▅▅▅▅▅▃▅▄▃▃▄▅▄▅▄▄▄▅▅ █
  22.6 μs      Histogram: log(frequency) by time        30 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.

7.2 Broadcast (dot operation) fuses loops and avoids memory allocation

Broadcasting in Julia achieves vectorized code without creating intermediate arrays.

Suppose we want to calculate the elementwise maximum of the absolute values of two large arrays. In R or Matlab, the command

max(abs(X), abs(Y))

will create two intermediate arrays and then one result array.

using RCall

Random.seed!(123)
X, Y = rand(1000, 1000), rand(1000, 1000)

R"""
library(dplyr)
library(bench)
bench::mark(max(abs($X), abs($Y))) %>%
  print(width = Inf)
""";
# A tibble: 1 × 13
  expression                           min   median `itr/sec` mem_alloc `gc/sec`
  <bch:expr>                      <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl>
1 max(abs(`#JL`$X), abs(`#JL`$Y))   8.11ms   8.41ms      118.    15.3MB     118.
  n_itr  n_gc total_time result    memory             time           
  <int> <dbl>   <bch:tm> <list>    <list>             <list>         
1    28    28      237ms <dbl [1]> <Rprofmem [2 × 3]> <bench_tm [56]>
  gc               
  <list>           
1 <tibble [56 × 3]>

In Julia, dot operations are fused so no intermediate arrays are created.

# no intermediate arrays created, only result array created
@benchmark max.(abs.($X), abs.($Y))
BenchmarkTools.Trial: 4774 samples with 1 evaluation.
 Range (minmax):  599.666 μs  5.585 ms   GC (min … max):  0.00% … 78.72%
 Time  (median):     930.083 μs                GC (median):     0.00%
 Time  (mean ± σ):     1.046 ms ± 484.822 μs   GC (mean ± σ):  11.73% ± 16.36%
  ▄     ▄█                                                   ▁
  █▅▁▃▁▁███▇▅▁▁▁▃▁▃▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▇█▇▅▆█████▇▅▅▆▅▄▅▄▄▅ █
  600 μs        Histogram: log(frequency) by time        3.2 ms <
 Memory estimate: 7.63 MiB, allocs estimate: 2.

Pre-allocating the result array gets rid of memory allocation altogether.

# no memory allocation at all!
Z = zeros(size(X)) # zero matrix of same size as X
@benchmark $Z .= max.(abs.($X), abs.($Y)) # .= (vs =) is important!
BenchmarkTools.Trial: 8178 samples with 1 evaluation.
 Range (minmax):  599.667 μs709.500 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     606.917 μs                GC (median):    0.00%
 Time  (mean ± σ):   610.346 μs ±  10.294 μs   GC (mean ± σ):  0.00% ± 0.00%
      █▅                                                         
  ▁▁▁▂██▃▃▇▃▂▂▃▃▂▂▂▃▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂
  600 μs           Histogram: frequency by time          650 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.

Exercise: Annotate the broadcasting with @turbo and @tturbo (multi-threading) and benchmark again.
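
A sketch of one possible attempt (absmax! is a made-up name; an explicit loop is used so the same code works with either @turbo or @tturbo):

using LoopVectorization

# elementwise Z[i] = max(|X[i]|, |Y[i]|); @tturbo = SIMD + multi-threading
function absmax!(Z, X, Y)
    @tturbo for i in eachindex(Z)
        Z[i] = max(abs(X[i]), abs(Y[i]))
    end
    Z
end

# @benchmark absmax!($Z, $X, $Y)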

7.3 Views

A view avoids creating an extra copy of the matrix data.

Random.seed!(123)
A = randn(1000, 1000)

# sum entries in a sub-matrix
@benchmark sum($A[1:2:500, 1:2:500])
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  45.416 μs  3.829 ms   GC (min … max):  0.00% … 97.56%
 Time  (median):     70.125 μs                GC (median):     0.00%
 Time  (mean ± σ):   82.526 μs ± 190.593 μs   GC (mean ± σ):  15.00% ±  6.41%
                            ▃█▁  ▁▅            
  ▆▃▂▂▂▂▃▃▂▂▂▂▂▂▂▂▂▂▁▂▂▁▂▁▂▄███▇██████▇▇▆▆▅▅▅▄▄▄▄▃▃▃▃▂▂▂▂▂▂▂ ▃
  45.4 μs         Histogram: frequency by time           90 μs <
 Memory estimate: 488.36 KiB, allocs estimate: 2.
# view avoids creating a separate sub-matrix
# unfortunately it's much slower possibly because of cache misses
@benchmark sum(@view $A[1:2:500, 1:2:500])
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  239.916 μs301.958 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     240.083 μs                GC (median):    0.00%
 Time  (mean ± σ):   242.768 μs ±   5.627 μs   GC (mean ± σ):  0.00% ± 0.00%
   ▅▂▃▁ ▁   ▁▁▁ ▁                                         ▁
  █▇▇▇▄██████▅▅█████████▆▇█▇██▇▆▆▆▇▆▆▆▆▆▆▅▆▅▅▆▅▅▆▄▅▅▄▅▄▅▅▃▄▄▄ █
  240 μs        Histogram: log(frequency) by time        267 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.

The @views macro converts indexing expressions into views and can be useful in some operations.

@benchmark @views sum($A[1:2:500, 1:2:500])
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (minmax):  239.875 μs337.917 μs   GC (min … max): 0.00% … 0.00%
 Time  (median):     240.083 μs                GC (median):    0.00%
 Time  (mean ± σ):   242.261 μs ±   4.676 μs   GC (mean ± σ):  0.00% ± 0.00%
  █        ▅▂▂▁        ▁▁                                      ▁
  ██▅▇█▆▃▄███████▅▄▄▇████████▇▅▆▆▇█▇▇▇▇▇▆▆▆▅▅▅▆▆▇▄▆▆▅▅▄▅▄▃▅▅▅ █
  240 μs        Histogram: log(frequency) by time        260 μs <
 Memory estimate: 0 bytes, allocs estimate: 0.

Exercise: Here we saw that, although we avoid memory allocation, the actual run times are longer. Why?