Biostat 257 Homework 3

Due May 13 @ 11:59PM (extended from May 6)

In [ ]:
versioninfo()

Consider a linear mixed effects model $$ \mathbf{Y}_i = \mathbf{X}_i \boldsymbol{\beta} + \mathbf{Z}_i \boldsymbol{\gamma} + \boldsymbol{\epsilon}_i, \quad i=1,\ldots,n, $$ where

  • $\mathbf{Y}_i \in \mathbb{R}^{n_i}$ is the response vector of the $i$-th individual,
  • $\mathbf{X}_i \in \mathbb{R}^{n_i \times p}$ is the fixed effect predictor matrix of the $i$-th individual,
  • $\mathbf{Z}_i \in \mathbb{R}^{n_i \times q}$ is the random effect predictor matrix of the $i$-th individual,
  • $\boldsymbol{\epsilon}_i \in \mathbb{R}^{n_i}$ are multivariate normal $N(\mathbf{0}_{n_i}, \sigma^2 \mathbf{I}_{n_i})$,
  • $\boldsymbol{\beta} \in \mathbb{R}^p$ are fixed effects, and
  • $\boldsymbol{\gamma} \in \mathbb{R}^q$ are random effects assumed to be $N(\mathbf{0}_q, \boldsymbol{\Sigma}_{q \times q})$, independent of $\boldsymbol{\epsilon}_i$.

Q1 Formula (10 pts)

Write down the log-likelihood of the $i$-th datum $(\mathbf{y}_i, \mathbf{X}_i, \mathbf{Z}_i)$ given parameters $(\boldsymbol{\beta}, \boldsymbol{\Sigma}, \sigma^2)$.

Hint: If you are not a statistician, feel free to ask for help in class or office hours. The point of this exercise is computing, not statistics.
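As a starting point (this merely restates the model; deriving the log-density itself is the exercise), integrating out $\boldsymbol{\gamma}$ gives the marginal distribution $$ \mathbf{Y}_i \sim N\left(\mathbf{X}_i \boldsymbol{\beta}, \, \boldsymbol{\Omega}_i\right), \quad \boldsymbol{\Omega}_i = \mathbf{Z}_i \boldsymbol{\Sigma} \mathbf{Z}_i^T + \sigma^2 \mathbf{I}_{n_i}, $$ which is exactly the covariance matrix formed in the Q3 check below.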

Q2 Start-up code

Use the following template to define a type LmmObs that holds an LMM datum $(\mathbf{y}_i, \mathbf{X}_i, \mathbf{Z}_i)$.

In [ ]:
# define a type that holds LMM datum
struct LmmObs{T <: AbstractFloat}
    # data
    y :: Vector{T}
    X :: Matrix{T}
    Z :: Matrix{T}
    # working arrays
    # whatever intermediate vectors/arrays you may want to pre-allocate
    storage_p  :: Vector{T}
    storage_q  :: Vector{T}
    xtx        :: Matrix{T}
    ztx        :: Matrix{T}
    ztz        :: Matrix{T}
    storage_qq :: Matrix{T}
end

# constructor
function LmmObs(
        y::Vector{T}, 
        X::Matrix{T}, 
        Z::Matrix{T}
        ) where T <: AbstractFloat
    storage_p  = Vector{T}(undef, size(X, 2))
    storage_q  = Vector{T}(undef, size(Z, 2))
    xtx        = transpose(X) * X
    ztx        = transpose(Z) * X
    ztz        = transpose(Z) * Z
    storage_qq = similar(ztz)
    LmmObs(y, X, Z, storage_p, storage_q, xtx, ztx, ztz, storage_qq)
end
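
A quick smoke test of the constructor (toy data with arbitrary dimensions, purely illustrative):

In [ ]:
# hypothetical toy data, just to exercise the constructor
y_toy = randn(10)
X_toy = randn(10, 2)
Z_toy = randn(10, 3)
obs_toy = LmmObs(y_toy, X_toy, Z_toy)
size(obs_toy.ztz)  # (3, 3): Z'Z is q × q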

Write a function, with interface

logl!(obs, β, L, σ²)

that evaluates the log-likelihood of the $i$-th datum. Here L is the lower-triangular Cholesky factor from the Cholesky decomposition $\boldsymbol{\Sigma} = \mathbf{L} \mathbf{L}^T$. Make your code efficient in the $n_i \gg q$ case. Think of the intensive longitudinal measurement setting, where each individual contributes many observations.

In [ ]:
function logl!(
        obs :: LmmObs{T}, 
        β   :: Vector{T}, 
        L   :: Matrix{T}, 
        σ²  :: T) where T <: AbstractFloat
    n, p, q = size(obs.X, 1), size(obs.X, 2), size(obs.Z, 2)    
    # TODO: compute and return the log-likelihood
    fill!(obs.storage_qq, 0) # placeholder; LmmObs is immutable, so mutate working arrays in place
    sleep(1e-3) # wait 1 ms as if your code takes 1ms
    return zero(T)
end

Hint: This function shouldn't be very long. Mine, obeying the 80-character rule, is 25 lines. If you find yourself writing very long code, you're on the wrong track. Think about the algorithm first, then use BLAS functions to reduce memory allocations.
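
For concreteness, here is a minimal sketch along those lines. It is not the reference solution: it assumes the standard Woodbury identity and matrix determinant lemma applied to $\boldsymbol{\Omega} = \mathbf{Z} \boldsymbol{\Sigma} \mathbf{Z}^T + \sigma^2 \mathbf{I}_n$, so that only $q \times q$ factorizations are needed, and its residual computation still allocates two length-$n$ temporaries (precomputing $\mathbf{X}^T \mathbf{y}$, $\mathbf{Z}^T \mathbf{y}$, and $\mathbf{y}^T \mathbf{y}$ in the constructor would remove them).

In [ ]:
using LinearAlgebra

# a sketch, not the reference solution: Woodbury identity plus
# matrix determinant lemma, so only q × q factorizations are needed
function logl!(
        obs :: LmmObs{T}, 
        β   :: Vector{T}, 
        L   :: Matrix{T}, 
        σ²  :: T) where T <: AbstractFloat
    n, q = size(obs.Z, 1), size(obs.Z, 2)
    # M = I_q + L'(Z'Z)L / σ², built in the q × q working array
    copyto!(obs.storage_qq, obs.ztz)
    BLAS.trmm!('L', 'L', 'T', 'N', one(T), L, obs.storage_qq)   # L'(Z'Z)
    BLAS.trmm!('R', 'L', 'N', 'N', inv(σ²), L, obs.storage_qq)  # L'(Z'Z)L / σ²
    @inbounds for j in 1:q
        obs.storage_qq[j, j] += one(T)
    end
    Mchol = cholesky!(Symmetric(obs.storage_qq, :L))  # M = R R'
    # residual r = y - Xβ; these two lines allocate length-n temporaries,
    # which precomputed X'y, Z'y, y'y would avoid
    r   = obs.y - obs.X * β
    rtr = dot(r, r)
    mul!(obs.storage_q, transpose(obs.Z), r)        # Z'r
    BLAS.trmv!('L', 'T', 'N', L, obs.storage_q)     # L'Z'r
    ldiv!(Mchol.L, obs.storage_q)                   # R⁻¹ L'Z'r
    qf  = dot(obs.storage_q, obs.storage_q)         # (L'Z'r)' M⁻¹ (L'Z'r)
    # Woodbury:           r'Ω⁻¹r   = (r'r - qf / σ²) / σ²
    # determinant lemma:  logdet Ω = n log σ² + logdet M
    -(n * log(2π * σ²) + logdet(Mchol)) / 2 - (rtr - qf / σ²) / (2σ²)
end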

Q3 Correctness (15 pts)

Compare your result (both accuracy and timing) to the Distributions.jl package using the following data.

In [ ]:
using BenchmarkTools, Distributions, LinearAlgebra, Random

Random.seed!(257)
# dimension
n, p, q = 2000, 5, 3
# predictors
X  = [ones(n) randn(n, p - 1)]
Z  = [ones(n) randn(n, q - 1)]
# parameter values
β  = [2.0; -1.0; rand(p - 2)]
σ² = 1.5
Σ  = fill(0.1, q, q) + 0.9I
# generate y
y  = X * β + Z * rand(MvNormal(Σ)) + sqrt(σ²) * randn(n)

# form an LmmObs object
obs = LmmObs(y, X, Z)

This is the standard way to evaluate the log-density of a multivariate normal using the Distributions.jl package: it forms the $n \times n$ covariance matrix explicitly and factorizes it, an $O(n^3)$ operation. Let's evaluate the log-likelihood of this datum.

In [ ]:
μ  = X * β
Ω  = Z * Σ * transpose(Z) +  σ² * I
mvn = MvNormal(μ, Symmetric(Ω)) # MVN(μ, Ω)
logpdf(mvn, y)

Check that your answer matches that from Distributions.jl.

In [ ]:
L = Matrix(cholesky(Σ).L)
logl!(obs, β, L, σ²)

You will lose all 15 + 30 + 30 = 75 points if the following statement throws an AssertionError.

In [ ]:
@assert logl!(obs, β, Matrix(cholesky(Σ).L), σ²) ≈ logpdf(mvn, y)

Q4 Efficiency (30 pts)

Benchmark your code and compare it to the Distributions.jl function logpdf.

In [ ]:
# benchmark the `logpdf` function in Distributions.jl
bm1 = @benchmark logpdf($mvn, $y)
In [ ]:
# benchmark your implementation
L = Matrix(cholesky(Σ).L)
bm2 = @benchmark logl!($obs, $β, $L, $σ²)

The number of points you will get is $$ \frac{x}{1000} \times 30, $$ where $x$ is the speedup of your program over the standard method.

In [ ]:
# this is the points you'll get
clamp(median(bm1).time / median(bm2).time / 1000 * 30, 0, 30)

Hint: I use 1000 as the denominator because I expect your code to be at least $1000 \times$ faster than the standard method.
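
For example (numbers purely illustrative): if logpdf takes 30 ms and your logl! takes 20 μs, the speedup is $x = 1500$, so $1500/1000 \times 30 = 45$ is clamped to the full 30 points.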

Q5 Memory (30 pts)

You want to avoid memory allocation in the "hot" function logl!. You will lose 1 point for each 1 KiB = 1024 bytes of memory allocated. In other words, the score you get for this question is

In [ ]:
clamp(30 - median(bm2).memory / 1024, 0, 30)

Hint: I am able to reduce the memory allocation to 0 bytes.
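
As a quick sanity check outside BenchmarkTools (purely illustrative, not the grading criterion), you can inspect the bytes allocated by a single call; run the function once first so compilation is not counted. Precomputing quantities such as $\mathbf{X}^T \mathbf{y}$, $\mathbf{Z}^T \mathbf{y}$, and $\mathbf{y}^T \mathbf{y}$ in the constructor is one way to drive this to zero.

In [ ]:
logl!(obs, β, L, σ²)             # warm-up call to compile
@allocated logl!(obs, β, L, σ²)  # bytes allocated by one call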

Q6 Misc (15 pts)

Coding style, Git workflow, etc. For reproducibility, make sure we (the TA and myself) can run your Jupyter Notebook. That is how we grade Q4 and Q5. If we cannot run it, you will get zero points.