Biostat/Biomath M257 Homework 2

Due Apr 28 @ 11:59PM

Author

Student Name and UID

Published

April 18, 2023

System information (for reproducibility):

versioninfo()
Julia Version 1.8.5
Commit 17cfb8e65ea (2023-01-08 06:45 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin21.5.0)
  CPU: 12 × Apple M2 Max
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, apple-m1)
  Threads: 8 on 8 virtual cores
Environment:
  JULIA_NUM_THREADS = 8
  JULIA_EDITOR = code

Load packages:

using Pkg

Pkg.activate(pwd())
Pkg.instantiate()
Pkg.status()
  Activating project at `~/Documents/github.com/ucla-biostat-257/2023spring/hw/hw2`
Status `~/Documents/github.com/ucla-biostat-257/2023spring/hw/hw2/Project.toml`
  [6e4b80f9] BenchmarkTools v1.3.2
  [7073ff75] IJulia v1.24.0
  [916415d5] Images v0.25.2
  [bdcacae8] LoopVectorization v0.12.157
  [8bb1440f] DelimitedFiles
  [37e2e46d] LinearAlgebra
  [9abbd945] Profile
# load libraries
using BenchmarkTools, DelimitedFiles, Images, LinearAlgebra, LoopVectorization
using Profile, Random

1 Q1. Nonnegative Matrix Factorization

Nonnegative matrix factorization (NNMF) was introduced by Lee and Seung (1999) as an alternative to principal component analysis and vector quantization, with applications in data compression, clustering, and deconvolution. In this homework we consider algorithms for fitting NNMF and (optionally) high-performance computing using graphics processing units (GPUs).

In mathematical terms, one approximates a data matrix \(\mathbf{X} \in \mathbb{R}^{m \times n}\) with nonnegative entries \(x_{ij}\) by a product of two low-rank matrices \(\mathbf{V} \in \mathbb{R}^{m \times r}\) and \(\mathbf{W} \in \mathbb{R}^{r \times n}\) with nonnegative entries \(v_{ik}\) and \(w_{kj}\). Consider minimizing the squared Frobenius norm \[ L(\mathbf{V}, \mathbf{W}) = \|\mathbf{X} - \mathbf{V} \mathbf{W}\|_{\text{F}}^2 = \sum_i \sum_j \left(x_{ij} - \sum_k v_{ik} w_{kj} \right)^2, \quad v_{ik} \ge 0, w_{kj} \ge 0, \] which should lead to a good factorization. Lee and Seung suggest an iterative algorithm with multiplicative updates \[ v_{ik}^{(t+1)} = v_{ik}^{(t)} \frac{\sum_j x_{ij} w_{kj}^{(t)}}{\sum_j b_{ij}^{(t)} w_{kj}^{(t)}}, \quad \text{where } b_{ij}^{(t)} = \sum_k v_{ik}^{(t)} w_{kj}^{(t)}, \] \[ w_{kj}^{(t+1)} = w_{kj}^{(t)} \frac{\sum_i x_{ij} v_{ik}^{(t+1)}}{\sum_i b_{ij}^{(t+1/2)} v_{ik}^{(t+1)}}, \quad \text{where } b_{ij}^{(t+1/2)} = \sum_k v_{ik}^{(t+1)} w_{kj}^{(t)}, \] that drive the objective \(L^{(t)} = L(\mathbf{V}^{(t)}, \mathbf{W}^{(t)})\) downhill. The superscript \(t\) indicates the iteration number. In the following questions, efficiency (in both speed and memory) will be the most important grading criterion for this problem.
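In matrix notation, writing \(\odot\) and \(\oslash\) for elementwise multiplication and division, these updates collapse to \[ \mathbf{V}^{(t+1)} = \mathbf{V}^{(t)} \odot (\mathbf{X} \mathbf{W}^{(t)\top}) \oslash (\mathbf{V}^{(t)} \mathbf{W}^{(t)} \mathbf{W}^{(t)\top}), \quad \mathbf{W}^{(t+1)} = \mathbf{W}^{(t)} \odot (\mathbf{V}^{(t+1)\top} \mathbf{X}) \oslash (\mathbf{V}^{(t+1)\top} \mathbf{V}^{(t+1)} \mathbf{W}^{(t)}), \] so each iteration amounts to a handful of matrix-matrix products plus elementwise operations, with no explicit triple loops needed.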

1.1 Q1.1 Develop code

Implement the algorithm with arguments: \(\mathbf{X}\) (data, each row is a vectorized image), rank \(r\), convergence tolerance, and optional starting point.

function nnmf(
    # positional arguments
    X       :: AbstractMatrix{T}, 
    r       :: Integer;
    # kw arguments
    maxiter :: Integer = 1000, 
    tolfun  :: Number = 1e-4,
    V       :: AbstractMatrix{T} = Random.rand!(similar(X, size(X, 1), r)),
    W       :: AbstractMatrix{T} = Random.rand!(similar(X, r, size(X, 2))),
    ) where T <: AbstractFloat
    # TODO: implementation
    # output: factors V and W, final objective value, and iteration count
    V, W, obj, niter
end
nnmf (generic function with 1 method)
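As a reference point only, here is a minimal sketch of how the multiplicative updates could be implemented (the name nnmf_ls and the buffer layout are illustrative choices, not the graded solution). It preallocates all work arrays up front so that the iterations themselves allocate nothing; it also assumes strictly positive starting values, as with the provided V0/W0:

using LinearAlgebra, Random

function nnmf_ls(
    X :: AbstractMatrix{T},
    r :: Integer;
    maxiter :: Integer = 1000,
    tolfun  :: Number = 1e-4,
    V :: AbstractMatrix{T} = Random.rand!(similar(X, size(X, 1), r)),
    W :: AbstractMatrix{T} = Random.rand!(similar(X, r, size(X, 2))),
    ) where T <: AbstractFloat
    m, n = size(X)
    # preallocate work arrays; the loop below then allocates nothing
    XWt  = similar(X, m, r)  # X * W'
    WWt  = similar(X, r, r)  # W * W'
    VWWt = similar(V)        # V * (W * W')
    VtX  = similar(X, r, n)  # V' * X
    VtV  = similar(X, r, r)  # V' * V
    VtVW = similar(W)        # (V' * V) * W
    B    = similar(X)        # V * W, reused for the residual
    obj, objold = zero(T), convert(T, Inf)
    niter = maxiter
    for t in 1:maxiter
        niter = t
        # V update: V .= V .* (X * W') ./ (V * W * W')
        mul!(XWt, X, transpose(W))
        mul!(WWt, W, transpose(W))
        mul!(VWWt, V, WWt)
        V .= V .* XWt ./ VWWt
        # W update: W .= W .* (V' * X) ./ (V' * V * W), using the updated V
        mul!(VtX, transpose(V), X)
        mul!(VtV, transpose(V), V)
        mul!(VtVW, VtV, W)
        W .= W .* VtX ./ VtVW
        # objective L = ‖X - V W‖²_F via the reused buffer B
        mul!(B, V, W)
        B .= X .- B
        obj = sum(abs2, B)
        # relative-change stopping rule from Q1.3
        abs(obj - objold) ≤ tolfun * (abs(objold) + 1) && break
        objold = obj
    end
    V, W, obj, niter
end

The key design choice is expressing each update as a few mul! calls into preallocated buffers plus one fused broadcast; this keeps per-iteration allocations at zero, which is what the memory target in Q1.3 rewards.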

1.2 Q1.2 Data

Database 1 from the MIT Center for Biological and Computational Learning (CBCL) reduces to a matrix \(\mathbf{X}\) whose \(m = 2,429\) rows are gray-scale face images with \(n = 19 \times 19 = 361\) pixels each. Each image (row) is scaled to have mean 0.25 and standard deviation 0.25.

Read in the nnmf-2429-by-361-face.txt file, e.g., using the readdlm function, and display a couple of sample images, e.g., using the Images.jl package.

X = readdlm("nnmf-2429-by-361-face.txt")
# display two sample faces; in a notebook only the last expression is rendered
colorview(Gray, reshape(X[1, :], 19, 19))
colorview(Gray, reshape(X[10, :], 19, 19))

1.3 Q1.3 Correctness and efficiency

Report the run times, using @btime, of your function for fitting NNMF on the MIT CBCL face data set at ranks \(r=10, 20, 30, 40, 50\). For ease of comparison (and grading), please start your algorithm with the provided \(\mathbf{V}^{(0)}\) (first \(r\) columns of V0.txt) and \(\mathbf{W}^{(0)}\) (first \(r\) rows of W0.txt) and stopping criterion \[ \frac{|L^{(t+1)} - L^{(t)}|}{|L^{(t)}| + 1} \le 10^{-4}. \]
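Inside the iteration loop this criterion is a two-line check; obj and objold below are hypothetical names for \(L^{(t+1)}\) and \(L^{(t)}\):

# stop when the relative change of the objective falls below tolfun
if abs(obj - objold) ≤ tolfun * (abs(objold) + 1)
    break
end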

Hint: When I run the following code using my own implementation of nnmf

# provided start point
V0full = readdlm("V0.txt", ' ', Float64)
W0full = readdlm("W0.txt", ' ', Float64);

# benchmarking
for r in [10, 20, 30, 40, 50]
    println("r=$r")
    V0 = V0full[:, 1:r]
    W0 = W0full[1:r, :]
    _, _, obj, niter = nnmf(X, r, V = V0, W = W0)
    @btime nnmf($X, $r, V = $V0, W = $W0) setup=(
        copyto!(V0, V0full[:, 1:r]), 
        copyto!(W0, W0full[1:r, :])
        )
    println("obj=$obj, niter=$niter")
end

the output is

r=10
  162.662 ms (9 allocations: 437.19 KiB)
obj=11730.866905748058, niter=239
r=20
  234.293 ms (9 allocations: 875.44 KiB)
obj=8497.605595863002, niter=394
r=30
  259.524 ms (9 allocations: 1.28 MiB)
obj=6621.94596847528, niter=482
r=40
  289.918 ms (9 allocations: 1.72 MiB)
obj=5256.866299829562, niter=581
r=50
  397.511 ms (10 allocations: 2.15 MiB)
obj=4430.362097310877, niter=698

Due to machine differences, your run times may differ from mine, but they certainly should not be an order of magnitude longer. Your memory allocation should be less than or equal to mine.

1.4 Q1.4 Non-uniqueness

Choose an \(r \in \{10, 20, 30, 40, 50\}\) and start your algorithm from a different \(\mathbf{V}^{(0)}\) and \(\mathbf{W}^{(0)}\). Do you obtain the same objective value and \((\mathbf{V}, \mathbf{W})\)? Explain what you find.
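One way to set up this experiment (the seed and the choice \(r = 30\) below are arbitrary):

Random.seed!(257)             # arbitrary seed, for reproducibility
r  = 30
V1 = rand(size(X, 1), r)      # a fresh random start, different from V0.txt
W1 = rand(r, size(X, 2))
_, _, obj1, niter1 = nnmf(X, r, V = V1, W = W1)
# compare obj1 and (V1, W1) against the Q1.3 run at the same rank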

1.5 Q1.5 Fixed point

For the same \(r\), start your algorithm from \(v_{ik}^{(0)} = w_{kj}^{(0)} = 1\) for all \(i,j,k\). Do you obtain the same objective value and \((\mathbf{V}, \mathbf{W})\)? Explain what you find.
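The all-ones start can be set up as follows (again with the arbitrary choice \(r = 30\)); it may help to inspect whether the columns of \(\mathbf{V}\) stay identical to each other across iterations:

r  = 30
V1 = ones(size(X, 1), r)      # v_ik = 1 for all i, k
W1 = ones(r, size(X, 2))      # w_kj = 1 for all k, j
_, _, obj1, niter1 = nnmf(X, r, V = V1, W = W1)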

1.6 Q1.6 Interpret NNMF result

Plot the basis images (rows of \(\mathbf{W}\)) at rank \(r=50\). What do you find?
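One possible way to render all 50 basis images in a single figure, assuming V, W hold the rank-50 fit from Q1.3 (mosaicview is re-exported by Images.jl):

# rescale each row of W to [0, 1] before rendering, since its entries
# are nonnegative but not bounded above by 1
basis = [colorview(Gray, reshape(W[k, :] ./ maximum(W[k, :]), 19, 19)) for k in 1:50]
mosaicview(basis; nrow = 5, npad = 1, fillvalue = Gray(1.0))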

1.7 Q1.7 GPU (optional)

Investigate the GPU capabilities of Julia. Report the speed gain of your GPU code over CPU code at ranks \(r=10, 20, 30, 40, 50\). Make sure to use the same starting point as in Q1.3.
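A minimal sketch of the port, assuming an NVIDIA GPU and the CUDA.jl package (not part of the project environment above); on the Apple-silicon machine in versioninfo above, Metal.jl with Float32 arrays would play the analogous role. If nnmf is written generically against AbstractMatrix using mul! and fused broadcasts (no scalar indexing), the same method should run on device arrays unchanged:

using CUDA

Xd = CuArray(X)                       # move the data to device memory once
for r in [10, 20, 30, 40, 50]
    V0d = CuArray(V0full[:, 1:r])     # same starting point as Q1.3
    W0d = CuArray(W0full[1:r, :])
    # CUDA.@time synchronizes the device, so the report reflects real GPU work
    CUDA.@time nnmf(Xd, r, V = V0d, W = W0d)
end

Comparing these timings against the @btime results from Q1.3 gives the requested speedup figures.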