Use this function to set the penalty function to l1 trend filtering. An
important special case is when \(k=1\). In this case the penalty
term becomes
$$\lambda \sum | x_{i-1} - 2x_{i} + x_{i+1} |,$$
where \(\lambda\) is set by the lambda
argument below.
moma_l1tf(..., l1tf_k = 1, lambda = 0, select_scheme = "g")
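For instance, a minimal usage sketch (the penalty values and selection scheme below are arbitrary choices for illustration, not defaults):

```r
library(MoMA)

# Piecewise-linear trend filtering (k = 1), selecting among three
# candidate penalty values by grid search.
sp <- moma_l1tf(l1tf_k = 1, lambda = c(0.1, 1, 10), select_scheme = "g")

# `sp` is a moma_sparsity_type object (see Value below).
```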
... | Forces users to specify all arguments by name. |
---|---|
l1tf_k | Use the (\(k+1\))-difference matrix in trend filtering. Note \(k = 0\) implies piecewise constant, \(k = 1\) piecewise linear, \(k = 2\) piecewise quadratic, and so on. |
lambda | A vector containing penalty values |
select_scheme | A char being either "b" (nested BIC search) or "g" (grid search). MoMA provides a flexible framework for regularized multivariate analysis with several tuning parameters for different forms of regularization. To assist the user in selecting these parameters, two selection schemes are provided: grid search ("g") and nested BIC search ("b"). To explain nested BIC search, we need to look at how the algorithm runs. Finding an (approximate) solution to a penalized SVD (Singular Value Decomposition) problem amounts to solving two penalized regression problems iteratively. Call them problem u and problem v; they give improving estimates of the left singular vector, u, and the right singular vector, v, respectively. For each regression problem, the optimal parameters can be selected based on BIC. The nested BIC search is essentially two 2-D searches: starting from the SVD solution, find the optimal parameters for problem u given the current estimate of v, update the estimate of u using that result, and then do the same for problem v, that is, find its optimal parameters given the current estimate of u. Repeat until convergence or until the maximum number of iterations is reached; a conceptual sketch follows this table. For details, see Section 3.1, Selection of Regularization Parameters, in the paper cited below. |
A moma_sparsity_type
object, which is an empty list.
The general formula of the penalty term for \(k \in \mathbb{N}\) can be found in the paper cited in References. For other values of \(k\), please refer to the following table:
 | \(k = 0\) | \(k = 1\) | \(k = 2\) |
---|---|---|---|
Type of sparsity | piecewise constant | piecewise linear | piecewise quadratic |
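As a quick illustration of how these cases arise, the base R sketch below builds the (\(k+1\))-difference matrices for \(k = 0, 1, 2\); the data vector and penalty value are made up for the example.

```r
# Build (k + 1)-th order difference matrices with base R; the row patterns show
# why k = 0 / 1 / 2 correspond to piecewise constant / linear / quadratic fits.
n <- 6
D1 <- diff(diag(n), differences = 1)   # k = 0: rows like (-1,  1)
D2 <- diff(diag(n), differences = 2)   # k = 1: rows like ( 1, -2,  1)
D3 <- diff(diag(n), differences = 3)   # k = 2: rows like (-1,  3, -3,  1)

x <- c(1, 2, 3, 5, 8, 13)              # made-up data vector
lambda <- 0.5                          # made-up penalty value
penalty_k1 <- lambda * sum(abs(D2 %*% x))  # the k = 1 formula shown above
```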
Tibshirani, Ryan J. "Adaptive Piecewise Polynomial Estimation via Trend Filtering." The Annals of Statistics 42.1 (2014): 285-323. doi:10.1214/13-AOS1189.
Ramdas, Aaditya, and Ryan J. Tibshirani. "Fast and Flexible ADMM Algorithms for Trend Filtering." Journal of Computational and Graphical Statistics 25.3 (2016): 839-858. doi:10.1080/10618600.2015.1054033.