Use this function to set the penalty function to the sparse fused lasso $$\lambda \sum | x_{i} - x_{i-1} | + \lambda_2 \sum |x_{i} | ,$$ where \(\lambda\) is set by the lambda argument below, and \(\lambda_2\) is specified by the lambda2 argument.
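As a quick check of the formula, the penalty can be evaluated directly. A minimal base-R sketch follows; spfusedlasso_penalty is an illustrative helper written here, not a function exported by the package:

```r
# Evaluate lambda * sum |x_i - x_{i-1}| + lambda2 * sum |x_i| for a vector x.
spfusedlasso_penalty <- function(x, lambda, lambda2) {
    lambda * sum(abs(diff(x))) + lambda2 * sum(abs(x))
}

spfusedlasso_penalty(c(1, 1, 0, -2), lambda = 1, lambda2 = 0.5)
# 1 * (0 + 1 + 2) + 0.5 * (1 + 1 + 0 + 2) = 5
```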

moma_spfusedlasso(..., lambda2, ..., lambda = 0, select_scheme = "g")

Arguments

...

Forces users to specify all arguments by name.

lambda2

A scalar. The level of penalty on the absolute values of the coefficients. Note that it remains fixed while searching over lambda, rather than changing with lambda so that the lambda / lambda2 ratio stays fixed (which is the default behavior in the glmnet package).

lambda

A vector containing candidate penalty values for \(\lambda\).

select_scheme

A character string, either "b" (nested BIC search) or "g" (grid search).

MoMA provides a flexible framework for regularized multivariate analysis with several tuning parameters for different forms of regularization. To assist the user in selecting these parameters (alpha_u, alpha_v, lambda_u, lambda_v), we provide two selection modes: grid search ("g") and nested BIC search ("b"). Grid search means we solve the problem for all combinations of parameter values provided by the user.

To explain nested BIC search, we need to look at how the algorithm runs. Finding an (approximate) solution to a penalized SVD (Singular Value Decomposition) problem amounts to solving two penalized regression problems iteratively. Let's call them problem u and problem v; they give successively improved estimates of the left singular vector, u, and the right singular vector, v, respectively. For each regression problem, we can select the optimal parameters based on BIC.

The nested BIC search is essentially two 2-D searches. We start from the SVD solution, and then find the optimal parameters for problem u given the current estimate of v. Using the result from the previous step, we update the current estimate of u, and then do the same for problem v, that is, find the optimal parameters for problem v given the current estimate of u. Repeat the above until convergence or until the maximum number of iterations is reached.
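The alternation above can be sketched as follows. This is illustrative pseudocode only; solve_problem_u_min_bic and solve_problem_v_min_bic are hypothetical placeholders for the BIC-minimizing penalized regression solves, not functions exported by the package:

```r
nested_bic_search <- function(X, grid_u, grid_v, max_iter = 100, tol = 1e-6) {
    # Start from the rank-one SVD solution.
    s <- svd(X, nu = 1, nv = 1)
    u <- s$u
    v <- s$v
    for (iter in seq_len(max_iter)) {
        u_old <- u
        # Problem u: over all (alpha_u, lambda_u) pairs in grid_u, solve the
        # penalized regression given the current v and keep the BIC minimizer.
        u <- solve_problem_u_min_bic(X, v, grid_u)
        # Problem v: the same search over grid_v, given the updated u.
        v <- solve_problem_v_min_bic(X, u, grid_v)
        if (sum((u - u_old)^2) < tol) break
    }
    list(u = u, v = v)
}
```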

Users are welcome to refer to Section 3.1, Selection of Regularization Parameters, in the paper cited below.

Value

A moma_sparsity_type object, which is a list containing the value of lambda2.
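For context, a typical call might look like the sketch below, where the returned penalty object is passed to a MoMA model-fitting function. The entry point moma_sfpca and its v_sparse argument are assumptions based on the package's general interface; consult the package index for the exact names:

```r
library(MoMA)

X <- matrix(rnorm(20 * 10), nrow = 20)

# Sparse fused lasso penalty on the right singular vector:
# lambda2 stays fixed while the "b" scheme searches over lambda by nested BIC.
res <- moma_sfpca(
    X,
    v_sparse = moma_spfusedlasso(
        lambda2 = 0.5,
        lambda = c(0.1, 1, 10),
        select_scheme = "b"
    )
)
```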

References

Tibshirani, Robert, et al. "Sparsity and Smoothness via the Fused Lasso." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67.1 (2005): 91-108. doi: 10.1111/j.1467-9868.2005.00490.x .