Use this function to set the penalty function to SCAD (smoothly clipped absolute deviation) $$ P(x; \lambda, \gamma) = \begin{cases} \lambda|x| & \text{if } |x| \leq \lambda \\ \dfrac{2\gamma\lambda|x| - x^{2} - \lambda^{2}}{2(\gamma-1)} & \text{if } \lambda < |x| < \gamma\lambda \\ \dfrac{\lambda^{2}(\gamma+1)}{2} & \text{if } |x| \geq \gamma\lambda \end{cases}, $$ where \(\lambda\) is set by the lambda argument and \(\gamma\) by the gamma argument below.
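As a quick check of the piecewise definition above, here is a minimal R sketch that evaluates the penalty element-wise; the helper scad_penalty is hypothetical and not part of the MoMA package:

# Hypothetical helper, not exported by MoMA: evaluates the SCAD penalty element-wise
scad_penalty <- function(x, lambda, gamma = 3.7) {
  ax <- abs(x)
  ifelse(ax <= lambda,
         lambda * ax,                                       # linear (lasso-like) region
         ifelse(ax < gamma * lambda,
                (2 * gamma * lambda * ax - ax^2 - lambda^2) / (2 * (gamma - 1)),  # quadratic transition
                lambda^2 * (gamma + 1) / 2))                # flat region: penalty stops growing
}

scad_penalty(c(0.5, 2, 10), lambda = 1)   # roughly 0.50, 1.81, 2.35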

moma_scad(..., gamma = 3.7, non_negative = FALSE, ..., lambda = 0,
  select_scheme = "g")
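For example, based on the signature above (all arguments after ... must be passed by name), a call supplying a vector of penalty values and requesting grid search could look like:

moma_scad(gamma = 3.7, lambda = c(0, 0.5, 1), select_scheme = "g")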

Arguments

...

Forces users to specify all arguments by name.

gamma

Non-convexity parameter. Must be larger than 2.

non_negative

A Boolean value. Set to TRUE to add a non-negativity constraint.

lambda

A vector containing penalty values.

select_scheme

A character, either "b" (nested BIC search) or "g" (grid search).

MoMA provides a flexible framework for regularized multivariate analysis with several tuning parameters for different forms of regularization. To assist users in selecting these parameters (alpha_u, alpha_v, lambda_u, lambda_v), we provide two selection modes: grid search ("g") and nested BIC search ("b"). In grid search, the problem is solved once for every combination of the parameter values provided by the user.
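For instance, if the user supplies three values of lambda_u and two values of lambda_v, grid search solves the problem for all six combinations. The grid being traversed can be pictured with base R's expand.grid (illustrative only; MoMA builds and sweeps this grid internally):

# Illustrative only: the six (lambda_u, lambda_v) combinations that "g" would visit
expand.grid(lambda_u = c(0, 0.5, 1), lambda_v = c(0, 1))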

To explain nested BIC search, we need to look at how the algorithm runs. Finding an (approximate) solution to a penalized SVD (Singular Value Decomposition) problem amounts to solving two penalized regression problems iteratively. Let's call them problem u and problem v; they give successively improved estimates of the left singular vector, u, and the right singular vector, v, respectively. For each regression problem, the optimal parameters can be selected based on BIC.

The nested BIC search is essentially two 2-D searches. We start from the SVD solution, then find the optimal parameters for problem u given the current estimate of v. Using the result of that step, we update the current estimate of u, and then do the same for problem v, that is, find the optimal parameters for problem v given the current estimate of u. These steps are repeated until convergence or until the maximum number of iterations is reached.
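Below is a self-contained toy sketch of this alternating idea in base R. It is not MoMA's implementation: it replaces SCAD with simple soft-thresholding, searches over a single penalty level per problem instead of the 2-D (alpha, lambda) grid, uses a plain BIC-style score, and omits the convergence check for brevity:

# Toy sketch only, NOT MoMA's algorithm: rank-1 sparse SVD by alternating
# penalized updates, picking each penalty level by a BIC-style score.
set.seed(1)
X <- outer(c(rep(2, 5), rep(0, 15)), c(rep(1, 4), rep(0, 6))) + matrix(rnorm(200), 20, 10)

soft <- function(z, l) sign(z) * pmax(abs(z) - l, 0)                 # soft-thresholding
normalize <- function(z) if (sum(z^2) > 0) z / sqrt(sum(z^2)) else z
bic_score <- function(X, u, v) {                                     # fit + complexity
  n <- length(X)
  n * log(sum((X - tcrossprod(u, v))^2) / n) + log(n) * (sum(u != 0) + sum(v != 0))
}

lambdas <- c(0, 0.25, 0.5, 1)
init <- svd(X)                                # start from the SVD solution
u <- init$u[, 1]; v <- init$v[, 1]

for (iter in 1:20) {
  # Problem u: given v, try each penalty level and keep the one with the best BIC
  cand <- lapply(lambdas, function(l) normalize(soft(drop(X %*% v), l)))
  u <- cand[[which.min(sapply(cand, function(z) bic_score(X, z, v)))]]
  # Problem v: given u, same selection
  cand <- lapply(lambdas, function(l) normalize(soft(drop(crossprod(X, u)), l)))
  v <- cand[[which.min(sapply(cand, function(z) bic_score(X, u, z)))]]
}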

Users are welcome to refer to Section 3.1, Selection of Regularization Parameters, in the paper cited below.

Value

A moma_sparsity_type object, which is a list containing the values of non_negative and gamma.

References

Fan, Jianqing, and Runze Li. "Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties." Journal of the American Statistical Association 96.456 (2001): 1348-1360. doi:10.1198/016214501753382273.