dagma.linear.DagmaLinear.minimize

dagma.linear.DagmaLinear.minimize(W: numpy.ndarray, mu: float, max_iter: int, s: float, lr: float, tol: float = 1e-06, beta_1: float = 0.99, beta_2: float = 0.999, pbar: tqdm.auto.tqdm | None = None) → tuple[numpy.ndarray, bool]
Solves the optimization problem:
\[\arg\min_{W \in \mathbb{W}^s} \mu \cdot Q(W; \mathbf{X}) + h(W),\]

where \(Q\) is the score function. The problem is solved via (sub)gradient descent, starting from the initial point W.
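For reference, the acyclicity term \(h\) used by DAGMA is the log-determinant characterization \(h(W) = -\log\det(sI - W \circ W) + d \log s\), which is zero exactly when W is the weighted adjacency matrix of a DAG. A minimal sketch (the function name `h_logdet` is hypothetical, not part of the library API):

```python
import numpy as np

def h_logdet(W: np.ndarray, s: float = 1.0) -> float:
    """Log-det acyclicity function from the DAGMA paper.

    h(W) = -log det(s*I - W∘W) + d*log(s), which equals 0 iff W
    represents a DAG (valid while s*I - W∘W stays an M-matrix).
    """
    d = W.shape[0]
    M = s * np.eye(d) - W * W  # elementwise square keeps entries nonnegative
    _, logabsdet = np.linalg.slogdet(M)
    return -logabsdet + d * np.log(s)
```

For a strictly triangular (hence acyclic) W, the determinant is \(s^d\) and h evaluates to zero; any cycle makes h strictly positive.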

Parameters:
W : np.ndarray

Initial point of (sub)gradient descent.

mu : float

Weight of the score function \(Q\) in the objective.

max_iter : int

Maximum number of (sub)gradient iterations.

s : float

Number that controls the domain of M-matrices.

lr : float

Learning rate.

tol : float, optional

Tolerance to admit convergence. Defaults to 1e-6.

beta_1 : float, optional

Adam hyperparameter: exponential decay rate for the first moment estimate. Defaults to 0.99.

beta_2 : float, optional

Adam hyperparameter: exponential decay rate for the second moment estimate. Defaults to 0.999.

pbar : tqdm, optional

Progress bar to update during the iterations. Defaults to None.

Returns:

Returns the adjacency matrix obtained by running (sub)gradient descent until convergence or until max_iter is reached, together with a boolean flag indicating whether the optimization succeeded. The flag is False when, at some iteration, the current W left the domain of M-matrices.

Return type:

Tuple[np.ndarray, bool]
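The optimization loop described above can be sketched with standard Adam updates. This is an illustrative toy, not the library's implementation: the function name `adam_minimize` and the objective are hypothetical, and it omits the M-matrix domain check that makes the real routine return False.

```python
import numpy as np

def adam_minimize(W, grad_fn, max_iter, lr, tol=1e-6, beta_1=0.99, beta_2=0.999):
    """Adam-style (sub)gradient descent from the initial point W.

    grad_fn returns the (sub)gradient of the objective at W. Unlike the
    real routine, this sketch never checks the M-matrix domain, so the
    success flag here is always True.
    """
    opt_m = np.zeros_like(W)  # first moment estimate
    opt_v = np.zeros_like(W)  # second moment estimate
    for t in range(1, max_iter + 1):
        grad = grad_fn(W)
        opt_m = beta_1 * opt_m + (1 - beta_1) * grad
        opt_v = beta_2 * opt_v + (1 - beta_2) * grad ** 2
        m_hat = opt_m / (1 - beta_1 ** t)  # bias correction
        v_hat = opt_v / (1 - beta_2 ** t)
        step = lr * m_hat / (np.sqrt(v_hat) + 1e-8)
        if np.max(np.abs(step)) < tol:  # declare convergence
            return W, True
        W = W - step
    return W, True
```

For example, minimizing the toy quadratic \(\|W - A\|_F^2\) (gradient \(2(W - A)\)) drives W toward A.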


Last update: Jan 14, 2024