Denoising piecewise constant signals while preserving their jumps is a challenging problem that arises in many scientific areas. Several denoising approaches exist, such as total variation, convex relaxation, and Markov random field models. The DPS algorithm is a combinatorial algorithm that outperforms the classical GNC in terms of speed and robustness to low SNR. However, its running time grows considerably for large signals, mainly because of the size and number of linear systems that must be solved. We develop a recursive implementation of the DPS algorithm that exploits the conditional independence, created by a confirmed discontinuity between two parts of the signal, to separate the reconstruction of each part. Additionally, we propose an accelerated Cholesky solver that reduces both the computational cost and the memory usage. We evaluate the new implementation on a set of synthetic and real-world examples to assess the quality of our solver. The results show a significant speed-up, especially for signals with a higher number of discontinuities.
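To make the "accelerated Cholesky solver" bottleneck concrete, the following is a minimal sketch, not the paper's actual solver: for a simple quadratic smoothing energy, the per-segment linear system $(I + \lambda D^\top D)x = y$ (with $D$ the first-difference matrix) is tridiagonal, so its Cholesky factorization and the two triangular solves run in $O(n)$ time and memory. The function name and the choice of energy are illustrative assumptions.

```python
def solve_tridiagonal_cholesky(y, lam):
    """Illustrative sketch (not the paper's solver): solve
    (I + lam * D^T D) x = y for the tridiagonal matrix arising from a
    quadratic smoothing energy, via an O(n) Cholesky factorization."""
    n = len(y)
    # Tridiagonal coefficients of A = I + lam * D^T D:
    # diagonal is 1 + lam at the endpoints, 1 + 2*lam inside; off-diagonal is -lam.
    diag = [1.0 + (lam if i in (0, n - 1) else 2.0 * lam) for i in range(n)]
    off = -lam
    # Cholesky A = L L^T with bidiagonal L: l[i] on the diagonal, m[i] below it.
    l = [0.0] * n
    m = [0.0] * (n - 1)
    l[0] = diag[0] ** 0.5
    for i in range(1, n):
        m[i - 1] = off / l[i - 1]
        l[i] = (diag[i] - m[i - 1] ** 2) ** 0.5
    # Forward substitution: L z = y.
    z = [0.0] * n
    z[0] = y[0] / l[0]
    for i in range(1, n):
        z[i] = (y[i] - m[i - 1] * z[i - 1]) / l[i]
    # Back substitution: L^T x = z.
    x = [0.0] * n
    x[n - 1] = z[n - 1] / l[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (z[i] - m[i] * x[i + 1]) / l[i]
    return x
```

Because the factor stays bidiagonal, both the factorization and the solves touch each entry a constant number of times, which is the kind of structure an accelerated Cholesky solver can exploit.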



Figure 2.
DPS search space and its pruning process. Figure 2a shows the search space (a hypercube) structured in levels, where each level groups the line processes (LPs) with the same number of 1 bits. Figure 2b illustrates the DPS pruning process, which considers only the neighbors of the previous optimal solution. For example, suppose the optimal LP $(0100)$ was found at the first level; at the second level, DPS then considers only its neighbors (red and blue nodes) and prunes the rest (empty nodes). Additionally, DPS stops the search if the energy does not decrease from one level to the next. In the example, the search stops at the second level (red dashed line) since the energy increased from the first level to the second
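The level-by-level pruning described above can be sketched as a greedy descent over the hypercube: at each level, only the neighbors of the previous optimum are evaluated (one additional bit flipped to 1), and the search stops as soon as the energy no longer decreases. The energy used here, squared residuals to per-segment means plus a fixed cost `beta` per discontinuity, is a simplified stand-in for the paper's actual energy, and the function names are illustrative.

```python
def segment_energy(y, breaks, beta):
    """Sum of squared residuals to per-segment means, plus beta per break."""
    bounds = [0] + sorted(breaks) + [len(y)]
    sse = 0.0
    for a, b in zip(bounds, bounds[1:]):
        seg = y[a:b]
        mu = sum(seg) / len(seg)
        sse += sum((v - mu) ** 2 for v in seg)
    return sse + beta * len(breaks)

def pruned_search(y, beta):
    """Greedy hypercube descent: per level, flip the single best 0-bit
    of the previous optimal line process; stop when energy increases."""
    breaks = set()
    best = segment_energy(y, breaks, beta)
    while True:
        # Neighbors of the current optimum: one more active discontinuity.
        candidates = [b for b in range(1, len(y)) if b not in breaks]
        if not candidates:
            break
        pick = min(candidates,
                   key=lambda b: segment_energy(y, breaks | {b}, beta))
        e = segment_energy(y, breaks | {pick}, beta)
        if e >= best:  # energy did not decrease from this level: stop
            break
        breaks.add(pick)
        best = e
    return sorted(breaks), best
```

This keeps the per-level work linear in the number of remaining candidate bits instead of exponential in the signal length, which is the point of the pruning.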

Figure 3.
(a): The decomposition binary tree for restoring a signal with three discontinuities $I = \{i_1, i_2, i_3\}$; (b): the recursive process, where the final solution is obtained by regrouping the leaves of the tree
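The decomposition of Figure 3 can be sketched as follows: once a discontinuity is confirmed, the two sides become conditionally independent, so each side is restored on its own and the final solution regroups the leaves. The leaf "restoration" below is just the segment mean, a placeholder for the actual per-segment linear solve; the function name is an assumption.

```python
def restore_recursive(y, breaks):
    """Split y at confirmed discontinuities and restore each part
    independently; the final solution concatenates the leaves."""
    if not breaks:
        mu = sum(y) / len(y)           # leaf: restore the whole segment
        return [mu] * len(y)
    mid = breaks[len(breaks) // 2]     # split at a confirmed discontinuity
    left = [b for b in breaks if b < mid]
    right = [b - mid for b in breaks if b > mid]
    # The two sides are conditionally independent given the split,
    # so each is restored separately before regrouping the leaves.
    return restore_recursive(y[:mid], left) + restore_recursive(y[mid:], right)
```

Splitting at the median break keeps the tree balanced, so a signal with $k$ discontinuities is reduced to $k+1$ independent subproblems in $O(\log k)$ recursion depth.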

Figure 5.
Restoration of synthetic data simulating the movement of a molecular motor. The initial data contain $10^4$ points regularly distributed over $t\in[0, 2s]$. On the left (5a), DPS exactly recovers all parts except the hard jump at $t\approx 1.7$. On the right (5b), the signal restored by the EBS algorithm misses several jumps

Figure 6.
Restoration quality of DPS and $l_2$-Potts for a signal of size $256$ with values $x_i \in [0, 1]$ corrupted by white noise with standard deviation $\sigma = 0.2$. Figure 6a shows the initial signal; 6b shows the signal reconstructed by DPS, with no false discontinuities but with some errors in the heights of some plateaus. Finally, 6c plots the optimal solution of the $l_2$-Potts solver, which also detects all the discontinuities

Figure 9.
The figure plots the difference $T_b - T_r$, in seconds, between the running time $T_b$ of the classical implementation and $T_r$ of the recursive implementation. The difference depends on two factors: the signal size $n$ and the number of discontinuities $k$. The figure shows that the reduction is substantial, especially for a higher number of discontinuities

Figure 10.
The figure presents the running time on a logarithmic scale, as a function of the signal size with a fixed number of discontinuities $k = 15$ (a), and as a function of the number of discontinuities with a fixed size $n = 10^3$ (b). As both figures show, the recursive implementation is faster, and its advantage over the classical DPS grows with the number of discontinuities