Work packages


The scientific program is structured around three tasks related to the three outcomes of the project:
 
  1. New algorithmic methodologies.
  2. New theoretical tools to characterize the success and failure of algorithms.
  3. 3D volume reconstruction in tomographic PIV.

Tasks 1 and 2 are motivated by the natural sparsity assumption and by the ill-conditioned nature of the tomographic PIV problem; their scope extends beyond tomographic PIV, whereas Task 3 specializes to it.

 

Task 1. The algorithmic methodologies will revolve around three axes. First, algorithms working with continuous dictionaries. So far, we have been working with discrete dictionaries induced by a discretization of atoms over a 3D grid. Since the conditioning of the sparse reconstruction problem degrades as the grid is refined, we want to investigate whether recently proposed continuous algorithms can be more effective than their discrete counterparts, and possibly propose new continuous algorithms. Second, handling non-negativity constraints, a very natural assumption in tomographic PIV since voxel intensities are non-negative. A number of classical algorithms can be straightforwardly adapted to the non-negative setting. This is not true for orthogonal greedy algorithms: adaptations allowing fast implementations have recently been proposed, but their orthogonal projection steps are only solved approximately. Before proposing new greedy-based algorithms, it is necessary to assess the influence of this approximation on the sparse reconstruction accuracy. Third, accelerated forward-backward greedy algorithms. The proposed acceleration strategy first relies on screening techniques (also called pruning in tomographic PIV), which pre-detect inactive atoms in the sparse representation. We plan to further accelerate greedy algorithms by decoupling the original high-dimensional inverse problem into a set of lower-dimensional subproblems. The first two axes are illustrated by the sketches below.
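
To fix ideas on the first axis, one common continuous formulation (a standard example from the literature, not necessarily the one the project will retain) replaces the coefficient vector over a grid by a measure mu over the continuous parameter domain Theta of the atoms:

```latex
% Sparse reconstruction over a continuous dictionary {a(theta)}:
% the unknown is a non-negative (or signed) measure mu on Theta.
\min_{\mu \in \mathcal{M}(\Theta)}
  \frac{1}{2} \Big\| y - \int_{\Theta} a(\theta)\,\mathrm{d}\mu(\theta) \Big\|_2^2
  \;+\; \lambda\, |\mu|(\Theta)
```

The total-variation term |mu|(Theta) plays the role of the l1 norm; discretizing Theta over a 3D grid recovers the usual discrete problem, together with its grid-dependent conditioning.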
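
For the second axis, here is a minimal Python sketch of a non-negative orthogonal greedy algorithm (the naming is ours, not the project's implementation): atoms are selected by their positive correlation with the residual, and the coefficients are re-fitted under a non-negativity constraint at each step. The exact non-negative least-squares projection used here is precisely the step that fast implementations replace by an approximate recursive update.

```python
import numpy as np
from scipy.optimize import nnls

def nnomp(A, y, k):
    """Non-negative orthogonal matching pursuit (illustrative sketch).

    A : dictionary with unit-norm columns, shape (m, n)
    y : observation vector, shape (m,)
    k : target sparsity level
    """
    support = []
    x = np.zeros(A.shape[1])
    r = y.copy()
    for _ in range(k):
        corr = A.T @ r
        corr[support] = -np.inf        # never reselect an atom
        j = int(np.argmax(corr))
        if corr[j] <= 0:               # no atom can further reduce the residual
            break
        support.append(j)
        # orthogonal projection step under non-negativity: solved exactly here,
        # approximately (and much faster) in recursive implementations
        coef, _ = nnls(A[:, support], y)
        x[:] = 0.0
        x[support] = coef
        r = y - A[:, support] @ coef
    return x
```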

 

Task 2. Understanding the average behavior of algorithms is a very important challenge in tomographic PIV reconstruction, since the classical worst-case guarantees are highly pessimistic. The average-case analysis of sparse algorithms for deterministic dictionaries is a major open issue. We will first address several simpler questions to better conjecture the average behavior of greedy algorithms. First, a guaranteed-failure analysis will characterize the worst possible performance of algorithms. Second, the analysis of success for specific families of sparse signals (e.g., sparse representations with strongly decaying coefficients or with random coefficient signs) will allow us to bridge the gap between guaranteed success and guaranteed failure. Analyzing sparse algorithms involving continuous dictionaries is another main challenge; we propose to extend classical exact recovery conditions from the discrete to the continuous case, starting from the condition recalled below.
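
For reference, the best-known discrete exact recovery condition is Tropp's ERC: given a dictionary A with columns a_j, a candidate support S, the subdictionary A_S and its pseudo-inverse A_S^+,

```latex
% Exact recovery condition (Tropp): if it holds, OMP and basis pursuit
% recover every noiseless signal supported on S.
\mathrm{ERC}(S) \;=\; \max_{j \notin S} \big\| A_S^{+} a_j \big\|_1 \;<\; 1
```

Being a worst case over all coefficient amplitudes and signs, this is precisely the kind of guarantee whose pessimism the average-case program above aims to overcome; in the continuous setting the support becomes a finite set of parameter values, which is the extension the project proposes to study.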

 

Task 3. Tomographic PIV reconstruction. We will first implement nonconvex algorithms on representative synthetic and available real datasets to confirm that the accuracy can indeed be improved in comparison with convex algorithms. The accuracy and complexity of the algorithms will be assessed as a function of the signal-to-noise ratio (SNR) and the particle density, the two limiting factors. We will develop new screening methods for tomographic PIV when the SNR is moderate to low. Traditional screening techniques can efficiently detect inactive atoms in the dictionary only at high SNRs; at lower SNRs, the number of atoms they can certify as inactive drops dramatically. We propose to revisit classical screening algorithms and extend them to the noisy setting, in the spirit of the test sketched below.
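
To illustrate the kind of test involved, here is a minimal Python sketch of intensity-based pruning (the names and the threshold rule are ours): a voxel can host a particle only if every camera records some light along its line of sight. In the noiseless case the threshold tau can be zero; with noise it must grow with the noise level, which is exactly what makes few atoms certifiable as inactive at low SNR.

```python
import numpy as np

def prune_voxels(W_list, y_list, tau):
    """Boolean mask of voxels surviving the pruning test (illustrative sketch).

    W_list : per-camera projection matrices, shape (pixels, voxels) each
    y_list : per-camera recorded pixel intensities
    tau    : noise-dependent threshold (tau = 0 in the noiseless case)
    """
    keep = np.ones(W_list[0].shape[1], dtype=bool)
    for W, y in zip(W_list, y_list):
        backproj = W.T @ y           # light each voxel could receive, per camera
        # a voxel dark in at least one camera cannot contain a particle
        keep &= backproj > tau
    return keep
```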

Tomographic PIV involves big datasets. Although real-time processing is not mandatory, the implementation of the algorithms relies on massively parallel computing on GPUs with dedicated software. Another important axis of Task 3 is therefore the parallel implementation of the algorithms developed in Task 1.
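
As a flavor of such an implementation, here is a minimal sketch of a projected-gradient non-negative least-squares iteration written with the CuPy library (our illustrative choice; the project may equally target dedicated CUDA kernels). The forward projection W @ x and back-projection W.T @ r that dominate the cost are exactly the operations that parallelize well on a GPU.

```python
import cupy as cp
from cupyx.scipy.sparse import csr_matrix

def nnls_gpu(W_host, y_host, step, n_iters=200):
    """Projected-gradient NNLS on the GPU (illustrative sketch, names are ours).

    W_host : scipy.sparse CSR projection matrix (pixels x voxels)
    y_host : recorded pixel intensities (NumPy vector)
    step   : gradient step; convergence requires step < 2 / ||W||_2**2
    """
    W = csr_matrix(W_host)            # copy the sparse operator to GPU memory
    Wt = W.T.tocsr()                  # precompute the transpose (back-projection)
    y = cp.asarray(y_host)
    x = cp.zeros(W.shape[1], dtype=y.dtype)
    for _ in range(n_iters):
        r = y - W @ x                           # residual in pixel space
        x = cp.maximum(x + step * (Wt @ r), 0)  # gradient step + non-negativity
    return cp.asnumpy(x)              # bring the volume back to the CPU
```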

 
 
 