As an example of using repeated interpolation at the same place, this notebook performs a consistency-checking process, a simplified version of the method introduced by B. Lüthi [1].

Our first move is to open the dual (tracers + inertial particles) scene data. If you are not familiar with the DualScene class yet, the notebook `doc/hdf5_scene_analysis.ipynb` has the introduction you need.

In [15]:

```
%cd ../data
from flowtracks.scene import read_dual_scene
scene = read_dual_scene('../data/seq_hdf.cfg')
```

In [16]:

```
from flowtracks.interpolation import Interpolant
interp = Interpolant('inv', None, radius=0.010, param=1.5)
```
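
The `'inv'` method is inverse-distance weighting: each tracer's velocity contributes with weight proportional to 1/d**p, where d is the tracer's distance from the query point and p is the `param` argument (1.5 here). The following plain-NumPy sketch illustrates the idea independently of flowtracks; the function name and sample points are invented for illustration:

```
import numpy as np

def idw_velocity(tracer_pos, tracer_vel, query, p=1.5):
    """Inverse-distance weighted average of tracer velocities at `query`.

    A toy stand-in for the 'inv' interpolation idea: weights decay as
    1/distance**p, so nearer tracers dominate the estimate.
    """
    dists = np.linalg.norm(tracer_pos - query, axis=1)
    weights = 1.0 / dists**p
    return weights.dot(tracer_vel) / weights.sum()

# Four tracers around the origin, all moving in +x:
pos = np.array([[0.01, 0., 0.], [-0.01, 0., 0.],
                [0., 0.01, 0.], [0., -0.01, 0.]])
vel = np.tile([1.0, 0.0, 0.0], (4, 1))

# All tracers agree, so the weighted average is (up to rounding) [1, 0, 0]:
print(idw_velocity(pos, vel, np.array([0., 0., 0.002])))
```

Because the weights are normalized, a unanimous neighbourhood reproduces the common velocity exactly; disagreement among neighbours is what the consistency check below probes.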

In [17]:

```
import numpy as np

for frame, _ in scene.iter_segments(-1):  # recall that iter_segments returns two consecutive frames.
    if len(frame.tracers) == 0:
        continue

    # Here we start to use the repeated-interpolation machinery,
    # by informing the interpolant of the current frame data,
    # and then querying it about that data without having to repeat it.
    interp.set_scene(frame.tracers.pos(), frame.particles.pos(),
        frame.tracers.velocity())
    neighb_base = interp.which_neighbours()

    # Check that we have a particle with the desired number of tracers.
    candidates = neighb_base.sum(axis=1) >= 10
    if candidates.any():
        break
```
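
`which_neighbours()` returns a boolean matrix with one row per inertial particle and one column per tracer, True where the tracer falls inside the interpolant's radius. The candidate test above simply counts True values per row; here is the same idiom on a hand-built toy matrix (plain NumPy, no flowtracks needed, values invented):

```
import numpy as np

# A toy neighbour matrix: 3 particles (rows) x 12 tracers (columns).
# True marks a tracer within the search radius of that particle.
neighb = np.zeros((3, 12), dtype=bool)
neighb[0, :11] = True   # 11 tracers in range
neighb[1, :4] = True    # only 4
neighb[2, :10] = True   # exactly 10

counts = neighb.sum(axis=1)
candidates = counts >= 10
print(counts)      # [11  4 10]
print(candidates)  # [ True False  True]
```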

In [18]:

```
frame.particles.time()
```

Out[18]:

In [19]:

```
selector = np.ones_like(candidates)
selector[candidates.nonzero()[0][0]] = False # The first with enough tracers.
interp.trim_points(selector)
```
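
The selector idiom above reads a little inverted: we mark *every* particle, then clear the entry of the single one we want to keep, so that `trim_points` discards all the rest. The masking step itself, on a made-up candidate array (plain NumPy):

```
import numpy as np

candidates = np.array([False, True, False, True])  # particles with enough tracers

# Mark everything, then spare the first candidate:
selector = np.ones_like(candidates)
selector[candidates.nonzero()[0][0]] = False

print(selector)  # [ True False  True  True] - only particle 1 survives
```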

In [22]:

```
from scipy.special import comb
num_combs = min(50, comb(10, 4, exact=True))

# Collect loop results:
samples = np.empty((num_combs, 3))

# All combinations are generated using these arrays, based on the
# initial full-neighbour selection.
neighb_base = interp.which_neighbours()
where_active = np.nonzero(neighb_base[0])[0]
neighb_comb = np.empty_like(neighb_base)

for cix in range(num_combs):
    neighb_comb[...] = False
    neighb_ix = np.random.permutation(where_active)[:4]  # a random 4-tracer subset.
    neighb_comb[0, neighb_ix] = True
    samples[cix] = interp.interpolate(neighb_comb)

# Finally, the statistics:
rms = np.linalg.norm(samples, axis=0) / np.sqrt(num_combs)
rel_std = np.std(samples, axis=0) / rms  # num_parts x 3
print("Relative standard deviation: " + str(rel_std))
```
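
For intuition about the statistic: `rel_std` compares each coordinate's spread across the random tracer subsets against its RMS magnitude, so values much smaller than 1 mean the interpolation barely depends on which tracers were picked. A synthetic check with tightly clustered samples (all numbers invented):

```
import numpy as np

rng = np.random.default_rng(1)
num_combs = 50

# Samples clustered tightly around a fixed velocity -> low relative std:
tight = np.array([0.5, -0.2, 0.1]) + 0.001 * rng.standard_normal((num_combs, 3))

rms = np.linalg.norm(tight, axis=0) / np.sqrt(num_combs)
rel_std = np.std(tight, axis=0) / rms
print(rel_std)  # every entry well below 1
```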

This particle's interpolated fluid velocity turns out to be relatively inconsistent, although the prediction in the Y coordinate is more consistent than in the others. No reason to get discouraged, though: there are many more particles in the data set, and by averaging over all of them we could estimate the data set's true consistency. That exercise, however, is beyond the scope of this short tutorial.

[1] B. Lüthi et al., Lagrangian multi-particle statistics, Journal of Turbulence, 2007. DOI: 10.1080/14685240701522927