Department of Mathematics, University of Western Australia
Machine: VP
Co-Investigator: David Watson, Department of Mathematics, University of Western Australia
Natural Neighbour Interpolation in Higher Dimensions
Any set of data in n-dimensional space defines a set of balls such that every portion of the convex hull of the data, except the data themselves, is contained within one or more balls. Any two data on the boundary of the same ball are natural neighbours, and each ball is the circumsphere of a Delaunay simplex whose vertices are data on the boundary of that ball. Natural neighbour interpolation is a weighted average of the functional values associated with the data in the natural neighbour subset with respect to the interpolation location. The weights are the natural neighbour coordinates of the interpolation point with respect to the data in that subset, and are obtained by mensurating simplices defined by combinations of data and the interpolation point. This approach is computationally intensive, especially in higher dimensions.
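To make the natural-neighbour relation concrete, the sketch below (an illustrative toy, not the project's code) finds the natural neighbours of a 2-D datum using the empty-circumcircle test that characterizes Delaunay simplices. The brute-force enumeration of candidate triangles is only practical for tiny data sets; the function names are our own.

```python
import itertools
import math

def circumcircle(a, b, c):
    # centre and radius of the circle through three 2-D points
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by)) / d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax)) / d
    return (ux, uy), math.hypot(ax-ux, ay-uy)

def delaunay_triangles(pts):
    # brute force: keep only triangles whose circumcircle contains no other datum
    tris = []
    for i, j, k in itertools.combinations(range(len(pts)), 3):
        try:
            centre, r = circumcircle(pts[i], pts[j], pts[k])
        except ZeroDivisionError:   # collinear triple, no circumcircle
            continue
        if all(math.hypot(p[0]-centre[0], p[1]-centre[1]) > r + 1e-9
               for m, p in enumerate(pts) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris

def natural_neighbours(pts, i):
    # two data are natural neighbours when they lie on the same circumcircle,
    # i.e. share a Delaunay simplex
    nbrs = set()
    for tri in delaunay_triangles(pts):
        if i in tri:
            nbrs.update(tri)
    nbrs.discard(i)
    return sorted(nbrs)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.4, 0.4)]
print(natural_neighbours(pts, 4))   # [0, 1, 2, 3]
```

For the interior datum, all four corners are natural neighbours; moving it off-centre can remove the farthest corner from the subset, which is how the natural neighbour subset adapts to the interpolation location.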
The project was part of a long-term effort to develop and refine the computational geometry algorithms required for natural neighbour interpolation. These include the simplicial tessellation, the extraction of natural neighbour coordinates from that tessellation, and the application of those coordinates as weights to effect the interpolation as a weighted sum of the functional values associated with the data. Because current approaches require considerable CPU time, there is strong motivation to improve the computational techniques. In particular, this project concerned the extension of natural neighbour interpolation from Euclidean space to the surface of the n-dimensional sphere.
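The final step, applying coordinates as weights in a weighted sum, can be sketched for the simplest case: barycentric coordinates within a single simplex, obtained by mensurating (computing the volumes of) the simplices formed by replacing each vertex in turn with the interpolation point. This is an assumption-laden illustration of simplex mensuration, not the full Sibson natural-neighbour coordinate computation.

```python
import math

def det(m):
    # Laplace expansion; adequate for the small matrices used here
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] *
               det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def simplex_volume(verts):
    # signed volume: determinant of edge vectors from the first vertex,
    # divided by n! for an n-simplex
    v0 = verts[0]
    edges = [[x - y for x, y in zip(v, v0)] for v in verts[1:]]
    return det(edges) / math.factorial(len(v0))

def barycentric_weights(simplex, p):
    # weight of vertex i = volume of the simplex with vertex i replaced
    # by p, as a fraction of the whole simplex volume
    total = simplex_volume(simplex)
    return [simplex_volume(simplex[:i] + [p] + simplex[i+1:]) / total
            for i in range(len(simplex))]

def interpolate(simplex, values, p):
    # the interpolated value is the weighted sum of the vertex values
    w = barycentric_weights(simplex, p)
    return sum(wi * vi for wi, vi in zip(w, values))

tri = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vals = [1.0, 3.0, 5.0]
print(interpolate(tri, vals, [0.25, 0.25]))   # 2.5
```

The weights sum to one, so the interpolant reproduces the data exactly at each vertex; the same volume-ratio construction generalizes to any spatial dimension.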
What are the basic questions addressed?
The first question was whether a vector processor could significantly reduce the execution time of the tessellation and coordinate-extraction steps. The second was whether the algorithm could be restructured to take further advantage of vectorization. The third was whether experience with a vector processor could provide insights for parallelizing the algorithm.
What are the results to date and the future of the work?
We found that the vector processor did indeed reduce execution time significantly for the vectorized portions of the algorithm, but this advantage was offset, to some extent, by the time-sharing arrangement: the time span from job submission to results was comparable to that of a workstation.
No useful opportunities for restructuring the algorithm to take additional advantage of the vector processor became apparent. Although our search was not exhaustive, the prospects diminished to the point where a negative conclusion could reasonably be drawn. It did become clear that while the matrix operations were accelerated, the total number of such operations could not be reduced on a vector processor.
What computational techniques are used and why is a supercomputer required?
Execution time for the simplicial tessellation and coordinate-extraction routines is spent mainly on matrix operations, and the matrix size is a function of the spatial dimension of the problem. The vector processor offered accelerated matrix processing, and it was not clear before we began this project how significant that advantage would be. It was a possibility we felt should not be overlooked.
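As an illustration of why the matrix work grows with dimension: the circumcentre of an n-simplex (the centre of one of the balls described above) is the solution of an n-by-n linear system, so each tessellation update in dimension n involves matrix operations of that size. The pure-Python solver below is a hypothetical sketch for exposition, not the routines used in the project.

```python
def solve(a, b):
    # Gaussian elimination with partial pivoting on an n-by-n system
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def circumcentre(verts):
    # equidistance from all vertices gives n linear equations in the
    # centre's n coordinates: 2(v_i - v_0) . c = |v_i|^2 - |v_0|^2
    v0 = verts[0]
    a = [[2.0 * (vi - w0) for vi, w0 in zip(v, v0)] for v in verts[1:]]
    b = [sum(vi * vi for vi in v) - sum(w * w for w in v0) for v in verts[1:]]
    return solve(a, b)

# right tetrahedron in 3-D; its circumcentre is (0.5, 0.5, 0.5)
print(circumcentre([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))
```

Each row of the system is built from one vertex, so a tessellation in dimension n repeatedly solves n-by-n systems; these are exactly the dense, regular operations a vector processor accelerates, while their total count is fixed by the geometry.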