Investigating the Feature Selective Validation (FSV) technique

One of our blog subscribers asked us why we still eyeball results for comparison, rather than using the Feature Selective Validation (FSV) technique. A good question indeed, and to be honest, one that we had never thought about! It is amazingly easy to keep doing things the way you are accustomed to (a.k.a. the old way!). FSV looked like an elegant technique that could simplify one of the most common tasks we undertake, so I decided to do a bit of investigating.

For those in the dark about FSV – it is a technique that attempts to compare two sets of data points (usually traces on a graph), and classify the comparison as either "Excellent", "Very Good", "Good", "Fair", "Poor" or "Very Poor" – where the same categorisation would be given, on average, by a group of experts. It has been around for some time, and research has been done to test it. The method has shown great promise in EMC – in fact, it has been adopted in IEEE standard 1597.1, "IEEE Standard for Validation of Computational Electromagnetics Computer Modeling and Simulations".
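
The flavour of the technique can be sketched in a few lines. What follows is an illustrative simplification, not the full standard: FSV proper splits the comparison into an amplitude difference measure (ADM) and a feature difference measure (FDM) and combines them into a global difference measure (GDM); the sketch below computes only a simplified ADM-style quantity, and the grade thresholds are the commonly quoted FSV interpretation scale (consult IEEE 1597.1 for the exact definitions).

```python
import numpy as np

# Commonly quoted FSV interpretation scale: mean difference measure -> grade.
GRADES = [(0.1, "Excellent"), (0.2, "Very Good"), (0.4, "Good"),
          (0.8, "Fair"), (1.6, "Poor"), (np.inf, "Very Poor")]

def adm(a, b):
    """Simplified point-by-point amplitude difference, normalised by the
    mean signal level (in the spirit of FSV's ADM, not the exact formula)."""
    a = np.abs(np.asarray(a, dtype=float))
    b = np.abs(np.asarray(b, dtype=float))
    return np.abs(a - b) / np.mean(a + b)

def grade(a, b):
    """Map the mean difference measure onto the six-grade FSV scale."""
    value = np.mean(adm(a, b))
    for threshold, label in GRADES:
        if value <= threshold:
            return label

# Two nearly identical traces land at the top of the scale.
f = np.linspace(0.0, 1.0, 200)
measured = np.sin(2 * np.pi * 3 * f)
simulated = measured + 0.02 * np.random.default_rng(0).standard_normal(200)
print(grade(measured, simulated))  # a near-perfect match grades "Excellent"
```

The real method does considerably more work (the FDM looks at derivatives to capture feature/shape disagreement), which is exactly what the experiment below puts to the test.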

In contrast to the EMC data typically used for FSV validation, antenna data is visually a lot simpler. I decided to test FSV by doing two FSV comparisons on the simulated and measured data of the skeletal wire monocone from an earlier blog post. In the first test, I used FSV to compare the measured and simulated results. In the second, I manipulated the simulation data to remove the cone resonance in the upper region of the band by smoothing the data in this region. I used the software tool published by Antonio Orlandi at the University of L'Aquila to do the FSV calculations for me.
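
The manipulation for the second test can be sketched as a band-limited smoothing pass. This is a hypothetical reconstruction of the kind of operation described above – the frequency values, the sub-band limits and the moving-average window are all assumptions for illustration, not the actual data from the post:

```python
import numpy as np

def smooth_band(freq, trace, f_lo, f_hi, window=11):
    """Moving-average smoothing applied only where f_lo <= freq <= f_hi;
    points outside the sub-band are left untouched."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(trace, kernel, mode="same")
    out = np.array(trace, dtype=float)
    band = (freq >= f_lo) & (freq <= f_hi)
    out[band] = smoothed[band]
    return out

# Illustrative trace: a flat baseline with a narrow "cone resonance"
# near the top of the band (all values are made up for this sketch).
freq = np.linspace(1.0, 3.0, 401)                      # GHz, assumed
trace = -10 + 5 / (1 + ((freq - 2.7) / 0.02) ** 2)     # baseline + peak
flattened = smooth_band(freq, trace, 2.5, 2.9)         # resonance flattened
```

The lower part of the trace is identical in both series, so any difference the comparison reports comes entirely from the flattened resonance region – which is what makes the near-identical FSV verdicts below surprising.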

Three data series used for testing the FSV technique.

Unfortunately it appears that the FSV technique fails for this example – both comparisons result in the validation being considered "Fair", with a negligible difference in the figure of goodness and confidence histograms. When modelling this antenna, the quality of the model is directly proportional to how accurately it can predict the unwanted cone resonance. As an antenna engineer (with some knowledge of the structure in question, what is important and what I am looking for), I would call the first comparison "Good" and the other "Poor" – a two-category jump!

FSV does seem to be a great tool for automatic validation – especially for the visually complex data that is typical of EMC measurements, and when the person doing the comparison doesn't have prior knowledge of what they should be looking for. We won't make the call based on one test, but I don't think that we will adopt FSV for our purposes any time soon. We do have prior information, which leads to more intelligent comparisons, and more intelligent modelling adjustments.

Here are two screenshots from the software tool: the comparison between measured data and the original simulated data (first image), and the comparison between measured data and the smoothed data (second image).

GDM software screenshot for comparison between measured data and original simulated data.

GDM software screenshot for comparison between measured data and smoothed data.

Author: Sam Clarke

One Response to “Investigating the Feature Selective Validation (FSV) technique”

  1. Robert, excellent work. By the way, the tower climbing in the last blog post was quite scary – I am happy I work on antennas on cars…

    Might I recommend examining only the frequency range within the band of interest, surrounding the “unwanted cone resonance”? It would be interesting to see how that comparison works out.

    By the way, I agree with the FSV outcome. The similarity of both the original and the manipulated data to the measured data is 'fair'.

    Also please keep in mind that FSV provides an objective way for two people to compare data, meaning the comparison method is on paper: it can be reviewed, discussed, agreed upon and also adjusted or 'tweaked' to find and flag such "unwanted cone resonance" changes… if you so wish. There are internal knobs and switches that can be adjusted.

    Keep up the EXCELLENT work…
