The Problem With Performance Testing

By Alex Samonte | August 14, 2017

This is part of an ongoing blog series featured on the NetSecOPEN website by a number of project participants, including tool vendors, network security vendors, and third-party test houses.

Performance testing is used throughout our industry. It helps make decisions. It helps build infrastructure. It helps make sales. But where do we get the data for this performance testing? Should we trust it? How can we use it best?

The unfortunate reality of performance testing is that it’s all different. Every vendor, every enterprise, every analyst does their own thing. You will probably never know all the details of the test methodologies, device configurations, or other conditions of the tests. Yet all the results are presented in a similar language and format, which would lead you to believe you can compare different test results to each other. These results might be shown on a datasheet, in a publication, or online. Unfortunately, this produces a lot of differing (and possibly contradictory) information, leaving the end consumer to figure out what it all means.

A similar problem existed with car manufacturers prior to the mid-1970s. There were no regulations or standards around what car manufacturers could claim as the MPG (miles per gallon) of their vehicles. What were the test conditions? What sort of fuel was used? How fast did they drive? Only after the government put increased pressure on auto manufacturers for more fuel-efficient cars, and put the EPA in charge of enforcing a standardized MPG test in 1975, were consumers really able to trust the data and compare vehicles.

Today, as far as network security equipment testing is concerned, it’s still the 1970s. There are currently no regulations or relevant standards that dictate how a network security device’s performance should be measured. Each vendor has its own method for coming up with performance measurements, usually expressed in megabits per second (Mbps). Some vendors create tests that best show off, or even exaggerate, the performance of their device, while other vendors may choose a test environment that is closer to what a consumer might see in the real world. The problem is that it’s hard to tell from the materials they provide which vendor is doing which.

Unfortunately, most consumers believe that vendor-published Mbps ratings are directly comparable, but the way each vendor arrives at these numbers is different. They may adjust the size of the packets being processed, the type of traffic being fed through the device, the frequency at which connections are made, or a host of other variables, all to get the best performance numbers under ideal conditions. These testing differences make the comparisons meaningless. Unless a test is performed using the same methodology, similar configurations, and the same interpretations, comparisons are usually not valid.
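To see how much a single variable matters, consider packet size alone. The short sketch below (plain Python, my own illustration rather than any vendor’s methodology) computes the theoretical best-case payload throughput of a single 1 Gbps Ethernet link at different frame sizes, using only standard Ethernet framing overheads:

```python
# Illustration: theoretical best-case payload throughput of one
# 1 Gbps Ethernet link at different frame sizes. The constants are
# standard Ethernet framing; nothing here models a real device.

LINE_RATE_BPS = 1_000_000_000   # 1 Gbps line rate
WIRE_OVERHEAD = 20              # preamble + SFD (8) + inter-frame gap (12), bytes
L2_OVERHEAD = 18                # Ethernet header (14) + FCS (4), bytes

def max_goodput_mbps(frame_size: int) -> float:
    """Best-case payload throughput in Mbps for a given frame size."""
    frames_per_sec = LINE_RATE_BPS / ((frame_size + WIRE_OVERHEAD) * 8)
    payload_bytes = frame_size - L2_OVERHEAD
    return frames_per_sec * payload_bytes * 8 / 1_000_000

for size in (64, 512, 1518):
    print(f"{size:>5}-byte frames: {max_goodput_mbps(size):6.1f} Mbps")

# Prints:
#    64-byte frames:  547.6 Mbps
#   512-byte frames:  928.6 Mbps
#  1518-byte frames:  975.3 Mbps
```

The same wire carries nearly twice the payload with large frames as with small ones, before a security device even enters the picture. A datasheet number measured at 1518-byte frames simply cannot be compared with one measured against a realistic small-packet mix.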

The EPA’s first standard, established in 1975, took two types of driving into account: city (less fuel-efficient, with many stops) and highway (more fuel-efficient, with no stops). These tests were measured in a lab with a computer and were very repeatable. The result was a testing standard that allowed the EPA and automobile manufacturers to match their numbers very precisely.

Certainly, any number of tests could be run today to produce measurable and comparable results for security devices under some set of ideal testing conditions. Unfortunately, there is no governmental organization or regulation that requires vendors to perform a specific set of tests before publishing performance results. There are, however, a number of third-party testing houses that can conduct performance tests on devices. These tests provide much more consistency across devices from different vendors, and can produce results that the end consumer can compare within that one test set. Interestingly, it is rather rare for a vendor’s performance claims and those measured by a third-party testing house to match. However, these third-party tests are not standardized either, which makes it impossible to compare results from different testing labs, or even from the same lab over time.

The original MPG estimate standards from 1975 were not as representative of real-world MPG as consumers might have liked. The mismatch between the EPA estimate and what people were able to achieve in real life led to confusion as to whether the EPA’s measurements could be trusted. In 1984, and again in 2008, the MPG estimation methodologies were modified to better match real-world values.

Similarly, some of the simpler testing methodologies in use today by various manufacturers do not always represent real-world values based on typical customer traffic. Instead, they tend to be ideal values designed to showcase best-case performance. Even though many vendors and third-party test houses are starting to use more complex traffic mixes, we are still not quite at the point where test results fully match reality.
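As a purely hypothetical illustration of why the mix matters, suppose a device has been measured separately against several traffic types. Every protocol name, weight, and throughput figure below is invented for this sketch, and the simple weighted average is only a naive first-order estimate, but it shows how far a blended number can sit from the best-case one:

```python
# Hypothetical illustration: the same device rated with one ideal
# protocol versus a weighted traffic mix. All figures are invented.

# Measured throughput (Mbps) per traffic type -- assumed values.
measured_mbps = {
    "http_large_objects": 9500,   # the classic best-case datasheet test
    "http_small_objects": 2100,
    "https": 1400,
    "smtp": 3200,
    "dns": 800,
}

# Share of each traffic type in the mix -- assumed values.
mix_weights = {
    "http_large_objects": 0.15,
    "http_small_objects": 0.30,
    "https": 0.35,
    "smtp": 0.10,
    "dns": 0.10,
}

# Naive first-order blend: weighted average of per-protocol rates.
blended = sum(measured_mbps[t] * w for t, w in mix_weights.items())
print(f"Best-case number:     {measured_mbps['http_large_objects']} Mbps")
print(f"Blended mix estimate: {blended:.0f} Mbps")   # 2945 Mbps
```

A vendor quoting the 9,500 Mbps figure and a test house reporting something near 3,000 Mbps could both be “right”; they are simply answering different questions. This is exactly why the methodology and the traffic mix have to travel with the number.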

NetSecOPEN’s aim is to build publicly available, open, and transparent testing standards. These include the test methodology, traffic mixes, and configurations that allow for cross-vendor testing, along with methods for comparing the results. Because NetSecOPEN is an open standard, any vendor, test lab, or enterprise can gather useful device metrics that can then be compared.

I have been conducting testing for many years over my career. One thing that has almost always been consistent is the lack of a place to start. In most cases, building something yourself, be it test methodologies, tools, or configurations, was the only way to move forward. I could not count on vendors’ datasheets to let me compare different devices, nor could I easily compare third-party testing results to my own. In the end, I would have to get all the devices into my lab and run the exact same tests myself.

Not everyone has the time, resources, or ability to do this type of testing. The goal of NetSecOPEN is to give people that ability up front. It’s not going to be easy, but the process will be open and transparent. The NetSecOPEN standards will provide guidelines and best practices for testing modern network security infrastructure. Additional guidance on interpreting results will also be created when needed.
