Posted on January 10th, 2013 at 9:34 AM EDT
Yesterday, Lysa Myers posted some comments on Intego’s blog about my recent testing of Mac anti-virus software, in an article titled That Anti-Virus Test You Read Might Not Be Accurate, and Here’s Why. Everyone is, of course, entitled to their own opinions, and this is a controversial topic; some disagreement is to be expected whenever such testing is done. However, I do have some specific responses to her comments.
First, she criticized the lack of real-world testing. However, as she also points out, this kind of testing is unrealistically time-consuming. “Real-world testing” would involve trying to infect a machine while a particular anti-virus program was installed and active, then seeing whether that program blocked the infection. Doing this with 16 different anti-virus programs (not 19, as Ms. Myers stated) would have required an enormous investment of time even with a small handful of malware samples, let alone the 51 samples I used.
Further, this kind of testing would put all anti-virus software on unequal footing. Not all anti-virus software is equal. Some programs have no active scanning capability whatsoever. Some rely on repeatedly scanning the entire hard drive at a timed interval, others scan a file when it is accessed by the system or the user, and still others use “watch folders” to monitor new files in specific locations. Some install components that are capable of scanning the entire hard drive, while others are limited by the permissions system of Mac OS X to scanning only certain locations. Some have additional components that would prevent the user from being exposed to particular kinds of malware in the first place.
The purpose of my testing was not to compare and evaluate all the various features of the numerous anti-virus products in some qualitative manner. Its intent was to make a quantitative measurement of which malware samples each engine recognized during a manual scan, as a single metric for comparing different products on equal footing. As I pointed out in the write-up of my results, there are other aspects of anti-virus software that must also be taken into consideration beyond just detection rate. As Ms. Myers points out, protecting yourself against malware requires a layered defense, and this test examined only one specific layer.
Secondly, she mentions an inconsistency in the results. I was contacted by an Intego representative about this shortly after I released my results. They stated that they could not reproduce my results, and suggested that I had scanned using outdated virus definitions. (During the testing, my procedure was to install the anti-virus software, update the definitions – the definitions included with most anti-virus software when first installed tend to be old – and then scan.)
Mistakes happen, and I cannot rule out that I made this particular mistake. However, when I repeated the test by scanning with the outdated definitions supplied with the copy of VirusBarrier available at that time, I could not reproduce my original results. (I no longer had the copy that I tested with, which is something I plan to address when I repeat my tests later this month.) Since there was no way to determine whether the source of the discrepancy was an error on my part or a change to the virus definitions on Intego’s end, I did not feel it was appropriate to change the data and thus invalidate all comparisons. I did note the discrepancy in the results, however.
The difficulty with duplicating my results is that any such testing is extremely time-sensitive. Anti-virus companies update their definitions daily, which means that results can easily differ from one day to the next. This is not a problem that can be solved, and as such, results of this kind of testing must be taken for what they are and no more.
Finally, she comments that Intego was not notified of my results before publication. I understand that this may be the way journalists do their reviews, but that does not make such contact a requirement. It is very important for me to emphasize that I am completely independent of all anti-virus companies. To avoid bias, it is important for my results to have no influence whatsoever from any anti-virus company.
Since my testing, I have received a number of critiques from anti-virus companies. Several of them told me that they would have done things in a certain way, or would have removed certain items from the malware sample list, and requested that I make such changes. The problem with taking such advice is that it would introduce the possibility of bias, if a suggested method would tend to produce better results for one particular program than for others. As a concrete example, one company contested an item in my sample list even though other companies’ anti-virus software recognized it as malware. (There were actually several different cases of this following my testing.)
Of course, it’s important to understand that I do appreciate the criticisms of Ms. Myers and the other anti-virus representatives I have talked to. Although I may not always agree, and may not have been willing to make any changes to my results, their words have not fallen on deaf ears. In any study that hopes to be scientific, criticism is important. There are things that I learned from the various anti-virus companies who contacted me after my results were published, and my next round of testing (coming in the next few weeks) will hopefully address some of those criticisms.
I do wish, though, that Ms. Myers had not ended her comments with implications about “false information and outright scams.”