It is a sad reflection on the times, but it is becoming increasingly difficult to distinguish among true and false “facts,” accurate and misleading interpretations, and personal and politically-expedient beliefs.
In my November 11, 2019 BlogInfoSec column “Are Cybersecurity Intelligence and Security Metrics Statistically Significant?” I pointed out some of the limitations of commonly-quoted sources of cybersecurity intelligence. In no way did I assert that biased and inaccurate reports were intentionally misleading. Indeed, many of the researchers who collect, analyze and publish the data are highly reputable individuals and groups, and they are quick to point out a whole series of disclaimers as to the meaning and application of their results. However, it is generally left to the reader to assess the veracity, meaning and applicability of the findings.
Facts are facts … or are they?
Well, if one were able to guarantee provenance and measurement, then one might state that certain facts are completely objective and as accurate as modern methods can achieve. But, as I pointed out in the prior column mentioned above, statistical methodologies can be flawed. Nor are facts incontrovertible. This is even more significant today.
As we thirst for facts about the coronavirus pandemic, we are increasingly aware that the infection data in particular are subject to the number of individuals tested. The number of fatalities is likely to be much more accurate, presuming the reporting entities are being honest. But, since the overall number of cases is so dependent upon how many individuals have been tested, the resulting fatality rate is questionable. The numbers are further complicated by the lag of deaths behind reported infections by days and weeks, so that the quoted ratio is much lower than if the two populations were in sync.
These factors can also be ascribed to cybersecurity risk. The population of cyber events is greatly understated, as we try to extrapolate from findings among organizations that are often in some way related to the researchers—such as their customers—and are self-selecting to some degree. The actual population of entities that have been attacked is far greater. Many known incidents are never reported, and many organizations and individuals do not know that they have been attacked. It depends somewhat on the nature of the attack … denials of service and ransomware are detected almost immediately, whereas attacks aimed at harvesting data can go on for months without detection. What we need are available and reliable tests that can be deployed and applied across all systems. Only then will we know, with reasonable accuracy, what we are dealing with.
The lags in cyberattack information are also a problem. So many attacks are discovered only when some third party finds out about them, and are then reported to have been active for months, which is often too late for others to take protective action. What we need is an early-warning system for cyberattacks and data breaches so that they can be addressed soon enough to allow others to take preventative measures. To some extent, sector ISACs (Information Sharing and Analysis Centers) and ISAOs (Information Sharing and Analysis Organizations) address this need, but they do not cover everyone, which is what we need to do.
Furthermore, such activities as general monitoring of systems and networks and more timely information sharing require that those performing these functions be trustworthy. This is no trivial requirement, especially as reporting is often tainted with personal biases which, in turn, confuse the results and lower the level of trust.
Nevertheless, if we are to make headway in reducing cybersecurity risk, we must do a better job of universal testing and timely reporting. Let us learn a lesson from the coronavirus pandemic and get ahead of the curve for cybersecurity risk.