
Data Collection: Garbage In-Garbage Out - Op-Ed

August 26, 2011 - 12:55:32 UTC

In the realm of Maritime Domain Awareness, threat and risk assessments are the foundation of security, but they need accuracy, completeness and trustworthiness, not mere compliance, to be truly useful and effective.

By Allan McDougall, Evolutionary Security Management

Maritime Domain Awareness (MDA) requires efficient information sharing, which demands coordination among numerous participants at the international, federal, regional, state, local, territorial and tribal levels of government, as well as with maritime industry and private sector partners, according to U.S. MDA.gov. Situational awareness, in turn, requires current and predictive intelligence built on threat and risk assessments.

Threat and Risk Assessments (TRA) are the foundation of the security plan for any organization. Compliance is not enough, writes Allan McDougall.
Any security plan must take into account the knowledge, skills, abilities, resources, intent and commitment (KSARIC) of the adversary. It must take into account the value (in terms of importance) of the assets being protected. It must take into account the physical, procedural, technical and administrative vulnerabilities that allow the threat to bypass a preventative control, avoid detection or otherwise move faster than an effective response can be mounted. In short, if you do not understand your adversary and how they operate, you can only hope to guess that your security controls will be effective — and when people look to you to protect their lives, freedom, or means of living, you need to do better than just guessing.
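The interaction of adversary capability, asset value and vulnerability described above can be sketched as a simple relative scoring model. This is a hypothetical illustration only; the 0–5 scales and the multiplicative combination are assumptions, not a published TRA methodology:

```python
# Hypothetical sketch: combining the adversary's KSARIC factors, asset value
# and vulnerability into a relative risk score. Scales (0-5) and the
# multiplicative combination are illustrative assumptions.

def threat_score(knowledge, skills, abilities, resources, intent, commitment):
    """Average the six KSARIC factors, each rated 0-5."""
    factors = [knowledge, skills, abilities, resources, intent, commitment]
    return sum(factors) / len(factors)

def risk_score(threat, asset_value, vulnerability):
    """Relative risk as the product of threat, asset value and vulnerability."""
    return threat * asset_value * vulnerability

t = threat_score(4, 3, 3, 2, 5, 4)   # a capable, highly committed adversary
print(round(risk_score(t, asset_value=5, vulnerability=2), 1))
```

The point of even a toy model like this is the one the article makes: if the adversary inputs are guesses, the output is a guess too.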

Compliance
Compliance with set and published doctrine alone may provide limited benefits for an organization, but it has other problems. The first is that standards cannot be written to cover all circumstances. Following the compliance model means that you run the risk of failing to address undetected vulnerabilities, or of having a security posture that fails to keep pace with changes in the environment (threat, operating, physical and virtual). The second problem with compliance-based security is that published standards are available to the bad guys as well. This makes you predictable; it also gives the adversary time to do its own thinking and planning on how to take you out. Ultimately, following a security regime based solely on compliance is little more than guessing that you've got all the bases covered (in your specific instance) and that the adversary is still acting the way it was the last time it was looked at critically.

To avoid guessing, we need data. This data is at the heart of the intelligence cycle; it is the foundation for collation, analysis and assessment. For this reason, each piece of data needs to possess two major characteristics. First, it must be accurate: it must represent facts. If not, its presence will skew results as the erroneous information is factored into the analysis and assessment. Second, it must be as complete as possible. A lack of detail leads to conditions where the individuals performing the analysis and assessment cannot connect the dots, so to speak, and the risk of missing important trends increases significantly.

Understand the Environment
This carries on into our understanding of the environment. Remember, applying a security posture requires effort: persons need to maintain a level of vigilance, systems need to be charged, and a host of other things. That effort will erode the capability of the team and the system over time. So, in order to maintain it, you need to know when you can dial back the security posture to let people rest, maintain equipment and let some of the tension out of the system.
There are really two sets of factors you want to be able to work with. The first set deals with our knowledge of the environment and can be described by the following questions:
• What are the threats doing in my environment?
• What are friendly forces in my environment doing?
• What am I (and my supporting friendly forces) not doing in my environment?
• What are the threats not doing in my environment?
• Do I have confidence in the completeness and accuracy of the information upon which I am answering these questions?
This leads to the second set of factors. This second set pertains specifically to the last of the five questions above and can be described as follows:
• What do I know that I know (hard, factual, proven, trustworthy stuff)?
• What do I know that I don’t know (prioritize based on importance and go find out)?
• What do I not know that I know (critical examination from different perspectives)?
• What do I not know that I don’t know (you won’t likely be able to answer this)?

So let’s look at the state of incident reporting today in the anti-piracy domain somewhat critically. Let’s also look at it in an organized fashion, taking into account accuracy, completeness and trustworthiness.

Accuracy
Accuracy can be described in terms of the difference between the exact conditions on the ground and how those conditions are communicated. Over the past month, just to provide a left and right of arc for study purposes, we have seen a number of instances where the same events have been identified as having taken place at different locations by different sources. This could be the result of rounding errors (where 12.6 degrees becomes 13 degrees, and so on). It could also be the result of people not recording the information coming into the system exactly as it was communicated.

The second layer of problems with accuracy arises when the communicator and the audience come from different backgrounds. The communicator, if he or she is truly trying to get a point across, must ask whether the information being presented would be clear in the mind of the audience. Two issues factor highly here. The first is acronyms, poor writing skills and other forms of shortcut. Those who have had to write critically were always required to link the proper name for something with its acronym (e.g., Southern Red Sea (SRS)), and this becomes increasingly important as the information being communicated becomes more proprietary or specific to a single organization. The second is ambiguous phrasing, such as “a car known to park in this area.” This is the kind of statement that can be interpreted many different ways: it can indicate cars that wait in a certain area, or it can equally indicate a single entity that has been identified as habitually parking there. Things that are not clear have little room at the adult table in the intelligence world.
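The rounding example above is easy to quantify. One degree of latitude spans roughly 60 nautical miles, so reporting 12.6 degrees as 13 degrees displaces the incident by about 24 nautical miles; the sketch below works that through (the positions are illustrative):

```python
# How much position error does coordinate rounding introduce?
# One degree of latitude spans roughly 60 nautical miles.
NM_PER_DEGREE = 60.0

def rounding_error_nm(reported_deg, rounded_deg):
    """Displacement, in nautical miles, caused by rounding a latitude."""
    return abs(reported_deg - rounded_deg) * NM_PER_DEGREE

# The article's example: 12.6 degrees recorded as 13 degrees.
print(round(rounding_error_nm(12.6, 13.0), 1))
```

An error of that size is the difference between a vessel considering itself inside or outside a reported attack area.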

Completeness
The second element involves a lack of completeness. When you look at a map of reported attacks and see an empty area, you cannot simply assume that there have been no attacks at that location. What you can reason is that no reports have been submitted, accepted as an attack and finally recorded into the system that the map represents. In brief, a lack of reported threat does not mean a lack of threat; the chain can be broken at any one of those steps.
We have seen this in spades over the past month. The clearest indication is the difference between the dates that reports are submitted and the dates that they are posted. In some cases this is unavoidable, as it may take time to check the accuracy of a report. The question is how to handle the gap between the report and its final confirmation: is it better to leave no indication of the report, or to post it and indicate that elements of it are causing sufficient concern for it to be followed up on? To answer this question, I would propose that the following four factors be considered:
• The implications of reporting the incident and then contributing (however remotely) to the successful defence of a vessel because it was aware and prepared.
• The implications of reporting the incident and then determining that there was potentially wasted effort because there was no threat in the area.
• The implications of not reporting something and nothing happening.
• The implications of not reporting something and the vessel being successfully attacked.
In short, it may be prudent to adopt a standard where an initial report can be posted and communicated, but indicated as having to undergo further scrutiny before being trustworthy enough to act upon.
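Such a standard amounts to attaching an explicit verification status to every report, so that consumers see the early warning without mistaking it for a confirmed incident. The sketch below is a hypothetical illustration; the class, field and status names are assumptions, not any reporting center's actual schema:

```python
# Hypothetical sketch of a report that is posted immediately but flagged
# as unverified until scrutiny is complete. Names are assumptions.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    UNVERIFIED = "unverified"  # posted for awareness, not yet trustworthy to act on
    CONFIRMED = "confirmed"    # scrutiny complete, safe to factor into assessments
    RETRACTED = "retracted"    # follow-up showed no incident

@dataclass
class IncidentReport:
    position: tuple  # (lat, lon) as reported
    summary: str
    status: Status = Status.UNVERIFIED

    def confirm(self):
        self.status = Status.CONFIRMED

report = IncidentReport(position=(12.6, 43.3), summary="Skiff approach reported")
print(report.status.value)  # posted at once, clearly marked unverified
report.confirm()
print(report.status.value)
```

The design choice is the one the four factors above point to: an unverified report that aids a vessel's defence costs little, while a withheld report that precedes a successful attack costs a great deal.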
At this point, we have only touched on why accuracy and completeness are important, if not critical, in incident reporting. We have not touched on something much harder to manage: trustworthiness. Can I look at the report in front of me and find its source to be both credible and reliable? These two terms are chosen carefully. Credibility speaks to being able to corroborate the information and assess it, on its own, as being true (in the sense of factual). Reliability, on the other hand, speaks to a technical assessment of the source: its past history as a provider of information and similar factors.
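This credible/reliable distinction has a long-standing expression in the NATO "Admiralty" grading system, in which source reliability is rated A–F and information credibility 1–6, so a report can carry its trust grade explicitly. A minimal sketch, assuming those standard scales (the function name is my own):

```python
# Minimal sketch of the NATO Admiralty grading scheme that the article's
# credible/reliable distinction echoes: reliability rates the SOURCE,
# credibility rates the INFORMATION itself.
RELIABILITY = {
    "A": "Completely reliable",
    "B": "Usually reliable",
    "C": "Fairly reliable",
    "D": "Not usually reliable",
    "E": "Unreliable",
    "F": "Reliability cannot be judged",
}
CREDIBILITY = {
    1: "Confirmed by other sources",
    2: "Probably true",
    3: "Possibly true",
    4: "Doubtful",
    5: "Improbable",
    6: "Truth cannot be judged",
}

def grade(reliability: str, credibility: int) -> str:
    """Combine the two ratings into a single grade such as 'B2'."""
    assert reliability in RELIABILITY and credibility in CREDIBILITY
    return f"{reliability}{credibility}"

print(grade("B", 2))  # a usually-reliable source reporting probably-true information
```

Note that the two axes are independent: an unreliable source can still deliver information that other sources confirm, and a completely reliable source can pass along something doubtful.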

Trustworthiness
It is on the issue of trustworthiness that the system is broken. This is a result of people having to deal with the unknowns (remember the questions above) and deciding that not knowing what we don't know is an intolerable condition: it is better to know that we don't know (so we can go find out) or to not know that we know (something we can remedy by checking with others in the community). Those, at least, have courses of action that are reasonably straightforward. For the TRA process today, though, we are in a condition where we do not know what we do not know, because the various reporting centers overlap but do not agree. So the next question is whether we believe one more than the other. Or do we have to assume that the organizations are working within their own mandates, and not necessarily reflecting conditions of interest on the water if those fall outside the perceived mandate or mission?

Wake Up Call?
At this point, we have to look at the challenges to the utility of having a number of different data collection centers that cannot be trusted to push complete and accurate data back to those who need it. Announcing that a ship was attacked at a certain location is somewhat useful, but has little more value than waking everybody up: it fails to describe the who, what, where, when, how and why. Of course, some of this has to do with the initial data that is submitted, but that speaks to the need for centralized training and a shared understanding of what makes a comprehensive and useful report from the field. The challenge is that as organizations are limited to reporting systems that are obviously incomplete, less than timely, apparently in conflict with other sources of information and (for the purposes of detailed and proper risk assessment) less than completely useful, doubts creep into the system.

Systems
And this leads to the final set of questions, the ones that destroy the system. First, am I dealing with systems that have agendas? If so, what are they, and do they affect the quality of the information I am getting? In some cases we know this to be the case: even with nations and the IMO beginning to look at armed security in a different light, many of the “official sources” are still tied to national policies that block the sharing of that information with the security company. Second, can the information I receive be influenced by the mandates of those organizations? In short, can the information being provided be manipulated in terms of accuracy or completeness for some reason, and must I accept that maintaining trust in the system is considered less important than some other factor? This is something else that private security and others on the water have had to learn to live with, particularly regarding the location of friendly forces and activities.

Private Sector
The logical answer is to put the incident reporting capability for shipping in the hands of the private sector and then support it, to an extent, with public funds. Why? Because the marketability of a private sector organization's incident reporting is directly and irrevocably linked to five factors: accuracy, completeness, trustworthiness, timeliness and utility (usefulness). If it cannot demonstrate these characteristics, then companies will not waste their resources on it. The other reason is that there is no secondary interest in business that supersedes delivering a good product and getting paid for it. Politics and similar factors are the realm of the state and those kinds of entities; the private sector can operate outside those spheres of influence and reduce their impact.

A Perspective
From one humble perspective, I would propose that this kind of system (OCEANUSLive) represents a viable future for incident reporting, much more so than nation states, alliances or advocacy groups. This is not to say that those groups should be eliminated; they remain vital, not from the information collection point of view, but from the analysis and assessment point of view: looking at the information to answer the “why” and the “so what” of this entire equation rather than the “who, what, when, where, and how.” For the purposes of information collection, however, they have fallen short of the mark with respect to the tests of accuracy, completeness, trustworthiness, timeliness and usefulness, particularly at the level of detail required for appropriate risk assessments.

Allan McDougall is the President of the International Association of Maritime Security Professionals (IAMSP), member of the ASIS International Transportation Security Council, Chair of the ATAB Transportation Security Committee and a contributor to a number of international and national security working groups.

He holds several professional designations and is the author of two works in Critical Infrastructure Protection, both accepted as texts at the Master's level. He has worked at senior levels in government, industry and professional associations over the past ten years, and in the Critical Infrastructure Protection domain for the past twenty, including his military service.

The views stated are those of the contributor and do not necessarily reflect those of OCEANUSLive.org.


OCEANUSLive.org
Information, Security, Safety; Shared 

Submitted by Team@oceanuslive.org