01-10-14 | Blog Post

Check-Box Compliance vs. Compliance as Part of Your Culture

Note: The following article is part of a shared content agreement between Online Tech and InfoSec Institute. This article, written by cyber-threat analyst Aaron Bossert, perfectly illustrates the difference between check-box compliance and compliance as part of your culture. While many of the examples below relate to NIST standards, they can easily correlate to PCI, HIPAA, or other compliance frameworks. (View original post.)

“What’s in a name? That which we call a rose by any other name would smell as sweet.”

Shakespeare would probably turn over in his grave knowing that I have used one of his more famous passages from Romeo and Juliet in this context.

Shakespeare’s words can be used to draw two parallels:

First, compliance is directed by enabling standards that may include the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), the Sarbanes-Oxley Act (SOX), the Federal Information Security Management Act (FISMA), and many more. Though each of these compliance models has a different target audience, all share the same goal: to provide a framework within which one can document the methods used to secure a specific Information System. A further benefit of a documented compliance model is that anyone who adopts it can leverage the framework instead of reinventing the wheel every time they secure a new Information System.

Second, Shakespeare’s passage also serves to illustrate a significant weakness within the compliance frameworks: everything is open to interpretation at multiple levels. By this, I mean that the security engineer charged with implementing a checklist for the Red Hat Linux server interprets the compliance check that states they must use a checklist in the first place. That particular control does not necessarily dictate which checklist to use, nor does it specify what percentage of the checklist items must be implemented in order to consider it effectively implemented. The Information System Security Officer (ISSO) must then interpret the same controls that were interpreted by the security engineer and determine whether those controls were properly implemented. The Designated Approving Authority (DAA) must in turn interpret what the security engineer and the ISSO have decided and accept the residual risk in approving the Information System for operational use. Finally, after all these people have read the same compliance requirements and made their decisions about what constitutes “compliant,” an auditor may come along, decide what compliant looks like, and determine whether the DAA, ISSO, and security engineer properly implemented security controls in their Information System.

Now that Shakespeare has served his purpose, let’s move on. The compliance model that I will address is the Federal Information Security Management Act (FISMA). I will assume that most of you reading this article are familiar with FISMA requirements. For a quick refresher, feel free to read more about it on the National Institute of Standards and Technology (NIST) website (http://csrc.nist.gov/groups/SMA/fisma/index.html). FISMA is a risk management framework that is intended to provide a sensible and cost-effective way to protect systems and sensitive information based on the level of risk associated with the Information System in question. The FISMA framework is documented within many different NIST publications; however, for our purposes, we will focus on four: NIST FIPS 199, NIST FIPS 200, NIST SP 800-53, and NIST SP 800-60.

NIST FIPS 200 (http://csrc.nist.gov/publications/fips/fips200/FIPS-200-final-march.pdf) defines the minimum security requirements for federal information and information systems, states who and which systems must abide by the FISMA framework, and directs readers to the NIST publications (most notably SP 800-53) that should be used to meet those requirements.

NIST FIPS 199 (http://csrc.nist.gov/publications/fips/fips199/FIPS-PUB-199-final.pdf) provides the standard for categorizing an Information System based on the sensitivity and type of information that the system stores and/or processes. Each affected system is then secured in accordance with controls selected from NIST SP 800-53.

NIST SP 800-60 (http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol1-Rev1.pdf and http://csrc.nist.gov/publications/nistpubs/800-60-rev1/SP800-60_Vol2-Rev1.pdf) provides a standardized way to describe and categorize the different types of information stored or processed within an Information System.

NIST SP 800-53 (http://csrc.nist.gov/publications/nistpubs/800-53-Rev3/sp800-53-rev3-final_updated-errata_05-01-2010.pdf) catalogs the security controls themselves and describes how those controls are specified, selected, and documented.
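
To make the categorization step concrete, here is a minimal sketch of the FIPS 199 approach: each information type is rated LOW, MODERATE, or HIGH for confidentiality, integrity, and availability, and the overall system categorization is the high-water mark across those ratings. The information types and ratings below are hypothetical examples, not drawn from any real system.

```python
# Minimal sketch of FIPS 199-style security categorization.
# The impact levels and high-water-mark rule come from FIPS 199/200;
# the information types and ratings are hypothetical examples.

IMPACT_ORDER = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

# Hypothetical information types -> (confidentiality, integrity, availability)
info_types = {
    "press_releases":       ("LOW", "MODERATE", "LOW"),
    "case_management_data": ("HIGH", "HIGH", "MODERATE"),
}

def high_water_mark(ratings):
    """Return the highest impact level among the given ratings."""
    return max(ratings, key=lambda level: IMPACT_ORDER[level])

def categorize(info_types):
    """Compute per-objective ratings and the overall system categorization."""
    confidentiality = high_water_mark(r[0] for r in info_types.values())
    integrity = high_water_mark(r[1] for r in info_types.values())
    availability = high_water_mark(r[2] for r in info_types.values())
    overall = high_water_mark([confidentiality, integrity, availability])
    return confidentiality, integrity, availability, overall

c, i, a, overall = categorize(info_types)
print(f"SC = {{(confidentiality, {c}), (integrity, {i}), (availability, {a})}}")
print(f"Overall system categorization (high-water mark): {overall}")
```

Run against the two hypothetical information types above, the overall categorization comes out HIGH, driven entirely by the case-management data; that single result is what then drives the control baseline selected from SP 800-53.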

Several pitfalls exist within the FISMA framework. Please remember that the stated goal of the FISMA framework is to set:

“Standards for categorizing information and information systems collected or maintained by or on behalf of each federal agency based on the objectives of providing appropriate levels of information security according to a range of risk levels” (NIST FIPS 200, section 1)

Notice the intent for security to be implemented based on the risk level of the Information System. By risk level, we are talking about the common-sense approach of protecting Information Systems based on the level of damage (monetary or otherwise) that their compromise would cause. For example, a public-facing website for a federal agency that contains nothing other than press releases still needs to be protected. However, an agency should not expend the same resources protecting this public-facing site as it does protecting the database that contains information on all of its undercover narcotics interdiction operations. Obviously, the damage caused by the compromise of the agency’s undercover operations would be significantly greater than the defacement of its public-facing website.

A common area in which the FISMA framework can be improperly implemented is inheritance. Inheritance accounts for the fact that not all Information Systems are self-contained. Many systems rely on security controls being implemented in some other system. An example of this is a web application that relies on an enterprise Active Directory infrastructure for its authentication mechanism. Certainly the system has a login page where each user must authenticate, but beyond the actual form on the login page, the rest of the authentication occurs within the Active Directory server. The web application simply passes the authentication credentials through to the Active Directory server and then relies on it to determine if the web application should grant access or not.

In our hypothetical web application, the security engineer looks at the authentication mechanism and determines that this control is actually inherited from the Active Directory infrastructure. If the security engineer does not dig any deeper, they may simply obtain documentation for the Active Directory server, attach that documentation as proof that the authentication mechanism is properly implemented, and “call it a day.” Unfortunately, the security engineer may not look at other paths to authentication built into the web application. Are there any active administrator accounts for the web server or the application that do not rely on Active Directory and instead use local accounts? If so, how are those accounts protected? Are they monitored in the same way that the Active Directory accounts are monitored? Are they set up with the same password requirements as the Active Directory server?

On paper, this web application has a well-protected authentication mechanism. In practice, the application might not be so well protected and might be harboring some easy-to-exploit backdoors in the form of less-protected local accounts.
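
To illustrate the kind of check the security engineer could add, the sketch below flags application accounts that bypass the inherited Active Directory mechanism. The account records and the auth_source field are hypothetical assumptions; a real application would pull this from its own user store.

```python
# Hypothetical audit: flag accounts that authenticate locally instead of
# through the inherited Active Directory mechanism. The record layout
# (auth_source, is_admin, password_expires) is an assumption for illustration.

accounts = [
    {"name": "jsmith",   "auth_source": "active_directory", "is_admin": False, "password_expires": True},
    {"name": "webadmin", "auth_source": "local",            "is_admin": True,  "password_expires": False},
    {"name": "svc_app",  "auth_source": "local",            "is_admin": False, "password_expires": False},
]

def find_uninherited_accounts(accounts):
    """Return accounts whose authentication is NOT inherited from AD."""
    return [a for a in accounts if a["auth_source"] != "active_directory"]

for account in find_uninherited_accounts(accounts):
    issues = []
    if account["is_admin"]:
        issues.append("privileged local account")
    if not account["password_expires"]:
        issues.append("password never expires")
    print(f"{account['name']}: local authentication path"
          + (f" ({', '.join(issues)})" if issues else ""))
```

Each flagged account is exactly the kind of authentication path that is not covered by the inherited Active Directory documentation and therefore needs its own evidence of protection and monitoring.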

Let’s take a look at some controls that are required to be implemented by the FISMA framework and dissect how their improper implementation can lead to a false sense of security.

In NIST SP 800-53, control AT-2 requires that:

“The organization provides basic security awareness training to all information system users (including managers, senior executives, and contractors) as part of initial training for new users, when required by system changes, and [Assignment: organization-defined frequency] thereafter.”(http://web.nvd.nist.gov/view/800-53/control?controlName=AT-2&type=0)

Let’s dissect this control. First, the agency implementing it has to fill in the information that NIST leaves up to the agency to determine: in this case, the frequency with which the security awareness training is provided. So, is semiannual training good enough? How about annual? Better yet, how does the agency measure how effective the training is in order to determine how often it should be provided?

In most cases, I have seen the frequency of security awareness training based on the assumption that annual training is sufficient. Unfortunately, without some context and meaningful metrics in place, this decision is completely arbitrary. Many organizations require that their employees take some sort of assessment to gauge their level of understanding of the training material. The assessments generally consist of a few questions related to the main points of the material (e.g., should you open an attachment in an e-mail from an unknown sender?). Now, when the ISSO and the security engineer document their compliance with control AT-2, they will certainly view the control as properly implemented: they have reviewed the agency’s training and awareness program and found that the training is delivered annually to each user and that each user takes an assessment after completing it.

In reality, this control has not necessarily been implemented in a way that ensures the security awareness training is effective. To ensure effective training, the ISSO should measure effectiveness using the reporting from the agency’s incident response program. To use the previous example of e-mail attachments: are there fewer incidents involving malicious code that uses e-mail attachments as an infection vector? Is the agency’s anti-virus software effective? Are users simply ignoring the training they received and still opening suspicious attachments? Or are there simply fewer attacks that use e-mail attachments?

If the agency sees a marked decrease in incidents related to malicious e-mail attachments and determines that the decrease is attributable to the security awareness training, then the agency should see how long the training remains in its users’ minds. The agency should watch for when the incidence of malicious e-mail attachments starts to rise again. Armed with this information, the agency can improve its security awareness training program and rest assured that it has properly implemented control AT-2.
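
As a rough illustration of what that measurement could look like, the sketch below buckets e-mail-attachment incidents by month and marks which counts fall after a training date. Watching the counts creep back toward the pre-training baseline gives a measured, defensible basis for the AT-2 training frequency. All dates and incidents here are hypothetical placeholders.

```python
# Sketch: measure how long security awareness training "sticks" by counting
# malicious-attachment incidents per month. Dates and counts are hypothetical.
from collections import Counter
from datetime import date

# Hypothetical incident log: (date, infection_vector)
incidents = [
    (date(2013, 1, 14), "email_attachment"),
    (date(2013, 1, 29), "email_attachment"),
    (date(2013, 2, 3),  "usb_media"),
    (date(2013, 4, 18), "email_attachment"),
    (date(2013, 7, 22), "email_attachment"),
    (date(2013, 8, 9),  "email_attachment"),
]

training_date = date(2013, 2, 1)  # hypothetical annual training delivery

by_month = Counter(
    (d.year, d.month) for d, vector in incidents if vector == "email_attachment"
)

for (year, month), count in sorted(by_month.items()):
    marker = " <- post-training" if date(year, month, 1) >= training_date else ""
    print(f"{year}-{month:02d}: {count} attachment incident(s){marker}")

# If counts fall after training and climb back by month N, the gap between
# the training date and month N is a measured basis for the AT-2 frequency.
```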

In NIST SP 800-53, control AU-2 requires that:

“The organization:

a. Determines, based on a risk assessment and mission/business needs, that the information system must be capable of auditing the following events: [Assignment: organization-defined list of auditable events];

b. Coordinates the security audit function with other organizational entities requiring audit-related information to enhance mutual support and to help guide the selection of auditable events;

c. Provides a rationale for why the list of auditable events are deemed to be adequate to support after-the-fact investigations of security incidents; and

d. Determines, based on current threat information and ongoing assessment of risk, that the following events are to be audited within the information system: [Assignment: organization-defined subset of the auditable events defined in AU-2 a. to be audited along with the frequency of (or situation requiring) auditing for each identified event].”(http://web.nvd.nist.gov/view/800-53/control?controlName=AU-2)

Once again, let’s dive right in with our hypothetical web application. In my experience, this control tends to be poorly implemented in many organizations. Often, implementation amounts to turning on the default auditing features of an application, configuring those logs to be delivered to a central log server, and then sitting back and calling it a day. A sample of the audit logs is provided as evidence, and everybody moves on to the next control.

With just a little bit of effort, this control can be properly implemented. To begin, it is important to note that it is generally not the selection of auditable events that is the problem; the problem is covering every source of those events. It is easy for a system administrator to define the “usual suspects” among auditable events:

  1. Successful/unsuccessful login
  2. Successful/unsuccessful privileged function access
  3. Configuration changes
  4. Database modification (INSERT, UPDATE, DELETE, DROP, etc.)
  5. Account create/delete/modify
  6. Etc.

When these types of events are logged, one might think that the system is covered and that attack events can be reconstructed from the logged events. However, most applications do not function in a self-contained bubble. Rather, the logs required to reconstruct an attack might include logs from the operating system (e.g., Windows Server 2008, Red Hat Enterprise Linux, etc.), the database, the web server (e.g., IIS, Apache, nginx, etc.), the host anti-virus, and the host and network firewalls. Once all these logs have been configured to be sent to the central log server, it is also important to exercise the application and determine whether each logging mechanism is actually capturing events related to our web application.
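
A sketch of that verification step is below: after exercising the application, confirm that every expected log source has produced a recent event. The file paths and freshness threshold are hypothetical stand-ins; in a real deployment you would point this at the central log server’s view of each source.

```python
# Sketch: verify that every log source feeding the central log server has
# actually captured recent events after exercising the application.
# Paths and the freshness threshold are hypothetical; adapt to your environment.
import os
import time

EXPECTED_SOURCES = {
    "web_server":  "/var/log/httpd/access_log",
    "application": "/var/log/myapp/app.log",      # hypothetical app log
    "database":    "/var/log/mysql/audit.log",
    "os_auth":     "/var/log/secure",
    "firewall":    "/var/log/firewall.log",
}

MAX_AGE_SECONDS = 15 * 60  # events expected within the last 15 minutes

def check_sources(sources, max_age=MAX_AGE_SECONDS):
    """Report sources that are missing or silent since the test run."""
    now = time.time()
    for name, path in sources.items():
        if not os.path.exists(path):
            print(f"MISSING: {name} ({path}) - logging not configured?")
        elif now - os.path.getmtime(path) > max_age:
            print(f"STALE:   {name} ({path}) - no events captured recently")
        else:
            print(f"OK:      {name} ({path})")

check_sources(EXPECTED_SOURCES)
```

A MISSING or STALE source is exactly the gap that a log sample attached to the compliance package will never reveal.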

The final step in this process should be standing the application up in a development environment and performing a penetration test. The penetration test exercises the security controls in place and lets the system owner and designer know whether those controls are functioning as intended and providing the expected level of security.
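
Even before a full penetration test, a small end-to-end smoke test of the audit trail pays off: deliberately trigger an auditable event and confirm the record lands on the central log server. The sketch below assumes the Python requests library; the URL, credentials, and log path are hypothetical placeholders.

```python
# Smoke test sketch: trigger a failed login in the dev environment, then
# confirm the corresponding audit record reached the central log.
# URL and log location are hypothetical placeholders.
import os
import time
import requests

DEV_LOGIN_URL = "https://dev.example.org/login"  # hypothetical dev instance
CENTRAL_LOG = "/var/log/central/myapp.log"       # hypothetical central log

marker = f"smoketest-{int(time.time())}"

# Deliberately fail authentication with a traceable, unique username.
requests.post(DEV_LOGIN_URL, data={"user": marker, "password": "wrong"},
              timeout=10)

time.sleep(5)  # allow time for log forwarding

if not os.path.exists(CENTRAL_LOG):
    print("FAIL: central log file does not exist")
else:
    with open(CENTRAL_LOG) as log:
        if any(marker in line for line in log):
            print("PASS: failed login was captured by central logging")
        else:
            print("FAIL: auditable event never reached the central log server")
```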

So, now that everybody is thinking about the last C&A they either participated in or were at least exposed to, and everybody is giving that “I’m guilty” grin…how do we do something about the current state of compliance within each of our respective organizations? I believe that if we take a step back, we will see that the actual intent of each compliance framework is to provide a standard for what is considered adequate security based on the risk level of the information and system(s) being evaluated. With that end goal in mind, it should not be a stretch to look at each control in a new light. Instead of viewing compliance as a game of “check the box” to keep the auditors happy, let’s approach each control in the following manner:

  1. Understand the intent of the control
  2. Determine whether the control is applicable and, assuming it is, tailor the control to the system being evaluated. By tailoring the control, I mean determining whether the control in its current form satisfies the security requirements of the system. If the control is overdoing it, back it off a bit; if the control is insufficient, make the implementation more robust.
  3. Evaluate the system component(s) to which the control applies and determine what must be done in order to make the system compliant, if anything.
  4. Test the system component(s) to ensure that the control is properly implemented.
  5. Finally, document the testing and other artifacts that prove that the control is properly implemented; a minimal sketch of such a record follows this list. Keep in mind that even though the documentation will be used to show an auditor that the control is properly implemented, it also lets you review the system at a later date and determine whether the control is still properly implemented and whether the implementation is still sufficient to protect the system from current threats that may not have been considered when the control was originally implemented.
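
That record does not need to be elaborate. Here is a minimal sketch of one way to structure it, mapping fields to the five steps above; the field names are illustrative assumptions, not any official schema.

```python
# Sketch: record a control implementation with test evidence so it can be
# re-reviewed later. Field names are illustrative, not an official schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlRecord:
    control_id: str      # e.g. "AT-2", "AU-2"
    intent: str          # step 1: what the control is meant to achieve
    tailoring: str       # step 2: how it was adapted to this system
    implementation: str  # step 3: what was actually done
    test_evidence: list = field(default_factory=list)  # step 4: artifacts
    next_review: date = None  # step 5: revisit against current threats

record = ControlRecord(
    control_id="AU-2",
    intent="Capture enough events to reconstruct an attack after the fact",
    tailoring="Added web server, database, AV, and firewall log sources",
    implementation="All sources forward to the central syslog server",
    test_evidence=["log-source freshness check", "failed-login smoke test"],
    next_review=date(2015, 1, 10),
)
print(f"{record.control_id}: {len(record.test_evidence)} artifact(s), "
      f"re-review by {record.next_review}")
```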

I think we can all agree that documenting system compliance can be a giant pain. If we all take a few additional steps, we can implement much more secure systems without expending a great deal more effort. Done this way, systems are more secure, auditors are happier, and we can sleep better knowing that we did all that we could to ensure our systems are well protected. To abuse yet another cliché, “everybody wins”.

References:

NIST FIPS 199 http://csrc.nist.gov/publications/PubsFIPS.html

NIST FIPS 200 http://csrc.nist.gov/publications/PubsFIPS.html

NIST SP 800-53 rev. 3 http://csrc.nist.gov/publications/PubsSPs.html

NIST SP 800-60 vol. I, II http://csrc.nist.gov/publications/PubsSPs.html
