One of the many recommended practices in cybersecurity is to employ "integrity checking mechanisms" to verify software, firmware, and information integrity. As a cybersecurity consultant, I am often asked by clients to clarify this practice or, in some cases, to provide implementation guidance. This is not surprising given the many use cases and interpretations for integrity checking. To complicate matters further, the answer can differ dramatically depending on the operating environment. Given these factors, I thought it might be helpful to "de-mystify" the subject of verifying software, firmware, and information integrity.
First, a little background. For government agencies, this guidance has been around for decades in the National Institute of Standards and Technology (NIST) Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations. However, this is now a standard for non-government critical infrastructure industries as well. In 2014, NIST released the Framework for Improving Critical Infrastructure Cybersecurity, also known as the "Cybersecurity Framework". This framework includes a set of core practices and controls that are recommended for an organization's cybersecurity program across all of its functions: Identify, Protect, Detect, Respond, and Recover. One of the Protect-function controls, labeled "PR.DS-6," defines this practice as "Integrity checking mechanisms are used to verify software, firmware, and information integrity."
Okay, that sounds good, but what does it really mean? At its core, integrity checking is simply a means to let you know if your asset (software, firmware, file, database) has been changed. “So what’s the big deal?” you might ask. Well, it depends. Take software or firmware, for example. If we can determine that a piece of software or firmware has been changed since it was originally released, that can be a very good indication that it may have been maliciously modified as part of a supply chain attack, or as part of a yet-unknown malware attack. In this way, “zero-day” attacks can be detected even though your anti-malware vendor has no signatures yet to identify the malicious code. In other cases, the integrity of processed data is what matters. Consider financial transactions. If a malicious party wanted to commit fraud, they might gain unauthorized access to processing systems and make subtle changes to the data. Detecting that the data has been modified since creation is a possible indicator of unauthorized activity. Either way, it boils down to whether you can trust that your information or code is accurate, genuine, and safe to use.
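To make the financial-data scenario concrete, one common approach is to attach a keyed hash (an HMAC) to each record when it is created; any later modification of the record will fail verification. Here is a minimal sketch in Python, assuming a secret key shared only with the verifying system (the key value and record format are purely illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-kept-in-a-secure-store"  # illustrative only

def tag_record(record: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a record at creation time."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify_record(record: bytes, tag: str) -> bool:
    """Return True only if the record is unchanged since it was tagged."""
    expected = hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

record = b"2024-01-15|acct:1234|amount:100.00"
tag = tag_record(record)
assert verify_record(record, tag)  # untouched record verifies
# even a subtle change to the amount is detected
assert not verify_record(b"2024-01-15|acct:1234|amount:900.00", tag)
```

Unlike a plain hash, the keyed tag cannot be recomputed by an attacker who alters the data but does not hold the key, which is what makes it useful for fraud detection.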
There are many methods to accomplish integrity checking. Some are very basic, and others more elaborate. However, one very common trait is the use of some form of cryptographic technology to be able to detect even the smallest changes in your assets. Let’s discuss a few examples.
Cryptographic Hash Functions. Software integrity is often determined by employing a cryptographic hash function. A hash function performs mathematical calculations (an algorithm) on a source file, producing a unique fixed-size string of bits or characters comprising the hash. Popular hashing algorithms include SHA-256, SHA-1, and MD5 (although SHA-1 and MD5 are no longer considered collision-resistant, so SHA-256 is generally preferred), and hashes can be created or checked with free or open-source tools such as the “MD5 & SHA Checksum Utility” or the “Microsoft File Checksum Integrity Verifier”. The resulting hash can then be considered the “signature” of the file: if even one bit in the file is changed, a new calculation of its hash will be completely different. For this reason, software developers often distribute their files accompanied by a calculated hash and encourage users to verify that the hash of the received file matches. By verifying that the hash is the same as the one published by the author, the user has greater assurance of integrity.
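The hashing step itself is easy to reproduce with standard libraries rather than a separate utility. A minimal sketch in Python (the chunk size is an arbitrary choice; any conforming SHA-256 implementation yields the same digest):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large binaries do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the hash published by the software author,
# and refuse to install the file if the two values differ.
```

Because the digest is fixed-size regardless of input, comparing two 64-character hex strings is enough to confirm that gigabytes of data are bit-for-bit identical.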
Digital Signatures. Creating digital signatures, also known as “code signing”, serves much the same purpose as cryptographic hashes, but unlike a separately calculated hash, digital signatures are usually incorporated in the software binary or file itself. However, the presence of a digital signature alone does not guarantee the source of the code, which is why digital signatures are often generated using public key infrastructure (PKI) with a trusted certificate authority (CA). One advantage of a digital signature is that it is more easily detected through automated tools, which might include anti-malware software or even the user’s operating system. Most popular anti-malware software can be configured to verify the integrity of a downloaded or copied file by checking whether it has a valid digital signature before allowing execution. An operating system can also prevent “unsigned” applications from running on the system. One example of this is the “AppLocker” feature in Microsoft Windows, which can be configured to restrict execution of any software that is not digitally signed.
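To illustrate the mechanics behind code signing, the core idea can be sketched with a toy RSA key pair: the publisher signs the file’s hash with a private key, and anyone can verify it with the matching public key. The numbers below are deliberately tiny textbook values; real code signing uses 2048-bit-plus keys, padding schemes, and CA-issued certificates, none of which are shown here:

```python
import hashlib

# Toy RSA key (textbook-sized for illustration only; never use in practice)
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private (signing) exponent

def sign(data: bytes) -> int:
    """Publisher side: 'sign' the SHA-256 hash of the data with the private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, signature: int) -> bool:
    """User side: verify using only the public values (e, n)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

release = b"binary contents of release 1.0"
sig = sign(release)
assert verify(release, sig)                # genuine file verifies
assert not verify(release, (sig + 1) % n)  # a forged signature is rejected
```

The point of the asymmetry is that verification requires no secrets: the public key can be distributed with the software, while only the publisher (or their CA-backed certificate) can produce valid signatures.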
File Integrity Monitoring. A more flexible and sophisticated method is to employ a file integrity monitoring (FIM) application on the affected system. An FIM is typically an automated tool that constantly monitors the attributes of files and software. An FIM can restrict use of the files and/or send automated alerts of unauthorized changes. There are both commercial and open-source FIM tools available, each with their own unique features. Examples of open-source tools include, but are not limited to, OSSEC (Open Source HIDS SECurity), AFICK (Another File Integrity CHecker), and AIDE (Advanced Intrusion Detection Environment). Like other methods, FIM often employs a cryptographic “checksum” or hash based on a calculation using multiple file characteristics. However, an FIM can also monitor changes to other file attributes such as dates, size, privileges, and security settings. FIMs often include the ability to monitor other system attributes as well, such as configuration values, credentials, or even content. A key advantage of an FIM solution is that it can give a real-time indication of an attack or other unauthorized changes to critical systems and software.
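The core loop of an FIM tool is: record a trusted baseline, rescan, and alert on differences. A toy version of that loop can be sketched in Python; this sketch tracks only content hash and size, whereas real FIM products also cover permissions and timestamps, protect the baseline itself from tampering, and add scheduling and alerting:

```python
import hashlib
import os

def _fingerprint(path: str) -> tuple:
    """The attributes this toy monitor tracks: content hash plus size."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return (digest, os.path.getsize(path))

def baseline(paths):
    """Record the trusted state of each monitored file."""
    return {p: _fingerprint(p) for p in paths}

def scan(base):
    """Compare current state against the baseline; report deviations."""
    alerts = []
    for path, fp in base.items():
        if not os.path.exists(path):
            alerts.append((path, "deleted"))
        elif _fingerprint(path) != fp:
            alerts.append((path, "modified"))
    return alerts
```

Note that because the fingerprint includes a hash, a change is detected even when an attacker keeps the file the same size, which is exactly the subtle-modification case that simple size or timestamp checks miss.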
While it may be technically feasible to implement these controls in your environment, it may not be practical to do so for every asset. Instead, organizations will usually want to prioritize implementation based on how critical the asset is, and how critical the integrity of its data is. Examples of assets chosen for integrity checking controls might include, but are not limited to, key business application platforms, domain controllers or authentication servers, key service delivery systems, and systems that store or process confidential data.
As described above, many integrity monitoring capabilities may already be included in your current operating system or protection tools. Dedicated integrity monitoring tools are also available in both open-source and commercial versions. These tools can provide assurances of the integrity of individual files, software binaries, data, and even system configuration settings. More importantly, detected changes to these assets can be an early indicator of a cybersecurity attack or other unauthorized activity. If that were not enough, implementation of integrity mechanisms is prescribed by, and receiving additional scrutiny from, many industry regulators. Regardless of the chosen solution, the addition of integrity monitoring for critical software, firmware, and information can be a powerful tool as part of your “defense in depth” strategy.