Exploring the Unexpected Results and Benefits of IT Security Initiatives

Wes Withrow, IT GRC Subject Matter Expert

When organizations begin to roll out their IT security initiatives, there’s no shortage of expected and unexpected results. It’s a simple cause-and-effect relationship, but with IT security initiatives, some of the unexpected results tend to surface in unique ways. For example, one organization might begin to tighten up Windows security in its environment only to see a 10x increase in the number of Macs over a one-year period. It soon understands this isn’t because most of the staff lost their love for Windows; it’s because they don’t want all of the new IT security tools bogging down their Windows systems.

This article will cover the highlights from the recent TraceSecurity webinar “Reap the Unexpected Results and Benefits of Properly Securing Your Organization against a Cyber Attack.” In the webinar, we give concise real-world examples of the unexpected results and benefits of a risk-based approach to information security management. Click here to view the webinar on-demand.

Application Whitelisting to Reduce Computer Rebuilds

For those unfamiliar with application whitelisting, you can think of it as a centrally managed software tool that prevents employees from installing unapproved software on their systems. When you deploy application whitelisting software, you assume you’ll see a dramatic decrease in the amount of malware in your environment, but the last thing the IT department expects is that a new IT security solution will save it a lot of work and money.

One organization that implemented an application whitelisting solution reported a 99% reduction in malware across the enterprise almost immediately. That metric sounds impressive at first, but it received a lukewarm reception from the business because the value of something you prevented from happening is tough to quantify in financial terms.

It wasn’t until three months after the implementation that the team responsible for rebuilding infected machines noticed a drop in their workload that ultimately resulted in a savings of about $400,000 a year. Why? The very simple answer: they weren’t rebuilding 20 to 30 infected machines a month anymore. For the first time in the company’s history, the financial impact of malware was quantifiable, and the feedback came from an unexpected observation post.
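The back-of-the-envelope math is easy to sketch. The per-rebuild cost below is a hypothetical figure chosen so the totals land near the reported $400,000 a year; substitute your own fully loaded labor and downtime numbers.

```python
# Rough estimate of annual savings from avoided machine rebuilds.
# The $1,350 per-rebuild cost is an assumed figure, not from the article.

def annual_rebuild_savings(machines_per_month, cost_per_rebuild):
    """Annual cost of rebuilding infected machines at a steady monthly rate."""
    return machines_per_month * cost_per_rebuild * 12

# 20 to 30 machines a month at an assumed ~$1,350 fully loaded cost each
low = annual_rebuild_savings(20, 1350)
high = annual_rebuild_savings(30, 1350)
print(f"Estimated annual savings: ${low:,} - ${high:,}")
```

At that assumed cost, 20 to 30 rebuilds a month brackets the reported $400,000 figure.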

Inventorying of Systems to Reduce Workload

When an organization’s IT infrastructure is sprawling, managing those systems can be a burden. Oftentimes, IT operations doesn’t keep an inventory of the IT assets it supports, and an organization can’t protect what it doesn’t know exists. Therefore, one of the inherent benefits of formal IT security initiatives is that you are forced to dig through your IT ecosystem. It’s not uncommon for organizations to discover that 20% of their IT assets are unaccounted for, not owned by the company, or no longer in use and in need of disposal. It’s usually an eye-opening exercise when an organization inventories its IT assets and finds that its device-to-user ratio is about 3x to 4x higher than estimated.

Insider Threats Surface Themselves

The assumption in most organizations is that the “naïve user” or “rogue employee” presents the greatest security risk. Today’s IT security initiatives tell us what to look for and where to look for weakness. Using security tools, we now find that the riskiest personnel tend to be the brightest, most loyal, and most technically proficient staff members in the company. They are usually just trying to do their jobs and finding innovative ways to do so.

For example, in the application whitelisting use case mentioned earlier, the organization found that a small group of staff members had figured out how to disable the application whitelisting software on their systems in a way the vendor didn’t even realize was possible. They were eventually identified by the IT department and received a gentle nudge to shape up, but the organization never assumed that some of its riskiest staff would be its most technically competent. Needless to say, the vendor also thanked them for finding the weakness.


The “doom and gloom” narrative that dominates the world of IT security today typically involves stories about highly publicized cyber breaches and the rapid expansion of compliance requirements. We won’t see any major shifts in what is newsworthy in the IT security world anytime soon, but we are beginning to hear more positive stories from the boots on the ground. It’s important that companies that have identified successes in their IT security programs insert some balance into today’s narrative by communicating the expected and unexpected benefits that come with the implementation of security initiatives in their environments.

Click the image to download a SlideShare of these talking points to your desktop.





Your First Look into Trends and Topics at the 2015 RSA Conference (RSAC)

RSAC 2015 Word Cloud

This word cloud was provided by the RSA Conference during its December 15th, 2014 webinar and reflects the most frequent terms used across more than 1700 speaking submissions. The largest words are those most commonly cited in conference session titles that were submitted for consideration to be included in this year’s RSA Conference agenda. 

During this December 15th RSAC webinar, Britta Glade, Senior Content Manager, and Hugh Thompson, Program Committee Chair, for the RSA Conference shared insight they derived from the submissions and how these trends will be reflected during the conference this April in San Francisco. TraceSecurity’s interpretation of that insight, as relevant to the TraceSecurity audience, is explored below:

Threat Intelligence

There were 4x more submissions than in 2014, so coverage of proactive security and attack prediction will be apparent at the show. “Predictive” did not make the word cloud, and “proactive” did not show growth over 2014; however, looking at usage within session titles and descriptions, “predictive” was used 50% more than in 2014.

Mobile and Cloud

These words are proportionately smaller than in 2014. This does not mean they are less important to the security space; it means they have become interspersed and integrated into almost every track of the conference, which should be read as maturity in the market.


Compliance

Vendor and end-user submissions that mention compliance carried a dismissive tone that was more pronounced than in previous years. The message that compliance does not equal security has proliferated throughout the market. The same downward trend occurred with individual standards. Because the security industry is putting compliance standards in their proper place, as a necessary component to meet bare security minimums, session discussions are becoming higher-level and more strategic in nature.


BYOD

BYOD did not make the word cloud this year and is another example of how organizations are changing their perception of the industry’s ability to make information security more manageable and strategic rather than reactive to the latest security threat. It isn’t that the problem of BYOD has gone, or is going, away. It is a change in mindset from a specific competency conversation to a maturing of the space into strategic and proactive information security.

Breach and Response

These words are proportionate to one another as well. There is growing emphasis among end-user and vendor submissions on the human element of security: how to effectively train employees and which tools are available and easiest to use. According to the submissions, security in 2015 is top of mind more often and at higher levels of the organization compared to past years.

The top trends mentioned above were pulled from a larger list and highlighted as relevant for the TraceSecurity audience. Click here to read the top ten countdown blog by Britta Glade, Senior Content Manager for RSA Conference, “RSAC Speaker Submissions Reveal What the Industry Cares About.”

TraceSecurity is poised for a great show at the 2015 RSA Conference. Visit us on the show floor in the South Expo hall, booth 2515. Register for a free exhibit hall pass using the TraceSecurity Expo Pass Code X5ETRCSEC. The deadline to redeem your Expo Pass Code is Monday, April 20, 2015.


From “None” to “Won” – Effectively Managing Your Vulnerabilities

Mark Thorburn, Information Security Analyst and Security Services Manager

When opening a newly-generated vulnerability report, one’s focus immediately turns towards the “High” risk vulnerabilities on the organization’s critical devices. Next, focus shifts to examine the “Medium” and then “Low” vulnerabilities. And lastly, depending on resources available, perhaps the “Informational/None” vulnerabilities are attended to. This approach makes sense and is certainly not bad, but unfortunately, those “Informational/None” vulnerabilities may never be reviewed because the report indicates that the risk to the device under this categorization is, well, none.

The typical approach described can lead to dangerous oversight and ultimately weaken an otherwise strong information security program. This is not an uncommon experience, as many organizations focus solely on the “High” and “Medium” risk vulnerabilities and then on to others as time permits.

Recently, after reviewing one such report during an onsite engagement, we found that the Security Officer had made a practice of focusing on the “High” vulnerabilities. Upon seeing the first vulnerability listed as “Informational/None”, he would shift his focus to other items that required attention. During our engagement’s exit meeting, the Security Officer seemed surprised that during the Internal Penetration Testing portion of the assessment, TraceSecurity was able to compromise the network through one of the “informational” vulnerabilities on a web server and ultimately create a domain administrator account without his knowledge. What we did to accomplish the compromise wasn’t particularly crafty or an exotic exploit. It was, in fact, just a web server sitting on the network with no associated “High” vulnerabilities, one that appeared only within the “Informational/None” section of the report.

Problems arose because this server hosted software that wasn’t vulnerable itself: a help desk ticketing system that had weak password requirements and used Lightweight Directory Access Protocol (LDAP) for authentication. The vulnerability scanner used for testing could not detect these characteristics, and the only result it could report was that the server was online with port 80 open. Our recommendation to this organization was to review the report in its entirety at least every other time it was run; doing so might have led the Security Officer to question what this server was doing online or what purpose it served.

Again, this is not an uncommon occurrence. TraceSecurity has seen it happen more than once. The takeaway is that a false perception of “Informational/None” vulnerabilities can lead to oversight and, as in this case, domain compromise. In the example above, the creation of a domain administrator account was just one aspect of the weaknesses found in this organization. This particular finding demonstrated that the organization had gaps in its configuration management and/or system hardening procedures.

In addition, asset management can also come into play, as accountability for that device seemed to shift from one owner to another with no real ownership confirmed by the end of the engagement.

TraceSecurity would advise this organization to dedicate some time and resources to reviewing those “Informational/None” findings within a vulnerability report. The fact that you operate a DNS server may not be news to you, but the fact that your vulnerability scan shows five in operation probably is.



Protect Your Hashes

Joseph Key, SSCP, Information Security Analyst

More specifically, protect your NTLM domain hashes. “How is this possible?” you might ask. Believe it or not, the explanation is quite simple and often overlooked.

Since the release of Windows Vista in 2007, every default Windows domain implementation has included the Link-Local Multicast Name Resolution (LLMNR) protocol. This protocol is based on the Domain Name System (DNS) packet structure and allows hosts to perform name resolution for other hosts on the same network. Its default use is to provide a secondary method for systems to locate one another if your network’s DNS servers fail. So, when your clumsy-fingered employees search for \\pintserver instead of \\printserver, and DNS fails to provide an IP address for the requested resource, LLMNR helps by asking every system on the network “who is \\pintserver?”

Sounds pretty nice, right? Not so much unfortunately, or fortunately depending on which color hat you wear. As a part of offensive security, seeing this protocol blasting through TCPDUMP often induces a maniacal grin because I know that I am only a few steps away from attaining unauthorized access to your systems and data. Before we get too deep into how much I love LLMNR or more importantly how much you shouldn’t, let’s go over exactly how it works.

How LLMNR Works

First, a user attempts to request a resource, such as an internally hosted website or network drive, \\Storag-1 instead of \\Storage-1, for example. That user’s computer sends the requested host name to the internal DNS server and, given the misspelling, the DNS server replies that the resource cannot be found.

Next, without any warning to the user, the computer falls back on LLMNR as the protocol to resolve \\Storag-1 into an IP address the computer can use. The problem is that it broadcasts the “who has” request to every system on the network. If the misspelled resource actually exists, it will reply with a packet stating its location, the name resolves, and all is right in the world.
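Because LLMNR reuses the DNS packet layout, the fallback query described above is easy to sketch. This is an illustrative Python construction of the wire format; the hostname and transaction ID are arbitrary, and a real client would multicast the bytes to 224.0.0.252 on UDP port 5355.

```python
import struct

def build_llmnr_query(hostname, txid=0x1234):
    """Build an LLMNR A-record query; LLMNR reuses the DNS packet format."""
    # Header: ID, flags=0, 1 question, 0 answer/authority/additional records
    header = struct.pack(">HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # Name encoded as length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

# A host that fails to resolve "pintserver" via DNS would multicast this
# packet and accept an answer from any system on the local network.
packet = build_llmnr_query("pintserver")
print(packet.hex())
```

That willingness to accept an answer from anyone is exactly what the attack below abuses.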

The Vulnerability

But what if I am on your network conducting a penetration test, or I am an evil hacker plotting to steal all of your secrets? For starters, one could host a server that replies to any LLMNR broadcast on the network, which can easily be done with a most excellent tool, Responder. Once that happens, your user’s system will automatically believe the reply and attempt to negotiate a domain session with that server, sending the user’s domain credentials in NTLMv2 hash format straight to me. All that is left is to use a tool such as Hashcat or John the Ripper to crack the hash at my leisure, offline. Worse still, I can attempt to downgrade the authentication method to receive the hash in NTLMv1 format, making cracking even easier. Work smarter, not harder, is what my father always told me as a child.
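The offline cracking loop itself is trivially simple, which is why captured hashes are so valuable. The sketch below is a simplified illustration: SHA-256 stands in for the real NTLMv2 challenge-response computation (which tools like Hashcat and John the Ripper implement at scale), and the wordlist and captured value are invented for the example.

```python
import hashlib

def crack(captured_digest, wordlist):
    """Offline dictionary attack: hash each candidate and compare.
    SHA-256 is a stand-in here for the NTLMv2 computation; the loop
    structure is what matters."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == captured_digest:
            return candidate
    return None

wordlist = ["letmein", "Spring2015!", "P@ssw0rd"]
captured = hashlib.sha256(b"P@ssw0rd").hexdigest()  # pretend this was sniffed
print(crack(captured, wordlist))
```

Nothing in this loop touches the victim’s network, which is why no lockout policy or monitoring will ever see it.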

How the Attack Works

Let’s take a look at how this attack works. First, we set up our malicious server to catch these mistyped resource requests. “-i” is the IP address of the system you are running Responder on, and “-d” enables the tool to respond to domain suffix queries.

Figure 1: Server Setup


Now we wait for a user to make a typo when attempting to connect to a NAS. When a user finally makes a mistake, our tool takes over and handles all of the hard work, responding to the victim and negotiating a session.

Figure 2: Resource Request Error


Figure 3: Captured Password Hash


If we take a look at the Wireshark capture of the above events, we can gain a better understanding of what is going on behind the scenes. In Figure 4, we see our DNS server failing to resolve the “Stor-1” hostname.

Figure 4: DNS Hostname Resolution Failure


Once the DNS server fails to resolve the hostname, LLMNR takes over and starts broadcasting requests on the network. In Figure 5, you can see our malicious server at x.x.x.220 responding to our victim’s request at x.x.x.54. Once our malicious server poisons the LLMNR response, the victim starts the SMB session negotiation, resulting in the capture of the victim’s NTLMv2 password hash. The SMB session negotiation is shown in Figure 6.

Figure 5: LLMNR Interaction between Attacker and Victim


Figure 6: SMB Negotiation


How to Mitigate the Attack

Now that we understand the risk associated with LLMNR, we are better equipped to protect our systems against this type of attack. The most effective way to stop LLMNR poisoning is to disable the protocol: enable the “Turn off Multicast Name Resolution” setting in the Local Group Policy Editor and disable the NetBIOS Name Service.
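For machines managed outside Group Policy, the same LLMNR setting can be applied directly in the registry. The registry path and value below are the documented policy location for “Turn off Multicast Name Resolution”; the helper function is our own sketch, and on non-Windows systems it only reports what it would change rather than touching anything.

```python
import sys

# "Turn off Multicast Name Resolution" maps to this registry value.
LLMNR_POLICY = {
    "path": r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient",
    "name": "EnableMulticast",
    "value": 0,  # 0 disables LLMNR
}

def apply_llmnr_policy(policy=LLMNR_POLICY):
    """Apply the policy on Windows; elsewhere, report what would be set."""
    if sys.platform != "win32":
        return f"would set HKLM\\{policy['path']}\\{policy['name']} = {policy['value']}"
    import winreg  # Windows-only module, imported lazily
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, policy["path"])
    winreg.SetValueEx(key, policy["name"], 0, winreg.REG_DWORD, policy["value"])
    winreg.CloseKey(key)
    return f"set {policy['name']} = {policy['value']} (LLMNR disabled)"

print(apply_llmnr_policy())
```

In a domain, the Group Policy route shown in the figures below is preferable, since it keeps the setting centrally managed and enforced.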

Figure 7: Enable “Turn off Multicast Name Resolution” setting


Figure 8: Disable NetBIOS Name Service



The Future of IT Security and Compliance Program Management? It’s In the Cloud…

Madeline Domma, Product Specialist

In recent years, organizations of all types, most notably financial institutions, have started to transition from a reactive, scenario-based form of IT Governance, Risk and Compliance (GRC) management to specialized, regulation-based approaches that create holistic and realistic views of the overall IT security and compliance environment. The antiquated, reactive approach to IT GRC management has proven unsustainable in its focus on the “here and now” instead of developing an ongoing picture of an organization’s IT security and compliance program status. In parallel, market researchers have noticed a growing adoption of Software as a Service (SaaS), or cloud-based, platforms in IT GRC management. These platforms replace decentralized methodologies so that organizations can stay ahead of potential problems using a more focused and agile approach that fully integrates with previously established systems and workflows.

Regulatory compliance and overall risk management are two universal focuses of all organizations, yet not all organizations have wholly integrated compliance and risk management initiatives into their established information security, or IT GRC, programs. Compliance does not imply reduced risk, nor does risk management ensure compliance with regulations, so historically the two have been considered separate challenges for organizations to overcome. A strategic approach considers both factors as part of the organization’s universal information security posture and allows the institution to identify and maximize its assets.

Risk and Compliance Silos are Destined to Fail

In a traditionally reaction-based IT security and compliance management program, compliance with regulating bodies cannot easily be viewed in the context of day-to-day security practices. Often, especially in small to medium-sized organizations, compliance verification efforts are initiated when the organization must become compliant with certain regulations, perhaps after regulators have deemed the organization not in compliance and issued fines. Unless an organization can afford to perform ongoing internal audits or compliance analysis, maintaining compliance is not part of day-to-day operations.

Similarly, a reaction-based approach to overall IT security and compliance management will result in a decentralized compilation of documentation and scenario-specific risk management exercises to plan for various theoretical disasters. Practices and procedures are executed to mitigate hypothetical threats and, depending upon the size or structure of the organization, solutions vary from situation to situation. Moreover, compliance with regulating bodies may not be intentionally considered during the development of these operations.

A Unified Approach for Sustainable Program Management

Analyzing information security risk and compliance management simultaneously will allow your organization to build an information security program that is sustainable, consistent, efficient and agile. Encompassing information security and compliance management requires stakeholders and decision-makers across the institution (from the highest levels of executive management and risk managers to IT operations, internal auditors and compliance officers) to leverage a single set of data across their unique initiatives.  The data collected from this approach can range from policies describing the institution’s overall security posture, to detailed vulnerability information or specific compliance citation attestation, tracking and reporting.

When so many organizations have become accustomed to retaining disjointed documentation and scenario-specific protocols to address company-wide IT GRC challenges, how can a major program reform such as this be accomplished?

Cue, “The Cloud”

Cloud-based IT GRC platforms offer dynamic management solutions for organizations of all sizes because, by design, they must be customized and individualized to meet the needs of a variety of IT environments. The benefits of cloud-based IT GRC systems become evident soon after deployment.

Cloud-based applications are designed to quickly and easily build information security programs via a shared workspace which multiple users may authenticate to and work within collaboratively. Since most users simply need access to the web to begin working in a cloud environment, these platforms can be integrated into an organization’s existing environment with little to no change in the company’s infrastructure. The collaborative nature of cloud-based workflow makes way for comprehensive IT GRC programs within organizations of all sizes because employees become equipped to contribute to the centralized, company-wide application.

These emerging platforms eliminate redundancy and gaps in workflow, replacing decentralized security-program efforts. Although organizations may develop infinitely different IT security and compliance management plans based on unique needs, well-maintained cloud-based solutions provide the medium for automation of information and fastidious tracking of both day-to-day and grand-scale operations, so that accurate and up-to-date data is available for those who need it, whether auditors, regulators, or internal management. By delegating the responsibility of IT security and compliance program development, maintenance, and management within a centralized user interface through which all employees may contribute, maintaining the program becomes integral to day-to-day operations.

The result of this implementation is increased awareness of the organization’s IT GRC plans and procedures and a secure organization from the inside, out. Cloud-based IT GRC software is fast becoming the future platform of IT security and compliance management because, ultimately, secure and agile IT environments liberate organizations to more intelligently focus company resources towards improving customer services and satisfaction.


TraceSecurity Receives Value Award in IT GRC Management Category from Industry Analyst, GRC 20/20

TraceCSO has been honored with a 2014 GRC Value Award in the IT GRC Management category by GRC analyst firm GRC 20/20. The 2nd annual GRC Value Awards recognized real-world implementations for Governance, Risk Management and Compliance programs and processes that have returned significant and measurable value to an organization. One organization using TraceSecurity’s cloud-based IT GRC solution, TraceCSO, was confirmed to have realized a savings of more than 100 management hours each week on average and $500,000 annually. Click here to read the GRC 20/20 blog.

The Case Study

To validate TraceSecurity’s award, GRC 20/20 Principal Analyst Michael Rasmussen researched one organization that struggled with decentralized processes and documents for managing its IT security, risk, and compliance program. The organization evaluated its options and looked at IT GRC solutions to assist with this problem. The evaluation led it to deploy TraceCSO from TraceSecurity, a Software as a Service (SaaS) solution the organization found easy to engage and deploy across the range of its IT GRC needs.

Click here to download the in-depth case study produced by Rasmussen.

The On-Demand Webinar

TraceSecurity and Michael Rasmussen held an interactive webinar that described how TraceCSO gives organizations the ability to measure, identify and remediate issues across their processes and operations more efficiently and at a much lower operational cost. During the webinar, attendees:

  • Explored the complexities that continue to hinder IT GRC within organizations
  • Became familiar with the use cases for an IT GRC platform adoption
  • Realized the value of a simplified approach to IT GRC management

To learn more, visit our Slideshare or watch our webinar on-demand.


Calculating the Cost of a Data Breach Today

In the wake of recent high-profile retail breaches, you are likely feeling the pressure to help keep your company’s name out of the headlines. In order to obtain approval and funding for security improvements, technologists often have to make their case by pointing to losses from recent security breaches; however, calculating those losses can be tricky. This article leverages recent statistics to help you best estimate the direct and indirect costs of a data breach.

Filling in the Blanks with Reputable Metrics

According to the annual Ponemon Institute study, it takes an average of 31 days at a cost of $20,000 per day to clean up and remediate after a cyber attack. The study analyzed 314 breaches, 61 of which were in the US, across 16 industry sectors including financial, retail and healthcare. Direct costs include audit and consulting services, legal defense, public relations, and communications with customers, at about $66 per record, while indirect costs, such as lost business, increased cost to attract new customers, and in-house investigations, average about $135 per record.

There are several things that can increase the cost of a data breach. Lost or stolen devices increased breach costs by $18 per record, breaches involving third parties increased costs by $25 per record, notifying stakeholders and customers too quickly increased costs by $15 per record, and engaging consultants increased costs by $3 per record this year. Fortunately, there are a few things we can do to decrease the cost of a breach: having a strong security posture, having an incident response plan in place prior to the breach, having a business continuity plan in place prior to the breach, and employing a CISO. Implementing these four controls would have reduced your data breach costs by $21, $17, $13, and $10 per record, respectively, this year.

You may be asking yourself, what are the common causes of a data breach and what’s really at stake? Common causes include weak and stolen credentials, application vulnerabilities, malware, social engineering, inappropriate access, insider threats, physical attacks and user error. 44% of breaches involve malicious or criminal attacks and cost $246 per record, 31% involve “human error” or negligence by employees and cost $171 per record, and 25% involve system “glitches” and cost $160 per record. The average breach affects 29,087 records, with notification costs of $509,000. The average total cost of a data breach amounts to $5.85 million, which your business certainly cannot afford.
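Taken together, the per-record figures above make a simple cost model you can adapt to your own environment. This is a hedged sketch using only the numbers quoted in this article; the function name and example scenario are ours.

```python
# Rough per-record breach cost model built from the quoted Ponemon figures.
DIRECT, INDIRECT = 66, 135       # dollars per record
CLEANUP = 31 * 20_000            # average remediation: 31 days at $20k/day

def breach_cost(records, increases=(), decreases=()):
    """Estimate total breach cost: per-record base, plus/minus the quoted
    per-record adjustments, plus the average cleanup bill."""
    per_record = DIRECT + INDIRECT + sum(increases) - sum(decreases)
    return records * per_record + CLEANUP

# Average-sized breach (29,087 records) involving a third party (+$25),
# offset by a strong security posture (-$21) and an incident response
# plan in place before the breach (-$17)
estimate = breach_cost(29_087, increases=[25], decreases=[21, 17])
print(f"Estimated total cost: ${estimate:,}")
```

Even with two mitigating controls in place, this scenario lands in the same multimillion-dollar range as the study’s $5.85 million average.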

An Ounce of Prevention is Worth a Pound of Cure

Now that we understand the costs, let’s talk about how to mitigate the risks involved. All devices should be encrypted to protect your sensitive information from malicious access. Access control, monitoring and regular review should govern your sensitive information to prevent misuse by a third-party vendor or negligent employee. Policies, procedures, standards, education and monitoring can help mitigate internal threats; your employees can be your biggest asset or your biggest risk. Lastly, the risk of having too much data in too many places can be controlled with data classification and retention policies and regular auditing.

The key takeaways can be summed up in just a few sentences. Designate a security officer or make security someone’s job. Have a strong information security program that includes performing regular risk assessments, having policies and procedures, evangelizing security awareness, knowing your compliance requirements and auditing regularly. Have an incident response and business continuity plan; don’t wait until it’s too late. Too many organizations read case studies with the kinds of powerful statistics mentioned above and still refuse to believe they will ever be affected. After seeing the costs involved, a proactive approach to information security should clearly be your only option.

To download a PDF of key metrics, visit our Slideshare or watch our webinar on-demand.


Evaluate Cyber Liability Insurance in 3 Easy Steps

Brent Hobby, IT GRC Subject Matter Expert

We are often asked about the role that cyber liability insurance plays when an organization is developing a comprehensive information security program. We recommend cyber liability insurance be thought about in the context of an organization’s complete risk management program and as part of a company’s overall insurance package, rather than as part of an organization’s information security and compliance management program.

Step One: A Risk Assessment

Because many traditional liability policies now exclude “cyber risk,” evaluating the need for additional coverage should begin with a risk assessment. Speak with prospective insurers to make sure your assessment leverages a framework that they recommend. Depending on the size of the desired coverage, you may need to engage an approved third party for your assessment.

Step Two: Risk Remediation or Risk Transference

Once you have a valid assessment, progress through the iterative process of weighing risk remediation against risk transfer. Get various quotes from insurers and repeat the review process. When complete, you will have a business-appropriate cyber risk extension to your insurance coverage.

Step Three: Insure Based on Your Unique Business Need

Cyber liability insurance is relatively new, very flexible and costs can vary widely. Many organizations choose not to insure, others purchase coverage for specific breach response items, and some use it as a high-deductible umbrella coverage. Whichever your organization chooses, starting with a risk assessment will allow the business to drive the decision.


What You Should Know about Shellshock as an Ongoing Threat

Madeline Domma, Product Specialist

How Shellshock Stands Up to the Hype

Clever name aside, many industry experts consider Shellshock, disclosed on September 25, 2014, potentially the worst vulnerability to hit the Internet. NIST rates it a 10 out of 10 for severity, the US Department of Homeland Security has identified the vulnerability as “Critical,” and it is estimated to potentially affect nearly half of all websites.

Shellshock has proven to be an even worse threat than the heavily reported Heartbleed vulnerability that made its debut earlier this year. Unlike Heartbleed, the Shellshock command sequence is alarmingly simple to execute remotely, yet it can cause virtually incalculable damage to affected systems or networks of systems. The vulnerability, nicknamed the "Bash Bug", enables even the least skilled of hackers to exploit the extremely popular command-line interpreter (or shell) GNU Bash. Commonly referred to as "Bash", the utility was originally developed for Unix systems about 25 years ago and was later ported to Linux and OS X. Shellshock exploits a weakness in the way Bash parses function definitions stored in environment variables: commands appended after the function body are executed when the shell imports the variable, letting attackers inject arbitrary code directly onto exposed systems. Furthermore, Bash does not require authentication to execute these commands. The exposure affects a staggering number of websites on the Internet because Bash operates in conjunction with CGI scripts on several different types of web servers, including the commonly used Apache servers. Although patches and updates were widely available soon after the vulnerability was discovered, Shellshock remains a threat to networks everywhere for quite a few reasons.
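
The environment-variable mechanism described above can be demonstrated with the widely circulated one-line check for the original CVE-2014-6271 flaw (run it only on systems you administer):

```shell
# Define an environment variable that looks like a Bash function
# definition with an extra command appended, then start a new Bash.
# A vulnerable Bash executes the appended command while importing
# the variable and prints "vulnerable" before "bash test".
env x='() { :;}; echo vulnerable' bash -c "echo bash test"
```

A patched shell prints only `bash test`; if `vulnerable` appears first, the installed Bash must be updated.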

Breadth and Scope of Shellshock Implications

Worldwide, Shellshock conversations have quieted to a dull roar even though the vulnerability remains an ongoing threat to networks. By design, the sequence is simple to inject into an exploitable operating system, and because the attack consists of so few Bash commands, determining whether a system has already been exploited can be difficult. Even verifying that a system has not been exploited does not end the problem: the degree to which Shellshock can cause harm is yet to be determined, and experts are still unsure of its full potential. A look at the full scale of this issue, both today and into the future, brings with it a few main points that must be remembered:

  1. The Shellshock vulnerability is not limited to Unix- or Linux-based systems. Android devices, OS X devices, a majority of DSL/cable routers, security cameras, standalone webcams, and other easily overlooked IoT ("Internet of Things") devices, such as "smart" TVs and appliances, most likely run an embedded version of Bash. Many of these devices will therefore need to be updated and patched after the systems essential to business operations are secured. Most individuals, even well-informed ones, may not know which of the devices they maintain use Bash or which version of Bash those devices are running.
  2. Speaking of Bash versions, Shellshock affects all versions of Bash through version 4.3, meaning twenty-five years of Bash releases are exploitable by the vulnerability.
  3. Because the vulnerability operates as a code injection attack, the damage is compounded by the fact that Bash keeps executing commands after the malicious code has been injected, exactly as the utility was designed to do. Hijacked systems can be affected in different ways depending on the commands attackers execute after gaining access. Once a system has been compromised, hackers can run any commands they choose, and historically, hackers have proven to be nothing if not creative.
  4. The fundamental design of the attack implies that Shellshock will remain an issue for the foreseeable future. A system is considered vulnerable if an outdated version of Bash is installed and Bash can be reached either directly from the web or via another web-accessible service running on the system. Until vulnerable systems are either taken down completely or patched and secured, the vulnerability remains a threat to networks everywhere.
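
In line with point 2, a quick first check on any Unix-like system is simply to ask the installed Bash for its version:

```shell
# Print the installed Bash version string. Releases through 4.3
# contained the Shellshock flaw until vendor patches were applied.
bash --version | head -n 1
```

Note that vendors backported fixes rather than bumping major versions, so a 4.3-or-earlier version string alone does not prove vulnerability; the environment-variable check circulated with the disclosure is the definitive test.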

Best Practices to Proactively Guard Your Information Systems

Shellshock appears as cataclysmic as a threat can be. Nevertheless, there are several actions that can be taken to guard systems against it. Because Shellshock is a wide-reaching threat, it has drawn proportionate media and expert attention, prompting network administrators and security personnel to act quickly to secure exploitable systems. Determining whether a system is affected is a straightforward process of simple commands, and once vulnerable systems are identified, patches, updates, and signatures are readily available for all platforms.

There is also good news for specific platforms. Apple Inc. reported that most OS X and iOS users were not at risk, despite running an exploitable version of Bash, because other controls on those systems limit exposure; Android reported that its devices are not at risk for similar reasons. And while Windows has historically been riddled with weaknesses to serious threats, Bash is not a native utility on Windows operating systems, so Windows-based systems become vulnerable only when they share a network with, or are serviced by, systems or VMs running exploitable operating systems.

TraceSecurity suggests a number of actions for those who have systems on their networks that are susceptible to the Shellshock vulnerability:

  1. Most importantly, the firmware, operating systems, Bash packages, and IPS signatures for all exposed devices should be updated immediately.
  2. Management and IT personnel should stay informed about the Shellshock issue; the scope of this vulnerability is yet to be determined, and it will remain a serious threat well after Shellshock is no longer the topic of conversation.
  3. Maintaining a working knowledge of the organization's IT environment is essential to a secure network. For example, knowing that websites hosted within the network use CGI confirms that the host systems are exposed. Conversely, if none of the company websites use CGI, disabling CGI functionality on network devices is a simple step that protects systems from attacks that exploit CGI.
  4. Continuously monitoring network activity is a useful practice, since an attacker who enters the IT environment will almost inevitably cause inconsistencies in network traffic.
  5. Firewalls, IDS, IPS, and other controls in place to compensate for open ports in system applications must be verified on a regular basis.
  6. TraceCSO customers with contracts that include network scanning functionality can run a dedicated network scan that will identify all network devices vulnerable to Shellshock. This scan can serve as the first step towards comprehensively patching all affected systems and quickly securing your network against Shellshock.

As always, TraceSecurity is proud to serve as a resource to those who have questions or concerns about how to protect IT environments from this vulnerability as well as any other potential threats. If you have any questions please contact your Delivery Director or your Business Development Manager.


Tools for Your Vulnerability Management Program

Bobby Methvien, Information Security Analyst and Security Services Manager

The largest threats to complex networks are those unknown to IT personnel. As a first line of defense against system and security-related vulnerabilities, and as part of an ongoing vulnerability management program, IT must conduct assessments of its information systems. The goal of a vulnerability management program is to reduce risk within an organization by identifying and resolving vulnerabilities in its IT systems and on its internal and external networks.

Bring IT System Vulnerabilities into View

Vulnerability scanners are tools that IT personnel use to scan many remote systems against thousands of vulnerability signatures in a short period of time. The results of a scan enable IT to coordinate a resolution for any vulnerabilities identified. Over time, as IT resolves identified vulnerabilities, additional scans will turn up only a handful of new ones. This is the point at which IT personnel become confident in the security of the network and need to put that confidence to the test.
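
As a toy illustration of what a scanner automates across thousands of signatures and entire address ranges, the sketch below probes a single TCP port using Bash's `/dev/tcp` pseudo-device. The host (`127.0.0.1`) and port (`22`) are placeholders; real scanners layer service fingerprinting and signature matching on top of this basic reachability check, and you should only probe networks you are authorized to test.

```shell
# Attempt a TCP connection to one host/port with a 2-second limit.
# A vulnerability scanner repeats checks like this across whole
# subnets, then matches discovered services against its signatures.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/22' 2>/dev/null; then
    echo "port appears open"
else
    echo "port appears closed or filtered"
fi
```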

Pen Test Your Internal and External Network

Once IT personnel have significantly reduced the number of vulnerabilities identified through scans, a penetration test should be performed. The penetration test acts as an additional control and is used to identify system and security-related risks that affect an organization's internal and external networks. Penetration tests attempt to compromise an organization's hosts, web applications, network, or sensitive data.

Penetration tests have short- and long-term benefits. In the short term, organizations can act on the findings of the assessment; over the long term, they can update their processes so that similar risks do not recur.

Penetration tests should be performed by someone who is not responsible for the daily management of the network and its information systems. The reason is that those who manage a system day to day tend to accept the explanation for why it was configured a particular way. We often hear IT personnel say, "I was told it has to be this way, so that's the way I configured it." One common example: "Our software vendor requires that we configure all users as local system administrators." As a result, IT personnel make a key information security mistake and assign the "Domain Users" group to the local "Administrators" group.


Vulnerability scanning and penetration testing are both services used to identify risks that may affect an organization's information systems from both its internal and external networks. In addition, these services help organizations meet compliance requirements from the FFIEC, the PCI DSS, and other regulatory authorities.
