Sizing Up Your Cyberrisks

Every year the damage done by cyberbreaches seems to grow. It looks as if hackers just keep getting more effective. But cybersecurity experts Parenty and Domet think there’s another reason companies aren’t successful at thwarting cyberattacks: They don’t truly understand their cyberrisks, because they focus only on their technological vulnerabilities and relegate responsibility for managing threats entirely to security and IT staff. And because of that, they don’t muster the right defenses.

A better approach involves getting input from a broad range of employees and, especially, board members and company leaders. It starts with identifying the company’s critical business activities and the risks to them. Instead of asking how your computer systems could be attacked, for instance, ask how an attack could disrupt your supply chain, expose your trade secrets, or make you fail to meet your key obligations. Next, inventory the systems supporting those activities, examine their vulnerabilities, and identify potential attackers.

This information, which should be gathered into a cyberthreat narrative for each activity, will allow you to plan, prioritize, and make more-targeted cybersecurity investments.

Despite the billions spent on cybersecurity, the damage done by breaches keeps growing—to a large extent because companies don’t recognize or understand their critical cyberrisks.

Too many firms focus only on technological vulnerabilities. Responsibility for cybersecurity then defaults to IT specialists, yielding an ill-prioritized list of possible attacks. Tech jargon dominates discussions of risk, and senior leaders and boards can’t participate meaningfully in them.

A more fruitful approach is to identify your critical business activities, the risks to them, the systems supporting them, those systems’ vulnerabilities, and potential attackers. Leaders and staff throughout the firm can participate in this process, and overall responsibility for cybersecurity shifts to senior executives and boards.

Over the past decade the costs and consequences of cyberbreaches have grown alarmingly. The total financial and economic losses from the 2017 WannaCry attack, for instance, were estimated to reach $8 billion. In 2018 Marriott discovered that a breach of its Starwood subsidiary’s reservation system had potentially exposed the personal and credit-card information of 500 million guests. Hackers seem to keep getting more effective. But in our experience as consultants to clients across the globe, we’ve found another reason that companies are so susceptible to threats from hacking: They don’t know or understand their critical cyberrisks, because they’re too focused on their technological vulnerabilities.

When cybersecurity efforts address only technology, the result is company leaders who are poorly informed and organizations that are poorly protected. Discussions of cyberthreats end up being filled with specialized tech jargon, and senior executives can’t participate meaningfully in them. The responsibility for addressing risks then gets relegated entirely to cybersecurity and IT staff, whose attention falls mainly on corporate computer systems. The outcome tends to be a long, ill-prioritized list of mitigation tasks. Since no company has the resources to fix every cybersecurity problem, important threats can go unaddressed.

A more fruitful approach is to focus cybersecurity on the potential impact of threats on a business’s activities. Say you’re an executive at a chemical company. Instead of asking what cyberattacks might be possible on your computer systems, ask: How could a cyberattack disrupt your supply chain? Or expose your trade secrets? Or make you fail to meet your contractual obligations? Or threaten human life? That adjustment might seem minor, but when leaders start with crucial activities, they can better prioritize the development of cyberdefenses.

A CEO we worked with, Richard Lancaster of CLP, Asia’s third-largest electricity provider, described the shift in mindset this way: “Initially, we viewed cyberrisks primarily as an IT issue. Over time we realized that what was really vulnerable was our electric grid and generating plants. Now we recognize that cyberrisk is really business risk—and my job as CEO is to manage business risk.” With this perspective, responsibility shifts from IT to senior executives and boards, who must take an active role and ensure that cybersecurity teams focus on the right threats.

Identifying and addressing cyberrisks is a social process. To accurately assess where the most important ones lie, you must consider the viewpoints and opinions of a wide range of employees. By involving a broad group, you’ll build a common understanding of the critical facts and details early on, which will enable you to reach consensus when you subsequently need to manage the risks.

To help companies organize and share the relevant information with a wide audience, we’ve developed a tool we call a cyberthreat narrative. It addresses the four parts of the story of a potential cyberattack: a key business activity and the risks to it; the systems that support that activity; the potential types of attacks and possible consequences; and the adversaries most likely to carry attacks out. Outlining details about all four will help companies recognize and prioritize their risks and prepare remedial actions.
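
One simple way to make those four elements concrete is to keep them in a shared, structured record for each critical activity. The sketch below is purely illustrative; the format and field names are our own, not a prescribed template, and the sample entries draw on the chemical-company example above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CyberthreatNarrative:
    """Illustrative record of the four elements of a cyberthreat narrative."""
    business_activity: str          # the key business activity in question
    business_risks: List[str]       # how the activity could fail and harm the company or stakeholders
    supporting_systems: List[str]   # computer systems the activity relies on, with their locations
    attack_types: List[str]         # basic kinds of attack and their possible consequences
    likely_adversaries: List[str]   # who might attack, and their motivations and capabilities

# Hypothetical example for the chemical company discussed above
resin_narrative = CyberthreatNarrative(
    business_activity="Manufacture of polyester resins",
    business_risks=[
        "Plant disruption halts resin production and reduces revenue",
        "Release of poisonous chemicals into the environment",
    ],
    supporting_systems=["Plant industrial control systems (Plant 3)", "Production-scheduling software"],
    attack_types=["External hack that installs malware", "Employee misuse of computer privileges"],
    likely_adversaries=["Competitor seeking trade secrets", "Disgruntled employee"],
)
```

Keeping one such record per critical activity gives everyone involved a common, jargon-free reference point.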

The people in your cybersecurity group should take charge of developing cyberthreat narratives, but they should solicit contributions from across the organization: company leaders, the operational employees involved in each critical activity, the staff who maintain the supporting computer systems, and specialists from functions such as legal, finance, and compliance.

Let’s look now at each element of a cyberthreat narrative, how to develop it, and who should be involved.

To identify critical business activities and the risks to them, the cybersecurity team should interview the company’s leaders; examine its written statements of risk tolerance, such as those found in annual reports; and consider company objectives, such as revenue targets or growth in new markets. A revenue goal, for example, could be dependent on new-product development or expanded service offerings. Entering a new country might be essential to increasing the customer base. Critical activities can be outside the organization, relate to internal operations, or involve the company’s strategic future. For the chemical company, a critical activity could be, say, the manufacture of polyester resins, a specialty product in high demand.

How important an activity is will vary by industry and company. Customer support is a low-risk activity in some industries, such as consumer software or discount retail. But in others, like the gaming industry, customer relationships are paramount. Casinos in Macao, for example, rely on a small segment of VIP customers for more than 54% of their combined gross gaming revenue. Risks to customer relationship management therefore directly threaten the casinos’ bottom lines.

The number of critical business activities a company has, and therefore the number of cyberthreat narratives it should develop, also vary from company to company.

To size up your organization’s risks, think about how each key activity could fail in a way that damages your company. For example, in the case of the chemical maker, a disruption to its plant operations could prevent it from producing resins, which could reduce its revenue. Also examine the risk of collateral damage to your customers or other stakeholders—a release of poisonous chemicals into the environment, for example, or the loss of confidential customer information such as passwords and credit-card data.

Because boards represent the fiduciary interests of the company’s owners and are charged with adopting a long-term view of the company, we believe that they have the authority and responsibility to oversee efforts to identify cyberrisks. When they do, it makes a material difference: Simply by raising questions about the four elements of cyberthreats, directors can prompt companies to pay more attention to critical risks.

For example, if you’re a board member, ask for assurance that the company has identified and documented its most critical business activities, the benefits they provide, and the most significant risks they face. You should also confirm that the company’s leaders have participated in this process.

Similarly, you will want assurance that the company has up-to-date inventories of the computer systems those business activities rely on. Though you don’t need to review the details of the inventories yourself, at your discretion you may seek your own independent validation of them or instruct senior executives to arrange their own review. You should also confirm that the company has processes and tools for keeping the inventories current and ask for a few examples of updates that prove those processes are in use.

And so on down the line: Directors should press company leaders about whether they understand the types of attacks that could threaten critical business activities, what the possible impact to both the company and its stakeholders might be, and who potential cyberadversaries are and what capabilities and motivations they might have. The company should brief the board regularly on its current cyberrisk posture for each critical business activity.

Bear in mind that what poses a significant risk for one company may not for another. Although the disruption of resin production might be harmful to one chemical firm, it may not be for another chemical manufacturer whose resins aren’t in demand, make only a negligible contribution to profits, or can be produced at alternative plants.

Your company can’t mount an effective cyberdefense if it doesn’t know what it needs to protect. So you have to catalog your computer systems and the services and functionality they provide for each activity in question. This process should be kick-started by the operational employees involved in the activity, because they know what software they use and what the consequences will be if those programs malfunction. The employees who maintain the computer systems should also participate, since they have a broader picture of the technology supporting this software. For general-purpose computers, this typically means IT staff; for industrial control systems, it means engineers. The inventory should note the physical locations of the systems so that cyberincident-response staffers know where to go to fix things in the event of an attack.

While there are products that assist IT departments with automated inventorying of computers and software, they can’t identify which assets are most important. By cataloging them on the basis of business activity specifically, a company can prioritize computer-vulnerability remediation and enhanced protections effectively.
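
As a rough illustration of why that tagging matters, the sketch below ranks remediation work by the importance of the business activity each system supports; the systems, activities, and criticality scores are hypothetical, and an automated inventory tool alone cannot produce this ordering.

```python
from typing import Dict, List

# Assumed criticality of each business activity (1 = low, 5 = highest); the scores are hypothetical
activity_criticality: Dict[str, int] = {
    "Manufacture of polyester resins": 5,
    "Customer support": 2,
}

# Output of an automated inventory tool, tagged by hand with the activities each system supports
inventory: List[dict] = [
    {"system": "Plant control server", "location": "Plant 3",
     "activities": ["Manufacture of polyester resins"], "open_vulnerabilities": 4},
    {"system": "Help-desk ticketing app", "location": "Headquarters",
     "activities": ["Customer support"], "open_vulnerabilities": 9},
]

def remediation_priority(entry: dict) -> int:
    """Rank by the most critical activity a system supports, then by its open vulnerability count."""
    return max(activity_criticality.get(a, 0) for a in entry["activities"]) * 100 + entry["open_vulnerabilities"]

# The plant control server outranks the ticketing app despite having fewer open vulnerabilities
for entry in sorted(inventory, key=remediation_priority, reverse=True):
    print(f'{entry["system"]} ({entry["location"]}): priority {remediation_priority(entry)}')
```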

Next the team should outline all the types of attacks that could disrupt each critical activity, describing what would be required for attacks to succeed and what the potential consequences might be.

At the most basic level, cyberattacks exploit vulnerabilities in computer systems. Malware attacks, for instance, use malicious software to take advantage of programming mistakes in applications. (This was the technique hackers used in the WannaCry incident.) Your cybersecurity staff can and should identify the kinds of techniques that could target vulnerabilities in your crucial computer systems.

It’s important to note that attacks aren’t always sophisticated or technically complex. One vulnerability common to every computer system is an administrator’s almost complete control over the information and applications on it. This power is necessary for proper operation and maintenance, but any administrator can abuse it.

A cyberattack can be executed in countless ways, and it isn’t practical or useful to enumerate them all. If you identify basic types of attack, such as an external hack that installs malware or an employee who misuses computer privileges, that’s enough.

Understanding what an adversary needs to pull off a cyberattack is vital to building your defenses. Your cybersecurity group and the operational staff involved in the critical activities can identify the specific requirements, but most fall into one of three types: knowledge of how the targeted systems work, the tools and equipment needed to carry out the attack, and a position (physical proximity or network access) from which to launch it.

One Southeast Asian bank we worked with had previously suffered a cyberattack that resulted in a massive debit-card fraud. The investigation revealed that the hackers had to have known the format of Visa and Mastercard authorization codes and how to configure a credit-card terminal. The tools they needed included several of the terminals and a database of debit-card account numbers. They also had to be in the local vicinity to coordinate with merchants who were complicit in the fraud, but they didn’t necessarily have to work at the bank or physically enter the bank building.

Executive leadership and senior management are well positioned to identify the fallout from disruptions to the key business activities and should guide your cybersecurity group on this task. Operations and systems staff can point out additional consequences; specialists from other departments, such as legal, finance, and compliance, can spot potential collateral damage. A simple set of “What if…?” questions will make conversations with all these players fruitful. For example, What would happen to the provision of care if a hospital’s patient records were no longer accessible because of a ransomware attack? For England’s National Health Service in the aftermath of WannaCry, the answer was the cancellation of thousands of appointments and operations.

Some consequences go beyond direct financial costs. The NotPetya cyberattack of 2017 interrupted operations at numerous large companies worldwide, causing total estimated losses of up to $300 million at AP Moller-Maersk and $400 million at FedEx. Pharmaceutical giant Merck estimated NotPetya’s impact on its business at $870 million because of both direct costs and lost revenue. In addition, the shutdown resulted in low inventories of a Merck vaccine that prevents certain types of cancer.

Who’s out to get you? Identifying potential perpetrators, as well as their motivations and capabilities, will help you assess the likelihood of an attack and develop the controls needed to thwart it. Your adversaries could be countries, criminal organizations, competitors, disgruntled employees, terrorists, or advocacy groups. Don’t underestimate their sophistication: Advanced hacking tools are widely available to many.

Company leaders and the operations staffers involved in critical business activities are best at identifying possible adversaries, because they’re most familiar with what might motivate attackers and what they could gain. A good place to start is to ask what your company has that could be of value to someone else. For example, a competitor could be interested in your R&D and trade secrets, whereas a criminal organization would be more interested in stealing customer financial records to sell on the black market.

Companies also must consider potential adversaries’ larger commercial context. Casino managers in Macao had decided not to encrypt the network connections they used to transmit their VIP client data to a centralized operations center; the need to do so had simply never been identified. But when we asked the managers to think through who might gain from an attack, they realized that the telecommunications network itself was owned by a conglomerate that included their largest competitor.

Even customers could be cyberadversaries. Executives at AMSC, a developer of software for controlling wind turbines, were stunned when Sinovel, one of its largest customers, suddenly canceled all payments on current and future contracts, which were worth some $800 million. An investigation revealed that Sinovel had managed to steal AMSC’s software and deploy it with more than a thousand new turbines. As a result of this intellectual property theft, AMSC reported a loss of more than $186 million in fiscal year 2010. The company’s total losses from the theft came to $550 million—only a fraction of which has been recovered. The scheme also cost AMSC’s shareholders $1 billion in equity and forced the organization to shed 700 jobs—over half its global workforce. AMSC has yet to return to profitability.

Sometimes a company’s industry or the way a firm conducts business provokes an attack. Environmental groups might target companies with a bad record of polluting. Edward Snowden, a former NSA contractor, stole information from the agency to expose surveillance programs that it had publicly denied. Actions such as layoffs or plant closures can also motivate employees to retaliate by misusing their computer privileges. And there will always be random individuals with a personal agenda to pursue or a reputation to enhance.

Your company can suffer consequences from an attack even if it is not the direct target. Infrastructure, for example, is increasingly the object of cyberattacks. Consider the attacks on the power infrastructure in Ivano-Frankivsk in 2015. If, as suspected, Russia was behind the incidents, its motivations probably had nothing to do with the power companies themselves—or with their customers, who suffered as a result of interruptions to the power supply. The companies most likely were targeted simply because they’re located in Ukraine, a country with which Russia has a hostile relationship. When evaluating who might want to disrupt your systems, you must look beyond your company to the broader commercial and political world in which it operates.

Let’s now look in depth at one cyberattack and how the four elements of the cyberthreat narrative could have captured relevant information that would have allowed the organization involved to avoid it.

The shire of Maroochy is a tourist destination about a hundred kilometers north of Brisbane, Australia. The area is one of outstanding natural beauty and ecological significance, with long white beaches and subtropical rain forests with creeks and waterfalls.

At the turn of the millennium the shire’s water and sewage system was overseen by Maroochy Water Services, which collected, treated, and disposed of 35 million liters of wastewater daily. In late January 2000 the system that managed its wastewater pumping stations started losing control over the pumps and issuing false alarms. By the time a vendor concluded that the system’s computers were under attack, raw sewage had backed up out of the stations and flowed throughout the shire, including into the PGA championship golf course of what had previously been a five-star Hyatt Regency resort. In local parks, waterways reeked and turned black, and marine life died. The stench was horrible. The attacks went on for three months, until the police apprehended the perpetrator after a car chase near one of the pumping stations.

Hindsight makes it easy to see what went wrong, but let’s reconstruct what a cyberthreat narrative for the utility might have been.

For the organization, treating wastewater was a critical business activity. Maroochy’s systems had 142 stations pumping sewage to the treatment facility. Because of elevation variations in the shire, if the pumps malfunctioned, there was a good chance wastewater would back up into the area’s pristine parklands and tourist areas.

The supporting computing systems for the pumps included a central operations-management system and the control equipment inside the pumping stations. From the central system, operators could turn individual pumping stations on or off and change the pumping rates. Pumping stations could also be managed locally using equipment that could manipulate the central system as well.

These supporting systems had two cybersecurity vulnerabilities: anyone could set up a network connection to the equipment, and once connected, no password was required to log on.

To be successful, a potential attacker would need an understanding of how the equipment worked, which could be gained through experience or by reading the manuals, and the radio frequency used to communicate with the equipment, which was easy to find in company documentation. The attacker would need several computers (including one that was the same variety as those used in the pumping stations), network cables, and two-way radio equipment. And to connect to the control systems of an individual pumping station, he or she would need to be within radio range but would not need to physically break into a pumping station.

The cyberadversary in this case turned out to be a former employee of the vendor that provided the pumping-station control equipment. After a contentious stint at the vendor, he had applied twice for a job with Maroochy Water Services but wasn’t hired. As a result, he became disgruntled and resentful and wanted revenge on both companies. He had stolen one of the pumping-station computers, and because he knew how the control system worked, he was able to use it and radio equipment to communicate with the individual pumping stations. He then manipulated them to take over the central operations-management system and wreak havoc.

If Maroochy Water Services executives had worked with their staff to develop a narrative for wastewater treatment, they would have discovered and understood the significant risks they faced. While we can assume that they would have recognized that wastewater treatment was a critical activity, the investigation and analysis involved in creating a narrative would have given them and their IT security staff a clear idea of how cyberattacks could undermine supporting computer systems and cause the pumps and other wastewater treatment equipment to fail. And they would have had an idea of what it would take for cyberattacks to succeed and who might be behind them.

They also would have had a foundation for mitigating those risks. The Maroochy Water Services IT security staff members would have known the two vulnerabilities that needed to be addressed to prevent the wastewater crisis, and they could have explained them to executives and the board in one jargon-free sentence: “To prevent a cyberattack on our pumping stations, we need to require that people log on to station computers and restrict who can connect to their networks.”
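
As a rough sketch of what those two controls might look like in practice, consider the simplified station-side command handler below. The addresses, operator credentials, and commands are hypothetical; this is not the Maroochy vendor’s software, just an illustration of requiring a logon and restricting who can connect.

```python
import hashlib
import secrets

# Hypothetical values: only the central operations-management system may connect
AUTHORIZED_ADDRESSES = {"10.0.5.11"}
# Hypothetical operator account (in practice, credentials would not be hard-coded)
OPERATOR_PASSWORD_HASHES = {"operator1": hashlib.sha256(b"change-me").hexdigest()}

def handle_command(source_address: str, operator: str, password: str, command: str) -> str:
    # Control 1: restrict who can connect to the station's network
    if source_address not in AUTHORIZED_ADDRESSES:
        return "REJECTED: unauthorized network address"
    # Control 2: require a logon before accepting any pump command
    expected = OPERATOR_PASSWORD_HASHES.get(operator)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if expected is None or not secrets.compare_digest(expected, supplied):
        return "REJECTED: logon failed"
    # Only then act on the command (e.g., "PUMP_ON", "PUMP_OFF", "SET_RATE 40")
    return f"ACCEPTED: {command}"

print(handle_command("10.0.5.11", "operator1", "change-me", "PUMP_OFF"))    # accepted
print(handle_command("203.0.113.7", "operator1", "change-me", "PUMP_OFF"))  # rejected: bad address
```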

Identifying cyberrisks is an ongoing process: As your business evolves and the computing systems that underlie it change, your company will face new vulnerabilities. To spot these, it must have well-defined checkpoints within its change management processes at which it evaluates cyberrisk.

But identifying your company’s most important cybervulnerabilities is only the first step. Knowing what the risks are will allow you to prioritize potential attacks, identify controls that will help prevent them, and create and, if needed, execute a practical remediation plan. Good digital stewardship requires moving business risks—and business leaders—to the center of all these conversations.

Thomas J. Parenty is an international cybersecurity expert who has worked at the National Security Agency and advised other organizations across the globe. He is a cofounder of the cybersecurity firm Archefact Group and a coauthor of A Leader’s Guide to Cybersecurity (Harvard Business Review Press, 2019).

Jack J. Domet is a management expert who focuses on helping multinational corporations adapt to shifts in technology, globalization, and consumerism. He is a cofounder of the cybersecurity firm Archefact Group and a coauthor of A Leader’s Guide to Cybersecurity (Harvard Business Review Press, 2019).
