Risk data is imported from an Excel workbook and run through a simulation model to estimate the expected losses for each scenario. The results of these simulations are used to create a detailed analysis and a formal risk report. A starter analysis report, overview dashboard, and a sample Shiny application are all included in the toolkit.

Evaluator takes a domain-driven and framework-independent approach to strategic risk analysis. For information security analysis – where Evaluator originated – ISO, COBIT, HITRUST CSF, PCI-DSS or any similar model may be used. If you are able to describe the domains of your program and the controls and threat scenarios applicable to each domain, you will be able to use Evaluator!

Instructions

While not required, a basic understanding of the OpenFAIR methodology and terminology is highly recommended.

Follow these steps to complete an analysis:

  1. Define your controls and risk scenarios
  2. Import and validate the scenarios
  3. Encode the qualitative labels into quantitative parameters
  4. Run the simulations
  5. Summarize the simulation outputs
  6. Analyze the results

Define Your Controls and Security Domains

Evaluator needs to know the domains of your risk universe. These are the major buckets into which you divide your program. Examples of domains include Physical Security, Strategy, Policy, Business Continuity/Disaster Recovery, and Technical Security. Out of the box, Evaluator comes with a demonstration model based upon the HITRUST CSF.

To start a fresh analysis using the default starter files, run create_templates("~/evaluator"). This creates an evaluator directory in your home directory with an inputs subdirectory containing a survey tool (Excel), a comma-separated file defining the domains used in the survey tool, and a file defining the risk tolerance levels for your organization.
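
Creating the starter files as described above (the "~/evaluator" location is just an example; any writable directory will do):

create_templates("~/evaluator")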

If you have a different domain structure (e.g. ISO 2700x, NIST CSF, or COBIT), edit the inputs/domains.csv file to include the domain names, the domain IDs, and a shorthand abbreviation for each domain (such as POL for the Policy domain).

Identifying the controls (or capabilities) and risk scenarios associated with each of your domains is critical to the final analysis. These elements are documented in the Excel workbook, which includes one tab per domain, with each tab named after the corresponding domain ID defined in the previous step. Each tab consists of a controls table and a threats table.

Controls Table

The key objectives of each domain are defined in the domain controls table. While the specific controls will be unique to each organization, the sample spreadsheet included in Evaluator may be used as a model. Avoid copying every technical control from, for example, ISO 27001 or COBIT, since most control frameworks are too fine-grained to provide the high-level overview that Evaluator delivers. In practice, 50 controls or fewer will usually be sufficient to describe most organizations. Each control must have a unique numeric ID and should be assigned a difficulty (DIFF) score that ranks the maturity (effectiveness) of the control on a CMM scale from Initial (lowest score) to Optimized (best in class).
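
As an illustration only of the shape of this table (the column names below are hypothetical; the sample workbook shipped with Evaluator is the authoritative layout), each row pairs a unique numeric ID with a control description and a CMM-style DIFF rating:

# hypothetical sketch of controls rows; consult the sample workbook for the real column layout
tibble::tribble(
  ~id, ~control,                              ~diff,
  1L,  "Security awareness training program", "Initial",
  2L,  "Centralized logging and monitoring",  "Optimized"
)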

Threats Table

The threats table consists of the potential loss scenarios addressed by each domain. Each scenario is made up of a description of who did what to whom, the threat community that executed the attack (e.g. external hacktivist, internal workforce member, third party vendor), how often the threat actor acts upon your assets (TEF), the strength with which they act against your assets (TCap), the losses incurred (LM), and a comma-separated list of the control IDs that prevent the scenario.
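
A similarly hypothetical sketch of a single threat scenario row follows (the scenario text, qualitative labels, and column names are all illustrative; the sample workbook defines the real layout):

# hypothetical threat scenario row; the sample workbook defines the real layout
tibble::tribble(
  ~scenario_id, ~scenario,                                    ~tcomm,                ~tef,     ~tcap,  ~lm,      ~controls,
  1L,           "External hacktivist defaces public website", "External hacktivist", "Medium", "High", "Medium", "1, 2"
)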

Optional Quick Start

As an alternative to manually performing the subsequent steps in this document, the run_analysis.R script placed in the ~/evaluator directory by the create_templates() function can be used to run all of these steps automatically. To use this bootstrap script, simply set a base_dir variable to the ~/evaluator directory and source the script. The run_analysis script will generate a lot of console output. Reading through the rest of this document at least once to familiarize yourself with the steps being executed is strongly recommended.

Using the quick start (run_analysis) script:

# working directory expected by the bootstrap script
base_dir <- "~/evaluator"
# run all of the steps described in this document in one pass
source("~/evaluator/run_analysis.R")

Import and Validate the Scenarios

To extract the contents of the spreadsheet into data files for further analysis, run import_spreadsheet(). Evaluator reads the workbook and saves the results as two comma-separated files in the inputs directory.

domains <- readr::read_csv("~/evaluator/inputs/domains.csv")
import_spreadsheet("~/evaluator/inputs/survey.xlsx", domains, output_dir = "~/evaluator/inputs")

After importing, you are strongly encouraged to run validate_scenarios() to verify there are no data integrity issues. If errors are detected, the validation process aborts and a message summarizing the problem is displayed. Correct the reported errors, reimport, and repeat the validation process until no errors are reported.

qualitative_scenarios <- readr::read_csv("~/evaluator/inputs/qualitative_scenarios.csv")
mappings <- readr::read_csv("~/evaluator/inputs/qualitative_mappings.csv")
capabilities <- readr::read_csv("~/evaluator/inputs/capabilities.csv")
validate_scenarios(qualitative_scenarios, capabilities, domains, mappings)

Encode the Data

The scenarios and capabilities are imported with qualitative labels. encode_scenarios() uses the qualitative mappings to convert these labels into the quantitative parameters required by the simulation.

quantitative_scenarios <- encode_scenarios(qualitative_scenarios, 
                                           capabilities, mappings)

Run the Simulations

Once the quantitative scenarios are ready for simulation, run run_simulations(quantitative_scenarios). By default, Evaluator puts each scenario through 10,000 individual simulated years, modeling how often the threat actors come into contact with your assets, the strength of the threat actors, the strength of your controls, and the losses involved in any situation where the threat strength exceeds your control strength. This simulation process is computationally intense and can take some time to run.

# iterations sets the number of simulation runs per scenario; per the text above,
# the default is 10,000 simulated years, so 100 here gives a quick, lower-fidelity run
simulation_results <- run_simulations(quantitative_scenarios, 
                                      iterations = 100L)
saveRDS(simulation_results, file = "~/evaluator/results/simulation_results.rds")
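
Note that saveRDS() will fail if the target directory does not exist. If your setup does not already have ~/evaluator/results (create_templates() may or may not have made it, depending on the version installed), create it before saving:

# defensively create the results directory if it is missing
dir.create("~/evaluator/results", showWarnings = FALSE, recursive = TRUE)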

Summarize Results

The raw simulation results typically require summarization for the default reporting functions. Summarization is performed at both the per-scenario and per-domain level, and these summary files are in addition to the previously generated full results. An analyst can always access the full simulation results if desired.

The following code block produces the scenario_summary.rds and domain_summary.rds files used for reporting in the final section.

summarize_to_disk(simulation_results = simulation_results, 
                  results_dir = "~/evaluator/results")

Analyze the Results

Several analysis functions are provided, including a template for a technical risk report. The risk report can be generated via generate_report(), which creates a pre-populated risk report that identifies key scenarios and generates initial plots the analyst can use to prepare a final report.

Other reporting tools include risk_dashboard(), which produces a single-page static web dashboard view of all the scenarios and their results.

For interactive exploration, use explore_scenarios() to launch a local copy of the Scenario Explorer application. The Scenario Explorer app may be used to view information about the individual scenarios and provides a sample overview of the entire program.

For more in-depth analysis, the following data files may be useful for exploration and analysis, either from within R or with an R-compatible external business intelligence program such as Tableau:

Data File               Purpose
simulation_results.rds  Full details of each simulated scenario
scenario_summary.rds    Simulation results, summarized at the scenario level
domain_summary.rds      Simulation results, summarized at the domain level

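From within R, these files can be read back with base R's readRDS(); a minimal sketch, assuming the default ~/evaluator/results location used throughout this document:

# load the summarized results for ad hoc exploration
scenario_summary <- readRDS("~/evaluator/results/scenario_summary.rds")
domain_summary <- readRDS("~/evaluator/results/domain_summary.rds")
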
# Explorer
explore_scenarios(input_directory = "~/evaluator/inputs", 
                  results_directory = "~/evaluator/results")

# Risk Dashboard
risk_dashboard(input_directory = "~/evaluator/inputs", 
               results_directory = "~/evaluator/results", 
               "~/evaluator/risk_dashboard.html")

# Sample Report
# (the %>% pipe requires magrittr or a package that re-exports it, such as dplyr)
generate_report(input_directory = "~/evaluator/inputs", 
                results_directory = "~/evaluator/results", 
                "~/evaluator/risk_report.html") %>% rstudioapi::viewer()

To view the same report as a Word document for editing, specify format = "word":

generate_report(input_directory = "~/evaluator/inputs", 
                results_directory = "~/evaluator/results", 
                "~/evaluator/risk_report.docx", 
                format = "word")