Michael McDonnell / The Cybersecurity Librarian
Last Update: 2023-05-13
I want to better understand the industry landscape and emerging practices in Detection Engineering and Threat Hunting. These are my notes on DE perspectives, frameworks, processes, tools, and people to learn from.
What is Detection Engineering?
Detection Engineering is the popularized term to describe the practice of designing, developing, and maintaining systems for the detection of cyber threats. It is closely related to Threat Hunting and shares frameworks and processes. It is not a discipline of Engineering as defined by many regulatory bodies, but a term popularized through widely accepted usage in cybersecurity. It is used to distinguish between the roles of Threat Intelligence and Incident Response in Security Operations. The specific definition varies, but the term itself is now widely used.
My working definition (subject to change):
Modern Detection Engineering and Threat Hunting are agile threat-informed defense practices that develop and operationalize threat detection analytics. These practices require threat analysis, data engineering, design and management of detection systems, development of detection analytics, and operation/execution of threat detection. They are DevOps practices driven by threat intelligence, enabled by data modeling on large datasets, and increasingly requiring the application of statistical techniques. The product of these practices are analytics: rules, signatures, dashboards, reports, searches, data models, visualizations, and/or “enrichments”. They are both functions of Security Operations and distinct from Incident Response, Vulnerability Management, and Threat Intelligence.
Some perspectives on how to define Detection Engineering/Threat Hunting:
Threat hunting and detection engineering are different specializations, but are closely related. They have the common goal of finding attackers using available data, whether it's the attackers that got past your detections (threat hunting) or the next ones through (detections). – Mark Simos
Detection engineering is the process of identifying threats before they can do significant damage. Detection engineering is about creating a culture, as well as a process of developing, evolving, and tuning detections to defend against current threats. – CrowdStrike
Detection engineering transforms information about threats into detections…. Detection engineering transforms an idea of how to detect a specific condition or activity into a concrete description of how to detect it. – Florian Roth
Enter Threat hunting - the proactive practice of ferreting out those sneaky cyber-rodents. Or, if you insist on a more formal definition, “any manual or machine-assisted process intended to find security incidents missed by an organization’s automated detection systems.” Either way, hunting is a great way to drive improvement in automated detection and help you stay ahead of the attackers. – David Bianco
When I teach threat hunting, I say “The purpose of threat hunting is not to find new incidents. It’s to drive improvement in automated detection.” Put simply, threat hunting is detection R&D (at least at the higher levels of hunting maturity)…. our hunting outputs are not only detections. We update playbooks and other documentation for our detection engineers and (especially) our response teams. Our primary goal is to improve automated detection, but we see IR improvements as important secondary goals. – David Bianco
Detection engineering is by no means limited to the detection of events (activity). It also includes detecting conditions (states), often used in digital forensics or incident response. – Florian Roth
A Threat Detection Engineer is someone who applies domain knowledge on designing, building or maintaining detection content in the form of detections generating alerts; or interfaces in the form of dashboards or reports supporting the security monitoring practice within an organization. – Alex Teixeira
Detection engineering is a process—applying systems thinking and engineering to more accurately detect threats. The goal is to create an automated system of threat detection which is customizable, flexible, repeatable, and produces high quality alerts for security teams to act upon. – Laura Kenner, uptycs
Detection engineering functions within security operations and deals with the design, development, testing, and maintenance of threat detection logic. – Mark Stone, panther
Detection engineers design and build security systems that constantly evolve to defend against current threats. – Josh Day, gigamon
Threat hunting is an active means of cyber defense in contrast to traditional protection measures, such as firewalls, intrusion detection and prevention systems, quarantining malicious code in sandboxes, and Security Information and Event Management technologies and systems. Cyber threat hunting involves proactively searching organizational systems, networks, and infrastructure for advanced threats. The objective is to track and disrupt cyber adversaries as early as possible in the attack sequence and to measurably improve the speed and accuracy of organizational responses. Indications of compromise include unusual network traffic, unusual file changes, and the presence of malicious code. Threat hunting teams leverage existing threat intelligence and may create new threat intelligence, which is shared with peer organizations, Information Sharing and Analysis Organizations (ISAO), Information Sharing and Analysis Centers (ISAC), and relevant government departments and agencies. – NIST SP 800-53 v5: RA-10
Detection Engineering sits at the intersection of InfoSec, Cloud Infrastructure, DevOps, and Software Development. – Jack Naglieri
If you want to scale your detection program, you need to hire a Detection Engineering team that can complement each other in the following areas: 1. Subject matter expertise in security, 2. Software engineering, 3. Statistics – Zack Allen
Detection engineering is a new approach to threat detection. More than just writing detection rules, detection engineering is a process — applying systems thinking and engineering to more accurately detect threats. Detection Engineering involves the research, design, development, testing, documentation, deployment and maintenance of detections/analytics and metrics. - Sohan G
Detection Engineering is a capability that researches and models threats in order to deliver modern and effective threat detections. What is Detection Engineering? Goal is to provide automated analytical capabilities {hunts} that can capture and detect the behaviours and TTPs of adversaries. – Atanas Viyachki, Senior Threat Hunter@Wealthsimple
Detection engineering is the continuous process of deploying, tuning, and operating automated infrastructure for finding active threats in systems and orchestrating responses. Indeed, both the terms “detection” and “engineering” carry important connotations when it comes to the new approaches to security we’re discussing. – Jamie Lewis
Are Threat Hunting and Detection Engineering the Same Thing?
A number of emerging frameworks and definitions for Threat Hunting overlap with Detection Engineering. For this reason, I will include Threat Hunting models, methods, and frameworks. The newest thinking shows fewer differences, mostly around goals and who-does-what.
There appear to be more similarities than differences in processes, with the proposed differences depending on the size of the organization. For example, large MSPs see a greater distinction than a mid-size financial institution would: the MSP has entire departments who must detect threats, and labour must be divided. A single organization with an Information Security department of 20 people might want one or two people to take on all detection development tasks.
- Cyborg Security. (2023, May 19). Guarding the Gates: The Intricacies of Detection Engineering and Threat Hunting.
- Zendejas, D. (2023, May 16). Detection Engineering vs Threat Hunting. Danny’s Newsletter.
- Teixeira, A. (2023, February 25). The dotted lines between Threat Hunting and Detection Engineering.
- Kostas, T. (2023, February 21). Detection Engineering VS Threat Hunting. Threat Hunting Series.
- Wickramasinghe, S. (2023, February 21). Threat Hunting vs. Threat Detecting: Two Approaches to Finding & Mitigating Threats. Splunk Blogs.
- Delgado, M. (2021, September 02). 4 Differences Between Threat Hunting vs. Threat Detection. WatchGuard Blog.
Perspectives
- Florian Roth : About Detection Engineering
- Alex Teixeira : What does it mean to be a threat detection engineer?
- Mark Simos : Typical SecOps Role Evolution
- Dave Bianco :
- CrowdStrike : What is Detection Engineering?
- GitHub : Awesome Detection Engineering
- Uptycs : What Is Detection Engineering?
- Panther : A Technical Primer in Detection Engineering
- Gigamon : So, You Want to Be a Detection Engineer?
- Red Canary : Behind the Scenes with Red Canary’s Detection Engineering Team
- Secureworks : Threat Hunting as an Official Cybersecurity Discipline
- NIST SP800-53 v5 : RA-10: Threat Hunting
- Jack Naglieri : Think Like a Detection Engineer
- Zack Allen : Table Stakes for Detection Engineering
Can I get certified as a Detection Engineer?
- ATT&CK Threat Hunting Detection Engineering Certification Path – Training is part of MITRE MAD which is USD$500/year.
- GIAC Certified Detection Analyst (GCDA)
How can I learn more about Detection Engineering?
Maturity Models
- The DML Model, Ryan Stillions
- Detection Engineering Maturity Matrix, Kyle Bailey
- Detection Engineering Maturity Matrix, Blog post by Kyle Bailey.
- The Hunting Maturity Model (HMM), Sqrrl Threat Hunting Reference Model (no longer maintained)
Reading
Articles
- Roth, F. (2022, September 11). About Detection Engineering. Medium Blog.
- The dotted lines between Threat Hunting and Detection Engineering
- Prioritization of the Detection Engineering Backlog
- Detection Engineering with MITRE Top Techniques & Atomic Red Team
- How to Improve Security Monitoring with Detection Engineering Program
- The Evolution of Security Operations and Strategies for Building an Effective SOC (ISACA, Lakshmi Narayanan Kaliyaperumal)
- Kenner, L. (2022, July 14). What is Detection Engineering. Uptycs Blog.
- Threat-Informed Defense Ecosystem by Micah V.
- Bastidas, L. (2023, April 11). On the road to detection engineering. TrustedSec Blog.
Blogs
- Detection Engineering Weekly from Zack Allen
- Blog Posts Tagged “Detection Engineering” on Medium
- Florian Roth
- Alex Teixeira: When Data speaks, are you ready to listen?
- MITRE ATT&CK Blog
- Anton Chuvakin
Books
- 11 Strategies of a World-Class Cybersecurity Operations Center
- Malware Analysis and Detection Engineering
- Agile Security Operations
Listening (Podcasts)
Watching (Videos)
- Detection Engineering Methodologies
- Threat Hunting SANS: What is Detection Engineering? Avigayil Mechtinger
- Resilient Detection Engineering
- Detection as Code: Detection Development Using CI/CD
- Threat-Informed Detection Engineering
- Leveling Up Your Detection Engineering
- Measuring Detection Engineering Teams
- Security Onion Essentials - Detection Engineering
Courses
Events (Conferences)
What are the core Detection Engineering Processes?
TODO. See articles above for now.
What tasks should a Detection Engineering Program document?
Appendix C.3 of the MITRE book 11 Strategies of a World-Class Cybersecurity Operations Center outlines a framework for Detection Engineering/SOC Systems Administrator documentation. In a past job, I collaborated with a SOC sysadmin on similar documentation. I was delighted when I read this appendix and found it was a strong match for what we did. It drove new efficiencies, supported better understanding by incident handlers, and ensured our systems were well maintained and worked.
The key document types for SOC Engineering are:
- Monitoring Architecture
- Internal Change Management Processes
- Systems and Sensors Maintenance and Build Instructions
- Operational, Functional, and Systems Requirements
- Budget and current spending (capital and operational expenditures)
- Unfunded Requirements
- Sensor and SIEM Detections/Analytics/Content Lists(s)
- SOC System Inventory
- Network Diagrams
I like to focus more on the documentation of use-case development. In the MITRE book that would primarily be "Sensor and SIEM Detections/Analytics/Content List(s)" as well as "Internal Change Management Processes". Below is my own framework for the medium-grained tasks a Detection Engineer would carry out. You can think of each item below as being an artifact or task documented in Jira or Confluence, etc.
- Document use-case – Taking as input some need, define the use-case so that it may be reviewed, prioritized, and added to the backlog
- Develop use-case – Input is a documented need for the use-case. Perform in-depth requirements analysis, data wrangling, iterative development of detection and data sources, and full documentation. Output is a test detection in non-production ready to be reviewed for acceptance by stakeholders, and for final implementation.
- Implement use-case – Input is a developed use-case that has passed acceptance. Implement it in production and remove it from the backlog.
- Monitor use-cases – Input is all production use-cases. Monitor use-cases and periodically review them for relevancy and effectiveness. Output is requests to retire, enhance, or maintain the use-cases
- Retire use-case – Input is a request from monitoring of all use-cases. Ensure the use-case is disabled, tracking anything that has dependencies on the use-case. If necessary, create a new use-case to replace this one if others depend on it but it needs to be retired. Output is confirmation that retirement has not caused adverse impact.
- Plan threat-hunt – Input is demand for a new use-case. Generate a hypothesis and test plan. Describe data sources needed, effort and resources required and a schedule. Output is a plan and schedule ready for approval.
- Execute threat-hunt – Input is a threat hunt plan that has been approved. Gather the required team, and on schedule execute the hunt. Output is documented findings, and possibly escalation to incident response.
- Develop metric – Input is a demand from a stakeholder or a documented use-case ready for development. Develop a way to measure the effectiveness of a use-case, or some aspect of the DE program. Output is the logic/process for a scheduled report, dashboard, or some data.
- Implement metric – Input is a developed metric. Implement it and remove it from the backlog.
- Report metrics – Input is all developed, implemented metrics. Operationalize reporting. Output is feedback into the use-case development process or advice to stakeholders outside DE.
- Document datasource
- Implement datasource
- Monitor datasource
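The task list above implies a simple lifecycle for each use-case. Below is a minimal sketch of that lifecycle as a data structure; the state names and class are my own illustration, not from any framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class UseCaseState(Enum):
    """Lifecycle states implied by the task list above (illustrative names)."""
    DOCUMENTED = "documented"      # defined, reviewed, and on the backlog
    DEVELOPED = "developed"        # built in non-production, awaiting acceptance
    IMPLEMENTED = "implemented"    # live in production, being monitored
    RETIRED = "retired"            # disabled, with dependencies resolved

@dataclass
class UseCase:
    """One backlog artifact, as it might be tracked in Jira/Confluence."""
    name: str
    state: UseCaseState = UseCaseState.DOCUMENTED
    dependencies: list[str] = field(default_factory=list)

uc = UseCase("Detect SSH brute force against jump hosts")
uc.state = UseCaseState.DEVELOPED  # after in-depth development and testing
```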
What are popular Detection Engineering Standards and Frameworks?
Frameworks
Frameworks for Detection Engineering/Threat Hunting
- MITRE TTP-Based Hunting (TCHAMP)
- Splunk SURGE PEAK
- Open Detection Engineering Framework
- MaGMa: a framework and tool for use case management – The MaGMa Use Case Framework (UCF) is a framework and tool for use case management and administration in security monitoring. MaGMa's tool is deprecated and not maintained, but the methodology remains sound and well aligned with current practices. It is documented, where other practices are often shared word-of-mouth. The primary author works at Splunk, which now offers the Enterprise Security Content library, with MaGMa-like features.
- TaHiTI: Threat Hunting Methodology – Aligned with MaGMa, the TaHiTI methodology for threat hunting is created with real hunting practice in mind and provides organizations with a standardized and repeatable approach to their hunting investigations. The methodology uses 3 phases and 6 steps and integrates threat intelligence throughout its execution.
Standards for Implementing Detection Engineering Processes
- MITRE ATT&CK
- DeTT&CT
- The Cyber Kill Chain – There are many variants of the killchain model. Lockheed Martin’s is often cited.
- The Pyramid of Pain
- Detection Engineering Maturity Matrix – See also Kyle Bailey’s post Detection Engineering Maturity Matrix
- The DML Model
- Purple Team Exercise Framework (PTEF) – This is compatible with and includes a role for Detection Engineering
Naming Conventions
- From LASCON talk by – Primary Key:SCOPE:TTP:Short name – Scope is servers, workstations, or something more granular
Detection Specification Languages/Formats
- Sigma
- YARA
- Splunk SPL
- Microsoft KQL
- Snort Rules
- GraphQL
- YAML
Managing a Backlog of Work
- JIRA
Processes
- Agile Use Case Detection Methodologies
- DevOps CI/CD
What tools are popular for Detection Engineering?
EDR
- Wazuh
- CrowdStrike
- Microsoft Defender for Endpoint
SIEM
- Microsoft Sentinel
- Splunk Enterprise Security
SOAR
- Splunk SOAR
- LogicHub
- Palo Alto Cortex
- CrowdStrike Fusion
Analytics
- MITRE Cyber Analytics Repository (CAR)
- Python (Pandas)
- Jupyter Notebooks
- Splunk Enterprise Security CIM Datamodels
- Microsoft Excel
Data Sources (Event Logs)
- MITRE ATT&CK datasource mapping
- Sysmon
- Linux auditd
- Filebeat
- Windows Events
- syslog
- Firewall Logs
- Zeek (network events)
- DNS logs
- Anti-virus Alerts
- Active Directory changes
- AWS CloudTrail
Malware Analysis
- VirusTotal
- Any.run
- Hybrid Analysis
- Cisco Malware Analytics
- IDA Pro
Who are the leaders in Detection Engineering?
These are leaders in the sense that they are people I follow! I have quite a few more to add to this list, many quoted earlier in these notes.
- Florian Roth
- David J Bianco
- Roman Daszczyszak
- Alex Teixeira
- Kyle Bailey
- TBD.. what about the folks at MITRE who designed MITRE TTP-Based Hunting etc?
- Rob van Os. Primary author of MaGMa, TaHiTI, and SOC-CMM.
Where does Detection Engineering fit into the NIST Cybersecurity Framework?
That’s complicated. The NIST CSF has an entire category called Detect but various activities that are part of Detection Engineering and Threat Hunting are found in other CSF categories as well.
It can be modeled as a control to address the Identify category, for example through NIST 800-53v5 RA-10: Threat Hunting. Have they confused the role that Threat Intelligence has in informing Threat Hunting? No. The control definition clearly outlines a requirement to establish a capability to monitor for and detect threats. Multiple controls, including threat hunting, must be applied toward this requirement.
What is the relationship between Detection Engineering and Incident Response?
Incident Response is the key stakeholder in the development of detection analytics. In the past, or in small organizations, they may also be the developer of detection analytics.
The creation of detection rules and their "tuning" to eliminate false-positives has often been described as an activity carried out by incident responders. For example, in a corporate security team, the incident responders manage and use the SIEM for detection. The rules exist for them, and they create those rules in response to past incidents or from a library of pre-defined rules that they customize for their environment.
This approach may be considered "historic" and is not emphasized in modern frameworks. It is not that incident responders cannot or should not be involved; it is that their role is operational: they should be the consumers of good analytics and the drivers of the development of new analytics, not the developers of analytics. They are a stakeholder, perhaps the most important stakeholder!
What is the relationship between Detection Engineering and Threat Hunting?
Threat Hunting and Detection Engineering go hand-in-hand. Threat Hunting methods are used by Detection Engineers to validate their detection logic, data sources, and measure effectiveness. That said, Threat Hunting has additional outcomes unrelated to Detection Engineering goals, and may trigger incident response.
Most Detection Engineering use-cases begin with a Threat Hunt to validate and prioritize the use-case. If the threat hunt proves difficult, it helps estimate the effort of developing the use-case for detecting that threat. If the threat hunt yields few false positives, it indicates that the use-case could be highly effective and should get higher priority. If the threat hunt fails due to lack of data, it may indicate that development of the use-case should be deferred until the datasource can be developed.
What is the relationship between Detection Engineering and Threat Intelligence?
The need for new detections is often driven by threat intelligence. We want to detect threats before they become incidents and the earliest needs may come from threat intelligence analysis. For example, if Qakbot has changed their methods but your organization has not yet encountered them, threat intelligence may be able to provide information on how to detect the new methods days before you are attacked.
A good detection engineer reads threat reports differently than a malware or TI analyst. He/she discovers detection opportunities, pivots and writes rules for any trace the reported threat may have left. – Florian Roth
What is the relationship between Detection Engineering and Offensive Security?
What is the relationship between Detection Engineering and IT Asset Inventory?
Detection Engineering both consumes and produces asset inventories. Detection Engineering crucially requires a quality inventory of assets, identities, and configurations.
If you want to measure the coverage of your detections for a specific threat, you will need an inventory of assets targeted by, exposed to, or vulnerable to the threat. If you don't have a good inventory, you will not know how effective your detection will be. For example, if you have a detection for exploitation of a vulnerability in MS SQL Server, but you don't know how many SQL Servers you have, or their addresses, you cannot determine if your detection will actually work.
Detection Engineers often have specific inventory requirements that others do not: for example, knowing which security agents are present, how assets are configured, and what permissions an identity has. These can all be used to enrich detections to prioritize alerts by the priority of the asset or the severity of the detected threat. Without this additional information, you can detect a threat, but not determine how urgent a response to that threat is.
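As an illustration of that enrichment, the sketch below joins an alert's destination against a toy asset inventory to attach a priority. All field names, addresses, and values here are hypothetical.

```python
# Hypothetical asset inventory keyed by IP address. In practice this would
# come from a CMDB or an identity/asset lookup in the SIEM.
INVENTORY = {
    "10.0.0.5": {"role": "MS SQL Server", "priority": "high"},
    "10.0.0.9": {"role": "developer workstation", "priority": "low"},
}

def enrich(alert: dict) -> dict:
    """Attach the destination asset's priority so responders can triage."""
    asset = INVENTORY.get(alert.get("dest"), {})
    alert["dest_priority"] = asset.get("priority", "unknown")
    alert["dest_role"] = asset.get("role", "unknown")
    return alert

a = enrich({"dest": "10.0.0.5", "signature": "SQL Server exploit attempt"})
b = enrich({"dest": "192.0.2.1", "signature": "Port scan"})  # not in inventory
```

Without the inventory record, the second alert can still be detected, but its urgency cannot be determined, which is exactly the gap described above.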
What is the relationship between Detection Engineering and Malware Analysis?
In some larger organizations, especially security product vendors and MSPs, a core activity is analyzing new malware samples to extract useful indicators and turn those into detection rules. The continuous deployment of new detection signatures is driven by malware analysis, and malware analysis might be considered a core skill for those in a Detection Engineering role. Given that malware hashes are trivially changed, this activity involves a more in-depth understanding of malware behaviour and the identification of invariant observables as well as behaviour-based detections.
Last Update: 2023-07-16 16:43 UTC (Draft)
A method for mapping AWS GuardDuty Findings to Splunk’s CIM Alert Datamodel. While the common GuardDuty Finding fields are easy to map to the CIM Alert datamodel, it is hard to map the actor and target of a finding. This method provides a dynamic, easy to maintain, way of performing that mapping.
Background
TODO: write an introduction
Why not provide this as a Splunk Add-on? I have provided this write-up to support understanding of the two different alert formats. Understanding our data is important because it changes, but the patterns for performing this type of mapping are less variant. While much of what I have documented would be better implemented in Splunk as specific knowledge objects and settings, the purpose of the document is to understand how to normalize a complex detection event to CIM's generic format.
By practicing this kind of analysis and design we can apply this method to other formats from other detection engines in the future.
AWS GuardDuty Findings
Splunk CIM Alert Datamodel
Understanding the CIM Alert Fields
Our goal is to map AWS GuardDuty Finding fields to Splunk CIM Alert datamodel fields; the tables below summarize the mapping.
Parsing the GuardDuty Finding Format
The general JSON structure of a GuardDuty finding from CloudWatch looks like this. But we only care about what is in the detail field.
{
"version": "0",
"id": "cd2d702e-ab31-411b-9344-793ce56b1bc7",
"detail-type": "GuardDuty Finding",
"source": "aws.guardduty",
"account": "111122223333",
"time": "1970-01-01T00:00:00Z",
"region": "us-east-1",
"resources": [],
"detail": {GUARDDUTY_FINDING_JSON_OBJECT}
}
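In Python, stripping this envelope down to the finding itself is straightforward. The sketch below assumes the envelope shape shown above; the function name and sample values are my own.

```python
import json

def extract_finding(envelope_json: str) -> dict:
    """Return only the GuardDuty finding from a CloudWatch event envelope."""
    envelope = json.loads(envelope_json)
    if envelope.get("detail-type") != "GuardDuty Finding":
        raise ValueError("not a GuardDuty finding event")
    return envelope["detail"]  # everything we care about lives here

# Minimal example envelope (field values are placeholders)
event = json.dumps({
    "version": "0",
    "detail-type": "GuardDuty Finding",
    "source": "aws.guardduty",
    "account": "111122223333",
    "region": "us-east-1",
    "detail": {"severity": 5.0, "type": "Backdoor:EC2/C&CActivity.B!DNS"},
})
finding = extract_finding(event)
```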
You can see a sample of the details object on the AWS GuardDuty Response Syntax page. Here is a sample:
{
  "account": ...,
  "detail": {
    "id": ...,
    "type": ...,
    "resource": {},
    "service": {},
    "severity": 3.3,
    "createdAt": ...,
    "updatedAt": ...,
    "title": ...,
    "description": ...
  },
  "detail-type": "GuardDuty Finding",
  "id": ...,
  "region": "us-east-1",
  "resources": [],
  "source": "aws.guardduty",
  "time": ...,
  "version": "0"
}
You can see an example from the aws-samples/amazon-guardduty-waf-acl GitHub repository. It shows all the fields you might expect for one specific type of finding. The fields vary from finding type to finding type.
Parsing the Finding Overview
The GuardDuty Finding overview contains descriptive metadata to help us understand the type of finding, how it was detected, who the actor was, and what resource was affected. The finding overview data fills in most of the CIM Alert fields.
CIM Alert Field | GuardDuty Finding Field | Description |
---|---|---|
app | source | When you have many alert sources in the CIM datamodel, you need this to find the GuardDuty ones. |
description | detail.description | A human readable description of the details of the alert. |
dest | TBD | The thing affected by detected activity |
dest_type | TBD | The type of thing affected by the activity detected |
id | detail.id | A unique ID for this specific occurrence of the detection type. Updates or related details will share this ID |
mitre_technique_id | TBD | We could create a lookup table to perform this mapping |
severity | TBD | The Splunk prescribed severity values: critical, high, medium, low, informational, unknown |
severity_id | detail.severity | The original detection’s severity rating |
signature | detail.title | A human readable label of the detection |
signature_id | detail.type | A machine readable label for the detection |
src | TBD | The thing that caused the detected activity |
src_type | TBD | The type of thing that caused the detected activity |
tag | “alert” | These will determine which datamodels this event is mapped to |
type | “alert” | The alert type prescribed by the Splunk Alert datamodel: alarm, alert, event, task, warning, unknown |
user | TBD | A user involved in the detected activity in the format used by the Splunk ES Identity inventory. This can be the actor or target: we have to choose what makes sense for our use-case. |
user_name | n/a | A username involved in the detected activity. Human readable. |
vendor_account | account | The AWS account where the activity took place |
vendor_region | region | The AWS region where the finding was generated |
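As a sketch of the straightforward part of this mapping, the hypothetical helper below copies the fields from the table above into a CIM-style dict. The TBD fields (dest, src, user, ...) need the dynamic parsing described in the following sections and are left out here.

```python
def finding_to_cim_alert(envelope: dict) -> dict:
    """Map the easy GuardDuty fields to CIM Alert fields per the table above."""
    detail = envelope["detail"]
    return {
        "app": envelope["source"],            # "aws.guardduty"
        "description": detail.get("description"),
        "id": detail.get("id"),
        "severity_id": detail.get("severity"),  # original 0-10 rating
        "signature": detail.get("title"),
        "signature_id": detail.get("type"),
        "tag": "alert",
        "type": "alert",
        "vendor_account": envelope.get("account"),
        "vendor_region": envelope.get("region"),
    }

alert = finding_to_cim_alert({
    "source": "aws.guardduty",
    "account": "111122223333",
    "region": "us-east-1",
    "detail": {"id": "abc123", "severity": 5.0, "type": "Recon:EC2/Portscan",
               "title": "Port scan", "description": "Unusual port scanning."},
})
```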
Parsing the Finding Details
This is the tricky part. GuardDuty has many different types of findings and each one provides different details with different field names. There are thousands of them! How do we know which fields to parse?
For our purpose, we do not need every detail provided in a GuardDuty finding, we only need a few to map to the CIM Alert datamodel. Specifically:
- The source of the finding. Who was the actor? Was it an IP? An account?
- The target of the finding. What was affected by the actor? Was it an EC2 instance? An S3 bucket?
We may need a few other details to help provide us context and generate a human readable description.
The field names follow a well-structured format defined by an API and are easily parseable when exported as JSON. If we parse out relevant details about the type of finding and the resources affected, we can determine which detail fields we want to parse out as well.
Parsing the Finding Type/Signature ID
GuardDuty provides a Finding Type field in a machine-parseable format that we will use as the CIM Alert signature_id. It contains a lot of detail that we may want to use during our mapping process. I use the following Splunk Regular Expression in rex to parse out its parts.
| rex field=detail.type "(?<ThreatPurpose>[^:]+):(?<ResourceTypeAffected>[^/]+)/(?<ThreatFamilyName>[^\.]+)\.(?<DetectionMechanism>[^!]+)!(?<Artifact>.+)"
This will produce the fields ThreatPurpose, ResourceTypeAffected, ThreatFamilyName, DetectionMechanism, Artifact and we may or may not want to use these later when determining what other fields to map to the CIM Alert datamodel.
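The same pattern can be checked outside Splunk. Below is a hypothetical Python equivalent; the last two groups are made optional because not every finding type carries a DetectionMechanism or "!Artifact" segment.

```python
import re

# Python equivalent of the rex above, for testing the pattern outside Splunk
FINDING_TYPE_RE = re.compile(
    r"(?P<ThreatPurpose>[^:]+):(?P<ResourceTypeAffected>[^/]+)/"
    r"(?P<ThreatFamilyName>[^.!]+)(?:\.(?P<DetectionMechanism>[^!]+))?"
    r"(?:!(?P<Artifact>.+))?"
)

m = FINDING_TYPE_RE.match("Backdoor:EC2/C&CActivity.B!DNS")
parts = m.groupdict()
# e.g. ThreatPurpose="Backdoor", ResourceTypeAffected="EC2",
#      ThreatFamilyName="C&CActivity", DetectionMechanism="B", Artifact="DNS"
```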
Mapping GuardDuty Severity to CIM Alert Severities
GuardDuty severities do not map completely to Splunk CIM's Alert datamodel severities. You will need to make a design decision; below you can see my choices.
GuardDuty currently uses a 0-10 numeric value and maps these to low, medium, and high. However, it does not use the numeric values 0-1 or 9-10. Further, GuardDuty's documentation recommends "that you treat any High severity finding security issue as a priority and take immediate remediation steps to prevent further unauthorized use of your resources." CIM Alert uses critical, high, medium, low, informational, and unknown.
How should we map GuardDuty severity to CIM Alert severity? I use the following table and deviate from GuardDuty's mapping of numeric ratings to word labels only to put the highest GuardDuty findings as Splunk CIM "critical". I assume that GuardDuty will only use 1 decimal place.
GuardDuty | CIM Alert | Comment |
---|---|---|
0-0.9 | informational | Unused by GuardDuty |
1-3.9 | low | Failed attempts |
4-6.9 | medium | Suspicious and anomalous activity |
7-7.9 | high | We will stick with the GuardDuty definition here |
8-10 | critical | Based on my experience |
A simple Splunk eval to achieve this:
| eval severity=case(
    'detail.severity' > 7.9, "critical",
    'detail.severity' > 6.9, "high",
    'detail.severity' > 3.9, "medium",
    'detail.severity' > 0.9, "low",
    'detail.severity' > -1, "informational"
)
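The same case logic, sketched as a Python function for checking the boundary values (the function is my own, mirroring the table above):

```python
def cim_severity(gd_severity: float) -> str:
    """Map a GuardDuty 0-10 severity to a CIM Alert severity label."""
    if gd_severity > 7.9:
        return "critical"
    if gd_severity > 6.9:
        return "high"
    if gd_severity > 3.9:
        return "medium"
    if gd_severity > 0.9:
        return "low"
    return "informational"
```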
Dynamically Mapping GuardDuty Targets to CIM Destinations
The CIM uses dest to represent the thing that is affected by an event. GuardDuty findings call these targets. In the CIM Alert datamodel there is only one dest field, but we can describe it with the dest_type field.
There are several CIM Alert fields that are still To Be Determined (TBD). We must use additional details from the GuardDuty finding to determine the values we need.
CIM Alert Field | GuardDuty Finding Field | Description |
---|---|---|
dest | TBD | The thing affected by detected activity |
dest_type | TBD | The type of thing affected by the activity detected |
mitre_technique_id | TBD | We could create a lookup table to perform this mapping |
severity | TBD | The Splunk prescribed severity values: critical, high, medium, low, informational, unknown |
src | TBD | The thing that caused the detected activity |
src_type | TBD | The type of thing that caused the detected activity |
user | TBD | A user involved in the detected activity in the format used by the Splunk ES Identity inventory. This can be the actor or the target: we have to choose what makes sense for our use-case. |
user_name | n/a | A username involved in the detected activity. Human readable. |
The values we want are usually found in the GuardDuty resource details. Some of them might be in the service details. However, these are complex data objects with many fields. We may need to combine them or compare other field values to determine the specific one we need.
Understanding GuardDuty Resource Details
Each GuardDuty finding detail will contain a resource, but is this resource the target or the source of the detected activity? Is it the threat actor or the victim?
We can determine this by looking at the service details, which describe what the GuardDuty service observed and how to interpret it.
The relevant parts are:
detail.service.resourceRole == TARGET
detail.service.action.actionType = <actionTypeParseable> (e.g. NETWORK_CONNECTION)
map actionTypeParseable to actionType
detail.service.action.<actionType>Action
detail.resource.resourceType == <resourceType>
detail.resource.<resourceType>Details
detail.resource.accessKeyDetails.accessKeyId
detail.resource.accessKeyDetails.principalId
detail.resource.accessKeyDetails.userName
detail.resource.accessKeyDetails.userType
detail.resource.containerDetails
detail.resource.ebsVolumeDetails
detail.resource.ebsVolumeDetails.scannedVolumeDetails.deviceName
detail.resource.ebsVolumeDetails.scannedVolumeDetails.volumeArn
detail.resource.ecsClusterDetails
detail.resource.eksClusterDetails
detail.resource.instanceDetails
detail.resource.instanceDetails.imageDescription
detail.resource.instanceDetails.instanceId
detail.resource.instanceDetails.networkInterfaces{}.privateIpAddress
detail.resource.instanceDetails.networkInterfaces{}.publicIp
detail.resource.lambdaDetails
detail.resource.rdsDbInstanceDetails
detail.resource.rdsDbUserDetails
detail.resource.s3BucketDetails
detail.resource.s3BucketDetails.arn
detail.resource.s3BucketDetails.name
detail.resource.s3BucketDetails.owner.id
detail.resource.s3BucketDetails.type
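The naming patterns above can be captured in a couple of small helpers. The Python sketch below is illustrative only: `action_details_key` and `resource_details` are hypothetical names, and the lookup table is built solely from the field names listed above, so any other resource type falls through to an empty result.

```python
def action_details_key(action_type: str) -> str:
    """Convert a GuardDuty actionType constant into the camelCase key that
    holds its details, e.g. NETWORK_CONNECTION -> networkConnectionAction."""
    words = action_type.lower().split("_")
    return words[0] + "".join(w.capitalize() for w in words[1:]) + "Action"

# resourceType -> detail.resource.<...>Details key, taken from the field list
# above. Keys are matched case-insensitively to be tolerant of the casing
# used in findings.
RESOURCE_DETAILS_KEYS = {
    "accesskey": "accessKeyDetails",
    "container": "containerDetails",
    "ebsvolume": "ebsVolumeDetails",
    "ecscluster": "ecsClusterDetails",
    "ekscluster": "eksClusterDetails",
    "instance": "instanceDetails",
    "lambda": "lambdaDetails",
    "rdsdbinstance": "rdsDbInstanceDetails",
    "s3bucket": "s3BucketDetails",
}

def resource_details(finding: dict) -> dict:
    """Return the type-specific details object for a finding, or {} if the
    resource type is not one we handle."""
    resource = finding.get("resource", {})
    key = RESOURCE_DETAILS_KEYS.get(resource.get("resourceType", "").lower())
    return resource.get(key, {}) if key else {}
```

Using an explicit lookup table rather than deriving the key from the type name avoids surprises with acronym-heavy types such as eksClusterDetails, where naive camelCasing would go wrong.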
We can easily map the dest_type and src_type from detail.service.resourceRole and detail.resource.resourceType.
| eval dest_type=case('detail.service.resourceRole'=="TARGET", 'detail.resource.resourceType', true(), "unknown")
| eval src_type=case('detail.service.resourceRole'=="ACTOR", 'detail.resource.resourceType', true(), "unknown")
It is more challenging to determine the specific value to put in src or dest, however. The following table describes how we choose values for src, dest, user, and user_name. It should have one row for each combination of resourceType and resourceRole. We may need to include columns for the finding type, threat family name, or detection mechanism as well. If so, the table will be quite long and may need to be updated over time as we encounter findings we have not handled.
Resource Type | Resource Role | dest field | src field | user field |
---|---|---|---|---|
accessKey | TARGET | detail.resource.accessKeyDetails.accessKeyId | TBD | detail.resource.accessKeyDetails.principalId |
instance | TARGET | detail.resource.instanceDetails.instanceId | TBD | |
ebsVolume | TARGET | detail.resource.ebsVolumeDetails.scannedVolumeDetails.deviceName | TBD | TBD |
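The rows of that table can be sketched as a selection function. This is an illustration under stated assumptions: the field paths are taken verbatim from the table, the role and type strings are assumed to match the finding JSON, and anything not covered falls back to "unknown". The function name is hypothetical.

```python
def map_dest_and_user(finding: dict) -> dict:
    """Choose CIM dest and user values from a GuardDuty finding, following
    the TARGET rows of the mapping table. Unhandled cases yield 'unknown'."""
    resource = finding.get("resource", {})
    role = finding.get("service", {}).get("resourceRole")
    rtype = resource.get("resourceType", "").lower()
    dest, user = "unknown", "unknown"
    if role == "TARGET":
        if rtype == "accesskey":
            details = resource.get("accessKeyDetails", {})
            dest = details.get("accessKeyId", "unknown")
            user = details.get("principalId", "unknown")
        elif rtype == "instance":
            dest = resource.get("instanceDetails", {}).get("instanceId", "unknown")
        elif rtype == "ebsvolume":
            dest = (resource.get("ebsVolumeDetails", {})
                            .get("scannedVolumeDetails", {})
                            .get("deviceName", "unknown"))
    return {"dest": dest, "user": user}
```

Note that real findings may nest some of these details inside lists; production code would need to handle that, but the sketch shows the decision structure the table implies.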
Enriching the Alert Description
We have an opportunity to enrich the CIM Alert description field by combining the GuardDuty description with other information. The description field provided by GuardDuty is fine, but relatively generic. Most of the details GuardDuty provides are in other fields. For any important information we want our Splunk users to see but for which CIM Alert has no field, we should consider adding it to the description field.
Another approach is to normalize GuardDuty events to multiple CIM datamodels, some of which can represent the details of GuardDuty alerts better than others. This is discussed in the Conclusion of this article.
What could we add to the description?
- Attack Details
- Malware family details
- Quarantine status
- Threat intel used to determine the finding
- Additional ‘src’ and ‘dest’ values we chose not to use in our mapping
- AWS identifiers that would help an analyst quickly lookup more details
- Links to GuardDuty
Conclusion
Mapping a complex detection alert to the CIM Alert fields requires us to make design decisions that interpret the intentions of both systems. For example, what is the intended meaning of critical in labeling an alert? Our design choices will affect how the normalized data is used later.
Further, it takes analysis to determine which data represents the threat actor, the action, and the impacted asset. In this exercise, both models have representations for these, but mapping them is not straightforward. We have to learn how each format intends to represent them and transform one into the other, often comparing and combining multiple fields.
Should we map GuardDuty Findings to other Splunk CIM Datamodels?
Yes. While all GuardDuty findings can be usefully modeled as Splunk Alerts, some GuardDuty findings can be mapped to other CIM Datamodels.
For example:
- GuardDuty can detect malware, which maps to the CIM Malware datamodel.
- GuardDuty can detect network attacks, which map to the CIM Intrusion Detection datamodel.
- GuardDuty can detect exfiltration, which could map to the CIM DLP datamodel.
If you use Splunk Enterprise Security (Splunk ES), you will see immediate benefits. Splunk ES uses many datamodels automatically in dashboards, correlation searches, and investigative tools. Mapping specific GuardDuty findings to the relevant CIM datamodels will automatically improve your ability to gain insights.
However, if you are using Splunk Enterprise without Splunk ES, I think you have to carefully consider if you have a use case. You may get value from the datamodels, but only if you are already mapping other datasources to them, or if you are building your own add-ons, dashboards, alerting, or reporting.
References
This is a list of high-quality audio-only podcasts
Top 3
The Cybersecurity Librarian recommends these 3 podcasts. They have these attributes:
- Original Content
- Quality Analysis
- Minimal or Stated Bias
- The Cyberwire
- I believe this is the gold standard for general daily cybersecurity news. The content is timely. The producers actively minimize, disclose, or state bias. The information is accurate and authoritative. The sources they choose are well selected and authoritative. I have seen them state when a source was not primary. The analysis is insightful. The style of the primary host (Dave Bittner) is charming, wry, and still efficient and professional. The guests are well chosen and diverse. While their revenue model (advertising/sponsorship) does bias their selection of guests, the interviews themselves appear to be far less biased than those of other similar shows.
- The Cyberwire has a number of spin-off podcasts on the topics of Social Engineering, Cybersecurity Law, Security and Vulnerability Research, and Security Management. Each strikes its own balance of entertainment, education, and original content. Each relies on unique and authoritative guests.
- Malicious Life
- An extraordinary documentary-style podcast. The host, Ran Levi, is an engaging presenter and selects worthy topics from the history of cybercrime. What makes this podcast worth listening to is how the producers take complicated timelines of events, balance the level of historical and technical detail, and tell an entertaining and educational story about major cybersecurity events. There is occasional bias, but the hosts are (mostly) good at stating it. The accuracy and historical detail of the content are impressive.
- Darknet Diaries
- Darknet Diaries presents stories of recent cybercrimes and interviews with cybercriminals, hackers, and penetration testers. Despite the title, the stories are not about the Darknet per se, but about criminal hacking and the world of those who compromise security. The topics are diverse, the storytelling is compelling, and the interviewed guests are unique. This will give you more than just an entertaining look at cybercrime: it allows us a window into the minds of the people behind many well-known security incidents. This is not fact-checked journalism: these are excellent stories. You will hear first-hand accounts from criminals and hear them state their motivations, tell their life stories, and explain their actions.
News / Threat Intelligence
- Discarded
- Proofpoint has an amazing Cyber Threat Intelligence team. They are especially well known for tracking email-based threats. This podcast gives you a behind-the-scenes look into the work of Proofpoint’s intelligence analysts. Typical episodes introduce you to a few analysts, their backgrounds, and the focus of their intelligence work. Then there is a discussion about notable threat actors or analysis methods. If your work involves reporting on any of the “TA” actors (TA505, TA577, TA570), then this podcast is for you. While it is sponsored by a security vendor, it is not marketing-oriented and seems to be driven by the analysts themselves, giving it an authentic feel: quality content instead of shiny production values.
- Click Here
- Recorded Future’s newest podcast takes a journalistic style that is different from many other security podcasts. The topics are typically similar to what you might see in the news, but covering the “cyber” side: cyber-espionage, cyber-crime, or cyber-intelligence. The host, Dina Temple-Raston, was formerly part of NPR’s Investigations team, and the podcast takes on a serious and more intriguing tone: the format is documentary journalism, not round-table discussion.
- Recorded Future Podcast
- Recorded Future is a company that offers Threat Intelligence services. Their podcast is hosted by Cyberwire host Dave Bittner, and presents interviews with professionals involved in Cyber Threat Intelligence work. Unlike many other vendor podcasts, this one does not focus exclusively on interviewing their own staff and includes many people throughout the industry. It is not a sales-focused marketing initiative and the treatment of topics and selection of guests does not appear to be overly biased.
Privacy, Law, and Policy
- Caveat
- Caveat is hosted by Cyberwire’s Dave Bittner and lawyer Ben Yelin. You do not have to be a lawyer to enjoy or learn from this podcast. It discusses recent cybersecurity news and events that are affected by law.
- Privacy Insider
- Hosted by Justin Antonipillai, the former Under Secretary for Economic Affairs at the US Department of Commerce, this podcast takes a serious look at law, policy, and social issues related to privacy. The Cybersecurity Librarian has yet to render a verdict on bias. It is sponsored, but the content seems more “privacy geek” than marketing.
Management and Leadership
- Dev.Sec.Lead
- While this podcast is no longer produced, it is still available on most platforms. Hosted by threat intelligence author Wilson Bautista Jr., this podcast focuses on leadership development. It is of interest not just to CISOs and managers, but also to the everyday professional. The interviews and topics vary greatly, and the depth with which the topics are treated is refreshing. These guests are positive role models focused on improving our profession.
Writing is a vital skill in cybersecurity. Even those in highly technical roles will be required to write clear, concise technical documentation, procedures, and playbooks. Those involved in the assessment of risk, threats, and vulnerabilities will benefit from strong report-writing skills. Managers and consultants have the greatest need to develop effective communication and persuasive writing abilities.
The resources listed on this page will help you develop your writing skills, no matter what your role and need. Please share with us anything that you found helpful. The most useful, clear, and authoritative resources will be added to this list.
Top 3
- Ten Steps to Help you Write Better Essays & Term Papers
- This book by Neil Sawers is concise and practical. It doesn’t make you learn theory; it tells you what actions to take, right now, to start writing. Then it tells you what you can do to edit your writing and improve it. While this book is focused on students, the advice applies generally to anyone suffering from writer’s block, or who finds themselves challenged to write more clearly or briefly.
- How to write Proposals, Sales Letters & Reports
- Also from Neil Sawers, this book uses some of the same writing advice from “Write Better Essays” and applies it to the business world. There is more emphasis on persuasive writing and on communicating with visuals, charts, etc.
Writing for Penetration Testers and Vulnerability Assessment
If you have additional or better examples, templates, or writing guides for penetration testers, please let us know!
Penetration Testers rarely start as excellent writers. Your observations and discoveries need to be communicated and understood if they are to be valued. If you have felt frustrated trying to find good resources on writing pentest reports, you are not alone. Standards for writing pentest reports are emerging, and so is advice on good writing. If writing is new to you, remember it just takes practice, just like pentesting does.
Start with learning how to write a narrative report: the most common and easiest type of pentest report.
- Penetration Test Report
- Offensive Security has provided this template for use by their OSCP penetration testing students for years. It is intended to capture what activities you carried out in your pentest and the order you did them in. While it does include recommendations, the main focus is on capturing evidence.
Your clients will probably want more than a narrative report. Most want documented observations, risk assessment, and actionable recommendations. When you get good at writing your narrative reports, and consistently include verifiable proof of testing as well as verifiable findings, it will be time to practice writing more complete reports.
- Writing Penetration Testing Reports
- This is a paper from the SANS Institute’s Reading Room, submitted by a GIAC candidate for “GOLD” certification. It presents a fuller view of what a penetration testing report should look like. You will notice that it does not bear much resemblance to the Offensive Security “narrative” template: a narrative report would be an appendix to this type of report. This is what a client is looking for from a vulnerability report: background, risk assessment, and actionable recommendations.
Project Proposals and Statements of Work
If you work as a consultant you will need to write Statements of Work (SOWs) frequently. These are brief summaries that contain a Work Breakdown Structure (WBS) and estimated effort. They do not fully describe a Scope of Work, but are enough to authorize work when a client has trust and clear understanding.
Consultants and employees with initiative will have to write Project Proposals or Plans. These are larger detailed documents that explain the background and need for a project, the detailed scope, a Work Breakdown Structure, estimated effort, requirements for the project, roles of the parties involved, estimates of cost, and more.
- How to write Proposals, Sales Letters & Reports
- This book uses some of the same writing advice from “Write Better Essays” and applies it to the business world. There is more emphasis on persuasive writing and on communicating with visuals, charts, etc.
The Cybersecurity Librarian maintains a list of useful references to help you learn more about cybersecurity, keep up to date, and develop your skills and knowledge. There are separate pages for major categories of reference material.
Resource Categories
Do you have a great book, video, blog, article, magazine, journal, podcast, or course that helped you? Let us know. The most compelling, useful, concise, and clear resources will be added to the lists!
“Moro and Mike” was a weekly livestream discussing the cybersecurity profession and its practice. Our topics included leadership, management, job hunting, career development, emotional intelligence, threat intelligence, situational awareness, and more. We went beyond the technology to discuss the professional practice of cybersecurity and IT.
Podcasts
The RSS feed for the Moro and Mike Podcast is https://cyberlibrarian.ca//moro-and-mike/podcast.rss
The Podcast is the audio-only portion of the Moro and Mike YouTube Livestream
Moro and Mike is recorded live on YouTube, but past episodes are available in podcast (audio-only) format on:
- iTunes
- Spotify,
- and Google Podcasts
Just search for “Moro and Mike”.
Past Livestreams