This article describes how to interpret and tune behavior models, filter them on various metrics, and enable, disable, and delete them from the behavior analytics home page.
Model Recommendations
Resolution Intelligence Cloud recommends multiple models to help you start detecting user behavior. You can create behavior models from these recommendations directly on the behavior analytics home page. The AI suggests new models based on the combination of log sources and existing models. This functionality is available at the tenant level only.
A refresh button next to the model recommendations generates five new recommendations each time it is clicked, so you always have the latest and most relevant suggestions for model creation.
Model generation using AI
AI integrated with behavioral analytics will create models based on tailored recommendations. When you select a recommendation to create a model, the AI gathers the necessary specifications and provides a draft of the initial model. However, you must manually review and complete any missing data before publishing the model for use.
Generating a Model
Use this procedure to create the draft version of a model using AI and publish the model manually.
1. At the top left, hover over the breadcrumb menu.
2. Under Security, click Behavior Analytics. The Behavior analytics page appears.
3. Click Generate Model corresponding to the Model recommendation for which you want to generate a model.
This opens the New ML model form, where the AI generates the model specifications and pre-fills them in the respective fields.
4. Check the form and fill in any missing mandatory fields.
5. Click Send for Review to submit the model for review. To save the model as a draft without sending it for review, use the Save as Draft option. This allows you to make changes and submit it for review later.
6. Click Approve or request changes for the model. Clicking Approve displays a dialog box. Only users with the Publisher role can approve models.
7. Click Approve. This changes the status from Under Review to Approved.
You can now view the content packs. Models can only be published when they are associated with content packs.
8. Click Associate Packs. This opens a side panel listing the content packs available for the tenant.
9. Select the pack with which you want to associate the model.
10. Click Associate & Publish to publish the pack associated with the model. You can also click Associate to link the model and publish it later; this only changes the status to Ready to Publish. Until the model is published, you can keep adding packs to associate with it.
Once published, the model's status changes to Published after being sent to BigQuery. It then appears in the generated list of models.
Managing the Models listing page
The Behavior Analytics Listing Page displays a collection of models. The visibility of models depends on your operating environment:
Multi-Tenant Environment
- Domain Level: You can view models created by you and other users within your domain.
- Organization Level: You can view models created by you, as well as models inherited (percolated) from the domain account.
- Tenant Level: You can view models created by you, as well as models inherited from the domain or organization accounts.
Single-Tenant Environment
If you are operating in a single-tenant setup, you can only view models created by you and other users within your tenant.
From this listing page, you can open any behavior model to drill down further.
The columns on the listing page are described as follows:
Model Name | The name given to the model during configuration. You cannot rename it. |
Version | The model version. |
Status | Denotes the state of a model. Possible values are Enabled, Disabled, and Failed. |
Model Type | The type of model chosen during configuration. You cannot change this field. |
Dimensions | Lists UDM fields tracked for behavior observation. |
Outlier Score | The score set during model creation to determine the threshold for considering a behavior anomalous. |
Confidence Score | The score set during model creation to control signal generation based on data strength. A higher confidence score indicates signals require sufficient past behavior data. |
Signals generated | The cumulative number of signals generated since model creation. |
Total behaviors tracked | The total count of behaviors tracked since model inception. |
Last Run Status | The model runs on a daily schedule; this shows the status of the last run. Possible values are: Success or Failure. |
Data Completeness | Denotes the quality of the data: the extent to which it is free of missing information and duplicates. |
Behaviors Over Variance Limit (Last Run) | Total number of behaviors exceeding the defined variance limit in the last run. |
Behavior Analyzed (Last Run) | Total number of behaviors analyzed in the last run. |
Total Behaviors over Variance Limit | The total number of behaviors exceeding the defined variance limit for the whole period since the model was created. |
Avg Data Completeness (30d) | The percentage of available data that is complete or lacks missing values within a given dataset for a period of 30 days. |
Last Modified Time | The date and time at which the last modification was done in a model. |
Last Modified By | The user who most recently modified the model. |
Manage Columns filter
The Manage Columns filter allows you to select the columns to display on the listing page. This simplifies model tuning by letting you focus on the most relevant parameters from these columns.
Procedural filter
On the right side of the listing page is an aggregated overview of the models on the page, categorized by Dimensions, Model Type, Created by, Tactics, and Techniques. This allows for a broader trend analysis of model types and facilitates filtering to view models with specific attributes and values.
Viewing a behavior profile offers you information beyond a traditional chart. You can create what-if situations and tune the model, interact with the data (slice and dice) using the histogram/interactive filter on the right, or simply ask questions about the behaviors to Resolution Intelligence Cloud's conversational AI.
The page consists of the following components, categorized based on the outcomes you can achieve:
Model Details
This page provides a comprehensive list of details about the model, including the selected attributes, aggregation type, model type, alerting conditions, associated tactics, and all metadata associated with the model.
Model type | The model type selected while configuring the model. |
Dimensions | The type of dimensions you have selected from the UDM fields. |
Time aggregation | The time interval over which the model aggregates the data. |
Signal generation | Shows whether signal generation is enabled or disabled. |
Alerting mode | Possible values are: Flatten: no signal grouping is done based on the selected UDM fields; it is displayed when more than one UDM field is selected for signal generation. Group: signals are grouped by a specific UDM field; only one UDM field out of all fields is considered. |
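The difference between the two alerting modes can be sketched as follows. This is an illustrative sketch only; the record shape and the field name `principal.user.userid` are assumed UDM-style examples, not the product's actual schema.

```python
from collections import defaultdict

def flatten_mode(outliers):
    """Flatten: no grouping; every outlier becomes its own signal."""
    return [{"signal": o} for o in outliers]

def group_mode(outliers, udm_field):
    """Group: signals are grouped by one selected UDM field."""
    groups = defaultdict(list)
    for o in outliers:
        groups[o[udm_field]].append(o)
    return [{"group": k, "signals": v} for k, v in groups.items()]

outliers = [
    {"principal.user.userid": "alice", "event": "login_failure"},
    {"principal.user.userid": "alice", "event": "login_failure"},
    {"principal.user.userid": "bob",   "event": "login_failure"},
]

print(len(flatten_mode(outliers)))                         # 3 individual signals
print(len(group_mode(outliers, "principal.user.userid")))  # 2 grouped signals
```

Grouping reduces signal volume when many outliers share the same entity, while Flatten preserves each outlier as a separate signal.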
Confidence Score | The score is set during model creation to control signal generation based on data strength. A higher confidence score indicates that signals require sufficient past behavior data. |
Higher than baseline | A signal is generated if the deviation crosses the baseline score. |
Behavioral baseline for | Shows which selected dimensions are compared against baseline behavior. |
Filtering and analyzing a Behavior
Outliers are visible only when both the model and signal generation are enabled. If the model is enabled but signal generation is disabled, outliers are still detected, but the corresponding events are not sent to Chronicle. When both are enabled, outliers are sent as raw data to Chronicle, where parsers normalize the data into UDM format. The processed signals are then forwarded to Resolution Intelligence Cloud and can be viewed on the Resolution Signals page.
On the model details page, you can examine the behavior of any model by including or excluding certain attributes shown on the left of the page.
To analyze the trend of attributes:
- Select the duration for which you want to view the behaviors and outliers generated. You can only select a date range of up to 30 days. If an outlier occurs for the first time within the selected 30-day period, it will be marked as a first occurrence. You can filter the signals by first occurrences, outliers, and inline behaviors.
- Under Behavior Distribution over time, click on any bar in the graph. The selected dimensions and signals appear for the selected date range.
- In the right panel, hover over each attribute (signal or IP address) and click one of the following icons:
  - the include icon, to include the attribute value in the analysis.
  - the exclude icon, to exclude the attribute value from the analysis.
  - the include-only icon, to include only this value and exclude all others.
- Selected or excluded attribute values appear at the top of the bar chart. Under Aggregate Insights, you can view the total unique values and the total values for each dimension.
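The include/exclude filtering and the Aggregate Insights counts described above can be sketched roughly as below. The records and field names are invented examples, not the product's data model.

```python
# Sample behavior records (illustrative only).
records = [
    {"signal": "brute_force", "ip": "10.0.0.1"},
    {"signal": "brute_force", "ip": "10.0.0.2"},
    {"signal": "enumeration", "ip": "10.0.0.1"},
]

def apply_filters(records, include=None, exclude=None):
    """Keep records matching all include constraints and no exclude constraints."""
    include = include or {}
    exclude = exclude or {}
    out = []
    for r in records:
        if any(r.get(k) != v for k, v in include.items()):
            continue
        if any(r.get(k) == v for k, v in exclude.items()):
            continue
        out.append(r)
    return out

def aggregate_insights(records):
    """Per dimension: total values and total unique values."""
    dims = {}
    for r in records:
        for k, v in r.items():
            dims.setdefault(k, []).append(v)
    return {k: {"total": len(vs), "unique": len(set(vs))} for k, vs in dims.items()}

only_bf = apply_filters(records, include={"signal": "brute_force"})
print(len(only_bf))                              # 2 records kept
print(aggregate_insights(records)["ip"]["unique"])  # 2 unique IPs
```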
Reading charts and tables
The default landing page for a selected behavior profile is the behavior analysis tab. In the behavior analysis tab, you can:
- Learn more about the behaviors the model is tracking
- Investigate the behavior data using the interactive filtering on the right
- Create what-if analyses on the model and tune it for better performance
- Ask questions of the behavior model using Resolution Intelligence's conversational AI
The first histogram visualizes the total behaviors tracked for each day over the last 30 days. By default, the last day is selected in the histogram. You can modify the anomaly score shown under the histogram at any time; it ranges from 0 to 1, and the total behaviors update according to the given anomaly score.
What if? By moving the slider across the distribution, you can visualize when certain behaviors shift from being non-anomalous to anomalous. This feature allows for a more granular and dynamic understanding of how the system tracks and scores behaviors, which you can use to tune the model for better performance and fewer false positives.
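The slider's effect can be sketched as a simple threshold sweep over per-behavior anomaly scores. The scores below are made-up sample data, not product output.

```python
# Illustrative what-if sketch: moving an anomaly-score threshold across
# the 0..1 range changes how many tracked behaviors count as anomalous.
behavior_scores = [0.05, 0.12, 0.30, 0.45, 0.62, 0.78, 0.81, 0.93]

def anomalous_count(scores, threshold):
    """Count behaviors at or above the chosen anomaly-score threshold."""
    return sum(1 for s in scores if s >= threshold)

for threshold in (0.3, 0.6, 0.8):
    print(f"threshold {threshold}: {anomalous_count(behavior_scores, threshold)} anomalous")
```

Raising the threshold shrinks the anomalous set, which is exactly the trade-off the slider lets you explore visually.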
- Aggregated Activity Count: Total count of activities performed on a specific day. e.g. number of login failures for the day is 5.
- Baselined Targets for Source (specific to Enumeration model): Informed by the training data, this column highlights the targets a source normally communicates with.
- Baselined Unique Target Count for Source (specific to Enumeration model): The distinct count of targets a source normally communicates with. Learned during the training period, it evolves as the behavior of entities changes over time.
- Unique Target Count for Day (specific to Enumeration model): Exhibited behavior on a given day. Highlights the distinct number of targets the source communicated with.
- Variance Score Reason (specific to Enumeration model): Reasons for the Variance Score (anomaly score). Reasons include rarity (communicating with a new target) or deviation in activity (communication with a known target exceeding identified levels).
- First Occurrence: The first observation of a behavior in the environment.
- Last Seen Date: Date of the most recent observed behavior.
- Baseline: Default behavior learned from historical data.
- Deviation from Baseline behavior: Ratio of observed activity in a behavior on a given day against the learned normal behavior.
- Variance Score: Also known as anomaly score, it is a score normalized to -1 to +1. -1 indicates reduced activity, and +1 signifies elevated activity compared to normal. The Variance score operates on a tailored algorithm that considers multiple dimensions of behavior.
- Variance Certainty: The confidence level associated with the variance score. Influenced by the number of data points (data strength) observed for a behavior in the past and the distribution of data at the dimension level.
Note: Variance certainty helps fine-tune false positives.
For instance, a behavior model could be configured to trigger a signal only when a behavior's Variance (anomaly) score surpasses 0.8 and its Variance certainty exceeds 0.9.
- Alert Status: A status of '1' indicates that the anomaly score exceeds the set abnormal condition.
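The Variance Score's -1 to +1 normalization can be sketched as below. The product's actual algorithm is a tailored, multi-dimensional one; this one-dimensional ratio is only an illustrative stand-in.

```python
def variance_score(observed, baseline):
    """Illustrative variance (anomaly) score clamped to -1..+1.

    Negative values indicate reduced activity versus baseline;
    positive values indicate elevated activity.
    """
    if baseline <= 0:
        # No learned baseline: any activity is maximally anomalous.
        return 1.0 if observed > 0 else 0.0
    deviation = (observed - baseline) / baseline
    return max(-1.0, min(1.0, deviation))

print(variance_score(5, 5))    # 0.0  (matches baseline)
print(variance_score(15, 5))   # 1.0  (elevated, clamped)
print(variance_score(2, 5))    # -0.6 (reduced activity)
```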
The interactive filtering on the right helps you control and filter the data shown in the table. It also serves another purpose: it highlights patterns in the selected data, sorted in descending order.
Thus far, data analysis has primarily involved histograms, tables, and interactive filters. However, with Resolution Intelligence Cloud, you can achieve the same outcomes simply by asking questions through conversational AI.
With conversational AI, you can benefit from unparalleled convenience and functionality. You can ask any question about the behavior model in plain language, and the system will identify and execute the necessary steps to find the answer.
Behavioral Insights
The following column names in a behavior analytics table show how each model identifies anomalies based on the baseline values for the source dimensions involved in the model.
Variance Score
Also known as the anomaly score, this score is normalized to -1 to +1. -1 indicates reduced activity, and +1 signifies elevated activity compared to normal. The variance score operates on a tailored algorithm that considers multiple dimensions of behavior.
Variance Certainty
The confidence level associated with the variance score. It is influenced by the number of data points (data strength) observed for a behavior in the past and the distribution of data at the dimension level.
Note: Variance certainty helps fine-tune false positives.
For instance, a behavior model could be configured to trigger a signal only when a behavior's variance (anomaly) score surpasses 0.8 and its variance certainty exceeds 0.9.
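That example condition can be written as a small gating check. The 0.8 and 0.9 thresholds come from the example above; the function name and parameters are illustrative, not a product API.

```python
def should_signal(variance, certainty, score_min=0.8, certainty_min=0.9):
    """Emit a signal only when the anomaly is both strong and well-supported
    by past data, suppressing low-certainty (likely false-positive) hits."""
    return variance > score_min and certainty > certainty_min

print(should_signal(0.85, 0.95))  # True: strong anomaly, high certainty
print(should_signal(0.85, 0.70))  # False: too little supporting data
```

Requiring high certainty alongside a high variance score is what lets the model ignore anomalies backed by too little history.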
Identifying the model to tune from the listing page
A challenging task when creating behavior models is understanding which models are generating false positives and how to go about tuning them.
When architecting behavior analytics, we recognized this challenge and incorporated a number of metrics into the data model.
Metrics for identifying a model to tune:
- Data completeness/average data completeness: The completeness of the data used for model training or alerting is critical. If the model operates on limited or partial data, it may produce an increased number of false positives. Continuous monitoring of the events under analysis helps identify gaps in the ingested data, allowing for adjustments to improve data completeness.
- Total Behaviors over Variance Limit (since Model creation): This metric indicates whether a model has a high number of outlier behaviors compared to the variance limit set during its creation. An excessive number of outlier behaviors suggests that the variance limit may need adjustment to better reflect the expected behavior patterns.
- Behaviors Over Variance Limit (Last Run): While Total Behaviors Over Variance Limit provides an overall view, this metric focuses on identifying the number of outliers observed during a specific time frame, typically the last run of the model. It offers insights into the immediate performance of the model and helps pinpoint any sudden spikes in outlier behaviors.
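The data-completeness metric above can be sketched roughly as the share of ingested records with no missing fields and no duplicates. The records, field names, and exact definition here are illustrative assumptions; the product's computation may differ.

```python
def data_completeness(records, required_fields):
    """Fraction of records that are unique and have all required fields populated."""
    seen, complete = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            continue          # skip exact duplicates
        seen.add(key)
        if all(r.get(f) not in (None, "") for f in required_fields):
            complete += 1
    return complete / len(records) if records else 0.0

records = [
    {"user": "alice", "host": "web-1"},
    {"user": "alice", "host": "web-1"},   # duplicate
    {"user": "bob",   "host": ""},        # missing host
    {"user": "carol", "host": "db-1"},
]
print(data_completeness(records, ["user", "host"]))  # 0.5
```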
Using simulation to tune a model
Simulation plays a vital role in fine-tuning behavior models. By simulating various scenarios and input data, you can assess the model's performance under different conditions and identify potential areas for improvement. This iterative process enables you to adjust parameters and thresholds to optimize the model's accuracy and reduce false-positive alerts.
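The tuning loop described above can be sketched as a threshold sweep: replay historical variance scores against several candidate thresholds and compare how many signals each would have produced. The scores are invented sample data, and the sweep is an illustrative simplification of the product's simulation feature.

```python
# Historical variance scores from past model runs (illustrative sample data).
history = [0.2, 0.4, 0.55, 0.7, 0.82, 0.88, 0.91, 0.95]

def simulate(scores, thresholds):
    """For each candidate threshold, count the signals it would have produced."""
    return {t: sum(1 for s in scores if s > t) for t in thresholds}

results = simulate(history, [0.5, 0.7, 0.9])
for t, n in sorted(results.items()):
    print(f"threshold {t}: {n} signals")
```

Comparing signal volume across thresholds against known-benign activity is one practical way to pick a limit that cuts false positives without silencing real anomalies.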
In summary, leveraging a combination of metrics for model evaluation and employing simulation techniques for tuning allows for continuous improvement in behavior analytics, ultimately enhancing the reliability and effectiveness of the Resolution Intelligence Cloud.
Disable a Model
Disabling deactivates a model that is in an active state. You can disable a model in either of two ways:
To disable a model,
- On the behavior analytics home page, scroll right and click the ellipses icon next to the model.
- Select Disable from the drop-down menu.
(Or)
- Open the model that you would like to disable and click Disable at the top of the model page.
Enable a Model
Enabling activates a model that is in an inactive state. You can enable a model in either of two ways:
To enable a model,
- On the behavior analytics home page, scroll right and click the ellipses icon next to the model.
- Select Enable from the drop-down menu.
(Or)
- Open the model that you would like to enable and click Enable at the top of the model page.
Clone a Model
Cloning allows you to create a copy of a specific model with all the defined parameters.
Edit a Behavior Profile
Edit enables you to modify the profile details: adding or removing filters, dimensions, and tags, and tightening or relaxing the signal generation conditions.
To edit a profile,
- At the top left, hover over the breadcrumb menu.
- Under Security, click Behavior Analytics.
- From the models listing page, click the profile that you would like to edit. A behavior profile page opens.
- Click Edit at the top of the profile. The profile opens in edit mode.
- Modify the details wherever required.
- Click Submit.
Activity Log
Activity logs track all activities performed by users on each model.
Run History
The Run history allows you to see the status or outcomes of the behavior jobs that run every day. You can download the outcome of a job as a .csv file.
Delete a Model
You can delete any model that is no longer required. Once the model is removed, the model analysis stops immediately, and you cannot undo this action.
To delete a model,
- On the behavior analytics home page, scroll right and click the ellipses icon next to the model.
- Select Delete from the drop-down menu.
Creating a new model version
To create a new version of a model, ensure the model is in the Published state and was created by you or another user at your level. If the model at your level has been inherited (percolated) from a parent account, you cannot edit or create a new version, as only the parent account holds the permission to do so. However, if you are the original creator of the model, you can create a new version and make necessary updates as needed.
To create a new version of the model,
1. On the Behavior Analytics home page, locate and click the model for which you want to create a new version. New versions can be created at the domain, org, or tenant levels.
Note: You can only create a new version for models that are in the Published state and were created by you or another user at the same level.
2. Select the version of the model to view the Create a New Version button.
3. Click Create a New Version to open the model page, which is a copy of the previous version.
4. Modify the model's description, filters, conditions, abnormality score, and metadata as needed in the new version.
Note: You cannot modify the model name, model type, model dimensions, or data aggregation parameters in the new version of the model.
5. Click Send for Review to submit the model for approval. To save the model as a draft without submitting it for review, click Save as Draft. This allows you to make changes and submit it later.
6. Click Approve or request changes for the model. Clicking Approve opens a dialog box. Only users with the Publisher role can approve models.
7. Click Approve to change the status from Under Review to Approved. Once approved, you can view the associated content packs. Models can only be published when they are linked to content packs.
8. Click Manage Packs to open the side panel displaying the content packs associated with the model.
9. Click Edit Associated Packs to add new packs to the model.
Note: You cannot deselect content packs that are already associated with the model.
10. Click Associate & Publish to publish the model along with the associated content packs. Alternatively, click Associate to link the model to the packs and publish it later. This changes the status to Ready to Publish. Until the model is published, you can continue adding packs to associate with it.
Once published, the model’s status updates to Published after being sent to BigQuery. It becomes available in the generated list of models, with the newer version set as active and in the published state, while the older version is automatically disabled. You can switch between versions anytime using the versioning drop-down on the model page.