add_alert_comment | Add a comment to a Panther alert. Comments support Markdown formatting. Args:
alert_id: The ID of the alert to comment on
comment: The comment text to add
Returns:
Dict containing:
- success: Boolean indicating if the comment was added successfully
- comment: Created comment information if successful
- message: Error message if unsuccessful
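Example usage (a minimal sketch; the alert ID is illustrative):
# The comment body may use Markdown formatting
result = add_alert_comment(alert_id="alert-123", comment="**Triage note:** source IP matches a known scanner") |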
disable_rule | Disable a Panther rule by setting enabled to false. Args:
rule_id: The ID of the rule to disable
Returns:
Dict containing:
- success: Boolean indicating if the update was successful
- rule: Updated rule information if successful
- message: Error message if unsuccessful |
execute_data_lake_query | Execute custom SQL queries against Panther's data lake for advanced data analysis and aggregation. This tool requires a p_event_time filter condition and should be called at most five times per user request. For simple log sampling, use get_sample_log_events instead. The query must follow Snowflake SQL syntax (e.g., use field:nested_field instead of field.nested_field to access nested fields). WORKFLOW:
1. First call get_table_schema to understand the schema
2. Then execute_data_lake_query with your SQL
3. Finally call get_data_lake_query_results with the returned query_id
Returns a dictionary with query execution status and a query_id for retrieving results.
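Example usage (a minimal sketch; the table, fields, and the sql parameter name are assumptions for illustration):
result = execute_data_lake_query(sql="""
    SELECT p_event_time, eventName, userIdentity:type
    FROM panther_logs.public.aws_cloudtrail
    WHERE p_event_time >= DATEADD(day, -1, CURRENT_TIMESTAMP())
    LIMIT 100
""")
# Then fetch rows with the returned query_id
rows = get_data_lake_query_results(query_id=result["query_id"]) |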
get_alert_by_id | Get detailed information about a specific Panther alert by ID |
get_alert_events | Get events for a specific Panther alert by ID.
We make a best effort to return the first events for an alert, but order is not guaranteed.
This tool does not support pagination to prevent long-running, expensive queries.
Args:
alert_id: The ID of the alert to get events for
limit: Maximum number of events to return (default: 10, maximum: 10)
Returns:
Dict containing:
- success: Boolean indicating if the request was successful
- events: List of events if successful
- message: Error message if unsuccessful
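Example usage (a minimal sketch; the alert ID is illustrative):
result = get_alert_events(alert_id="alert-123", limit=10)
if result["success"]:
    events = result["events"] |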
get_bytes_processed_per_log_type_and_source | Retrieves data ingestion metrics showing total bytes processed per log type and source, helping analyze data volume patterns. Returns:
Dict:
- success: Boolean indicating if the query was successful
- bytes_processed: List of series with breakdown by log type and source
- total_bytes: Total bytes processed in the period
- from_date: Start date of the period
- to_date: End date of the period
- interval_in_minutes: Grouping interval for the metrics |
get_data_lake_query_results | Get the results of a previously executed data lake query. Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- status: Status of the query (e.g., "succeeded", "running", "failed", "cancelled")
- message: Error message if unsuccessful
- results: List of query result rows
- column_info: Dict containing column names and types
- stats: Dict containing stats about the query
- has_next_page: Boolean indicating if there are more results available
- end_cursor: Cursor for fetching the next page of results, or null if no more pages
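Example usage (a minimal polling sketch; the query ID and sleep interval are illustrative):
import time
result = get_data_lake_query_results(query_id="query-123")
while result["status"] == "running":
    time.sleep(2)  # give the query time to finish before polling again
    result = get_data_lake_query_results(query_id="query-123")
if result["success"]:
    rows = result["results"]
    columns = result["column_info"] |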
get_global_helper_by_id | Get detailed information about a Panther global helper by ID Args:
helper_id: The ID of the global helper to fetch
Returns:
Dict containing:
- id: Global helper ID
- body: Python code for the global helper
- description: Description of the global helper |
get_panther_log_type_schema | Get detailed information for specific log type schemas, including their full specifications.
Limited to 5 schemas at a time to prevent response size issues. Args:
schema_names: List of schema names to get details for (max 5)
Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- schemas: List of schemas, each containing:
- name: Schema name (Log Type)
- description: Schema description
- spec: Schema specification in YAML/JSON format
- version: Schema version number
- revision: Schema revision number
- isArchived: Whether the schema is archived
- isManaged: Whether the schema is managed by a pack
- isFieldDiscoveryEnabled: Whether automatic field discovery is enabled
- referenceURL: Optional documentation URL
- discoveredSpec: The schema's discovered specification
- createdAt: Creation timestamp
- updatedAt: Last update timestamp
- message: Error message if unsuccessful
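Example usage (a minimal sketch; schema names are illustrative):
# At most 5 schema names per call
result = get_panther_log_type_schema(schema_names=["AWS.CloudTrail", "Okta.SystemLog"]) |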
get_permissions | Get the current user's permissions. Use this to diagnose permission errors and determine if a new API token is needed. |
get_policy_by_id | Get detailed information about a Panther policy by ID including the policy body and tests Args:
policy_id: The ID of the policy to fetch |
get_rule_alert_metrics | Gets alert metrics grouped by detection rule for ALL alert types, including alerts, detection errors, and system errors, within a given time period. Use this tool to identify alert hot spots, then use list_alerts for specific alert details. Returns:
Dict:
- alerts_per_rule: List of series with entityId, label, and value
- total_alerts: Total number of alerts in the period
- from_date: Start date of the period
- to_date: End date of the period
- interval_in_minutes: Grouping interval for the metrics
- rule_ids: List of rule IDs if provided |
get_rule_by_id | Get detailed information about a Panther rule, including the rule body and tests Args:
rule_id: The ID of the rule to fetch |
get_sample_log_events | Get a sample of 10 log events for a specific log type from the panther_logs.public database. This function is the RECOMMENDED tool for quickly exploring sample log data with minimal effort.
This function constructs a SQL query to fetch recent sample events and executes it against
the data lake. The query automatically filters events from the last 7 days to ensure quick results.
NOTE: After calling this function, you MUST call get_data_lake_query_results with the returned
query_id to retrieve the actual log events.
Example usage:
# Step 1: Get query_id for sample events
result = get_sample_log_events(schema_name="Panther.Audit")
# Step 2: Retrieve the actual results using the query_id
events = get_data_lake_query_results(query_id=result["query_id"])
# Step 3: Display results in a markdown table format
Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- query_id: ID of the executed query for retrieving results with get_data_lake_query_results
- message: Error message if unsuccessful
Post-processing:
After retrieving results, it's recommended to:
1. Display data in a table format (using artifacts for UI display)
2. Provide sample JSON for a single record to show complete structure
3. Highlight key fields and patterns across records |
get_scheduled_rule_by_id | Get detailed information about a Panther scheduled rule by ID including the rule body and tests Args:
rule_id: The ID of the scheduled rule to fetch |
get_severity_alert_metrics | Gets alert metrics grouped by severity for rule and policy alert types within a given time period. Use this tool to identify hot spots in your alerts, and use the list_alerts tool for specific details. Keep in mind that these metrics combine errors and alerts, so they may not match what list_alerts returns. Returns:
Dict:
- alerts_per_severity: List of series with breakdown by severity
- total_alerts: Total number of alerts in the period
- from_date: Start date of the period
- to_date: End date of the period
- interval_in_minutes: Grouping interval for the metrics |
get_simple_rule_by_id | Get detailed information about a Panther simple rule by ID including the rule body and tests Args:
rule_id: The ID of the simple rule to fetch |
get_table_schema | Get column details for a specific datalake table. IMPORTANT: This returns the table structure as stored in Snowflake/Redshift. For writing optimal queries, ALSO call get_panther_log_type_schema() to understand:
- Nested object structures (only shown as 'object' type here)
- Which fields map to p_any_* indicator columns
- Array element structures
Example workflow:
1. get_panther_log_type_schema(["AWS.CloudTrail"]) - understand structure
2. get_table_schema("panther_logs.public", "aws_cloudtrail") - get column names/types
3. Write query using both: nested paths from log schema, column names from table schema
Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- name: Table name
- display_name: Table display name
- description: Table description
- log_type: Log type
- columns: List of columns, each containing:
- name: Column name
- type: Column data type
- description: Column description
- message: Error message if unsuccessful
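Example usage (a sketch of the combined workflow; the query and the sql parameter name are assumptions for illustration):
# Nested paths come from the log type schema; column names come from the table schema
get_panther_log_type_schema(["AWS.CloudTrail"])
get_table_schema("panther_logs.public", "aws_cloudtrail")
# userIdentity appears here only as an 'object' column; its nested
# structure (e.g. userIdentity:arn) comes from the log type schema
execute_data_lake_query(sql="""
    SELECT eventName, userIdentity:arn
    FROM panther_logs.public.aws_cloudtrail
    WHERE p_event_time >= DATEADD(hour, -24, CURRENT_TIMESTAMP())
    LIMIT 50
""") |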
list_alert_comments | Get all comments for a specific Panther alert by ID. Args:
alert_id: The ID of the alert to get comments for
limit: Maximum number of comments to return (default: 25)
Returns:
Dict containing:
- success: Boolean indicating if the request was successful
- comments: List of comments if successful, each containing:
- id: The comment ID
- body: The comment text
- createdAt: Timestamp when the comment was created
- createdBy: Information about the user who created the comment
- format: The format of the comment (HTML, PLAIN_TEXT, or JSON_SCHEMA)
- message: Error message if unsuccessful |
list_alerts | List alerts from Panther with comprehensive filtering options Args:
start_date: Optional start date in ISO 8601 format (e.g. "2024-03-20T00:00:00Z")
end_date: Optional end date in ISO 8601 format (e.g. "2024-03-21T00:00:00Z")
severities: Optional list of severities to filter by (e.g. ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"])
statuses: Optional list of statuses to filter by (e.g. ["OPEN", "TRIAGED", "RESOLVED", "CLOSED"])
cursor: Optional cursor for pagination from a previous query
detection_id: Optional detection ID to filter alerts by. If provided, date range is not required.
event_count_max: Optional maximum number of events that returned alerts must have
event_count_min: Optional minimum number of events that returned alerts must have
log_sources: Optional list of log source IDs to filter alerts by
log_types: Optional list of log type names to filter alerts by
name_contains: Optional string to search for in alert titles
page_size: Number of results per page (default: 25, maximum: 50)
resource_types: Optional list of AWS resource type names to filter alerts by
subtypes: Optional list of alert subtypes. Valid values depend on alert_type:
- When alert_type="ALERT": ["POLICY", "RULE", "SCHEDULED_RULE"]
- When alert_type="DETECTION_ERROR": ["RULE_ERROR", "SCHEDULED_RULE_ERROR"]
- When alert_type="SYSTEM_ERROR": subtypes are not allowed
alert_type: Type of alerts to return (default: "ALERT"). One of:
- "ALERT": Regular detection alerts
- "DETECTION_ERROR": Alerts from detection errors
- "SYSTEM_ERROR": System error alerts |
list_database_tables | List all available tables in a Panther database. Required: only use valid database names obtained from list_databases.
Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- tables: List of tables, each containing:
- name: Table name
- description: Table description
- log_type: Log type
- database: Database name
- message: Error message if unsuccessful |
list_databases | List all available datalake databases in Panther. Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- databases: List of databases, each containing:
- name: Database name
- description: Database description
- message: Error message if unsuccessful |
list_global_helpers | List all global helpers from Panther with optional pagination Args:
cursor: Optional cursor for pagination from a previous query
limit: Optional maximum number of results to return (default: 100) |
list_log_sources | List log sources from Panther with optional filters. Args:
cursor: Optional cursor for pagination from a previous query
log_types: Optional list of log types to filter by
is_healthy: Optional boolean to filter by health status
integration_type: Optional integration type to filter by (e.g. "S3")
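Example usage (a minimal sketch; filter values are illustrative):
result = list_log_sources(log_types=["AWS.CloudTrail"], is_healthy=True, integration_type="S3") |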
list_log_type_schemas | List all available log type schemas in Panther. Schemas are transformation instructions that convert raw audit logs
into structured data for the data lake and real-time Python rules. Note: Pagination is not currently supported; all schemas are returned on the first page.
Args:
contains: Optional filter by name or schema field name
is_archived: Optional filter by archive status
is_in_use: Optional filter for schemas that are in use or unused
is_managed: Optional filter for pack-managed schemas
Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- schemas: List of schemas, each containing:
- name: Schema name (Log Type)
- description: Schema description
- revision: Schema revision number
- isArchived: Whether the schema is archived
- isManaged: Whether the schema is managed by a pack
- referenceURL: Optional documentation URL
- createdAt: Creation timestamp
- updatedAt: Last update timestamp
- message: Error message if unsuccessful
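Example usage (a minimal sketch; the filter value is illustrative):
result = list_log_type_schemas(contains="CloudTrail", is_archived=False) |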
list_panther_users | List all Panther user accounts. Returns:
Dict containing:
- success: Boolean indicating if the query was successful
- users: List of user accounts if successful
- message: Error message if unsuccessful |
list_policies | List all policies from Panther with optional pagination Args:
cursor: Optional cursor for pagination from a previous query
limit: Optional maximum number of results to return (default: 100) |
list_rules | List all rules from your Panther instance. Args:
cursor: Optional cursor for pagination from a previous query
limit: Optional maximum number of results to return (default: 100) |
list_scheduled_rules | List all scheduled rules from Panther with optional pagination Args:
cursor: Optional cursor for pagination from a previous query
limit: Optional maximum number of results to return (default: 100) |
list_simple_rules | List all simple rules from Panther with optional pagination Args:
cursor: Optional cursor for pagination from a previous query
limit: Optional maximum number of results to return (default: 100) |
put_rule | - |
summarize_alert_events | Analyze patterns and relationships across multiple alerts by aggregating their event data into time-based groups. For each time window (configurable from 1-60 minutes), the tool collects unique entities (IPs, emails, usernames, trace IDs) and alert metadata (IDs, rules, severities) to help identify related activities. Results are ordered reverse-chronologically, most recent first, helping analysts identify temporal patterns, common entities, and potential incident scope. Returns a dictionary containing query execution details and a query_id for retrieving results.
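Example usage (a hypothetical sketch; the parameter names alert_ids and time_window_minutes are assumptions, not documented here):
result = summarize_alert_events(alert_ids=["alert-123", "alert-456"], time_window_minutes=30)  # hypothetical argument names
summary = get_data_lake_query_results(query_id=result["query_id"]) |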
update_alert_assignee_by_id | Update the assignee of one or more alerts through the assignee's ID. Args:
alert_ids: List of alert IDs to update
assignee_id: The ID of the user to assign the alerts to
Returns:
Dict containing:
- success: Boolean indicating if the update was successful
- alerts: List of updated alerts if successful
- message: Error message if unsuccessful |
update_alert_status | Update the status of one or more Panther alerts. Args:
alert_ids: List of alert IDs to update. Can be a single ID or multiple IDs.
status: The new status for the alerts. Must be one of:
- "OPEN": Alert is newly created and needs investigation
- "TRIAGED": Alert is being investigated
- "RESOLVED": Alert has been investigated and resolved
- "CLOSED": Alert has been closed (no further action needed)
Returns:
Dict containing:
- success: Boolean indicating if the update was successful
- alerts: List of updated alerts if successful, each containing:
- id: The alert ID
- status: The new status
- updatedAt: Timestamp of the update
- message: Error message if unsuccessful
Example:
# Update a single alert
result = await update_alert_status(["alert-123"], "TRIAGED")
# Update multiple alerts
result = await update_alert_status(["alert-123", "alert-456"], "RESOLVED") |