Collect BeyondTrust Endpoint Privilege Management logs
This document explains how to ingest BeyondTrust Endpoint Privilege Management
(EPM) logs into Google Security Operations using AWS S3. The parser transforms
raw JSON log data from BeyondTrust EPM into a structured format conforming to
the Unified Data Model (UDM). It first initializes default values for various
fields, then parses the JSON payload and maps specific fields from the raw log
into the corresponding UDM fields within the `event.idm.read_only_udm` object.
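If it helps to visualize that output shape, here is a minimal Python sketch of the `event.idm.read_only_udm` envelope. The defaults shown are illustrative assumptions, not the parser's actual initial values:

```python
def init_udm_event() -> dict:
    """Sketch of the UDM envelope the parser populates.

    The event_type default follows the mapping table in this document
    (GENERIC_EVENT when no principal is found); other defaults are
    illustrative assumptions only.
    """
    return {
        "event": {
            "idm": {
                "read_only_udm": {
                    "metadata": {"event_type": "GENERIC_EVENT"},
                    "additional": {"fields": []},
                    "principal": {},
                }
            }
        }
    }
```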
Before you begin
Make sure you have the following prerequisites:
- Google SecOps instance
- Privileged access to AWS
- Privileged access to BeyondTrust Endpoint Privilege Management
Configure AWS IAM for Google SecOps ingestion
- Create an IAM user by following the AWS guide Creating an IAM user.
- Select the created User.
- Select the Security Credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
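Before wiring up the feed, you can confirm the new access key actually reaches S3 with a quick check like the following. This is a hedged sketch: the bucket name and object key are placeholders, and the cleanup step additionally requires `s3:DeleteObject`:

```python
def smoke_key(prefix: str) -> str:
    """Object key used for the write test (name is arbitrary)."""
    return f"{prefix.rstrip('/')}/connectivity-check.txt" if prefix else "connectivity-check.txt"

def check_s3_write(bucket: str, prefix: str = "") -> None:
    """Write then delete a tiny object to confirm the access key can reach the bucket."""
    import boto3  # uses credentials from the environment or ~/.aws/credentials
    s3 = boto3.client("s3")
    key = smoke_key(prefix)
    s3.put_object(Bucket=bucket, Key=key, Body=b"ok")
    s3.delete_object(Bucket=bucket, Key=key)  # needs s3:DeleteObject

if __name__ == "__main__":
    check_s3_write("my-beyondtrust-logs", "bpt/")  # placeholder bucket/prefix
```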
Configure BeyondTrust EPM for API access
- Sign in to the BeyondTrust Privilege Management web console as an administrator.
- Go to System Configuration > REST API > Tokens.
- Click Add Token.
- Provide the following configuration details:
- Name: Enter `Google SecOps Collector`.
- Scopes: Select Audit:Read and other scopes as required.
- Save the token and copy its value (this will be your BPT_API_TOKEN).
- Copy your API base URL; it is typically `https://<your-epm-server>/api/v3` or `/api/v2`, depending on your version (you'll use this as BPT_API_URL).
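To sanity-check the token and base URL before deploying anything, a small probe along these lines can help. The endpoint path here mirrors the one used by the collector script in this guide; adjust it if your EPM version exposes a different path:

```python
def audit_events_url(base_url: str) -> str:
    """Build the audit events URL from the API base URL."""
    return f"{base_url.rstrip('/')}/management-api/v1/Audit/Events"

def check_token(base_url: str, token: str) -> bool:
    """Return True if the API accepts the token for a minimal one-event query."""
    import requests
    resp = requests.get(
        audit_events_url(base_url),
        headers={"Authorization": f"Bearer {token}"},
        params={"limit": 1},
        timeout=30,
    )
    return resp.status_code == 200
```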
Create an AWS S3 Bucket
- Sign in to the AWS Management Console.
- Go to AWS Console > Services > S3 > Create bucket.
- Provide the following configuration details:
- Bucket name: `my-beyondtrust-logs`.
- Region: [your choice] > Create.
Create an IAM Role for EC2
- Sign in to the AWS Management Console.
- Go to AWS Console > Services > IAM > Roles > Create role.
- Provide the following configuration details:
- Trusted entity: AWS service > EC2 > Next.
- Attach permission: AmazonS3FullAccess (or a scoped policy to your bucket) > Next.
- Role name: `EC2-S3-BPT-Writer` > Create role.
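If you choose the scoped-policy option instead of AmazonS3FullAccess, a minimal sketch of such a policy might look like the following. This assumes the collector only needs to write objects (`s3:PutObject`); broaden it if you add listing or cleanup steps:

```python
import json

def scoped_s3_policy(bucket: str) -> dict:
    """Least-privilege sketch: write-only access to one bucket's objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

print(json.dumps(scoped_s3_policy("my-beyondtrust-logs"), indent=2))
```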
Optional: Launch and configure your EC2 Collector VM
- Sign in to the AWS Management Console.
- Go to Services.
- In the search bar, type EC2 and select it.
- In the EC2 dashboard, click Instances.
- Click Launch instances.
- Provide the following configuration details:
- Name: Enter `BPT-Log-Collector`.
- AMI: Select Ubuntu Server 22.04 LTS.
- Instance type: t3.micro (or larger), and then click Next.
- Network: Make sure the Network setting is set to your default VPC.
- IAM role: Select the EC2-S3-BPT-Writer IAM role from the menu.
- Auto-assign Public IP: Enable (or make sure you can reach it using VPN) > Next.
- Add Storage: Leave the default storage configuration (8 GiB), and then click Next.
- Select Create a new security group.
- Inbound rule: Click Add Rule.
- Type: Select SSH.
- Port: 22.
- Source: your IP
- Click Review and Launch.
- Select or create a key pair.
- Click Download Key Pair.
- Save the downloaded PEM file. You will need this file to connect to your instance using SSH.
Connect to your Virtual Machine (VM) using SSH:
```
chmod 400 ~/Downloads/your-key.pem
ssh -i ~/Downloads/your-key.pem ubuntu@<EC2_PUBLIC_IP>
```
Install collector prerequisites
Run the following command:
```
# Update OS
sudo apt update && sudo apt upgrade -y

# Install Python, Git
sudo apt install -y python3 python3-venv python3-pip git

# Create & activate virtualenv
python3 -m venv ~/bpt-venv
source ~/bpt-venv/bin/activate

# Install libraries
pip install requests boto3
```
Create a directory & state file:
```
sudo mkdir -p /var/lib/bpt-collector
sudo touch /var/lib/bpt-collector/last_run.txt
sudo chown ubuntu:ubuntu /var/lib/bpt-collector/last_run.txt
```
Initialize it (for example, to 1 hour ago):
```
echo "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" > /var/lib/bpt-collector/last_run.txt
```
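The collector reads this timestamp back as the start of its next query window. The conversion it performs can be sketched as:

```python
from datetime import datetime, timezone

def parse_state(ts: str) -> datetime:
    """Parse the Z-suffixed ISO-8601 timestamp stored in last_run.txt
    (the same replace-then-fromisoformat conversion the collector script uses)."""
    return datetime.fromisoformat(ts.strip().replace("Z", "+00:00"))

# Example: the stored value becomes a timezone-aware window start
start = parse_state("2025-01-01T00:00:00Z")
```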
Deploy the BeyondTrust EPM Collector Script
Create a project folder:
mkdir ~/bpt-collector && cd ~/bpt-collector
Export the required environment variables (for example, in `~/.bashrc`):
```
export BPT_API_URL="https://<your-subdomain>-services.pm.beyondtrustcloud.com"
export BPT_CLIENT_ID="your-client-id"
export BPT_CLIENT_SECRET="your-client-secret"
export S3_BUCKET="my-bpt-logs"
export S3_PREFIX="bpt/"
export STATE_FILE="/var/lib/bpt-collector/last_run.txt"
export PAGE_SIZE="100"
```
Create `collector_bpt.py` and enter the following code:

```python
#!/usr/bin/env python3
import os, sys, json, boto3, requests
from datetime import datetime, timezone, timedelta

# ── UTILS ────────────────────────────────────────────────────────────────
def must_env(var):
    val = os.getenv(var)
    if not val:
        print(f"ERROR: environment variable {var} is required", file=sys.stderr)
        sys.exit(1)
    return val

def ensure_state_file(path):
    d = os.path.dirname(path)
    if not os.path.isdir(d):
        os.makedirs(d, exist_ok=True)
    if not os.path.isfile(path):
        ts = (datetime.now(timezone.utc) - timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
        with open(path, "w") as f:
            f.write(ts)

# ── CONFIG ───────────────────────────────────────────────────────────────
BPT_API_URL   = must_env("BPT_API_URL")   # for example, https://subdomain-services.pm.beyondtrustcloud.com
CLIENT_ID     = must_env("BPT_CLIENT_ID")
CLIENT_SECRET = must_env("BPT_CLIENT_SECRET")
S3_BUCKET     = must_env("S3_BUCKET")
S3_PREFIX     = os.getenv("S3_PREFIX", "")  # for example, "bpt/"
STATE_FILE    = os.getenv("STATE_FILE", "/var/lib/bpt-collector/last_run.txt")
PAGE_SIZE     = int(os.getenv("PAGE_SIZE", "100"))
# ── END CONFIG ───────────────────────────────────────────────────────────

ensure_state_file(STATE_FILE)

def read_last_run():
    with open(STATE_FILE, "r") as f:
        ts = f.read().strip()
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def write_last_run(dt):
    with open(STATE_FILE, "w") as f:
        f.write(dt.strftime("%Y-%m-%dT%H:%M:%SZ"))

def get_oauth_token():
    resp = requests.post(
        f"{BPT_API_URL}/oauth/connect/token",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_events(token, start, end):
    headers = {"Authorization": f"Bearer {token}"}
    offset = 0
    while True:
        params = {
            "startTime": start,
            "endTime": end,
            "limit": PAGE_SIZE,
            "offset": offset,
        }
        resp = requests.get(
            f"{BPT_API_URL}/management-api/v1/Audit/Events",
            headers=headers,
            params=params,
        )
        resp.raise_for_status()
        events = resp.json().get("events", [])
        if not events:
            break
        for e in events:
            yield e
        offset += PAGE_SIZE

def upload_to_s3(obj, key):
    boto3.client("s3").put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=json.dumps(obj).encode("utf-8"),
    )

def main():
    # 1) Determine the collection window
    start_dt = read_last_run()
    end_dt = datetime.now(timezone.utc)
    START = start_dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    END = end_dt.strftime("%Y-%m-%dT%H:%M:%SZ")
    print(f"Fetching events from {START} to {END}")

    # 2) Authenticate and fetch
    token = get_oauth_token()
    count = 0
    for idx, evt in enumerate(fetch_events(token, START, END), start=1):
        key = f"{S3_PREFIX}{end_dt.strftime('%Y/%m/%d')}/evt_{int(end_dt.timestamp())}_{idx}.json"
        upload_to_s3(evt, key)
        count += 1
    print(f"Uploaded {count} events")

    # 3) Persist state
    write_last_run(end_dt)

if __name__ == "__main__":
    main()
```
Make it executable:
chmod +x collector_bpt.py
Schedule Daily with Cron
Run the following command:
crontab -e
Add the daily job at midnight UTC (cron runs `/bin/sh`, so use `.` rather than `source` to activate the virtualenv):
```
0 0 * * * cd ~/bpt-collector && . ~/bpt-venv/bin/activate && ./collector_bpt.py >> ~/bpt-collector/bpt.log 2>&1
```
Configure a feed in Google SecOps to ingest BeyondTrust EPM logs
- Go to SIEM Settings > Feeds.
- Click Add new.
- In the Feed name field, enter a name for the feed (for example, BeyondTrust EPM Logs).
- Select Amazon S3 as the Source type.
- Select BeyondTrust Endpoint Privilege Management as the Log type.
- Click Next.
Specify values for the following input parameters:
- Region: The region where the Amazon S3 bucket is located.
- S3 URI: The bucket URI (the format should be: `s3://your-log-bucket-name/`). Replace `your-log-bucket-name` with the name of the bucket.
- URI is a: Select Directory or Directory which includes subdirectories.
- Source deletion options: select the deletion option according to your preference.
- Access Key ID: the User access key with access to the S3 bucket.
- Secret Access Key: the User secret key with access to the S3 bucket.
- Asset namespace: the asset namespace.
- Ingestion labels: the label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM mapping table
Log field | UDM mapping | Logic |
---|---|---|
agent.id | principal.asset.attribute.labels.value | The value is taken from the agent.id field in the raw log and mapped to a label with key agent_id within the principal.asset.attribute.labels array in the UDM. |
agent.version | principal.asset.attribute.labels.value | The value is taken from the agent.version field in the raw log and mapped to a label with key agent_version within the principal.asset.attribute.labels array in the UDM. |
ecs.version | principal.asset.attribute.labels.value | The value is taken from the ecs.version field in the raw log and mapped to a label with key ecs_version within the principal.asset.attribute.labels array in the UDM. |
event_data.reason | metadata.description | The value is taken from the event_data.reason field in the raw log and mapped to the description field within the metadata object in the UDM. |
event_datas.ActionId | metadata.product_log_id | The value is taken from the event_datas.ActionId field in the raw log and mapped to the product_log_id field within the metadata object in the UDM. |
file.path | principal.file.full_path | The value is taken from the file.path field in the raw log and mapped to the full_path field within the principal.file object in the UDM. |
headers.content_length | additional.fields.value.string_value | The value is taken from the headers.content_length field in the raw log and mapped to a label with key content_length within the additional.fields array in the UDM. |
headers.content_type | additional.fields.value.string_value | The value is taken from the headers.content_type field in the raw log and mapped to a label with key content_type within the additional.fields array in the UDM. |
headers.http_host | additional.fields.value.string_value | The value is taken from the headers.http_host field in the raw log and mapped to a label with key http_host within the additional.fields array in the UDM. |
headers.http_version | network.application_protocol_version | The value is taken from the headers.http_version field in the raw log and mapped to the application_protocol_version field within the network object in the UDM. |
headers.request_method | network.http.method | The value is taken from the headers.request_method field in the raw log and mapped to the method field within the network.http object in the UDM. |
host.hostname | principal.hostname | The value is taken from the host.hostname field in the raw log and mapped to the hostname field within the principal object in the UDM. |
host.hostname | principal.asset.hostname | The value is taken from the host.hostname field in the raw log and mapped to the hostname field within the principal.asset object in the UDM. |
host.ip | principal.asset.ip | The value is taken from the host.ip field in the raw log and added to the ip array within the principal.asset object in the UDM. |
host.ip | principal.ip | The value is taken from the host.ip field in the raw log and added to the ip array within the principal object in the UDM. |
host.mac | principal.mac | The value is taken from the host.mac field in the raw log and added to the mac array within the principal object in the UDM. |
host.os.platform | principal.platform | The value is set to MAC if the host.os.platform field in the raw log is equal to macOS . |
host.os.version | principal.platform_version | The value is taken from the host.os.version field in the raw log and mapped to the platform_version field within the principal object in the UDM. |
labels.related_item_id | metadata.product_log_id | The value is taken from the labels.related_item_id field in the raw log and mapped to the product_log_id field within the metadata object in the UDM. |
process.command_line | principal.process.command_line | The value is taken from the process.command_line field in the raw log and mapped to the command_line field within the principal.process object in the UDM. |
process.name | additional.fields.value.string_value | The value is taken from the process.name field in the raw log and mapped to a label with key process_name within the additional.fields array in the UDM. |
process.parent.name | additional.fields.value.string_value | The value is taken from the process.parent.name field in the raw log and mapped to a label with key process_parent_name within the additional.fields array in the UDM. |
process.parent.pid | principal.process.parent_process.pid | The value is taken from the process.parent.pid field in the raw log, converted to string and mapped to the pid field within the principal.process.parent_process object in the UDM. |
process.pid | principal.process.pid | The value is taken from the process.pid field in the raw log, converted to string and mapped to the pid field within the principal.process object in the UDM. |
user.id | principal.user.userid | The value is taken from the user.id field in the raw log and mapped to the userid field within the principal.user object in the UDM. |
user.name | principal.user.user_display_name | The value is taken from the user.name field in the raw log and mapped to the user_display_name field within the principal.user object in the UDM. |
N/A | metadata.event_timestamp | The event timestamp is set to the log entry timestamp. |
N/A | metadata.event_type | The event type is set to GENERIC_EVENT if no principal is found, otherwise it is set to STATUS_UPDATE . |
N/A | network.application_protocol | The application protocol is set to HTTP if the headers.http_version field in the raw log contains HTTP . |
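As a rough illustration of how a few of these rows combine, the sketch below maps a handful of fields from a raw log. This is an illustration only, not the actual parser:

```python
def map_to_udm(raw: dict) -> dict:
    """Illustrative mapping of a few rows from the table above; not the real parser."""
    udm: dict = {"metadata": {}, "principal": {}}
    host = raw.get("host", {})
    if host.get("hostname"):
        # host.hostname -> principal.hostname and principal.asset.hostname
        udm["principal"]["hostname"] = host["hostname"]
        udm["principal"]["asset"] = {"hostname": host["hostname"]}
    if host.get("os", {}).get("platform") == "macOS":
        # host.os.platform -> principal.platform, set to MAC for macOS
        udm["principal"]["platform"] = "MAC"
    if raw.get("user", {}).get("id"):
        # user.id -> principal.user.userid
        udm["principal"]["user"] = {"userid": raw["user"]["id"]}
    if raw.get("process", {}).get("pid") is not None:
        # process.pid -> principal.process.pid, converted to string per the table
        udm["principal"]["process"] = {"pid": str(raw["process"]["pid"])}
    # Per the table: GENERIC_EVENT when no principal is found, else STATUS_UPDATE
    udm["metadata"]["event_type"] = "STATUS_UPDATE" if udm["principal"] else "GENERIC_EVENT"
    return udm
```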