
Getting Investigations data via S3/Athena

S3

Two additional S3 buckets are needed: one for Sophos Linux Sensor (SLS) metaevents and one for Athena query results. Note the bucket names and regions.
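For example, the buckets could be created with the AWS CLI; the bucket names are placeholders for your own:

aws s3api create-bucket --bucket <investigations-metadata-bucket-name> --region us-east-1
aws s3api create-bucket --bucket <query-results-bucket-name> --region us-east-1
# For regions other than us-east-1, also pass:
#   --create-bucket-configuration LocationConstraint=<region>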

SLS configuration

You can configure Sensors to emit data to support investigations by editing /etc/sophos/runtimedetections-rules.yaml on each host where SLS is running.

Here's an example:

cloud_meta: auto

# blob_storage_create_buckets_enabled adds the ability for the sensor to create buckets
# if the bucket doesn't exist. By default this field is set to false.
blob_storage_create_buckets_enabled: true

investigations:

    # enable_incremental_flush adds the ability to flush row groups rather than 
    # writing files during each flush event. Enabling this will result in larger files
    # being created. By default this field is set to false.
    #
    # Minimum chunk sizes for an incremental flush:
    # GCP: 256KB
    # AWS: 5MB
    # AZURE: 1MB
    enable_incremental_flush: true

    # reporting_interval sets a time interval for forced flushes. 
    reporting_interval: 5m

    # timeout sets the amount of time allowed for investigations data to be written
    # to a sink. By default the timeout is 1/3 of the reporting_interval duration.
    timeout: 90s

    # sinks is a list of destinations where investigations data should be sent.
    sinks:
      - name: <investigations-metadata-bucket-name>
        backend: aws
        automated: true
        type: parquet

    # flight_recorder controls which investigations tables are recorded.
    flight_recorder:
      enabled: true
      tables:
        - name: "shell_commands"
          enabled: true
        - name: "tty_data"
          enabled: true
        - name: "sensors"
          enabled: true
        - name: "sensor_metadata"
          enabled: true
        - name: "connections"
          enabled: true
        - name: "process_events"
          enabled: true
        - name: "container_events"
          enabled: true

IAM

SLS permissions

You will need to grant the Sensors the s3:CreateBucket and s3:PutObject permissions on the buckets you created, including all objects within them.
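Here's a sketch of an inline policy granting those permissions via the AWS CLI; the role name, policy name, and bucket name are placeholders:

aws iam put-role-policy \
    --role-name <sensor-role-name> \
    --policy-name sls-investigations-write \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:CreateBucket", "s3:PutObject"],
        "Resource": [
          "arn:aws:s3:::<investigations-metadata-bucket-name>",
          "arn:aws:s3:::<investigations-metadata-bucket-name>/*"
        ]
      }]
    }'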

Authentication options to S3

SLS can authenticate to S3 using any of the following options.

  • EC2 Instance credentials
  • EKS service account roles
  • Environment variables
  • In-file credentials

EC2 instance credentials

SLS will automatically detect EC2 instance credentials and use them to authenticate to S3.
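If an instance doesn't already have a role attached, an instance profile carrying the permissions above can be associated like this; the instance ID and profile name are placeholders:

aws ec2 associate-iam-instance-profile \
    --instance-id <instance-id> \
    --iam-instance-profile Name=<sls-instance-profile-name>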

EKS service account roles

When using EKS, SLS will automatically detect and use credentials provided by AWS IAM Roles for Service Accounts (IRSA).
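As a sketch, a service account backed by an IAM role can be created with eksctl; the cluster, namespace, service account, account ID, and policy names are placeholders, and the service account must be the one the Sensor pods run under:

eksctl create iamserviceaccount \
    --cluster <cluster-name> \
    --namespace <sls-namespace> \
    --name <sls-service-account> \
    --attach-policy-arn arn:aws:iam::<account-id>:policy/<sls-policy-name> \
    --approve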

Environment variables

Another option is to authenticate by defining the following environment variables:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN (optional)
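For example, set them in the environment that launches SLS; the values are placeholders:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_SESSION_TOKEN=<session-token>  # optional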

In-file credentials

In-file credentials can be defined under the credentials field of a sink in the Sensor's /etc/sophos/runtimedetections-rules.yaml file.

Here's an example:

sinks:
- name: <investigations-metadata-bucket-name> 
  backend: aws
  automated: true
  type: parquet
  credentials:
    blob_storage_access_key_id: XXX
    blob_storage_secret_access_key: XXX
    blob_storage_session_token: XXX  # optional
    blob_storage_region: us-east-1