Snowplow BDP product

This product was formerly known as Snowplow Insights. The name was changed on 11 November 2021; however, all product SLAs and support levels set out in this document remain the same.

Snowplow BDP provides behavioral data management. Companies use Snowplow BDP to collect and operationalize behavioral data.

Behavioral data is generated in / collected from different digital touchpoints, typically including websites and mobile apps (referred to as “Sources”). Sources can include third-party SaaS solutions that support webhooks (e.g. Zendesk, Mailgun).

Snowplow BDP processes and delivers this behavioral data to different endpoints (referred to as “Destinations”). From there, companies use that data to accomplish different use cases. Destinations include data warehouses (AWS Redshift, GCP BigQuery, SnowflakeDB), data lakes (AWS S3, GCP GCS) and streams (e.g. AWS Kinesis, GCP Pub/Sub).

As part of Snowplow BDP we provide standard models (e.g. web or mobile models) that transform the data in the data warehouse to make it easier to consume by downstream applications and systems (e.g. business intelligence).

Key features

Data processing-as-a-Service

Snowplow BDP is provided as a service. Snowplow takes responsibility for the setup, administration and successful running of the Snowplow behavioral data pipeline(s) and all related technology infrastructure.

Please note that, for BDP customers deployed with the Private Managed Cloud service, the Snowplow team can only do the above subject to the customer providing Snowplow with the required access levels to their cloud infrastructure, and complying with all Snowplow Documentation and reasonable instructions.

A UI and APIs are provided to facilitate pipeline management and monitoring

Snowplow BDP customers can manage the setup and configuration of their Snowplow pipeline via a UI and API on console.snowplowanalytics.com. This provides functionality to:

  • View and update pipeline configuration, including testing changes in a development environment before pushing them to production
  • Manage and evolve event and entity definitions
  • Monitor and enhance data quality

Data residency

For all data processed and collected with Snowplow BDP, the customer decides:

  • What data is collected
  • Where it is processed and stored (e.g. what cloud and region)
  • What the data is used for and who has access to it
  • How long the data is retained

The customer’s and Snowplow’s respective obligations with respect to data protection and privacy are set forth in a Data Protection Agreement.

For the Private Managed Cloud service, all data processing and collection with Snowplow BDP is undertaken within the customer’s own cloud account (e.g. AWS, GCP). It is the customer’s obligation (and not Snowplow’s) to maintain and administer the cloud account.

Please note that Snowplow has maximum retention limits for BDP customers using the Cloud service.

Product Options

Pipeline types

Snowplow BDP customers may choose to deploy one or more production pipelines. Three types of production pipeline are offered, with differing service levels.

|  | Basecamp | Ascent | Summit |
| --- | --- | --- | --- |
| Collector uptime SLA* | 99.9% | 99.99% | 99.99% |
| Data latency SLA*: BigQuery** |  | 15 mins | 15 mins |
| Data latency SLA*: Redshift / Snowflake |  | 30 mins | 30 mins |
| Single Sign On |  |  |  |
| Fine Grained User Permissions |  |  |  |
| AWS Infra Security bundle** |  |  |  |
| Outage Protection |  |  |  |
| Self-help support website, FAQs and educational materials | Included | Included | Included |
| 24/7/365 Support through email / Help Centre | Included | Included | Included |
| Support SLA first response time: Severity 1 (Urgent)* | 1 hour | 1 hour | 30 mins |
| Support SLA first response time: Severity 2 (High)* | 8 hours | 2 hours | 1 hour |
| Support SLA first response time: Severity 3 (Normal)* | 24 hours | 24 hours | 8 hours |
| Support SLA first response time: Severity 4 (Low)* | 24 hours | 24 hours | 24 hours |
| Infrastructure Management** |  |  |  |
| Regular Infrastructure Reviews** |  |  |  |
| Deferred Upgrades** |  |  |  |
| Provision of News / Updates / Ideas / Customer Stories |  |  |  |
| Success Management |  |  |  |
| Data Strategy and Consultation Sessions |  |  |  |

*More information on SLAs is provided below

** Applies only to BDP customers deployed with Private Managed Cloud Service

In addition to production pipelines, all Snowplow BDP customers are provided with one Snowplow Mini to use as a development platform. Customers also have the option to purchase additional “complete” (non-Mini) Snowplow development pipelines.

Self-help support website, FAQs and educational materials

All customers have access to self-help websites and product documentation. This includes access to our online training materials, which customers can use for their own training or to create bespoke materials for their employees. These are branded with Snowplow logos and feature all standard functionality. They do not include customer-specific branding or customizations.

24/7/365 Support through email / Help Centre

Answers to ‘how-to’ questions, and help to resolve product issues.

Support for non-Snowplow tooling, industry best-practice advice, and implementation or configuration work are not included.

Customers are required to be running one of the last three recommended stacks, as described here.

Infrastructure management

Management of your Snowplow Infrastructure, including:

  • Proactively ensuring uptime
  • Managed upgrades, including compliance and security releases
  • Performance optimisation and set-up tuning

Regular infrastructure reviews

A review of your existing set-up with recommendations. These might include, but are not limited to, security, functionality, performance, reliability and cost management. Reviews are limited to one per quarter and are undertaken upon request at a mutually convenient time.

Deferred upgrades

Snowplow fully manages your technology stack, ensuring all components are on the latest and recommended releases. Deferred upgrades for non-critical releases are available for specific business needs (e.g. avoiding peak period). To defer an upgrade by a maximum of 4 weeks, we require a week’s notice in response to our notification of the upgrade.

Provision of news / updates / ideas / customer stories

Regular information, news, product features, best practices and customer stories. For privacy reasons, customers will need to register to receive this information.

Success management

Customers will be allocated an appropriately skilled Snowplow team member to help them get value from the product, assist with adoption, provide best practice and answer queries. This individual may be a shared resource with other customers.

Data strategy and consultation sessions

Time on your site or remotely with key stakeholders to provide advice, ideas, training and similar collaboration. Future use cases can be mapped out and agreed upon to assist in maximizing ROI. Limited to one session every 6 months.

SSO

SSO is an authentication process that allows users to access multiple applications after signing in only once. Snowplow supports SSO integration for the majority of identity providers, provided that they adhere to the OASIS SAML 2.0 protocol. Once enabled, all users of an SSO-enabled instance of Snowplow must authenticate through the identity provider.

Fine Grained User Permissions

Admin users of Snowplow’s Data Management console can configure custom permissions for other users, governing their level of access to the monitoring and configuration functionality available in the UI and APIs.

Outage Protection

An outage-protected AWS pipeline is deployed to span multiple distinct AWS regions. One “failover pipeline” is deployed into a different region to the primary pipeline. In the event of an outage in the primary pipeline’s region, traffic is rerouted to the failover pipeline, which buffers the data collected until the outage is over, at which point the buffered data is relayed back to the primary pipeline. This is available for Customers on AWS only.

Event volumes

Snowplow BDP customers purchase capacity to process a certain volume of events each month (e.g. 100M, 5Bn).

Customers can use their capacity across all their Snowplow production pipelines if they have more than one: it is up to the customer how capacity is distributed between pipelines.

Customers have the opportunity to buy additional capacity at any stage during their contract term. In this case, the event capacity increases for that calendar month as soon as the updated contract becomes effective. The increased event capacity then remains in place for each subsequent calendar month until the end of the current contract period, at which point the parties have the opportunity to review the products and tiers and/or renew the current terms.

The “volume of events” in a given time period is calculated as the number of records written to the good enriched stream in that time period (UTC). For customers on the Private Managed Cloud deployment, on AWS this number is provided as a CloudWatch metric; on GCP it is provided as a Google Cloud metric.
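For illustration only, a customer on AWS could approximate this count with a CloudWatch query along the following lines. This is a minimal sketch: the stream name is a placeholder, and the standard AWS/Kinesis IncomingRecords metric is used as a stand-in for the specific metric Snowplow provides.

```python
# Illustrative sketch only: approximate the monthly event volume by summing
# records written to the good enriched Kinesis stream (UTC). The stream name
# is a placeholder, and AWS/Kinesis IncomingRecords is used here as a
# stand-in for the metric Snowplow exposes.
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Kinesis",
    MetricName="IncomingRecords",
    Dimensions=[{"Name": "StreamName", "Value": "enriched-good"}],  # placeholder
    StartTime=datetime(2023, 5, 1, tzinfo=timezone.utc),
    EndTime=datetime(2023, 6, 1, tzinfo=timezone.utc),
    Period=86400,                     # one datapoint per day
    Statistics=["Sum"],
)

monthly_volume = sum(point["Sum"] for point in response["Datapoints"])
print(f"Events written to the good enriched stream: {monthly_volume:,.0f}")
```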

Bolt-ons

Cross cloud data delivery

Customers have the option to have their data processed in a single cloud but then replicated to a second cloud in a matter of seconds. This enables customers to access their data in e.g. AWS and Azure, or AWS and GCP.

Please note that latency SLAs do not currently apply to data replicated from one cloud environment to another.

AWS Infrastructure Security

This bundle of security features for customers on the Private Managed Cloud deployment comprises:

  • VPC peering: As part of the Snowplow pipeline setup, a Virtual Private Cloud (VPC) housing the pipeline is set up in the Customer’s cloud account. Customers that wish to enable VPC peering between any existing VPC they own and the new Snowplow VPC can choose the CIDR/IP range used in the Snowplow-setup VPC so that peering is possible. Please note that this is currently only available for Customers on AWS; Customers who are interested in this feature on GCP are encouraged to discuss it with their Customer Success Manager.
  • Custom tagging: Up to 5 custom tags can be defined that will be appended to every AWS resource that Snowplow deploys. If needed, specific tags can be defined for VPC assets and S3 bucket assets that are not propagated to every other resource.
  • Custom security agents: For all EC2 servers that are deployed as part of the service, a customer’s custom security agents may be installed via an S3 object made available by the customer. This is run as an addendum to Snowplow’s user-data scripts and can allow customers to meet certain security compliance needs.
  • Custom IAM policy: Agent installation on EC2 nodes may require extra IAM permissions (e.g. for the SSM agent) to function correctly. The IAM policies attached to EC2 servers can be extended with a customer-defined policy if needed.
  • SSH access control: To comply with customers’ internal security policies, Snowplow’s SSH access to the environment can be disabled.
  • HTTP access control: All HTTP (i.e. non-encrypted) traffic to internet-facing Load Balancers deployed as part of Snowplow BDP can be disabled.
  • IAM permissions boundary: To control which IAM permissions Snowplow services are allowed to have, an IAM Permissions Boundary policy may be configured by the customer, sandboxing the service in addition to, or instead of, account-wide SCPs (see the sketch below).
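The sketch below illustrates the permissions-boundary concept using the standard IAM API; the role name and policy ARN are placeholders, and in practice this configuration is agreed with and applied by Snowplow rather than being self-service.

```python
# Conceptual sketch only: attaching a customer-defined permissions boundary
# policy to a role via the standard IAM API. The role name and policy ARN
# are placeholders, not fixed parts of Snowplow BDP.
import boto3

iam = boto3.client("iam")

iam.put_role_permissions_boundary(
    RoleName="snowplow-pipeline-role",  # placeholder role used by the service
    PermissionsBoundary="arn:aws:iam::123456789012:policy/CustomerBoundary",
)
```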

Custom VPC Integration (AWS only)

As part of a Private Managed Cloud deployment, Snowplow generally deploys a VPC for everything to be deployed within. With this bolt-on, the pipeline can instead be deployed into a pre-existing VPC to meet certain restrictions of Customer accounts. This VPC must allow Snowplow access to the internet via a directly connected Internet Gateway (IGW), and sufficient NACL rules must be in place for the deployment to function as expected; the VPC must be signed off by the Snowplow team prior to deployment.
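For illustration only, a pre-flight check along the following lines (assuming boto3 credentials for the target account; the VPC ID and region are placeholders) can confirm the IGW requirement before requesting sign-off:

```python
# Illustrative pre-flight check: verify that the pre-existing VPC has a
# directly attached Internet Gateway, as required for a custom VPC
# deployment. The region and VPC ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
vpc_id = "vpc-0123456789abcdef0"

igws = ec2.describe_internet_gateways(
    Filters=[{"Name": "attachment.vpc-id", "Values": [vpc_id]}]
)["InternetGateways"]

if igws:
    print(f"IGW {igws[0]['InternetGatewayId']} is attached to {vpc_id}")
else:
    print(f"No Internet Gateway attached to {vpc_id}; the VPC would not be signed off")
```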

Kafka as a destination (AWS only)

Snowplow’s Kafka relay makes it possible to replicate the stream of good enriched data from Kinesis to the customer’s existing Kafka cluster. Data lands in a single Kafka topic in JSON format with a configurable partition message key.
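For illustration, consuming the relayed data could look like the sketch below; the topic name, broker address and use of the kafka-python client are assumptions, not fixed parts of the product.

```python
# Hypothetical consumer for the relayed enriched stream. The topic name and
# broker address are placeholders; each message value is one enriched event
# as a JSON object, keyed by the configurable partition key.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "snowplow-enriched-good",                        # placeholder topic name
    bootstrap_servers="kafka.internal.example.com:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    print(message.key, event.get("event_name"))
```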

Service Level Agreement

Snowplow provides Service Level Agreements (SLAs) on Collector Uptime, Data Latency and Support.

Collector Uptime SLA

Snowplow BDP customers benefit from the uptime SLAs given above. Uptime refers to “collector uptime”, i.e. the availability of the Snowplow collector, which receives data for processing from different sources.

‘Collector Uptime’ is the % of time that the collector is available over a calendar month, and is calculated according to the following:

[total minutes in the calendar month that the collector is ‘up’ / total minutes in the calendar month] × 100

The collector is defined as ‘up’ if 5xx responses make up less than 5% of responses in a 1-minute period. If there are no requests in the period, the collector will also be defined as being ‘up’.
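To make the definition concrete, here is a minimal sketch (not a product tool) of the calculation, assuming per-minute counts of total responses and 5xx responses are available:

```python
# Minimal sketch of the uptime calculation defined above: a minute is 'up'
# if 5xx responses are under 5% of all responses, or if there were no
# requests at all. Input: per-minute (total_responses, count_5xx) pairs.
def collector_uptime(minutes: list[tuple[int, int]]) -> float:
    up = 0
    for total, errors_5xx in minutes:
        if total == 0 or errors_5xx / total < 0.05:
            up += 1
    return 100 * up / len(minutes)

# Example: a 30-day month (43,200 minutes) with 40 'down' minutes.
month = [(1000, 0)] * 43_160 + [(1000, 100)] * 40
print(f"{collector_uptime(month):.3f}%")  # 99.907% -> meets 99.9%, not 99.99%
```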

Snowplow Analytics Limited commits to providing a monthly Collector Uptime percentage to the Client as denoted by the pipeline type.

If Snowplow Analytics Limited does not meet the commitment over a calendar month, the customer will be entitled to service level credits on their Snowplow BDP monthly fee equal to the % of time that we are in breach of the SLA, up to a maximum of 20%.

The SLA will not apply to any downtime due to:

  • An AWS or GCP outage
  • A failure by AWS or GCP to scale up the collector load balancer*
  • A Client making any direct configuration change to any of the Snowplow BDP infrastructure running in their cloud account
  • A feature identified as being pilot, alpha or beta

* Snowplow is responsible for and controls the scaling of the collector application, but AWS and GCP control and are responsible for the scaling of the load balancer.

Data Latency SLA

Snowplow BDP customers benefit from the Data Latency SLAs given above.

Snowplow Analytics’ Latency SLA is calculated as follows:

[total time in a calendar month that the latency of the data is within the time period denoted / total time in the calendar month] × 100

Snowplow Analytics will ensure that data is available in the destination selected within the time periods denoted by the pipeline type, 99.9% of the time each calendar month (based on UTC timestamp).

The latency of data in Redshift and Snowflake is measured at each point in time as the difference between the current time and the max collector timestamp for all events loaded into that destination.

The latency of data in BigQuery is measured by periodically sampling (e.g. every 1 second) the difference between the collector timestamp and the current time for events as they are loaded into the destination.
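For illustration, the Redshift / Snowflake measurement amounts to a query along the following lines. This is a sketch assuming the standard Snowplow atomic.events table and its collector_tstamp column; the connection string is a placeholder.

```python
# Illustrative latency check for a Redshift destination, following the
# definition above: latency = now - max(collector_tstamp) over loaded events.
# The connection string is a placeholder; atomic.events is the standard
# Snowplow events table.
from datetime import datetime, timezone
import psycopg2

conn = psycopg2.connect(
    "host=redshift.example.com port=5439 dbname=snowplow user=reader password=secret"
)
with conn.cursor() as cur:
    cur.execute("SELECT MAX(collector_tstamp) FROM atomic.events")
    (max_collector_tstamp,) = cur.fetchone()

latency = datetime.now(timezone.utc) - max_collector_tstamp.replace(tzinfo=timezone.utc)
print(f"Current load latency: {latency}")  # SLA target: within 30 mins, 99.9% of the month
```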

If Snowplow Analytics Limited does not meet the commitment over a calendar month, the customer will be entitled to service level credits on their Snowplow BDP monthly fee equal to the % of time that we are in breach of the SLA, up to a maximum of 20%.

The SLA will not apply if a failure to process and load the data into the data warehouse is due to a factor out of the control of Snowplow Analytics Limited, for example:  

  • The Client-owned and managed data warehouse has run out of capacity, so that it is not possible to load data, or to load it in a performant manner
  • An outage of the data warehouse
  • A broader outage in AWS or GCP
  • The Client making any direct configuration change to any of the Snowplow BDP infrastructure running in their cloud account

The SLA will also not apply:  

  • For failed events, e.g. events that fail to validate against the associated schemas
  • If latency is caused by features that are identified as either pilot, alpha or beta
  • For any data that does not reach the Snowplow collector to be processed (e.g. because of an issue upstream of the collector such as a network or connectivity issue)
  • Until the Kickstarter has been completed with the Client, and production level volumes are being processed by the pipeline

In order to ensure that we can honour the SLA stated, Snowplow Analytics Limited reserves the right to make periodic adjustments to the Client’s pipeline, unless otherwise agreed in writing with the Client.

Support SLA

Snowplow BDP customers benefit from the Support SLAs given above. When you raise a ticket through Snowplow Support, it is prioritized as severity 1 (highest) to 4 (lowest), depending on the severity and impact of the issue or question. These are defined as follows:

Severity 1 (Urgent): Production pipeline is not functioning such that (a) data is not being reliably and securely collected or (b) good data is not being reliably and securely delivered to the target destination, rendering use of the service impossible with no alternative available.

Severity 2 (High): An error manifests in data flow or processing, or data processing is unreliable; significant product functions do not work or cannot be used. A workaround might exist, but it is complex, requires significant effort or doesn’t always work. The pipeline is active but data/functionality is partially or intermittently impacted.

Severity 3 (Normal): General product usage questions and advice. Otherwise, a minor feature doesn’t work or fails eventually. The issue does not have a significant impact on product usage. There is an easy workaround that always avoids the problem, or it happens rarely.

Severity 4 (Low): Usability errors, or screen or report errors, that do not materially affect quality and correctness of function, intended use or results.

Support SLAs are dependent on the severity of the issue and on which edition of Snowplow BDP you have purchased. Snowplow reserves the right to adjust the severity of any support ticket to align with the above definitions. Snowplow will ensure that customers receive human responses to questions submitted via support tickets as detailed in the table above. Snowplow does not commit to resolution-time SLAs. In the event Snowplow Analytics does not meet its obligations under the SLA, the customer will be entitled to receive service level credits on their Snowplow BDP fee of 2% of the monthly fee per support ticket for which Snowplow Analytics ultimately determines that the SLA was breached, up to a maximum of 20% per month.

The monthly fee is defined as the annual contract value ÷ 12.

Service Level Credits

Combined credits across all SLA breaches (collector uptime, data latency and support) will be capped at 20% of the monthly fee.
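As a worked illustration of how these credit terms combine (all figures are hypothetical):

```python
# Hypothetical illustration of the service-credit terms above: the monthly
# fee is the annual contract value / 12, support credits are 2% per breached
# ticket, and combined credits are capped at 20% of the monthly fee.
annual_contract_value = 120_000.0
monthly_fee = annual_contract_value / 12               # 10,000

uptime_breach_pct = 0.5      # % of the month in breach of the uptime SLA
breached_support_tickets = 3

uptime_credit = min(uptime_breach_pct, 20) / 100 * monthly_fee              # 50
support_credit = min(2 * breached_support_tickets, 20) / 100 * monthly_fee  # 600

total_credit = min(uptime_credit + support_credit, 0.20 * monthly_fee)
print(f"Credit this month: {total_credit:.2f}")        # 650.00
```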

Updates

As our business evolves, we may change our Product Description, including any associated Service Level Agreement. Clients can review the most current version of the Service Level Agreement at any time by visiting this page. Last updated May 20, 2023.
