Concepts
Organisation
The Q-Flow Organisation represents your organisation and lets you manage licenses and billing with Q-Flow. An Organisation exists in a single region (e.g. EU, US) and contains Security Principals, Users and one or many Environments.
Environment
An environment contains topics to which events are published, and subscriptions to those topics through which events are delivered. An Environment can have one or many Topics associated with it.
An Environment serves three main purposes:
- Configure and Manage the Event Registry
- Configure and Manage Topics and Subscriptions
- Logically isolate data and event processing from other Environments; Environments do not share information with each other
Topic
A Topic is a specific subject or category of events that can be subscribed to. It groups related event types, allowing subscribers to listen for and respond to relevant events within that topic. A topic could represent a specific subject, multiple categories or be limited to specific circumstances, like an individual company/tenant.
Topics ensure that Subscribers are only notified about events that pertain to their specific interests or functions, reducing unnecessary data processing or regulatory concerns.
Event
An occurrence of potential interest to subscribers. Events are published to a Topic.
EventType
An event type in webhooks specifies the kind of event that triggers the webhook. It defines the particular action or occurrence, such as "customer.created" or "order.completed", that initiates the webhook call. Naming an event type with a noun followed by a verb, for example "customer.created" (a customer was created), clearly indicates which entity is affected and what action occurred, enhancing readability.
Category
A Category is a logical grouping with which Event Types can be associated. There are no rules or limitations on category groupings, but to aid subscribers, categories should be grouped logically. For example, all events pertaining to a customer could be conveniently grouped together, or all transactional events could be grouped together.
API Secret Key
An API secret key is a confidential alphanumeric string used in conjunction with an API key to authenticate and authorize requests to an application programming interface (API). It acts as a security measure to ensure that API calls are made by authorized users or applications and helps protect against unauthorized access.
Subscription
Receives events published to a Topic and delivers matching Event Types to an endpoint. A Subscription only has access to the Event Types available on its Topic. You can choose which specific Event Types to filter for on the Subscription, along with the Webhook URL. A Topic can have one or many Subscriptions, allowing multiple webhook URLs to listen for specific events on the Topic.
Subscriptions can filter for specific Event Types on a Topic, targeting specific interests or functions; this reduces data processing, supports data minimisation and eases regulatory concerns.
Publisher
A publisher in webhooks is the entity that creates and sends event notifications. It generates events based on specific actions or changes within its system and pushes these events to registered subscribers.
Subscriber
A subscriber in webhooks is the entity that receives event notifications from a publisher. It registers a webhook URL to listen for and handle events from the publisher, responding to relevant occurrences as they happen.
Webhooks
Webhooks are automated messages sent from one application to another when a specific event occurs. They are real-time notifications delivered via HTTP POST requests to a predefined URL, allowing the receiving system to instantly react to the event without continuous polling.
Webhook URL
A webhook URL is an endpoint provided by a subscriber (Subscription) where event notifications are sent by the publisher (published to a Topic). It is the address to which HTTP POST requests containing event data are delivered, enabling the subscriber (Subscription) to process and respond to these events in real-time.
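For illustration, the sketch below shows what a minimal subscriber endpoint could look like using ASP.NET Core minimal APIs; the route /webhooks/qflow is an arbitrary example, and signature validation (covered under Webhook Signatures below) is omitted for brevity.

// Minimal sketch of a webhook URL, assuming a .NET 6+ web project with implicit usings.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The publisher delivers event notifications as HTTP POST requests here.
app.MapPost("/webhooks/qflow", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    string payload = await reader.ReadToEndAsync();

    // Validate the signature and process the event payload here.
    return Results.Ok();
});

app.Run();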
Event Retries and Exponential Backoff
If an error is returned by the Subscriber endpoint, Q-Flow automatically retries delivery using the cadence described below.
Q-Flow waits 30 seconds for a response after delivering a message. If the endpoint has not responded within 30 seconds, the message is queued for a retry. Q-Flow uses an exponential backoff retry policy for event delivery to increase successful deliveries without human intervention. You can configure the maximum number of retry attempts in the endpoint set-up; at the maximum, Q-Flow will attempt delivery multiple times over a 24-hour period, using the following intervals between attempts (a subscriber-side sketch for handling redeliveries follows the list):
- 10 seconds
- 30 seconds
- 1 minute
- 5 minutes
- 10 minutes
- 30 minutes
- 1 hour
- 3 hours
- 6 hours
- Every 12 hours up to 24 hours
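Because retries can deliver the same event more than once, subscriber endpoints should be idempotent. Below is a minimal sketch of one way to deduplicate redeliveries in C#, keyed on the Qflow-Request-Id header described later in this section; the in-memory dictionary is purely illustrative, and a production system would typically use a durable store.

using System;
using System.Collections.Concurrent;

public class WebhookDeduplicator
{
    // Illustrative in-memory record of request ids that have been processed.
    private readonly ConcurrentDictionary<string, DateTimeOffset> _seen = new();

    // Returns true the first time a request id is seen, false on redelivery.
    public bool TryMarkProcessed(string requestId)
    {
        return _seen.TryAdd(requestId, DateTimeOffset.UtcNow);
    }
}

An endpoint would call TryMarkProcessed with the Qflow-Request-Id value and, when it returns false, skip reprocessing while still returning a success status so that Q-Flow stops retrying.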
Webhook Signatures
Webhooks play a crucial role in integrating Q-Flow events with external systems. To ensure secure communication, Q-Flow utilises webhook secrets and digital signatures to validate the authenticity and integrity of the incoming webhook requests. This guide explains how to validate the signature included in the webhook header, protecting against replay attacks and ensuring compliance with security best practices.
Headers Provided by Q-Flow
Each webhook sent by Q-Flow contains the following headers:
- Qflow-Request-Id: A unique identifier for each webhook event.
- Qflow-TimeStamp: The Unix epoch time in milliseconds when the webhook request was created.
- Qflow-Signature: A comma-separated header value containing the request signed with each active secret, with the newest key's signature first. Example: sha256=<signature>,sha256=<signature2>
The signature is generated using the HMAC-SHA256 algorithm:
HMACSHA256(SecretKey, "{RequestId}.{Timestamp}.{RequestBody}")
Where:
- SecretKey is the webhook secret provided when creating a subscription in Q-Flow.
- RequestId and Timestamp correspond to the values of the Qflow-Request-Id and Qflow-TimeStamp headers.
- RequestBody is the UTF-8 string encoding of the request body.
Why Validate Webhook Signatures?
- Compliance and Security
Validating the Qflow-Signature ensures the webhook request originated from Q-Flow, maintaining compliance with security standards. It safeguards against potential tampering by verifying that the request was not altered during transmission.
- Protection Against Replay Attacks
To protect against replay attacks, validate the Qflow-TimeStamp. If the timestamp falls outside an acceptable window (e.g., more than 5 minutes old), reject the request; the acceptable window should be based on your own policies. This measure helps ensure that an intercepted request cannot be reused maliciously.
- Additional Considerations
Always compare the computed signature with the received Qflow-Signature using a time-safe comparison method to avoid timing attacks. For added security, rotate your webhook secret regularly and update your subscription settings accordingly.
Verify Q-Flow Webhook Signatures
To verify the Qflow-Signature in a C# application, you can use the following example:
using System;
using System.Security.Cryptography;
using System.Text;

public class WebhookVerifier
{
    // receivedSignature is a single base64 signature taken from the
    // Qflow-Signature header with its "sha256=" prefix stripped; when the
    // header carries several comma-separated signatures (during secret
    // rotation), verify each one in turn.
    public static bool VerifySignature(string receivedSignature, string secretKey, string requestId, string eventTimestamp, string payload)
    {
        // Generate the expected signature
        string expectedSignature = BuildExpectedSignature(secretKey, requestId, eventTimestamp, payload);

        // Convert the signatures to bytes
        byte[] receivedSignatureBytes = Convert.FromBase64String(receivedSignature);
        byte[] expectedSignatureBytes = Convert.FromBase64String(expectedSignature);

        // Use a time-safe comparison to validate the signatures
        return CryptographicOperations.FixedTimeEquals(receivedSignatureBytes, expectedSignatureBytes);
    }

    private static string BuildExpectedSignature(string secret, string requestId, string eventTimestamp, string payload)
    {
        // Create the string to sign: "{RequestId}.{Timestamp}.{RequestBody}"
        string dataToSign = $"{requestId}.{eventTimestamp}.{payload}";

        // Decode the secret from base64
        byte[] secretKey = Convert.FromBase64String(secret);

        // Convert the data to sign to bytes
        byte[] dataBytes = Encoding.UTF8.GetBytes(dataToSign);

        // Compute the HMACSHA256 hash
        using (var hmac = new HMACSHA256(secretKey))
        {
            byte[] hashBytes = hmac.ComputeHash(dataBytes);

            // Convert the hash to a base64 string
            return Convert.ToBase64String(hashBytes);
        }
    }
}
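As an illustration of how the verifier above might be called from a request handler, the following sketch parses the headers documented earlier; the five-minute replay window and the helper name IsAuthentic are examples rather than a prescribed implementation.

using System;

public static class WebhookHandlerExample
{
    public static bool IsAuthentic(string signatureHeader, string requestId, string timestampHeader, string payload, string secretKey)
    {
        // Reject requests outside an acceptable replay window (e.g. 5 minutes).
        long timestampMs = long.Parse(timestampHeader);
        var eventTime = DateTimeOffset.FromUnixTimeMilliseconds(timestampMs);
        if (DateTimeOffset.UtcNow - eventTime > TimeSpan.FromMinutes(5))
            return false;

        // The Qflow-Signature header can carry several comma-separated
        // signatures during secret rotation; accept if any of them verifies.
        foreach (string part in signatureHeader.Split(','))
        {
            string signature = part.Trim();
            if (signature.StartsWith("sha256="))
                signature = signature.Substring("sha256=".Length);

            if (WebhookVerifier.VerifySignature(signature, secretKey, requestId, timestampHeader, payload))
                return true;
        }

        return false;
    }
}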
By following these steps, you can ensure that your Q-Flow webhook integrations are secure, preventing tampering and replay attacks while complying with security best practices. Regularly validate your webhook signatures, monitor the timestamp, and use secure comparison methods to safeguard your application.
Q-Flow Advanced Query
Q-Flow Advanced Query empowers you to filter, aggregate, and transform your incoming events using intuitive SQL syntax. By writing concise SQL clauses, you can extract precisely the data you need, combine and summarize complex information, and reshape event payloads for downstream services.
- Filter: Include or exclude events based on event types or specific conditions within the payload.
- Aggregate: Summarize event data to discover trends or performance insights.
- Transform: Restructure event payloads for simplified consumption.
Example Use Cases for Q-Flow Advanced Query
We've put together some example use cases to demonstrate the simple but powerful capabilities of Q-Flow Advanced Query.
Use Case: Event Stream Optimization
Problem: Multiple events for the same entity (e.g., transaction) generate redundant data, leading to inefficiencies.
Solution: Group by transaction_id and event_type, and select only the latest event based on the timestamp to reduce event volume downstream.
Benefits:
- Reduces data load
- Improves system performance
- Prevents redundant notifications
WITH ranked_events AS (
SELECT
*,
ROW_NUMBER() OVER (
PARTITION BY transactionId, type
ORDER BY timestamp DESC
) AS rank
FROM events
)
SELECT
*
FROM ranked_events
WHERE rank = 1
ORDER BY
transactionId,
type;
Use Case: Aggregating Shopify Order Data
Problem: Shopify webhook events generate multiple notifications for each order, containing product quantities, total amounts, and taxes. This creates redundant data for downstream processing.
Solution: Aggregate Shopify orderCreated events to calculate the total products, total amount, and taxed amount for each order by grouping the events based on orderId.
Benefits:
- Reduces Data Volume: Sends only aggregated order details, reducing unnecessary data.
- Improves Efficiency: Simplifies processing by summarizing key metrics.
- Better Analytics: Provides clear insights into order totals and taxes.
-- Aggregate line items, tax lines and order totals in separate CTEs so the
-- two UNNEST cross joins do not multiply-count each other's rows
WITH lineTotals AS (
    SELECT orderId, SUM(lineItemQuantity) AS totalProducts
    FROM events
    CROSS JOIN UNNEST(lineItems) AS lineItem(lineItemQuantity) -- Unnest line items to get quantities
    WHERE type = 'orderCreated' -- Assuming you want the order-created events
    GROUP BY orderId
),
taxTotals AS (
    SELECT orderId, SUM(taxedAmount) AS taxedAmount
    FROM events
    CROSS JOIN UNNEST(taxLines) AS taxLine(taxedAmount) -- Unnest tax lines to get taxed amounts
    WHERE type = 'orderCreated'
    GROUP BY orderId
),
orderTotals AS (
    SELECT orderId, SUM(orderTotal) AS totalAmount
    FROM events
    WHERE type = 'orderCreated'
    GROUP BY orderId
)
SELECT
    o.orderId,
    l.totalProducts,
    o.totalAmount,
    t.taxedAmount
FROM orderTotals o
JOIN lineTotals l ON o.orderId = l.orderId
JOIN taxTotals t ON o.orderId = t.orderId
ORDER BY orderId;
Use Case: Downsampling Event Metrics for a Time-Series Database like InfluxDB
Problem: Event metrics (e.g., transaction amounts, product quantities, etc.) are generated at high frequency, leading to high data volume in InfluxDB. Storing every single event metric may overwhelm the database and reduce query performance.
Solution: Downsample the event metrics by aggregating data at a lower frequency (e.g., hourly or daily). This reduces the amount of data stored in InfluxDB while still retaining meaningful insights by calculating averages, sums, or other aggregates over time periods.
Benefits:
- Reduces Data Storage: Stores only aggregated metrics, lowering storage requirements.
- Improves Query Performance: By reducing the volume of data, queries are faster and more efficient.
- Sufficient Granularity: Maintains useful insights with less frequent sampling of metrics (e.g., hourly summaries).
SELECT
type,
SUM(transactionAmount) AS totalTransactionAmount
FROM events
GROUP BY
type,
time(5m) -- Grouping by event type and 5-minute time buckets
ORDER BY
time DESC;
Use Case: Detect and Notify Large/Unusual Transactions
You might want to flag transactions that deviate significantly from a typical purchase, such as unusually large amounts or frequent small transactions that could indicate fraud.
Trigger: Large transactions (e.g., $500 or more) that deviate from the customer’s typical purchase history.
How to Implement:
- Track the average transaction value for each customer.
- Flag transactions above a certain threshold (e.g., 2x the customer's average) as potentially fraudulent.
- Emit real-time flags with customer IDs and transaction amounts.
-- Compute the moving average in a CTE; window functions cannot be
-- referenced in a WHERE clause directly
WITH scored_events AS (
    SELECT
        customerId,
        transactionId,
        amount,
        AVG(amount) OVER (
            PARTITION BY customerId
            ORDER BY timestamp
            ROWS BETWEEN 10 PRECEDING AND CURRENT ROW
        ) AS moving_avg_amount
    FROM events
)
SELECT
    customerId,
    transactionId,
    amount,
    moving_avg_amount,
    ABS(amount - moving_avg_amount) AS anomaly_score
FROM scored_events
WHERE
    ABS(amount - moving_avg_amount) > 100; -- Flag if the deviation is greater than 100
Use Case: Detect Duplicate Payments by Transaction ID or Amount
Duplicate transactions can happen due to issues like user retries, system errors, or malicious intent. Detecting duplicates in real-time can help prevent overcharging customers or processing repeated payments.
Trigger: If the same transaction_id or payment amount appears more than once within a short period (e.g., 30 seconds), flag it as a possible duplicate.
How to Implement:
- Use a short window (e.g., 30 seconds) to track transactions with the same transaction ID or amount.
- Alert the system if a duplicate is detected.
SELECT
    customerId,
    transactionId,
    COUNT(*) AS duplicate_count
FROM events
WHERE
    timestamp > CURRENT_TIMESTAMP - INTERVAL '30 seconds'
GROUP BY
    customerId,
    transactionId
HAVING
    COUNT(*) > 1; -- Flag as a possible duplicate
Use Case: Flag Cross-Border Transactions
Certain transactions are riskier due to factors like payment method (credit card vs. bank transfer), geography (cross-border payments), or rapid purchase frequency (multiple payments within a short window).
Trigger: Flag transactions made from a different country or region than usual for a given customer.
How to Implement:
- Monitor the geographical location (country or region) from where the transaction is initiated.
- If the country is different from the customer's usual region, flag the transaction for further verification.
SELECT
    customerId,
    country,
    COUNT(*) AS suspicious_activity_count
FROM events
WHERE
    country != customer_usual_country -- Compare to the customer's usual country
GROUP BY
    customerId,
    country
HAVING
    COUNT(*) > 2; -- Flag if more than 2 suspicious transactions occur in a short time
Use Case: Track Refunds and Chargebacks
Frequent refunds or chargebacks might indicate customer dissatisfaction or even fraud, especially if the patterns are repetitive or happen shortly after purchase.
Trigger: If a customer requests a refund shortly after a purchase, this could indicate potential issues or fraud.
How to Implement:
- Monitor refund events and detect if they happen soon after a payment.
- Track chargebacks if available in the data.
SELECT
    customerId,
    COUNT(*) AS refund_count,
    MIN(timestamp) AS first_refund_time
FROM events
WHERE
    type = 'refundCreated' -- Assumed refund event type; adjust to your schema
    AND timestamp > CURRENT_TIMESTAMP - INTERVAL '24 hours'
GROUP BY
    customerId
HAVING
    COUNT(*) > 2; -- Flag customers with more than 2 refunds in the last 24 hours