Out of the box, this alert notifies you of new events using the condition “an event is first seen”. When a new issue arrives in Sentry, the alert system will dispatch a notification. If you haven’t set up any Alert Services yet, you’ll receive the notification via email. If you configure an Alert Service like Slack, you’ll see the notification there instead. Given the above configuration, Sentry will only notify you about new events once every 30 minutes.
Event is First Seen vs Event is Seen
There is a fundamental difference between “an event is *first* seen” and “an event is seen”: the “first seen” condition only applies to new, unique instances of an event, whereas “an event is seen” applies to every instance of an event.
It’s worth evaluating with your team which condition is preferable. Here are a few things to consider: “first seen” is generally useful for catching new errors in real time, but doesn’t tell you anything about subsequent instances of the same errors. “An event is seen” captures everything, but can be noisier if a rule doesn’t also include other conditions.
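To make the distinction concrete, here is a minimal, purely illustrative Python sketch of when each condition would fire. This is not Sentry’s implementation; the fingerprint field and helper names are hypothetical.

```python
# Illustrative only: models when each condition would fire, not Sentry internals.
seen_fingerprints = set()

def first_seen_fires(event):
    """Fires only the first time a given issue (fingerprint) is observed."""
    if event["fingerprint"] in seen_fingerprints:
        return False
    seen_fingerprints.add(event["fingerprint"])
    return True

def seen_fires(event):
    """Fires for every single instance of the event."""
    return True

events = [{"fingerprint": "db-timeout"}, {"fingerprint": "db-timeout"}]
print([first_seen_fires(e) for e in events])  # [True, False] -> one alert for the new issue
print([seen_fires(e) for e in events])        # [True, True]  -> an alert every time
```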
A Quick Note on All vs Any
Sentry’s Alert Rules system is flexible and lets you choose between two settings:
When any of these conditions are met
When all of these conditions are met
What’s the main difference? Simply put, each setting applies the underlying rule conditions differently. A rule set to “any” will take an action if any of its child conditions are met, even if the others are not. Conversely, a rule set to “all” will only take an action when every child condition is met. See the following example:
Any Conditions
In the above example, we have an “any” rule set up. The rule checks conditions on incoming events and will perform an action any time a given event is seen by more than 100 people in an hour, is seen more than 100 times in an hour, or has a level of warning or greater. In this case, Sentry will send a notification to the alert service, PagerDuty, any time one of those three conditions is met.
All Conditions
In this example, Sentry will only send a notification to PagerDuty if the level is equal to fatal and the event is seen more than 10 times in an hour. In other words, Sentry might register an event being seen 11 times in an hour but opt not to send an alert because the level was only set to error. Conversely, an event may have a level of fatal, but if Sentry has only seen it 9 times, no notification goes to PagerDuty.
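One way to think about the difference is Python’s built-in any() versus all() applied over a rule’s conditions. The sketch below is a simplified model of the example above; the condition functions, field names, and thresholds are illustrative, not Sentry’s actual rule engine.

```python
# Simplified model of "any" vs "all" rule evaluation; conditions are illustrative.
def level_is_fatal(event):
    return event["level"] == "fatal"

def seen_over_10_per_hour(event):
    return event["hourly_count"] > 10

conditions = [level_is_fatal, seen_over_10_per_hour]

event = {"level": "error", "hourly_count": 11}

# "any": notify if at least one condition matches -> True here (count is over 10)
notify_any = any(cond(event) for cond in conditions)

# "all": notify only if every condition matches -> False here,
# because the level is only "error" even though the count is over 10
notify_all = all(cond(event) for cond in conditions)

print(notify_any, notify_all)  # True False
```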
Which should you use?
It ultimately depends on your criteria for a given project or department. As a rule of thumb, it generally makes more sense to use “any” to dispatch alerts at a broader level, and “all” to dispatch alerts based on a narrower configuration. For example, an on-call operations engineer might be more concerned with fatal-level events pouring in over a given time window than with other isolated conditions on their own.
Setting Thresholds
It’s rare that a team wants to hear about absolutely every instance of a specific error coming in. Often, it’s necessary to create a rule based on a frequency threshold. To set up a basic threshold, select the condition titled “An issue is seen more than {value} times in {frequency}”.
As a basic example, we set a condition of 50 times in one minute. Keep in mind that different teams will have different criteria for which threshold is most valuable to them. For example, your payments team might want a more aggressive threshold for checkout failures than your frontend team sets for a timeout issue on a footer component. Use thresholds to help determine the significance of an error’s impact and its escalation priority.
Understanding Attributes
The Alert Rules system in Sentry is capable of picking out all sorts of elements that live within an event payload. We call these different elements attributes, and there are 15 different kinds of attributes that a rule can target. Some of the most widely used attributes are Message, Platform, and Type, but other interesting values include user.id, http.method, and stacktrace.filename.
By setting an Attribute condition in a rule, you can build out some alert logic that automatically performs an action when that attribute is detected. In this example, any event containing the message “Database Transaction Failed” will get routed directly to PagerDuty for our operations team to investigate.
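For context, an application might emit an event carrying that exact message through the SDK, as in this hedged Python sketch; the DSN, function, and surrounding logic are placeholders, and the routing to PagerDuty itself happens in the alert rule, not in code.

```python
import sentry_sdk

# Placeholder DSN for illustration only.
sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

def commit_order(order):
    try:
        ...  # database write for the order (placeholder)
    except Exception:
        # This message is what the alert rule's attribute condition matches on.
        sentry_sdk.capture_message("Database Transaction Failed", level="error")
        raise
```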
You can also set up rules that account for multiple attributes at once and chain that logic together. In this more advanced example, we send targeted alerts to an Android development team using the message, platform, and type values: a notification gets routed to the Android Dev team in Slack if a RunTimeError with the message “Failed to Reload Index” comes in from the sentry-java SDK.
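For reference, those three conditions look at fields along these lines in the event body. This is a heavily abbreviated, illustrative sketch of the general shape, not the full Sentry event schema.

```python
# Heavily simplified, illustrative event payload; real events carry many more fields.
event = {
    "message": "Failed to Reload Index",  # matched by the "message" attribute condition
    "platform": "java",                    # matched by the "platform" attribute condition
    "exception": {
        "values": [
            {"type": "RunTimeError"}       # matched by the "type" attribute condition
        ]
    },
}
```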
Regression Alerts
In some circumstances, your engineering team may want to be immediately notified of regressions. The “An event changes state from resolved to unresolved” condition automatically registers the state change in a Sentry issue and dispatches a notification.
If your project has multiple teams working on it, you may want to route regression alerts to the appropriate team.
Routing Alerts to Different Teams
You may find that your application has errors coming in from different features, and you’ll want to route those errors directly to the team that works with that piece of code. In our last video, we detailed how you could configure your SDK to set context tags.
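As a quick refresher, a tag can be set from application code with the SDK, as in this minimal Python sketch; the tag key and value (“feature”, “checkout”) are example choices, not required names.

```python
import sentry_sdk

# Placeholder DSN for illustration only.
sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

# Tag every event sent from this part of the application so an alert rule
# can match on the tag and route notifications to the team that owns the feature.
sentry_sdk.set_tag("feature", "checkout")
```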