
Becoming Friends with Errors: A Journey with Sentry

Recently, I was tasked with setting up and trying out Sentry for one of our large enterprise projects. There was no hard requirement for it, but with our go-live still more than a year away, we saw this as the perfect opportunity to explore how we could build a solid workflow around error handling — before things get real.

So, let me take you on a journey that started with “Hey, should we try using Sentry?” and ends with an appreciation for managing not just errors, but people and processes too.

Chapter 1: So, What Even Is Sentry?

Before diving into the setup, let’s talk about what Sentry actually is. In short, Sentry is an open-source error tracking and performance monitoring tool, available as both a self-hosted solution and a SaaS offering. It helps developers detect, diagnose, and fix errors in real time. But Sentry doesn’t only show you stack traces or error logs: it tracks performance, attaches rich metadata, enables end-to-end distributed tracing, captures user sessions (with replays 🤯), allows for release health monitoring, integrates with tools like Jira, Bitbucket, and Slack, and much, much more. You can find a detailed overview of all major Sentry features here.

But maybe the most underrated thing: Sentry is just as much a people-management tool as it is an error-management tool — it’s about visibility, accountability, and team collaboration. More on that later.

Chapter 2: Setting the Stage

As I’ve already mentioned, we’re working in a large enterprise environment. There are three scrum teams, dozens of services, and a whole bunch of stakeholders: requirement engineers, business analysts, even some politicians, and a group called the Fachbereich (which I still can’t properly translate to English, but let’s say they’re the domain experts).

The architecture is microservice-based with two frontends. Our customer provides the infrastructure, and yes, that includes a self-hosted Sentry instance as well. Some microservices were already configured in Sentry, so I logged in one morning to poke around… and was instantly overwhelmed by a tsunami of errors and transactions. The weekly error report I received back then summarized the state pretty well:

Initial Sentry Report

The colors represent corresponding services, and I think you can already guess which of those our team owns. Exactly: the dark blue, purple, pink, and orange ones …

Chapter 3: Let’s Set Up Our Own Project

But I didn’t let that bring me down. The opposite was the case: I was motivated to tackle this challenge and started to get my hands dirty. Best way to learn? Do it yourself!

Frontend

So I started off by picking one of our frontend services and decided to set up Sentry from scratch. The SDK setup is actually super straightforward. Just follow the steps in the Sentry JavaScript Wizard:

npx @sentry/wizard@latest -i <your preferred integration>

And then, initialize Sentry in your main app file, as the very first import:

import * as Sentry from "@sentry/react";

Sentry.init({
  enabled: true,
  dsn: "SENTRY_DSN",
  environment: "stage", // or e.g. "test", "prod", "feature"
  release: "RELEASE_VERSION", // version of your deployed code
  integrations: [
    Sentry.replayIntegration({
      maskAllText: true, // mask all text in replays
      blockAllMedia: true // block all media (images, videos) in replays
    })
  ],
  tracesSampleRate: 1.0, // uniform sample rate for performance tracing
  tracePropagationTargets: [/^\/api\//], // outgoing requests that get trace headers attached
  replaysSessionSampleRate: 0.1, // fraction of normal sessions recorded as replays
  replaysOnErrorSampleRate: 1.0 // fraction of sessions with errors recorded as replays
});

 

We defined multiple configuration files, one for each of our environments (“test”, “prod”, “stage”, and so on). In our “prod” environment, for example, we don’t want to capture user-related data or images, which is why we mask all text and block all media there. However, the Sentry DSN stays the same for every configuration (within one service), because the environment field is enough to tell them apart.
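
To make that concrete, here is a minimal sketch of how such an environment-specific configuration could look. The REACT_APP_ENVIRONMENT variable, the file name, and the exported object are purely illustrative, not our exact setup; only the Sentry options themselves come from the SDK:

// sentry.config.ts (hypothetical): the DSN stays identical across environments,
// only the environment name and the privacy settings differ.
import * as Sentry from "@sentry/react";

const environment = process.env.REACT_APP_ENVIRONMENT ?? "test"; // e.g. "test", "stage", "prod"
const isProd = environment === "prod";

export const sentryOptions = {
  dsn: "SENTRY_DSN", // same DSN for every environment of this service
  environment,
  integrations: [
    Sentry.replayIntegration({
      maskAllText: isProd, // no user-entered text in prod replays
      blockAllMedia: isProd // no images or media in prod replays
    })
  ]
};

The Sentry.init call then stays identical in every build; only this options object changes per environment.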

And that’s it: replays, traces, and error reporting are ready to be used. 🎉

Except… this is only mostly true.

Since we use self-hosted Sentry, we quickly hit some walls. Some integrations weren’t available, especially the Jira and Bitbucket links. That meant we had to roll our own pipeline to create releases, upload source maps, and handle versioning:

npm run build
sentry-cli sourcemaps inject ./build
sentry-cli releases new -p $SENTRY_PROJECT $SENTRY_RELEASE
sentry-cli releases set-commits $SENTRY_RELEASE --auto --ignore-missing
sentry-cli sourcemaps upload --release=$SENTRY_RELEASE ./build
sentry-cli releases finalize $SENTRY_RELEASE
sentry-cli deploys new -e $SENTRY_ENVIRONMENT


This allows errors to be automatically tagged with the correct release, along with source maps and Git commits to help identify responsibilities and improve observability.

On top of that, since we use GraphQL for client-server communication, we also implemented a small ApolloLink interceptor that checks responses for errors and sends them to Sentry as breadcrumbs. We were heavily inspired by DiederikvandenB’s apollo-link-sentry.
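
To give you an idea of how that looks, here is a rough, simplified sketch of such a link (not our exact implementation; see apollo-link-sentry for a production-ready version):

import { ApolloLink } from "@apollo/client";
import * as Sentry from "@sentry/react";

// Record GraphQL errors as Sentry breadcrumbs so they show up in the
// context of any error event captured later in the same session.
export const sentryErrorLink = new ApolloLink((operation, forward) =>
  forward(operation).map((response) => {
    if (response.errors?.length) {
      Sentry.addBreadcrumb({
        category: "graphql.error",
        level: "error",
        message: `GraphQL errors in operation "${operation.operationName}"`,
        data: { errors: response.errors.map((error) => error.message) }
      });
    }
    return response;
  })
);

The link is then simply placed in front of the HttpLink when creating the ApolloClient, so every response passes through it.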

Backend

Our Spring Boot setup was even simpler, because we simply report errors to Sentry whenever they are logged at error level through Logback. All you have to do is add the correct dependencies for your logging framework and set your Sentry DSN and environment in your application.yml:

<dependency>
    <groupId>io.sentry</groupId>
    <artifactId>sentry-spring-boot-starter-jakarta</artifactId>
    <version>6.28.0</version>
</dependency>
<dependency>
    <groupId>io.sentry</groupId>
    <artifactId>sentry-logback</artifactId>
    <version>6.28.0</version>
</dependency>

sentry:
  dsn: <Sentry DSN>
  environment: test


Under the hood, Sentry’s Spring Boot integration auto-configures the sentry.in-app-packages property with the package of the class annotated with @SpringBootApplication or @SpringBootConfiguration. Nice and easy. Any logged error is instantly reported.

Chapter 4: Getting the Team Involved

With the setup in place, I showed it to my team. We started reviewing the Sentry dashboard every morning after our daily stand-up, which quickly became part of our routine.

This is also when I introduced them to the word triaging. In Sentry, triaging is the process of going through reported issues, deciding which ones matter, and taking action. That could mean:

  • Assigning an issue to the right developer(s) and creating a ticket in Jira
  • Resolving it, if it’s already fixed or will be with the next release
  • Deleting it, if it’s irrelevant
  • Snoozing it, if it’s not actionable or important yet

Since we’re not live yet, most issues don’t require emergency attention, which is a luxury. But even now, we’ve caught bugs that would’ve slipped through — sometimes before our testing team reported them. And thanks to the replays, we saw exactly what the user did before the error.

Chapter 5: Leading by Example

After a few weeks, we decided to share our learnings in a cross-team session. We demoed our Sentry setup: how we triage, how we integrate with Jenkins, how we send custom alerts to our chat tool, and so on. Since we already have a pretty decent monitoring stack consisting of Grafana, Loki, Tempo, and friends, one question came up: how does Sentry blend in with all our existing tools? This was a totally valid question — one I’ve asked myself before. Here’s how I answered it:

Those tools are amazing for debugging, but they don’t help with managing errors. They won’t tell you how often an error occurs, whether it’s a regression, or who should fix it. Sentry fills that gap.

In short: Loki is great for logs. Tempo is great for traces. Grafana is great for dashboards. Sentry ties it all together and tells you what broke where and when, and also who should and will fix it. So I proposed: let’s not replace our stack — let’s use it together with Sentry.

Chapter 6: Managing Errors 🤝 Managing People

This was also the moment I realized something big: Error handling in enterprise isn’t just about the tooling. It’s about managing people. You can have the most advanced monitoring system, but if no one looks at it, assigns the issue, or acts on it, it’s useless.

So we started crafting processes:

  • Each scrum team gets their own Sentry team.
  • Each team owns a subset of services, which they should look over in Sentry.
  • Sentry dashboards and open issues should be reviewed daily.
  • Alerts go to a shared messaging channel and should be triaged immediately.
  • Communication between teams is key.

It’s all about clarity, accountability, and communication. And we’re still refining these flows.

Chapter 7: What’s Next?

We’ve still got a long way to go. Thankfully, our go-live is over a year away, so we have time to iterate on our processes and explore further. For example, right now we have simple frontend-to-backend tracing with Sentry, but super detailed backend tracing through all our microservices with Tempo. In an ideal scenario, we want to start tracing in the frontend (not just at the gateways) and pass the Sentry trace ID along to Tempo. This would allow for even greater debugging and traceability.
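
As a rough idea of where that correlation could start on the frontend, here is a small sketch, assuming a recent Sentry JavaScript SDK; the helper name getCurrentTraceId is hypothetical, and how the ID is then handed over to Tempo is exactly the part we still need to figure out:

import * as Sentry from "@sentry/react";

// Read the active Sentry trace ID so it can later be correlated
// with the corresponding Tempo trace on the backend.
export function getCurrentTraceId(): string | undefined {
  return Sentry.getActiveSpan()?.spanContext().traceId;
}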

Chapter 8: Reflections and Takeaways

While I’m not at the end of my Sentry journey, I’ve learned many things so far:

  • Sentry is a great tool for monitoring applications.
  • Sentry is also great for debugging in testing stages.
  • However, choosing a tool is the easy part. Defining how people should use it is where the real challenge begins.
  • When handling errors, also think about how you’re integrating error monitoring into team routines and how to define ownership.
  • The good news is: Sentry is not just a debugger, it’s a collaboration tool. It helps you track, measure, and assign errors with visibility.
  • Because, once again: Managing errors is managing people.

And if you’re wondering how things are going …

Most recent Sentry Report

We’re still seeing lots of errors, sure, but they’re far better managed and their rate has stabilized. You can also see that we have more transactions than before: with the frontend now integrated, we are sending even more events to our Sentry instance.

And yes, the purple bars are still one of our services, but for the others it’s running pretty great. 😅

Chapter 9: Sentry and &amp

That’s it for now! Thanks for joining me on my short little journey with Sentry. I hope you gained some insights for yourself, and if not, feel free to reach out to me for further questions or information.

I first shared these thoughts during one of our amptalks — our regular internal breakfasts where we share in-depth knowledge on all kinds of topics. Now, I’m glad I could share the same experience with you through this blog post. I hope you were as excited as my colleagues in this photo, whom I totally didn’t pressure into posing:

The whole team having a blast at my Sentry amptalk

At &amp, we believe in experimentation. We try things, adapt quickly, and prioritize human-centered processes. We don’t aim for error-free software (because, let’s be honest, that’s impossible), but we do aim for resilient teams that handle errors quickly and confidently.

Got questions? Want to build your Sentry-included project with us?
Let’s chat! 😊