Introduction

At Uala, we have been using Sentry for years.

We started with a standard Docker installation, with only five containers: Web, Worker, Cron, Postgres, and Redis.

With the new version (Sentry 20) we decided to upgrade our instance, but sadly we found that this had become really complicated due to the new Sentry architecture.

So we decided to start with a fresh installation, and the only way to do that was with a Helm chart.

Looking in the default catalog, we found that the existing chart is deprecated and that there is a new one here.

So we added the Helm repo and tried to install the chart.
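
For reference, the commands look roughly like this (the repo alias is arbitrary, and sentry20 is the release name referenced throughout the rest of this post):

  # add the new sentry-kubernetes chart repository and refresh the index
  helm repo add sentry https://sentry-kubernetes.github.io/charts
  helm repo update

  # install the chart, using "sentry20" as the release name
  helm install sentry20 sentry/sentry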

Everything went well, but the installation creates 32 containers! That's a huge difference compared to the previous version.

After checking that the deployment had no issues (all pods started and kept running), we started looking at the values.yaml file, where we can configure our Sentry instance.

The Helm chart has no documentation about the available values and their defaults, but we found the default values file in the repo.
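
If you don't want to dig through the repository, the same defaults can also be dumped locally with Helm itself:

  # save the chart's default values as a starting point for our own values.yaml
  helm show values sentry/sentry > values.yaml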

The admin user and the first issue

First of all, we need a new admin user:

  user: 
    create: "true"
    email: "[email protected]"
    password: "AgoodPassword"
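
Every change to the values file is then applied with a regular Helm upgrade, something like:

  # apply the updated values.yaml to the existing release
  helm upgrade sentry20 sentry/sentry -f values.yaml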

And here came the first issue… after this first upgrade, the containers start but cannot connect to PostgreSQL.

The reason is that the password of the PostgreSQL instance doesn't persist between multiple upgrades of the Helm chart, so we have to force it with:

  postgresql:
    postgresqlPassword: "your-great-password"
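
As an alternative to forcing the password, the one generated on the first install should still be stored in the secret created by the bundled PostgreSQL subchart; assuming the usual Bitnami naming (secret <release>-postgresql, key postgresql-password), something like this should print it:

  # read the auto-generated password back from the subchart's secret (secret name and key are assumptions)
  kubectl get secret sentry20-postgresql -o jsonpath='{.data.postgresql-password}' | base64 --decode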

Ok, now everything is working again.

Routing traffic: ingress vs nginx

After that, we have to decide whether we want to manage our incoming requests with an Ingress or with Nginx.
After several tests, we decided to go with Nginx, because this way it's possible to log the traffic:

  ingress: 
    enabled: false
  nginx: 
    enabled: true

This creates a new Nginx pod that splits the traffic between the “web” pod for the administration panel and the “relay” pod for all events.

Cannot route traffic to relay pod

Here we found LOTS of issues due to the configuration of the Kubernetes services created by the Helm chart.
The main reason is that the “relay” pod is created outside the Helm chart's normal flow (I don't know why), and the service selector is configured as follows:

......
spec:
  clusterIP: 
  ports:
  - name: sentry-relay
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: sentry20
    io.cattle.field/appId: sentry20
    role: relay

If we look at the deployment, we see this:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-weight: "15"
  creationTimestamp: "2020-10-06T15:26:42Z"
  generation: 1
  labels:
    app: sentry20
    chart: sentry-5.3.0
    heritage: Helm
    release: sentry20

What happened here?

The Kubernetes service (created by the Helm chart) includes a specific label:
io.cattle.field/appId: sentry20
but that label doesn't exist in the deployment's pod spec.
As a result, the service selector cannot match the deployment's pods, so requests to the “relay” cannot be routed to the correct pod.
Where does this label come from?
It's a Rancher-specific label that Rancher adds to all the resources created from a Helm chart.
I think the Helm chart creates the relay deployment outside the normal deploy (it's installed as a post-install/post-upgrade hook, as the annotations above show), and this is what causes the missing label.
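
A quick workaround is to drop the Rancher label from the service selector so that it matches the relay pods again, for example with a JSON patch (the service name sentry20-relay is an assumption based on our release name, check yours with kubectl get svc):

  # remove the extra selector key; the "/" in the label key is escaped as "~1" in JSON Pointer syntax
  kubectl patch service sentry20-relay --type=json \
    -p='[{"op":"remove","path":"/spec/selector/io.cattle.field~1appId"}]'

Adding the same label to the relay deployment's pod template would also work; either way, the selector and the pod labels have to match again.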

Unfortunately, it took us two hours to understand why Sentry couldn't record new events, mainly because most of Sentry's pods don't log anything related to the traffic.

Another issue concerns the relay mode.
In the standard mode, Sentry tracks in a database table all the relays that are enabled to send traffic, but the flag defaults to false.
So it responds with a 401 error code to every event.
The only way we found to fix it is to set mode: proxy in the relay section of values.yaml.
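
In the values file this looks like:

  relay:
    mode: proxy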

Affinity

Anyway, now it works, but the Helm chart scatters the pods across different hosts (30 nodes!!) and we need to keep control over where the pods run.
So, we have to add some affinity rules.
Unfortunately, the Helm chart has lots of deployments, and we have to define an affinity rule for every single one of them (a sketch of what goes inside each elided entry follows the list)…

  redis:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  kafka:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  clickhouse:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  postgresql:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  rabbitmq:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  relay:  
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            [...]
  sentry:  
    web:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    worker:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    ingestConsumer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    cron:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    postProcessForward:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
  snuba:  
    api:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    consumer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    outcomesConsumer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    sessionsConsumer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    transactionsConsumer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
    replacer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
                [...]
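
For completeness, each of the elided blocks above is just a regular nodeSelectorTerms list. A minimal sketch of a single entry, assuming a hypothetical node label such as workload=sentry:

  nodeSelectorTerms:
    - matchExpressions:
        - key: workload        # hypothetical node label, replace with your own
          operator: In
          values:
            - sentry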

Maybe a global affinity option in the Helm chart would be easier?

Anyway, let's go ahead.

Move to the latest version

The Helm chart ships with a specific image tag by default, but we can force it with:

  images:
    sentry:
      tag: "20.9.0"
    snuba: 
      tag: "20.9.0"
    relay:
      tag: "20.9.0"    

Now we can start using our Sentry instance, but a new issue appeared.

Configure SMTP settings

If we add a new user, we cannot send them an invite by email.

Well, we have to set the email parameters, right? But you cannot do it from the Sentry interface… so let's go back to our Helm chart values file and add the configuration for our email provider:

  mail:
    backend: "smtp"
    useTls: true
    username: "your-username"
    password: "your-password"
    port: 587
    host: "smtp.sendgrid.net"
    from: "[email protected]"

GitHub integration

And now we would like to connect GitHub to our Sentry instance, so we can track every commit, release, and issue.

We spent a couple of hours before understanding how to make it work: the documentation is a pain, and there are lots of open issues about it not working… but we did it, it works, and this is how.

Create a new GitHub app here: https://github.com/settings/apps

PAY ATTENTION! If the Sentry instance is for your organization, you have to create the app here:
https://github.com/organizations/[YOUR ORG]/settings/apps
or it will not see your organization's repositories in Sentry.

When configuring the app, use the following values:

  Homepage URL: ${urlPrefix}
  User authorization callback URL: ${urlPrefix}/auth/sso/
  Setup URL (optional): ${urlPrefix}/extensions/github/setup/
  Webhook URL: ${urlPrefix}/extensions/github/webhook/

When prompted for permissions, choose the following:

  Repository administration: Read-only
  Repository contents: Read-only
  Issues: Read & write
  Pull requests: Read & write
  Repository webhooks: Read & write

You also have to create a private key; GitHub can generate it for you on the same page.

Now you have to configure the Helm chart with the GitHub options:

  github:
    appId: "your-github-app-id"
    appName: "the-name-of-the-github-app"
    clientId: ""
    clientSecret: ""
    privateKey: |
      -----BEGIN RSA PRIVATE KEY-----
      ....
      -----END RSA PRIVATE KEY-----
    webhookSecret: "a-mandatory-secret"

PAY ATTENTION to two things here:

  1. The private key has to be put AS IS in the Helm chart values file. DON'T REPLACE the line breaks with \n!!
  2. The webhookSecret IS MANDATORY. The Sentry documentation is wrong when it says it's optional… it isn't: if you don't create it, the integration will not work.

Now you can open your Sentry instance at /settings/sentry/integrations/ and add GitHub. It will open a popup with YOUR GITHUB APP, and everything should go fine from here.

Conclusion

Deploying a new tool on Kubernetes is not always a one-click action, and the lack of documentation doesn't help much.
Anyway, Sentry is a great tool, and we are happy to be able to use the new version with all its new features.