How are you connecting GitHub PRs to actual release confidence? #187900
Replies: 2 comments 1 reply
Our fix was treating the GitHub Release (the tag) as the source of truth, not the PR. We started using a dedicated Release Dashboard (we use a custom internal tool, but others use tools like ArgoCD or even Jira's Release Hub) that aggregates all checks from the PRs included in that tag. For audits, we automate a Release Evidence PDF via GitHub Actions that pulls every green check and manual approval comment into one file. If it's not in the PDF, it didn't happen.
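The aggregation step can be sketched in a few lines (a hypothetical example: the function name and sample data are invented, but the check-run dicts mirror the `name`/`conclusion` fields of the GitHub REST API's check-runs response; the PDF rendering step is left out):

```python
def summarize_release_evidence(tag, check_runs_by_commit):
    """Aggregate per-commit check runs into one release evidence record.

    check_runs_by_commit maps a commit SHA to a list of check-run dicts
    shaped like the GitHub REST API response: each has "name" and
    "conclusion" ("success", "failure", "skipped", ...).
    """
    lines = [f"Release evidence for {tag}"]
    all_green = True
    for sha, runs in sorted(check_runs_by_commit.items()):
        for run in runs:
            all_green = all_green and run["conclusion"] == "success"
            lines.append(f"{sha[:7]}  {run['name']}: {run['conclusion']}")
    verdict = "all checks green" if all_green else "NOT fully validated"
    lines.append("VERDICT: " + verdict)
    return "\n".join(lines)

# Hypothetical sample data mimicking two commits inside a tagged release.
evidence = summarize_release_evidence("v1.4.2", {
    "a1b2c3d4e5": [{"name": "unit-tests", "conclusion": "success"}],
    "f6e5d4c3b2": [{"name": "integration-tests", "conclusion": "skipped"}],
})
print(evidence)
```

The key design choice is that the verdict is computed over every check in the tag, so a skipped workflow on one commit downgrades the whole release rather than hiding behind a green PR.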
Release confidence is automatically calculated when you deploy to a protected production environment and have incident tracking set up. Ensure your repo has an environment configured with deployment protection rules (like required reviewers or wait timers) and that you create deployments to that environment during releases. GitHub then correlates merged PRs with post-deployment incidents to show a confidence score on the PR. No manual linking needed — it's based on deployment metadata.
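The deployment half of that setup is the REST API's `POST /repos/{owner}/{repo}/deployments` call. A minimal sketch of building the request body (the field names follow the Deployments API; the tag and environment name here are just examples):

```python
def deployment_payload(ref, environment="production", description=""):
    """Build the JSON body for POST /repos/{owner}/{repo}/deployments.

    required_contexts=[] skips commit-status gating on this call because
    the protection rules (reviewers, wait timers) live on the environment
    itself, not on the deployment request.
    """
    return {
        "ref": ref,                  # branch, tag, or SHA to deploy
        "environment": environment,  # must match the configured environment
        "auto_merge": False,         # don't auto-merge the default branch in
        "required_contexts": [],
        "description": description,
    }

payload = deployment_payload("v1.4.2", "production", "release 1.4.2")
```

In a workflow you would send this payload with an authenticated client; deploying by tag is what ties the environment's protection rules to a specific release.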
Original question (topic area: Question)
We run everything through GitHub. Code, PR reviews, GitHub Actions, basic issue tracking. On paper it looks clean. But I keep running into the same gap: "PR merged" does not equal "release ready."
Here’s where it gets messy:
- A PR shows green checks, but half the integration tests were skipped in a separate workflow
- CI logs have artifacts and screenshots, but no consolidated view across multiple PRs in a release
- Manual validation is happening, but it lives in comments, spreadsheets, or someone's memory
- When something breaks in prod, it is hard to answer "what exactly was validated before this release?"
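The first failure mode above can be guarded against with a stricter gate than "no red X": treat any required check that skipped, or never ran at all, as not validated. A hypothetical sketch (the check-run dicts mirror the `name`/`conclusion` fields of the GitHub API; the gate itself is invented):

```python
def truly_validated(check_runs, required):
    """Return True only if every required check ran and concluded "success".

    A conclusion of "skipped" (or a required check that is missing
    entirely) counts as NOT validated, even when the PR's merge box
    can still look green.
    """
    seen = {run["name"]: run["conclusion"] for run in check_runs}
    return all(seen.get(name) == "success" for name in required)

runs = [
    {"name": "unit-tests", "conclusion": "success"},
    {"name": "integration-tests", "conclusion": "skipped"},
]
print(truly_validated(runs, ["unit-tests"]))                       # -> True
print(truly_validated(runs, ["unit-tests", "integration-tests"]))  # -> False
```

The point of the second call: once the integration suite is on the required list, a skipped run fails the gate instead of silently passing.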
We tried a few approaches:
**Keeping everything in GitHub**
We used PR templates, required checks, and Actions summaries. This worked well for automation but fell apart for cross-feature visibility. It is hard to see a release-level picture when validation is scattered across dozens of PRs.
**External test tracking layered on top**
We still keep automation in GitHub, but push run results into a test tracking layer. We have experimented with tools like Qase and Tuskr to link test runs to releases instead of individual PRs. That gives us:
- A single view of what passed and failed per release candidate
- Historical comparison across releases
- Clear ownership of retests after fixes
- Less digging through raw CI logs
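The push itself doesn't need much: shape each CI run's results under the release candidate rather than the PR, then POST it. A hypothetical sketch (the payload shape and test names are invented, not any specific tool's API):

```python
import json

def build_run_payload(release_candidate, results):
    """Shape one CI run's results for a test-tracking API.

    `results` maps test-case name -> "passed" / "failed" / "skipped".
    Grouping under the release candidate, not a single PR, is what
    produces the release-level view.
    """
    return {
        "release": release_candidate,
        "summary": {
            "passed": sum(1 for s in results.values() if s == "passed"),
            "failed": sum(1 for s in results.values() if s == "failed"),
            "skipped": sum(1 for s in results.values() if s == "skipped"),
        },
        "cases": [{"name": n, "status": s} for n, s in sorted(results.items())],
    }

payload = build_run_payload("1.4.2-rc1", {
    "checkout_flow": "passed",
    "refund_flow": "failed",
    "legacy_import": "skipped",
})
print(json.dumps(payload, indent=2))
# In CI this payload would be POSTed to the tracking tool's run-import
# endpoint, with the workflow run's metadata attached for traceability.
```

Because every PR's workflow posts into the same release bucket, retests after fixes just overwrite the failing cases for that candidate.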
**Using GitHub Projects for everything**
It works for lightweight teams, but once you need structured traceability between requirements, test cases, and runs, it starts to feel stretched beyond its original purpose.
The core issue for me is this: GitHub is excellent at tracking code changes, but release confidence is a different dimension. It spans multiple PRs, multiple pipelines, and sometimes manual validation.
How are you solving this?
- Are you relying purely on required status checks?
- Do you aggregate results somewhere outside GitHub?
- How do you answer audit-style questions like "what exactly was tested before release 1.4.2?"
Would love concrete setups, not just theory.