Date: 2026-02-19
Show HN: KRS – Turn vulnerability scanner noise into prioritized action
- Best window: 14:00–16:00 UTC, Tue–Thu
- Avoid major US holiday windows and late Friday drops
- Launch with team online for at least 4–6 hours of active replies
KRS is the execution layer between vulnerability detection and remediation.
Most teams already have scanner output, but still spend hours manually deciding what to fix now vs defer. KRS ingests findings, adds context (KEV/EPSS/exposure/asset criticality), and outputs a ranked action queue with rationale and ticket-ready evidence.
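To make the ranking concrete, here is a minimal sketch of the kind of scoring the pipeline implies. The field names, weights, and dataclass are illustrative assumptions, not KRS's actual schema or algorithm:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    in_kev: bool            # listed in CISA's Known Exploited Vulnerabilities catalog
    epss: float             # EPSS probability, 0.0-1.0
    internet_exposed: bool
    asset_criticality: int  # 1 (low) to 5 (crown jewel)

def priority(f: Finding) -> float:
    """Blend exploit likelihood with business context into one rankable score."""
    score = f.epss
    if f.in_kev:
        score += 1.0   # known in-the-wild exploitation dominates raw probability
    if f.internet_exposed:
        score *= 2.0   # reachable attack surface doubles urgency
    return score * f.asset_criticality

findings = [
    Finding("CVE-2024-0001", in_kev=True,  epss=0.92, internet_exposed=True,  asset_criticality=5),
    Finding("CVE-2024-0002", in_kev=False, epss=0.04, internet_exposed=False, asset_criticality=2),
]
queue = sorted(findings, key=priority, reverse=True)  # fix-now candidates first
```

The point of the sketch is that every input to the score is an explainable fact (KEV listing, EPSS value, exposure, criticality), which is what makes the ranked queue defensible in a ticket.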
What I want feedback on:
- Which integration is non-negotiable first (Jira, ServiceNow, Slack, scanner-specific API)?
- Where does your current triage loop break most (false positives, ownership mapping, or ticket churn)?
- What would make this immediately deployable for your team?
Posting plan:
- r/netsec
- r/cybersecurity
- r/blueteamsec
- r/sysadmin
Note: tailor each post to that subreddit's rules (self-promo limits, required flair, expected technical depth).
Title: Built KRS to reduce vuln triage noise — looking for practitioner feedback
We built KRS to convert scanner overload into a prioritized remediation queue.
Current flow:
- ingest findings
- add KEV/EPSS + exposure + asset criticality
- produce fix-now vs defer recommendations with evidence
- route into ticketing/alerts
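The last two steps above could emit something like this. A hypothetical sketch only; the helper name and JSON fields are assumptions, not KRS's real output format:

```python
import json

def ticket_payload(cve: str, decision: str, rationale: list) -> str:
    """Build a ticket-ready body: the fix-now/defer decision plus the evidence behind it.
    (Illustrative helper; KRS's actual output format may differ.)"""
    body = {
        "summary": f"[{decision}] Remediate {cve}",
        "evidence": rationale,  # e.g. KEV listing, EPSS score, exposure context
    }
    return json.dumps(body, indent=2)

print(ticket_payload("CVE-2024-0001", "fix-now",
                     ["Listed in CISA KEV", "EPSS 0.92", "Internet-exposed asset"]))
```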
Would love feedback from teams doing real vulnerability operations:
- What’s your hardest bottleneck today?
- Which integration is mandatory?
- What would you need to trust automated prioritization?
Repo: https://github.com/Operative-001/krs
Title: Security ops question: how are you prioritizing scanner findings at scale?
We’re building KRS around this problem and want honest input.
If you run Qualys/Tenable/Rapid7/Defender/etc., what actually helps reduce time-to-action?
- KEV and exploit context?
- asset criticality weighting?
- auto-generated ticket workflows?
Trying to solve this as an execution problem, not another dashboard.
Create .github/ISSUE_TEMPLATE/user-feedback.yml (GitHub issue forms use description, not about, at the top level, and require nested indentation):

name: User Feedback
description: Share real-world triage pain points and KRS fit gaps
title: "[Feedback] <short summary>"
labels: [feedback]
body:
  - type: textarea
    id: env
    attributes:
      label: Environment
      description: Team size, tool stack, deployment context
  - type: textarea
    id: pain
    attributes:
      label: Current pain
      description: What takes too long or breaks down in triage
  - type: textarea
    id: workflow
    attributes:
      label: Current workflow
      description: Scanner -> triage -> ticket -> verification path
  - type: textarea
    id: must_have
    attributes:
      label: Must-have integrations/features
  - type: textarea
    id: trust
    attributes:
      label: Trust blockers
      description: What must be explainable/controllable before adoption

- Brian Krebs — Krebs on Security
- Troy Hunt — Have I Been Pwned / blog
- Daniel Miessler — Unsupervised Learning
- SANS Internet Storm Center
- The Record by Recorded Future
- BleepingComputer (security news desk)
- CISO Series community/newsletter
- OWASP community channels
- Cloud Security Alliance community
- SecurityWeek editorial/community desk
Track weekly, plus post-launch snapshots at day 1, day 7, and day 30:
- GitHub stars
- GitHub forks
- Watchers
- Opened issues
- Unique contributors
- Feedback issues tagged "feedback"
- Integration requests by type (Jira/ServiceNow/Slack/scanner)
- “Must-have” feature frequency
- Time-to-first-response on issues
- Repo views/clones
- Referral sources (HN, Reddit, direct)
- CTR from launch posts
- % feedback mentioning explainability concerns
- % feedback asking for evidence/audit features
- % users indicating they would pilot in current form
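The GitHub counters above can be pulled weekly from the public REST API. A minimal sketch, with fetch and parsing separated (repo path taken from the post above; views/clones additionally require the authenticated /traffic endpoints with push access):

```python
import json
import urllib.request

REPO = "Operative-001/krs"

def parse_repo_metrics(repo_json: dict) -> dict:
    """Extract the launch-tracking counters from a GitHub GET /repos/{owner}/{repo} response."""
    return {
        "stars": repo_json["stargazers_count"],
        "forks": repo_json["forks_count"],
        "watchers": repo_json["subscribers_count"],  # note: "watchers_count" mirrors stars
        "open_issues": repo_json["open_issues_count"],
    }

def fetch_repo_metrics(repo: str = REPO) -> dict:
    """One unauthenticated call per week stays well within GitHub's rate limits."""
    with urllib.request.urlopen(f"https://api.github.com/repos/{repo}") as resp:
        return parse_repo_metrics(json.load(resp))
```

Referral sources and CTR are not available from this endpoint; those come from the traffic API and the launch platforms themselves.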