How evidence-based policy controls are changing software releases [Q&A]

Traditional software trust models have relied heavily on faith in checklists, signatures, and a patchwork of compliance artifacts. But in the wider world trust needs to be demonstrated, not taken for granted.

We spoke to Haggai Schechtman, VP of product and engineering at JFrog, about how evidence-based policy controls are influencing software releases and why the shift matters.

BN: How does the concept of ‘verifiable trust’ alter the traditional definition of completion in software release management?

HS: Traditionally, software was considered ‘complete’ when its functionality passed testing. Verifiable trust moves that finish line: a release is only truly complete when its entire supply chain is proven secure, traceable, and tamper-proof. With AI models becoming a preferred attack vector, relying on manual governance is no longer viable. Leaders must now enforce a single system of record where every software component, including ML models, is verified both before deployment and in runtime environments. As recent outages, such as Cloudflare’s, have demonstrated, automated ‘shift left’ security is essential to ensure updates don’t cause more disruption than the bugs or vulnerabilities they fix.

BN: What are the long-term benefits of adopting evidence-based release practices, beyond just satisfying a compliance audit?

HS: Beyond compliance, evidence-based practices establish a single system of record that builds genuine organizational resilience. By eliminating blind spots and manual governance, organizations gain the visibility required to balance speed with control, especially as AI adoption grows.

This approach reduces the developer burnout caused by vulnerability fatigue and enables faster recovery during incidents by ensuring every component is traceable. It also helps to align quality assurance and release processes, increasing developer and security team productivity. Ultimately, it transforms security and governance, regulation, and compliance from potential blockers into accelerators for secure innovation.

BN: How can evidence collection be integrated into the CI/CD pipeline without significantly slowing down build or deployment times?

HS: The key is integrating evidence collection within a unified software supply chain platform rather than relying on disparate tools or manual checks. Embedding automated security scans and policy enforcement directly into the software development lifecycle eliminates the friction of manual evidence gathering. This approach ensures you maintain ‘verifiable trust’ and a complete audit trail while accelerating delivery.
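To make the idea of automated, in-pipeline evidence collection concrete, here is a minimal sketch of a CI step that records a digest for each build artifact and signs the resulting manifest so it can be verified at deploy time. The manifest format, artifact names, and the HMAC-based signing are illustrative assumptions, not any specific product’s API; real pipelines would typically use asymmetric signatures and a secrets vault for the key.

```python
# Hypothetical sketch: emit a tamper-evident evidence manifest for build
# artifacts as a CI step. The manifest layout and HMAC signing key are
# illustrative assumptions, not a specific vendor's format.
import hashlib
import hmac
import json

def build_evidence_manifest(artifacts: dict[str, bytes], signing_key: bytes) -> dict:
    """Record a SHA-256 digest per artifact, then sign the whole manifest."""
    entries = {
        name: hashlib.sha256(content).hexdigest()
        for name, content in artifacts.items()
    }
    payload = json.dumps(entries, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"artifacts": entries, "signature": signature}

def verify_manifest(manifest: dict, signing_key: bytes) -> bool:
    """Recompute the signature at deploy time; any tampering breaks it."""
    payload = json.dumps(manifest["artifacts"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"ci-secret-key"  # in practice, pulled from the CI secrets store
manifest = build_evidence_manifest({"app.bin": b"\x00\x01binary"}, key)
assert verify_manifest(manifest, key)

# Altering a recorded digest invalidates the evidence trail.
manifest["artifacts"]["app.bin"] = "0" * 64
assert not verify_manifest(manifest, key)
```

Because the hashing and signing run as an ordinary pipeline step, the audit trail accumulates as a side effect of the build rather than as a separate manual gathering exercise.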

BN: For non-technical stakeholders (e.g., a CEO or legal team), what’s the most effective way to communicate the assurance provided by evidence-based release controls?

HS: Evidence-based controls are vital for business success, extending beyond mere technical compliance to secure executive buy-in. They establish a ‘digital chain of custody’ in the form of a single, tamper-proof system of record for all software assets and AI models, protecting the corporate brand and providing indisputable proof of due diligence. By eliminating the liability risks of ‘Shadow AI’ and manual governance errors, AI usage is demystified.

For non-technical stakeholders, including CEOs, the assurance from evidence-based release controls is vital. It increases efficiency by preventing the work duplication and error correction that often result from relying on a single, end-of-process gate. This enhanced visibility assures leadership that security is keeping pace with development speed. Furthermore, the verifiable audit trail minimizes costs and accelerates recovery in the event of a crisis.

BN: Does AI/machine learning have a part to play in analyzing the vast amounts of evidence collected and identifying potential risks or anomalies before a release?

HS: Absolutely. With today’s volume of threats, manual review is impossible. Automated, intelligent analysis is critical to identifying genuine risks within vast datasets. For example, it enables teams to separate the 12 percent of ‘critical’ CVEs that are truly exploitable from the noise of inflated scores and false positives. By integrating this analysis into the software supply chain, organizations can detect anomalies in both code and ML models, both before they enter the organization and once they are operating in runtime environments. AI and automated security analysis do have their pitfalls, however.

Automated analysis is currently competent at identifying low-hanging-fruit risks, but for deeper bugs and vulnerabilities, more emphasis should be placed on human security research teams. As such, AI/machine learning is a tool to enhance security team capabilities, not replace them.
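The triage idea described above can be sketched in a few lines: a severity score alone does not make a finding actionable; concrete exploitability evidence does. The field names and thresholds below are hypothetical illustrations, not any particular scanner’s schema.

```python
# Hypothetical sketch of evidence-driven CVE triage: keep high-severity
# findings only when there is concrete evidence of real exposure, instead
# of treating every 'critical' CVSS score as actionable. Field names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                  # severity score as reported
    function_reachable: bool     # is the vulnerable code actually called?
    exploit_in_the_wild: bool    # is active exploitation known?

def truly_actionable(findings: list[Finding]) -> list[Finding]:
    """Filter out inflated scores lacking any exploitability evidence."""
    return [
        f for f in findings
        if f.cvss >= 9.0 and (f.function_reachable or f.exploit_in_the_wild)
    ]

findings = [
    Finding("CVE-2025-0001", 9.8, function_reachable=False, exploit_in_the_wild=False),
    Finding("CVE-2025-0002", 9.1, function_reachable=True, exploit_in_the_wild=False),
]
print([f.cve_id for f in truly_actionable(findings)])  # ['CVE-2025-0002']
```

In this toy example the higher-scored CVE is deprioritized because nothing in the evidence shows the vulnerable code is reachable or exploited, which is the kind of noise reduction the answer describes.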

Image credit: Prakitta Lapphatthranan/Dreamstime.com