
In December 2020, the world learned that attackers had slipped malicious updates into SolarWinds' Orion software, compromising thousands of organizations, including U.S. government agencies. The breach, later traced to a tampered build process, marked a turning point: the software supply chain itself had become a primary target. Since then, incidents from Codecov to 3CX have reinforced a grim pattern: attackers can poison code before it ever reaches production or a customer's premises.
Now a new variable has been added. Artificial intelligence systems are writing and suggesting code on an unprecedented scale. By some estimates, more than half of new code shipped in 2025 will be generated with the help of AI assistants. The appeal is obvious: speed, efficiency, and reduced cost. The risks are equally stark. AI-generated suggestions frequently include security flaws, reviving the same class of errors engineers have spent decades trying to eradicate.
The Evidence Problem
For decades, software security was largely reactive. Vulnerabilities were discovered in production, patches followed, and audits provided reassurance months later. That model is faltering. Attackers exploit new flaws in hours, not weeks. Meanwhile, regulators are demanding proof of integrity. The United States' Executive Order 14028, the European Union's Cyber Resilience Act, and Japan's Active Cyber Defense Law all require verifiable software lineage, not just promises from vendors.
The challenge is scale. Modern development pipelines span multiple repositories, cloud environments, and third-party components. Every commit, build script, and deployment configuration is a potential attack surface. When AI begins generating vast amounts of code, the complexity compounds. Manual review and spreadsheet-based audits cannot keep up.
As Rubi Arbel, chief executive of Scribe Security, explained: "Without evidence continuously and automatically collected at each stage, no one can credibly claim to know what is running in their production systems, and without augmenting AppSec workflows with AI-driven workflows, no human can keep pace with modern application development."
Security at Commit
Scribe Security's model reflects a shift from inspection after the fact to continuous proof during development. The company's platform integrates with repositories and build systems, capturing signed evidence of software provenance as code is created. That evidence, including Software Bills of Materials, scanner outputs, pipeline security posture, development context, provenance, and signatures, is encrypted, linked to identities, and stored in a tamper-proof knowledge graph.
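To make the idea concrete, here is a minimal sketch of what a signed evidence record could look like, loosely modeled on in-toto-style attestations and signed with an Ed25519 key via the open-source cryptography library. The field names, builder identity, and repository URL are illustrative inventions, not Scribe's actual schema.

```python
# A minimal sketch of signed build evidence. All field names and identifiers
# below are hypothetical; a real platform would use its own schema and
# managed signing keys. Requires the "cryptography" package.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_evidence(artifact: bytes, builder_id: str) -> dict:
    """Record what was built, by whom, and from which inputs."""
    return {
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "builder": builder_id,  # the identity the evidence is linked to
        "materials": ["git+https://example.com/repo@deadbeef"],  # hypothetical input
    }


key = Ed25519PrivateKey.generate()  # in practice: a protected, managed key
evidence = make_evidence(b"compiled-binary-bytes", "ci://build-runner-42")
payload = json.dumps(evidence, sort_keys=True).encode()  # canonical serialization
signature = key.sign(payload)

# Any later consumer can check the record is untampered and attributable;
# verify() raises InvalidSignature if the payload was altered.
key.public_key().verify(signature, payload)
print("evidence signed and verified")
```

The essential property is that the statement about the build is bound to an identity at the moment of creation, so downstream systems verify the record rather than trusting whoever hands it over.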
Guardrails, expressed as code, dictate which artifacts may advance. If a build lacks a valid SBOM or its provenance is missing, the pipeline halts automatically. For developers, feedback appears within existing tools, reducing friction. For auditors, the system generates verifiable trails that map each artifact from commit to deployment.
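A toy illustration of such a guardrail, assuming each artifact carries an evidence record like the one sketched above; the evidence fields and the enforce_guardrails helper are hypothetical stand-ins, not any real platform's API.

```python
# A toy policy-as-code gate: refuse to promote an artifact unless the
# required evidence is present. Field names are invented for illustration.
def enforce_guardrails(evidence: dict) -> None:
    checks = {
        "SBOM present": bool(evidence.get("sbom")),
        "provenance attached": bool(evidence.get("provenance")),
        "signature verified": evidence.get("signature_valid") is True,
    }
    failures = [name for name, ok in checks.items() if not ok]
    if failures:
        # In CI, a nonzero exit halts the pipeline before the artifact advances.
        raise SystemExit(f"build blocked: {', '.join(failures)}")


enforce_guardrails({
    "sbom": {"format": "CycloneDX", "components": 214},
    "provenance": {"builder": "ci://build-runner-42"},
    "signature_valid": True,
})
print("guardrails passed: artifact may advance")
```

The design choice worth noting is that the gate fails closed: missing evidence blocks promotion by default, rather than relying on someone to notice a gap after release.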
This transition reflects a larger industry recognition: the point of greatest leverage is not at deployment, but at the very first commit. Once unverified code propagates, downstream remediation becomes exponentially harder.
AI as Both Risk and Remedy
While AI introduces vulnerabilities at a pace never seen before, it also offers a way to remediate them at scale. Scribe has developed a network of agents, each tasked with a specific function: triaging vulnerabilities, generating pull requests to patch insecure dependencies, hardening Docker images, or producing compliance reports. These agents operate against the signed knowledge graph, making their decisions transparent and auditable.
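As a rough sketch of that pattern, not Scribe's implementation, the snippet below shows a triage step that filters findings to those that are both severe and demonstrably reachable, then records an explainable action. Every field, threshold, and helper name here is invented for illustration.

```python
# A schematic triage agent: rank findings from collected evidence and emit
# auditable actions. Finding fields, the 7.0 threshold, and propose_patch_pr
# are all hypothetical, chosen only to illustrate the pattern.
from dataclasses import dataclass


@dataclass
class Finding:
    package: str
    cve: str
    cvss: float
    reachable: bool  # did analysis show the vulnerable code is actually called?


def triage(findings: list[Finding]) -> list[Finding]:
    # Surface reachable, high-severity issues first, so humans see signal,
    # not an undifferentiated wall of alerts.
    return sorted(
        (f for f in findings if f.reachable and f.cvss >= 7.0),
        key=lambda f: f.cvss,
        reverse=True,
    )


def propose_patch_pr(finding: Finding) -> dict:
    # A real agent would open a pull request; here we just record the action
    # and the evidence it rests on, keeping the decision auditable.
    return {
        "action": "bump_dependency",
        "package": finding.package,
        "reason": f"{finding.cve} (CVSS {finding.cvss}), reachable in build graph",
    }


inventory = [
    Finding("libexample", "CVE-2025-0001", 9.1, reachable=True),
    Finding("leftpadx", "CVE-2025-0002", 5.3, reachable=False),
]
for f in triage(inventory):
    print(propose_patch_pr(f))
```

Because each proposed action cites the finding and the evidence behind it, a reviewer can trace why the agent acted, which is what keeps automation at this scale auditable rather than opaque.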
By using AI to remediate problems at scale, security teams, often outnumbered by developers at ratios of 1 to 100, can act with greater precision. Instead of drowning in alerts, they receive prioritized, explainable actions tied to verifiable evidence.
As Arbel noted: "AI multiplies risk when it generates code without product context or security expertise. But when you feed AI with signed, contextual evidence, and you automate it into the SDLC, it can also multiply the speed of remediation."
Market and Policy Pressures
The global cybersecurity market is expected to surpass $350 billion by 2030, with supply chain protection one of its fastest-growing segments. Regulatory demands are a key driver. The EU's Digital Operational Resilience Act, applicable since January 2025, requires financial institutions to prove the resilience of their software supply chains. U.S. federal contractors must already attest to secure development practices and, in many cases, supply SBOMs. Japan's legislative push adds further pressure for accountability.
At the same time, the workforce shortage in cybersecurity remains acute. ISC2 estimates a global gap of 3.4 million unfilled roles. Automated evidence collection and AI-driven remediation are not just technological upgrades; they are a response to a human capacity gap.
Trust as Infrastructure
Software underpins critical services: hospitals, transportation networks, and stock exchanges all depend on code written and deployed at breakneck speed. When that code is generated by machines, the need for transparency becomes existential. A single poisoned dependency or compromised pipeline can ripple through entire economies.
For Arbel, the issue is less about innovation than about credibility. "Trust is no longer a matter of branding or reputation. It is a matter of cryptographic proof collected as development happens."
The race to adopt AI in coding is unlikely to slow. The question is whether organizations can embed trust at the same velocity. Starting at commit, not at release, may be the only way to keep software safe in an age where machines increasingly write it.