In the world of tech, transparency is often discussed but rarely realised.
Marc Frankel, co-founder and CEO of cybersecurity firm Manifest, believes it's time to treat software the same way an organisation would treat food, textiles or medicine – by requiring full disclosure of what's inside the product.
"Software is the only thing that we buy that you don't get to know what's in it," says Frankel. "Your shirt probably says 100% cotton. When you pour a bowl of cereal, the FDA makes sure Kellogg's tells you what's in Raisin Bran...but what's inside Zoom, right? So this holds true for weapon systems. This holds true for MRI machines. This holds true for financial services."
That lack of visibility has real-world consequences. Frankel points to the now-infamous Log4Shell vulnerability, a critical flaw in a widely used open-source Java library, disclosed in December 2021. "It was a zero-day vulnerability – the day it was announced to the community was the same day it was found to be exploitable," he explains. "Large organisations like Palo Alto Networks and the US Department of Defense really struggled to wrap their arms around 'what did we build' and 'what did we buy that had a dependency on an affected open source component?'"
Meanwhile, software supply chain vulnerabilities have skyrocketed by 1,300% over the past three years, and Log4Shell alone is estimated to have cost over $10 billion in remediation.
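An SBOM reduces that scramble to a lookup. The sketch below is purely illustrative – the component list, helper names and version logic are simplified stand-ins, not any vendor's actual tooling; a real scan would parse a full CycloneDX or SPDX document and use proper version-range matching against vulnerability advisories.

```python
# Illustrative: with an SBOM, "are we exposed to Log4Shell?" becomes a
# component lookup. Data and helpers below are hypothetical stand-ins.

# Minimal stand-in for components parsed from an SBOM
sbom_components = [
    {"name": "org.apache.logging.log4j:log4j-core", "version": "2.14.1"},
    {"name": "com.fasterxml.jackson.core:jackson-databind", "version": "2.13.0"},
    {"name": "org.apache.logging.log4j:log4j-core", "version": "2.17.1"},
]

def version_tuple(v):
    """Naive version parse; real tools handle qualifiers and ranges."""
    return tuple(int(part) for part in v.split("."))

def affected_by_log4shell(component):
    """CVE-2021-44228 affected log4j-core from 2.0 up to (not including) 2.15.0."""
    if component["name"] != "org.apache.logging.log4j:log4j-core":
        return False
    return (2, 0, 0) <= version_tuple(component["version"]) < (2, 15, 0)

exposed = [c for c in sbom_components if affected_by_log4shell(c)]
for c in exposed:
    print(f"VULNERABLE: {c['name']} {c['version']}")
```

Without the inventory, answering the same question means weeks of manually auditing build systems and vendor products – which is exactly what large organisations found themselves doing in December 2021.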
Frankel's comparison to food labels isn't just illustrative – it's a direct parallel. "If you compare food manufacturers' objections to the Fair Food and Drug Labelling Act of 1909 with software producers' objections to software transparency today, the objections are almost identical," he says. "It's too expensive. It's too complicated...consumers aren't going to know what to do with that information. You're going to worry people unnecessarily."
Yet governments are beginning to act. In the US, any software-enabled medical device must now include a Software Bill of Materials (SBOM) to gain regulatory approval. In the EU, the new Cyber Resilience Act allows fines of up to €15 million if software vendors cannot account for their dependencies.
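In practice, an SBOM is a machine-readable inventory of a product's components. CycloneDX is one widely used format; a minimal single-component sketch might look like the following (the component shown is real, but a production SBOM would list every dependency):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "group": "org.apache.logging.log4j",
      "name": "log4j-core",
      "version": "2.17.1",
      "licenses": [{"license": {"id": "Apache-2.0"}}],
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ]
}
```

It is this ingredient-list structure that regulators can audit and buyers can query automatically.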
"Hospitals deserve to know what's in their MRI machines. Banks deserve to know what's in their telecommunication solutions, and the military deserves to know what's in its weapons systems," Frankel added.
But with the ongoing AI boom, it's not just traditional software that needs scrutiny. Manifest is also helping organisations produce AI Bills of Materials (AIBOMs), which reveal the models and datasets used in AI systems.
"With AI applications, you have traditional software dependencies," Frankel says. "But you also have AI-specific dependencies – what models does this thing use under the hood, and what data sets are those models trained on?"
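Those AI-specific dependencies can be captured in the same SBOM formats. CycloneDX 1.5, for instance, added component types for models and datasets; a hedged sketch of an AIBOM fragment (the names and versions below are hypothetical) might look like:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "machine-learning-model",
      "name": "example-sentiment-model",
      "version": "1.0.0"
    },
    {
      "type": "data",
      "name": "example-training-corpus",
      "version": "2023-06"
    }
  ]
}
```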
Without AIBOMs, organisations have no way to check whether their AI systems contain problematic content, rely on unauthorised models, or include data from restricted countries like China.
This demand for due diligence is no longer optional, says Frankel. Major banks and government agencies are already requesting AIBOMs from vendors before approving contracts, and he says a growing number of enterprises now require a full inventory of models and training datasets before procurement.
Traditionally, this has been a manual, painstaking process. Frankel describes how companies typically spend six to eight weeks vetting AI models. If a developer wants to use a model, the request is passed up the chain, and someone is responsible for investigating the model's developers, licensing, origin, and associated risks.
Manifest automates this entire process. By pulling data from model cards, public databases like Hugging Face and the National Vulnerability Database, and enforcing internal policies on acceptable licenses and jurisdictions, Manifest returns answers in under 90 seconds.
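The core of such a check is policy enforcement over model metadata. The sketch below is hypothetical – the function names, metadata fields and policy values are illustrative assumptions, not Manifest's actual product – but it shows the shape of the automated decision: compare what a model card discloses against an organisation's rules on licences and jurisdictions.

```python
# Hypothetical policy-check sketch: all names and policy values are
# illustrative, not any vendor's real API.

ALLOWED_LICENSES = {"apache-2.0", "mit", "bsd-3-clause"}
BLOCKED_JURISDICTIONS = {"CN"}  # example restricted-origin policy

def vet_model(metadata):
    """Return a list of policy violations; an empty list means approved."""
    violations = []
    if metadata.get("license", "").lower() not in ALLOWED_LICENSES:
        violations.append(f"license not on allowlist: {metadata.get('license')}")
    if metadata.get("origin_country") in BLOCKED_JURISDICTIONS:
        violations.append(
            f"model originates from blocked jurisdiction: {metadata['origin_country']}"
        )
    for ds in metadata.get("datasets", []):
        if ds.get("origin_country") in BLOCKED_JURISDICTIONS:
            violations.append(
                f"training dataset from blocked jurisdiction: {ds['name']}"
            )
    return violations

# Example: a permissively licensed US model trained partly on a
# dataset from a blocked jurisdiction.
model = {
    "name": "example-llm",
    "license": "Apache-2.0",
    "origin_country": "US",
    "datasets": [{"name": "example-corpus", "origin_country": "CN"}],
}
print(vet_model(model))
```

Once the metadata is machine-readable, a review that took a committee weeks reduces to evaluating rules like these – which is what makes a sub-two-minute turnaround plausible.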
Frankel says security is paramount when handling model evaluations for high-security environments. Manifest recently achieved FedRAMP High authorisation, a certification that allows it to work within highly sensitive areas of the US government.
Frankel believes the urgency around this issue is only growing. "Sixty-two per cent of security professionals named AI governance as their top priority for 2025," he says. "It's rare to get two-thirds of anyone to agree on anything."
"AI is kind of the wild west right now. We're going to need you to put guardrails around it – like telling us what data you trained it on, you know, where it came from," says Frankel. "This isn't a nice-to-have. It's probably an overdue-to-have."