Broly: Rebuilding Code Security with Signal, Speed, and AI

If you’ve spent any time in real-world codebases, you’ve probably seen the pattern: a stack of commercial security products layered atop one another, each promising coverage, visibility, and peace of mind, yet rarely coming together into a workflow people actually trust. One platform handles secrets & dependencies, another does SAST, and then containers and Dockerfiles live somewhere else entirely. Even when these tools are individually capable, the overall experience is fragmented. Each product has its own model, its own UI, its own definitions, and its own stream of findings. And somewhere in the middle of all that, developers are expected to make sense of it, prioritize it, and keep shipping.

Many of these tools are technically strong, and much of modern security tooling, including Broly, builds on important ideas and existing open-source work. The issue is the outcome they create in practice. Findings pile up. False positives creep in. Teams spend more time triaging than fixing. Security becomes a layer of operational overhead rather than a system that developers feel aligned with. Over time, trust erodes not necessarily because the scanners are useless, but because the overall experience is noisy, fragmented, and hard to act on. The tooling is there. The coverage is there. But the confidence often isn’t.

Broly started as a reaction to that gap.

I did not set out to build a tool by pretending nothing useful already existed. Quite the opposite, actually. There is a huge amount of valuable open source work in security already, and Broly borrows from that reality rather than ignoring it. The problem I wanted to explore was not whether I could invent every building block myself. It was whether those building blocks could be brought together in a way that produced a better outcome for the people actually using them.

That led to a more uncomfortable question: what if the real problem in code security is not detection, but everything around it? What if the hardest part is not finding issues, but surfacing the right signal, reducing the noise, and presenting findings in a way that developers can understand and act on? What if code security is fundamentally a workflow and trust problem just as much as it is a detection problem?

That question ended up shaping everything.

Broly is built around three principles: signal, speed, and selective use of AI. The idea sounds simple when written down: reduce noise, make results faster, and only use AI where it genuinely improves the outcome. In practice, that meant rethinking how different security surfaces could work together instead of living in separate silos. Secrets, dependencies, static analysis, Dockerfiles, and containers are usually treated as separate categories because that is how tools are sold and organized. But that is not how developers experience them. In a real engineering environment, they all show up in the same repo, in the same workflow, and often in the same pull request. Broly was built around that reality.

That is why it runs as a single fast Go binary with a unified output model. The goal was not just convenience. The goal was coherence. When everything is part of one system, the output starts making more sense. You are no longer mentally stitching together findings across multiple products and trying to normalize them yourself. You are looking at one stream of findings that already shares context. That alone changes the experience more than people often expect. Fragmentation is not just annoying. It affects how teams reason about risk. When the workflow is fragmented, trust becomes fragmented too.
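To make the "unified output model" idea concrete, here is a minimal sketch of what a shared finding shape might look like. The field names are illustrative assumptions, not Broly's actual schema; the point is that every scanner emits the same structure, so one encoder and one triage path serve the whole pipeline.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Finding is a hypothetical single shape shared by every scanner, so
// secrets, dependency, SAST, and container results all normalize into
// one stream. Field names are illustrative, not Broly's real schema.
type Finding struct {
	Scanner  string `json:"scanner"` // "secrets", "sast", "container", ...
	RuleID   string `json:"rule_id"`
	Severity string `json:"severity"`
	Path     string `json:"path"`
	Line     int    `json:"line"`
	Message  string `json:"message"`
}

// toJSON renders any finding with the same encoder, regardless of
// which scanner produced it.
func toJSON(f Finding) string {
	out, _ := json.Marshal(f)
	return string(out)
}

func main() {
	findings := []Finding{
		{Scanner: "secrets", RuleID: "aws-access-key", Severity: "high",
			Path: "config/prod.env", Line: 3, Message: "possible AWS access key"},
		{Scanner: "sast", RuleID: "sql-injection", Severity: "critical",
			Path: "api/users.go", Line: 42, Message: "unsanitized input reaches a query"},
	}
	// One stream, one shape: no per-product normalization needed.
	for _, f := range findings {
		fmt.Println(toJSON(f))
	}
}
```

Because every surface shares this shape, cross-scanner concerns like severity sorting, suppression, and reporting only have to be built once.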

It also became clear pretty quickly that detection alone is not enough.

A scanner that does not fit into the way developers actually work is a scanner that eventually gets ignored. It does not matter how sophisticated it is under the hood. If it slows teams down, re-reports the same issues endlessly, or forces people to wade through irrelevant findings, they stop paying attention. So a lot of Broly’s design ended up focusing on workflow just as much as scanning itself. That meant things like baselines to avoid repeatedly surfacing known issues, incremental scans that focus on what changed, pull request reporting that is tied to the developer loop, and SBOM output that can be used downstream instead of just being displayed and forgotten.
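The baseline idea above can be sketched in a few lines: treat the baseline as a set of accepted fingerprints and surface only what is not already in it. The function names here are hypothetical, not Broly's API.

```go
package main

import "fmt"

// filterNew is a hypothetical sketch of baseline filtering: findings
// whose fingerprints already appear in an accepted baseline are
// suppressed, so each scan surfaces only what is new.
func filterNew(baseline map[string]bool, fingerprints []string) []string {
	var fresh []string
	for _, fp := range fingerprints {
		if !baseline[fp] {
			fresh = append(fresh, fp)
		}
	}
	return fresh
}

func main() {
	baseline := map[string]bool{"abc123": true} // previously triaged finding
	scan := []string{"abc123", "def456"}        // current scan results
	fmt.Println(filterNew(baseline, scan))      // only def456 is new
}
```

The same mechanism is what keeps pull request reporting quiet: a PR comment only needs to mention findings that the baseline has not already absorbed.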

Those choices might look like product details from the outside, but they are not secondary. They are the difference between a tool that technically works and a tool that actually becomes part of the engineering process. Security tooling often gets evaluated on how much it can detect, but in practice, one of the more important questions is whether anyone will keep using it once the novelty wears off.

That same line of thinking shaped how I approached AI in Broly.

There is a lot of excitement around AI in security right now, but much of it feels either exaggerated or poorly bounded. On one side, AI gets treated like a magic layer that can solve the hard parts automatically. On the other, it gets dropped in as an opaque black box that teams are expected to trust because the output sounds smart. Neither approach felt right to me.

One place this thinking mattered a lot was in SAST. I did not want Broly’s code analysis to become another rules-heavy system that flags patterns without understanding the surrounding code. Rules are useful, but they also create bias. They reflect what the author decided to model, which patterns were easy to express, and the assumptions the engine makes about risk. That can produce findings that look precise but miss the surrounding context of how the code actually works. I had read about how Codex approached this problem through semantics, repository context, and validation rather than relying only on static rule sets, and that idea stayed with me. It pushed me toward a design where SAST in Broly leans more on context and meaning, not just brittle rule matching.

Another important part of making that work was integrating Together AI into the AI workflow. I did not want Broly to be tied to one fixed model or one closed provider. With Together, the model layer stays flexible. Anyone using Broly can plug in strong open-source models and improve the quality of SAST reasoning, triage, and contextual analysis over time. That felt especially important for an open-source project, because it means the intelligence layer can evolve as the model ecosystem improves, instead of being locked into a single static choice.

More broadly, AI in Broly is used selectively. Deterministic systems handle the parts that need to be fast, repeatable, and predictable, like structured analysis, pattern matching, and dependency handling. AI comes in where judgment and context can actually improve the result: filtering likely false positives, helping with triage, reasoning about reachability, and adding context where raw findings on their own are not enough.
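One way to picture that division of labor is a triage gate: deterministic rules decide the clear cases, and only the uncertain middle band is routed to a model. This is a sketch of the pattern, not Broly's pipeline; the `aiTriage` stub and the thresholds are assumptions standing in for a real model call.

```go
package main

import "fmt"

type finding struct {
	ID         string
	Confidence float64 // deterministic engine's confidence in the finding
}

// aiTriage is a placeholder for a model-backed second opinion. A real
// implementation would send code context to a model; here it simply
// keeps the finding.
func aiTriage(f finding) bool {
	return true
}

// triage keeps high-confidence findings directly, drops likely noise,
// and asks the AI layer only about the uncertain middle band.
func triage(findings []finding) []string {
	var kept []string
	for _, f := range findings {
		switch {
		case f.Confidence >= 0.9:
			kept = append(kept, f.ID) // deterministic: clearly real
		case f.Confidence < 0.3:
			// deterministic: clearly noise, dropped without a model call
		default:
			if aiTriage(f) { // ambiguous: worth a model's judgment
				kept = append(kept, f.ID)
			}
		}
	}
	return kept
}

func main() {
	fmt.Println(triage([]finding{{"a", 0.95}, {"b", 0.5}, {"c", 0.1}}))
}
```

The gate keeps the expensive, non-deterministic step off the hot path: most findings never touch a model, which keeps scans fast and the AI's influence auditable.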

That balance matters. Too much determinism without context can create overwhelming amounts of output. Too much AI without structure creates something harder to trust. The useful middle ground is to be explicit about where AI belongs and where it does not. It should not be the foundation. It should help refine the output of a system that is already grounded and understandable.

Of course, none of this came together perfectly on the first pass. Or the second.

Some of the most useful lessons came from the places where things broke. Trust boundaries around AI were harder than they looked at first. Deciding what context could be safely shared, how much was actually needed, and how to make that boundary explicit instead of hand-wavy turned out to be a real design problem. Fingerprinting and deduplication also introduced problems that were easy to underestimate.
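The subtlety with fingerprinting is that a naive key (file plus line number) breaks the moment an unrelated edit shifts the code. One plausible scheme, shown here as an assumption rather than Broly's actual algorithm, is to hash the rule, the path, and the normalized snippet text instead of its position.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// fingerprint is a hypothetical line-shift-tolerant scheme: it hashes
// the rule ID, file path, and whitespace-normalized snippet rather
// than the line number, so edits elsewhere in the file do not make an
// old finding look "new".
func fingerprint(ruleID, path, snippet string) string {
	norm := strings.Join(strings.Fields(snippet), " ") // collapse whitespace
	sum := sha256.Sum256([]byte(ruleID + "\x00" + path + "\x00" + norm))
	return hex.EncodeToString(sum[:8])
}

func main() {
	a := fingerprint("sql-injection", "api/users.go", "db.Query(q)")
	b := fingerprint("sql-injection", "api/users.go", "  db.Query(q)  ")
	fmt.Println(a == b) // formatting churn does not change identity
}
```

Even this simple version shows where the edge cases live: renamed files, duplicated snippets in the same file, and refactors that change the snippet itself all need explicit decisions, which is exactly why deduplication was easy to underestimate.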

A result can be correct and still not be useful. In security, that is often where frustration starts. People do not experience findings as abstract units of truth. They experience them as interruptions, decisions, and tradeoffs inside an already busy engineering workflow. If the output is technically right but operationally confusing, the tool still creates drag.

Containers were another reminder of how messy reality gets once a tool leaves the clean environment of a demo and starts dealing with actual systems. Layer attribution, reconstructing dependency state, handling whiteouts, and correctly understanding what belongs where all turned out to be deeper problems than they appear at first glance. These were not just implementation bugs. They were a reminder that security tools themselves need the same scrutiny we apply to the systems they analyze.
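Whiteouts are a good example of that depth. In OCI image layers, a file named `.wh.<name>` in an upper layer deletes `<name>` from the layers below it, so a scanner that naively unions layer contents will report files (and secrets) that do not exist in the running container. The sketch below shows only that basic rule; real images also involve opaque whiteouts and tar ordering details it ignores.

```go
package main

import (
	"fmt"
	"path"
	"sort"
	"strings"
)

// mergeLayers is a simplified sketch of assembling a container's
// filesystem view with OCI whiteout semantics: a ".wh.<name>" entry
// in an upper layer removes "<name>" from the view built so far.
func mergeLayers(layers [][]string) []string {
	visible := map[string]bool{}
	for _, layer := range layers { // lowest layer first
		for _, p := range layer {
			dir, base := path.Split(p)
			if strings.HasPrefix(base, ".wh.") {
				// whiteout: hide the shadowed file from lower layers
				delete(visible, dir+strings.TrimPrefix(base, ".wh."))
				continue
			}
			visible[p] = true
		}
	}
	var files []string
	for p := range visible {
		files = append(files, p)
	}
	sort.Strings(files)
	return files
}

func main() {
	layers := [][]string{
		{"etc/secret.key", "usr/bin/app"}, // base layer
		{"etc/.wh.secret.key"},            // upper layer deletes the key
	}
	fmt.Println(mergeLayers(layers)) // the deleted key must not be reported
}
```

Getting this wrong in either direction is costly: ignoring whiteouts produces phantom findings, while mishandling them can hide a secret that really does ship in a lower layer of the image history.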

That is one of the reasons open source mattered so much to this project.

Broly is not trying to pretend that one tool can solve all of AppSec. If anything, it is an attempt to push in a different direction using the best of what already exists: less fragmentation, less scanner sprawl, fewer disconnected workflows, and more attention to signal that people can actually use.

Better security does not come from piling on more dashboards, more alerts, or more AI just because it is available. It comes from building systems that are fast, understandable, and aligned with how developers actually work. It comes from reducing decision fatigue instead of adding to it. It comes from treating workflow, trust, and usability as core parts of security engineering rather than as polish added later.

Broly is still evolving, and there is still a lot to improve. But at its core, it is an attempt to make code security feel more coherent than it does today.

GitHub: https://github.com/Shasheen8/Broly
