Behind Curtain Three: Automated Code Remediation
This blog is the last in a three-part series. In the first part, I went through the evolution of the application security market and where it is headed next (hint: automatic secure code remediation). The second part discussed the technological advancements and other factors that position us today to address the application security challenges organizations face and deliver on our promise.
Soon after Jonathan and I founded Mobb, we spent some time defining our five core values. Transparency is on that list, and in the spirit of transparency, I will describe how our automated remediation technology works in this blog. Well, most of it, at least. I can't fit it all into one blog.
The (not-secret-anymore) recipe
If you have read the previous two parts, you should know by now that Mobb is not a vulnerability scanner but an automatic remediation solution. In fact, we have no intention of ever investing in scanning capabilities. We leave that task to other vendors.
But to be able to fix a vulnerability, one must first know about it. As Mobb doesn't detect vulnerabilities on its own, we rely on you to provide the details. Second, Mobb provides actual code remediations, not generic sample code or remediation advice. To do that, we need you to give us access to the scanned code so that we can generate the fix.
Step 1: Analyze the report
To start the code remediation process, Mobb ingests a SAST* report. Nothing fancy. Just the standard XML, JSON, or SARIF report your current SAST tool produces. You may already use this report as part of your pipeline integrations, so simply send it to Mobb.
Once you upload the report, Mobb reviews it, identifies all instances of supported issues, and extracts the information it needs to fix each finding automatically, such as the data flow trace showing how the user input travels through the different nodes, along with their locations in the code.
*Mobb currently supports the Checkmarx, GitHub (CodeQL), and Snyk SAST results. Our team is actively working on enhancing our support for additional scanners, which we expect to share soon. Let us know which SAST tool you want us to add next.
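To make this concrete, here is a simplified, hypothetical result entry of the kind a SARIF report contains; the tool name, rule ID, file path, and line numbers below are invented for illustration, but this is the sort of location and code-flow information Mobb extracts:

```json
{
  "runs": [
    {
      "tool": { "driver": { "name": "ExampleSAST" } },
      "results": [
        {
          "ruleId": "java/sql-injection",
          "message": { "text": "Query built from user-controlled input." },
          "locations": [
            {
              "physicalLocation": {
                "artifactLocation": { "uri": "src/main/java/AccessLog.java" },
                "region": { "startLine": 42 }
              }
            }
          ],
          "codeFlows": [
            {
              "threadFlows": [
                {
                  "locations": [
                    { "location": { "physicalLocation": { "region": { "startLine": 31 } } } },
                    { "location": { "physicalLocation": { "region": { "startLine": 42 } } } }
                  ]
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```

The `codeFlows` entry is the data flow mentioned above: the ordered steps user input takes from its source to the vulnerable sink.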
Step 2: Find the right location to implement the fix
Our research team builds and maintains a comprehensive set of Semgrep queries and direct abstract syntax tree pattern searches, which we use to identify the different ways developers can make each security mistake. If you think about it, there is more than one way to write code that is susceptible to SQL Injection, so there is more than one way to fix it.
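As a sketch of what such a query can look like, the hypothetical Semgrep rule below matches two variants of string-concatenated SQL queries; the rule ID and message are made up, and Mobb's real rules are considerably more involved:

```yaml
rules:
  - id: sqli-executequery-concat   # hypothetical rule id
    languages: [java]
    severity: ERROR
    message: SQL query built by concatenating untrusted input
    pattern-either:
      # Variant 1: concatenation inline in the executeQuery call
      - pattern: $STMT.executeQuery("..." + $INPUT + "...")
      # Variant 2: concatenation into a variable that is queried later
      - pattern: |
          String $Q = "..." + $INPUT + "...";
          ...
          $STMT.executeQuery($Q);
```

Each variant that the detection side matches needs its own matching fix, which is exactly why the fix patterns are authored in pairs with the detection patterns.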
For example, both vulnerable code samples below do the same thing but are written slightly differently, so the changes required to fix each of them will also differ.

```java
try (Connection con = dataSource.getConnection()) {
    Statement statement = con.createStatement();
    ResultSet results = statement.executeQuery(
        "SELECT * FROM access_log WHERE action LIKE '%" + obj + "%'");
```

```java
String query = "SELECT * FROM access_log WHERE action LIKE '%" + obj + "%'";
try (Connection con = dataSource.getConnection()) {
    Statement statement = con.createStatement();
    ResultSet results = statement.executeQuery(query);
```
Running these pattern searches and queries against the report allows us to obtain the code context of the vulnerability, together with the information needed to create a fix. Using this context, we match our pre-prepared fix algorithms to each finding, and together with the user's input (developer interaction), the algorithm builds the correct fix.
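As a toy illustration of the pattern-to-fix idea, the sketch below rewrites the inline-concatenation variant into a parameterized query. This is not Mobb's actual implementation, which works on syntax trees rather than regular expressions, and the class and method names here are invented:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FixRuleSketch {
    // Matches code of the form: executeQuery("...'%" + var + "%'")
    private static final Pattern CONCAT = Pattern.compile(
        "executeQuery\\(\"(.*?)'%\" \\+ (\\w+) \\+ \"%'\"\\)");

    // Rewrites the match into a parameterized query, moving the LIKE
    // wildcards into the bound value so behavior is preserved.
    static String fix(String code) {
        Matcher m = CONCAT.matcher(code);
        if (!m.find()) {
            return code; // this rule does not apply to this code
        }
        return "prepareStatement(\"" + m.group(1) + "?\"); "
             + "statement.setString(1, \"%\" + " + m.group(2) + " + \"%\"); "
             + "statement.executeQuery()";
    }

    public static void main(String[] args) {
        String vulnerable =
            "executeQuery(\"SELECT * FROM access_log WHERE action LIKE '%\" + obj + \"%'\")";
        System.out.println(fix(vulnerable));
    }
}
```

Note how the fix rule is tied to one specific vulnerable shape; the variable-based variant from the second sample would need its own rewrite, which is why detection and fix patterns come in matched pairs.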
Step 3: Code fixing using the Mobb rules
For each pattern created to identify an instance of a security mistake, the research team also made a matching fix pattern to go with it. We refer to these as the Mobb rules. The ability to match particular findings with their fixes allows us to accurately produce a code fix that both remediates the vulnerability and adheres to the correctness of the language, eliminating the risk of introducing code defects.
Mobb does not live on the security side. It is designed to be part of the developers' world and, as such, needs to play by developers' rules.
The challenge with providing developers with automated code remediations, especially for a security task, is that the suggested fixes need a 100% success rate. The first time a mistake happens, whether the produced fix breaks the build or, worse, the build passes but the change causes an error in the application, it is game over. If you want to enter this market, that should scare the heck out of you.
Even with the most robust set of rules, it is improbable that an automated tool can, with 100% certainty, safely fix the reported findings out of the gate. This is not a matter of technology limitations; it is simply that the provided data lacks some of the required context.
For example, let's revisit the vulnerable code from before:

```java
String query = "SELECT * FROM access_log WHERE action LIKE '%" + obj + "%'";
try (Connection con = dataSource.getConnection()) {
    Statement statement = con.createStatement();
    ResultSet results = statement.executeQuery(query);
```
This code is, of course, susceptible to SQL Injection. Instead of concatenating strings into the query, a prepared statement should have been used. At first glance, this seems like a straightforward task for an automated tool. But will the remediated code below be 100% safe?
```java
String query = "SELECT * FROM access_log WHERE action LIKE ?";
try (Connection con = dataSource.getConnection()) {
    PreparedStatement statement = con.prepareStatement(query);
    statement.setString(1, "%" + obj + "%");
    ResultSet results = statement.executeQuery();
```

Not necessarily. The correct setter to use (setString and not setDate, for example) depends on context the scan data may not carry, and a wrong guess may only be discovered too late. Because of this, we implemented Mobb's interactive fixing feature (patent pending). We present developers with an entirely constructed fix and consult with them where context is missing. This way, Mobb still takes care of 100% of the security effort, but instead of relying on educated guesses, it relies on the developer to fill in any gaps and, in doing so, invites the developer to review the code and learn best practices. And you know what? Developers love it.
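To see why that type context matters, here is a small, hypothetical illustration: the correct JDBC setter depends on the declared type of the bound value, and a wrong choice can compile cleanly yet fail at runtime. The helper below is invented for this example:

```java
public class SetterChoice {
    // Hypothetical helper: picks the JDBC setter matching a declared type.
    // Choosing setString for a java.sql.Date column, say, passes the build
    // but can break the application at runtime.
    static String setterFor(String declaredType) {
        switch (declaredType) {
            case "java.sql.Date": return "setDate";
            case "int":           return "setInt";
            default:              return "setString";
        }
    }

    public static void main(String[] args) {
        System.out.println(setterFor("java.lang.String"));
        System.out.println(setterFor("java.sql.Date"));
    }
}
```

When the declared type is not visible in the provided data, this is precisely the kind of gap the interactive fixing flow asks the developer to fill.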
So that’s how the automatic remediation magic happens.
Are you interested in seeing it for yourself? Schedule your demo here. Jonathan or I will be happy to show you how the magic happens.
Eitan Worcel