Machine learning-assisted fuzzer finds 31 new vulnerabilities

Hey all. This post will briefly describe how a machine learning-enabled fuzzer, NEUZZ, outperformed 10 non-ML-assisted fuzzers on 10 real programs to find 31 previously unknown vulnerabilities. NEUZZ was created in 2019 by researchers from Columbia University, She et al. NEUZZ is open source software available on GitHub.

The increased performance and opportunities provided by machine learning are why I created a machine learning course for hackers called Security Kiwi, which launched today. The course starts off beginner-friendly with the fundamentals, and we quickly work our way into practical exercises, neural networks and other fun stuff.

Eventually, learners build larger machine learning projects: automated email malware collection, facial recognition for OSINT, threat prediction, ‘cyber weather’ prediction, and more. I want to encourage the use of and experimentation with ML in a non-academic/commercial environment, so much of the course content is free. Videos, downloads, Q&A, discussion, etc. are available to Patreon supporters. New content is added twice a month, and weekly in January.

Anyway, onto the good part, ML-assisted fuzzing.

N.B. If there’s another security-related machine learning article (from behind a journal paywall, for example) you’re interested in and want me to discuss in detail (rather than this very brief overview), reply below or hit me up on Twitter (via DM). I’m also on the 0x00Sec Discord (Kris). I added a quick list of academic databases to the bottom of the page.

How NEUZZ Works

NEUZZ was created using TensorFlow & Keras, two open source machine learning libraries Security Kiwi will cover shortly. Most fuzzers use evolutionary algorithms behind the scenes, which aim to increase the coverage of the inputs they test in a bid to find as many vulnerabilities as possible. The authors of NEUZZ note that the main limitation of evolutionary algorithms is their inability to model the structure of the underlying optimization problem (increasing coverage is an optimization problem). Gradient descent, the mathematical concept underlying many neural network implementations, doesn’t suffer from this limitation (don’t worry, we won’t be diving into any math). The researchers set out to create a prototype and test the hypothesis that neural networks can be used to improve fuzzer performance.

As we can’t cover the foundations of machine learning in a paragraph or two, we will be over-simplifying and glossing over some of the internal workings discussed in the research article. NEUZZ takes initial seed inputs and generates new inputs iteratively, using the technique we mentioned earlier, gradient descent.
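To make that iteration concrete, here is a rough sketch of how such a loop could be structured. This is my own illustration, not NEUZZ’s actual code: `train_nn` and `hottest_bytes` are stubs standing in for the neural network pieces (we’ll come back to the gradient part after covering gradient descent itself), and `run_target` is whatever executes the instrumented program and reports coverage and crashes.

```python
import random

def train_nn(corpus):
    """Stub: in NEUZZ this trains a neural net on (input, coverage) pairs."""
    return None

def hottest_bytes(nn, seed, k=4):
    """Stub: in NEUZZ this ranks byte positions by gradient magnitude."""
    return random.sample(range(len(seed)), k)

def mutate(seed, hot_positions):
    """Flip each 'hot' byte to a few new values, yielding mutated inputs."""
    for pos in hot_positions:
        for value in (0x00, 0xFF, seed[pos] ^ 0x80):
            mutant = bytearray(seed)
            mutant[pos] = value
            yield bytes(mutant)

def fuzz_loop(seeds, run_target, rounds=10):
    corpus = list(seeds)
    for _ in range(rounds):
        nn = train_nn(corpus)                # learn: input bytes -> coverage
        for seed in list(corpus):
            for mutant in mutate(seed, hottest_bytes(nn, seed)):
                new_coverage, crashed = run_target(mutant)
                if crashed:
                    print("crash:", mutant)  # trace kept for manual analysis
                if new_coverage:
                    corpus.append(mutant)    # new behaviour -> new training data
    return corpus
```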

What is Gradient Descent?

Gradient descent is a process used to find the lowest error rate by updating model parameters and following the direction of the slope until it reaches a valley; the valley represents a low error rate and thus high accuracy. Slopes? Valleys? Complex math is often made more digestible by visualizing the problem. This type of thinking can be strange to begin with, so stay with me as I explain.
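As a toy example, here is the whole idea in a few lines of Python, minimizing a one-dimensional “valley” f(w) = (w − 3)²: measure the slope at the current point, then step downhill.

```python
# Toy gradient descent: minimize f(w) = (w - 3)^2 by following the slope.
def loss(w):
    return (w - 3) ** 2

def gradient(w):
    return 2 * (w - 3)          # derivative (slope) of the loss at w

w = -4.0                        # start somewhere up the slope
for step in range(50):
    w -= 0.1 * gradient(w)      # step against the gradient, i.e. downhill
print(round(w, 4), round(loss(w), 6))   # w converges to ~3, the valley floor
```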

Figure 1 shows gradient descent: the algorithm follows the slope of the curve until it finds the low point. Each point on the graph comes after a change to the internal model parameters; slowly the algorithm learns to increase accuracy (and reduce error/loss). However, you can plot in more dimensions, and this can dramatically shift your understanding of how machine learning actually works.

Figure 2 shows a 3D representation of the problem space the algorithm has to optimize. The blue dot is a representation of the parameters and their change through time (the black line) as the algorithm is exposed to new data and the loss is optimized (reduced as much as possible). Projected below the 3D representation in Figure 2 is another type of representation you may come across: topographic, the same kind you may be familiar with from geographic maps showing the height of terrain. Figure 2 shows the same concept as Figure 1; one is 2D and the other 3D.

An analogy may illuminate these figures further. Gradient descent can be seen as a blindfolded mountaineer attempting to find their way down a mountain only by feeling the gradient of the terrain. The Security Kiwi page on optimization algorithms, and gradient descent in particular, will be added on February 4th.

Back to NEUZZ

Using the process of gradient descent, NEUZZ identifies the input bytes with the highest gradient values; these indicate a higher level of importance to the neural network and have a higher chance of causing a major change in the program’s behaviour. The neural network (a three-layer network) models the target program’s behaviour, and it is incrementally refined using behaviours observed during fuzzing (i.e. new training data is generated and learned from). When major changes in behaviour are detected, traces are recorded for manual analysis.
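In TensorFlow/Keras terms, the core trick might look something like the sketch below: train a small feed-forward network to predict which coverage edges an input hits, then take the gradient of an edge’s predicted hit probability with respect to the input bytes and keep the highest-magnitude positions. This is a simplified sketch of the paper’s idea under my own assumptions; the layer sizes, `INPUT_LEN`, `NUM_EDGES` and `hottest_bytes` are illustrative, not NEUZZ’s actual values or code.

```python
import tensorflow as tf
from tensorflow import keras

INPUT_LEN = 512     # hypothetical maximum input size, in bytes
NUM_EDGES = 4096    # hypothetical size of the edge-coverage bitmap

# A small feed-forward net: input bytes -> probability each edge is hit.
model = keras.Sequential([
    keras.Input(shape=(INPUT_LEN,)),
    keras.layers.Dense(2048, activation="relu"),
    keras.layers.Dense(NUM_EDGES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(seed_bytes, edge_bitmaps, ...)  # (input, observed coverage) pairs

def hottest_bytes(seed, edge_id, k=16):
    """Rank input byte positions by gradient magnitude for one target edge."""
    x = tf.convert_to_tensor([seed], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        hit_prob = model(x)[0, edge_id]  # predicted chance of hitting the edge
    grads = tape.gradient(hit_prob, x)[0]
    # The bytes with the largest |gradient| most influence this edge.
    return tf.argsort(tf.abs(grads), direction="DESCENDING")[:k].numpy()
```

Mutating only those top-k byte positions is the payoff: the fuzzer spends its budget on the bytes the model believes actually steer the program’s control flow, rather than mutating blindly.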

The Security Kiwi page discussing neural networks will be added on January 21st, and the whole Training Models chapter will be added shortly after.

NEUZZ Results

TL;DR: 31 vulnerabilities not discovered by 10 other fuzzers, including two CVEs (CVE-2018-19931 and CVE-2018-19932).

She et al. break down their evaluation into answering research questions they created at the start to guide their work.

  1. Can NEUZZ find more bugs than existing fuzzers?

NEUZZ finds five types of bugs across 6 programs: out-of-memory, memory leak, assertion crash, integer overflow and heap overflow, while the other fuzzers tested find at most two or three of these bug types. See the image below for detail.

  2. Can NEUZZ achieve higher coverage than existing fuzzers?

She et al. ran each fuzzer against 10 different programs for 24 hours and recorded the edge coverage of each. Edge coverage describes how much of the program was exercised while testing for vulnerabilities. The image below shows this comparison after 24 hours.
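If edge coverage is new to you: instrumented fuzzers count transitions between basic blocks (edges), not just the blocks themselves. Below is a rough Python illustration of AFL-style edge bookkeeping; real fuzzers do this in instrumented native code, and the constants here simply mirror AFL’s defaults.

```python
# Rough illustration of AFL-style edge coverage (real fuzzers instrument the
# target binary; this just shows the bookkeeping).
MAP_SIZE = 65536             # AFL's default coverage map size
coverage = bytearray(MAP_SIZE)
prev_loc = 0                 # id of the previously executed basic block

def on_basic_block(cur_loc):
    """Called on every basic block: records the (prev -> cur) edge."""
    global prev_loc
    edge = (prev_loc ^ cur_loc) % MAP_SIZE
    coverage[edge] = min(coverage[edge] + 1, 255)
    prev_loc = cur_loc >> 1  # shift so A->B and B->A hash to different edges

# "Edge coverage" after a run = how many distinct edges were exercised.
edges_hit = sum(1 for hits in coverage if hits)
```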

The following image shows edge coverage over time. Consider NEUZZ’s speed and coverage rate versus the other fuzzers tested. The reason for its vastly superior speed and coverage is what we discussed before: using a neural network and gradient descent to treat coverage as a smooth optimization problem, sidestepping the limitation of evolutionary algorithms.

Thanks for reading. I hope this was interesting.

As I mentioned, if you have any machine learning research papers you would like me to cover in detail, or project ideas for Security Kiwi, please get in touch (email, Twitter, Discord) or reply below. Don’t expect a quick turn-around; a proper write-up and research take time. Additionally, if you have any questions I’m happy to answer them.

Academic databases:

References

Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, and Suman Jana. NEUZZ: Efficient Fuzzing with Neural Program Smoothing. IEEE Symposium on Security and Privacy (S&P), 2019. https://arxiv.org/abs/1807.05620


Good stuff! That article is somewhat reviving the news category here… at least to a degree.
I like the idea of paper reviews/summaries! Reading a 1-2 page summary may give you enough information for determining whether or not the topic piques your interest.

I think I skimmed over that NEUZZ paper a while back, but I’m not a big fan of reading ML papers, so this was a nice refresher :)

That said, for anyone else interested in fuzzing and vulnerability research, I’m compiling a list of academic papers on my GitHub as well.


That’s a massive list, what a great resource. I’ll see if I have any quality papers to add.

