Replies: 2 comments 4 replies
-
Well, fuzzing with source is twice to triple the speed, so it is faster; it also sees way more coverage due to source instrumentation, so this is why it is way better.
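(For context, a minimal sketch of the two setups being compared, assuming AFL++ with its LLVM mode installed; the target name, seed directory and output directories are placeholders.)

```sh
# Source instrumentation: rebuild the target with AFL++'s compiler wrapper,
# then fuzz the instrumented binary.
afl-clang-fast -O2 -Wall -o target_instrumented target.c
afl-fuzz -i seeds -o out_src -- ./target_instrumented @@

# QEMU mode (-Q): fuzz the original, uninstrumented binary; coverage comes
# from QEMU's dynamic translation instead of compile-time instrumentation.
afl-fuzz -Q -i seeds -o out_qemu -- ./target_plain @@
```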
-
Many thanks for your reply @vanhauser-thc, very much appreciated. I used a very "naked" approach, giving no flags or optimizations at all, well, besides -Wall. Also no cmake, yocto or anything else in the way, just a direct call to the compiler. I'm sorry for my English, but I do not fully understand your statement: "... fuzzing with source is twice to triple the speed, so it is faster, ..."
All in all, are there any typical, best-practice compilation/build flags to use? Or perhaps there is one particular tutorial/doc that comes to mind when you read my details.
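(Purely as an illustration of what a less "naked" build could look like; a sketch assuming AFL++'s afl-clang-fast is used, with ASan and CMPLOG as examples of commonly used options, not as advice taken from this thread.)

```sh
# Build with AddressSanitizer so memory errors crash immediately
# (target.c is a placeholder for the real source).
AFL_USE_ASAN=1 afl-clang-fast -O2 -g -Wall -o target_asan target.c

# Optional second binary with CMPLOG instrumentation, which helps the fuzzer
# solve magic-value and checksum comparisons.
AFL_LLVM_CMPLOG=1 afl-clang-fast -O2 -Wall -o target_cmplog target.c

# Fuzz the ASan binary and pass the CMPLOG binary via -c.
afl-fuzz -i seeds -o out -c ./target_cmplog -- ./target_asan @@
```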
-
Hi,
I'm evaluating fuzz testing using different fuzzers. I'm operating in an embedded environment, but with rather sophisticated and powerful SoCs. The one in mind is an x86_64-based SoC, for which I have set up several small applications to be fuzzed.
I of course have the source code available and started with the source-code fuzzing approach. The results were not that promising, but that was mainly due to my misunderstanding of how the seeds have to be given and how the corpus building works.
BUT - then, out of pure interest, I used QEMU to run the fuzzer.
I was shocked to see that QEMU needed dramatically less run-time to find the fault. Even with completely misleading seeds it was way better.
This is very confusing, and I do not know exactly how to interpret the results.
It is a very strange situation: even though I have the source code available, the instrumented source code does not identify the problem as fast as the untouched binary run under QEMU. Even without providing any seed I could reproduce the crash within seconds, while with the instrumented source code a good seed was required, and even with a good seed it could take a few hours.
I have seen this difference not only with AFL but also with Honggfuzz. I assume source-code instrumentation may have some limitations, while QEMU mode provides the fuzzer with a "live view" of what is going on in the binary.
However, in the documents I have read, QEMU mode is mentioned only in the context of "if you do not have the source code available". Thus, I assumed that with the source code available you should not use QEMU. Why? Perhaps because it is not as good as source-code instrumentation. But that is pure speculation.
Perhaps I'm wrong. Either I am doing something wrong or I am misunderstanding something.
Do you have any thoughts or experience about this?
One example run I have captured here: