After some months of development xsssniper has become more stable, and a lot has changed since the initial releases, so it's about time to peek under the hood of the current version: 0.8.x.
First and foremost, it's important to highlight that the goal of this tool is to test an entire web application automatically, with minimal human intervention (maybe xssnuker would be a better name!).
With this in mind, the biggest change has been made to the injection engine. In the first versions, user intervention was needed to choose which XSS payload (Y) to inject and what artifacts (Z) to check for in the responses:
$ python xsssniper.py --url 'X' --payload 'Y' --check 'Z'
This was pretty much like testing injections from the browser. Awful.
After a little research and testing I redesigned the engine to automatically inject a taint and check the response for the taint's artifacts, in order to deduce whether an injection was successful and where.
The taint is something like this:
seed:seed seed=-->seed\"seed>seed'seed>seed+seed<seed>
Where seed is a random alphanumeric string.
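For illustration, a taint like that could be built with something along these lines (a minimal sketch; build_taint and the seed length are my own names and defaults, not xsssniper internals):

import random
import string

def build_taint(length=8):
    # Random alphanumeric seed used to recognize our own artifacts in the response
    seed = ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(length))
    # Surround the seed with characters that break out of common HTML/JS contexts
    taint = '{0}:{0} {0}=-->{0}\\"{0}>{0}\'{0}>{0}+{0}<{0}>'.format(seed)
    return seed, taint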
After the taint is injected, the response is parsed by a finite state machine that looks for the seed and keeps track of its logical position in the document (inside a tag attribute, inside an href, inside double quotes, inside single quotes, etc.).
If a seed is discovered in an exploitable position, the injection is verified and reported.
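As a much simplified sketch of the idea (this is not the actual xsssniper parser, and it skips finer distinctions such as single versus double quoting), the standard library HTMLParser can be used to classify where the reflected seed ends up:

from html.parser import HTMLParser  # on Python 2 this lives in the HTMLParser module

class TaintLocator(HTMLParser):
    # Collects (context, tag, attribute) tuples wherever the seed is reflected

    def __init__(self, seed):
        HTMLParser.__init__(self)
        self.seed = seed
        self.findings = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if value and self.seed in value:
                # Seed reflected inside an attribute value (e.g. an href or event handler)
                self.findings.append(('attribute', tag, name))

    def handle_data(self, data):
        if self.seed in data:
            # Seed reflected as plain text between tags
            self.findings.append(('text', None, None))

def locate_injections(seed, response_body):
    parser = TaintLocator(seed)
    parser.feed(response_body)
    return parser.findings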
This little change had a great impact on overall performance and opened the door to mass-scanning functionality.
In fact, before the injection engine is triggered, a set of crawlers is run to collect new targets to test. The crawlers are:
- A URL crawler (--crawl) to retrieve every local URL.
- A form crawler (--forms) to retrieve every form on the page or, if used in conjunction with the URL crawler, on the entire website.
- A javascript crawler (--dom) used to collect javascripts, embedded and linked, to test against DOM XSS.
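For example, a full scan combining the crawlers could be launched like this (the target URL is just a placeholder):

$ python xsssniper.py --url 'http://target.example/index.php?id=1' --crawl --forms --dom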
I am trying my best to detect DOM XSS too, but unfortunately it looks like automatically testing for this vulnerability is a really difficult problem.
The solution adopted, far from being definitive, is to scan every javascript for common sources and sinks as suggested here.
This is nothing more than running a regexp to highlight possible injection points, but no automatic verification is performed, so manual inspection by the user is still needed.
This is because I still haven't found a satisfying way to statically analyze the javascript: suggestions on this point are more than welcome!
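In spirit, the check boils down to something like the following; the source and sink patterns below are a hypothetical subset for illustration, not the exact ones shipped with xsssniper:

import re

# Hypothetical subset of DOM XSS sources and sinks worth flagging for manual review
SOURCES = r'(location\.search|location\.hash|document\.URL|document\.referrer|window\.name)'
SINKS = r'(document\.write|\.innerHTML|eval\(|setTimeout\(|setInterval\()'

def scan_script(script):
    findings = []
    for lineno, line in enumerate(script.splitlines(), 1):
        if re.search(SOURCES, line):
            findings.append((lineno, 'source', line.strip()))
        if re.search(SINKS, line):
            findings.append((lineno, 'sink', line.strip()))
    return findings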
Finally, there are a few options of common utility:
- --post and --data to send POST requests
- --threads to manage the number of threads used
- --http-proxy and --tor to scan behind proxies
- --user-agent to specify a user agent
- --random-agent to randomize the user agent
- --cookie to use a cookie
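For instance, a POST scan through Tor with a custom cookie might look like this (the URL, data and cookie values are placeholders, and the exact argument syntax may differ):

$ python xsssniper.py --url 'http://target.example/login.php' --post --data 'user=admin&pass=test' --tor --cookie 'PHPSESSID=1234' --threads 5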
For the next versions I have a little todo list with some features I'd like to implement, but at the top of it is the possibility to test injections with encoded payloads/taints. I think this is vital because, right now, the discovered injections are still pretty basic.
Oh, and HTTP response splitting! I want that too.
And, last but not least, I'd really like to improve the output format: I tried different styles but it still looks clumsy to me.
That's all for now. As usual all the code and docs are available here on my bitbucket.
If you have any suggestions, feature requests, the urge to contribute or just a bug to report... I want to hear from you!