Burp Suite Tool Usage, Part 4: Introduction to the Sequencer Module


Hi everyone!

I’m using a request that the Burp Suite captured when I browsed to the MSN homepage today. The request is to an ad hosting server and contains a cookie value. I’m going to use this request and its cookie value to demonstrate the Sequencer tool.

What is the Sequencer tool?

The Burp Suite is a collection of tools for web application security testing, which includes the Sequencer tool (description taken from the Port Swigger website):

Sequencer: Burp Sequencer is a tool for analysing the degree of randomness in an application’s session tokens or other items on whose unpredictability the application depends for its security.

Enabling the Burp Suite Proxy

To begin using the Burp Suite to test the strength of the cookie/token value, we need to configure our web browser to use the Burp Suite as a proxy. The Burp Suite proxy uses port 8080 by default, but you can change this if you want to.

You can see in the image below that I have configured Firefox to use the Burp Suite proxy for all traffic.
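
The browser isn't the only thing you can point at the proxy. As a rough sketch, here is how scripted traffic could be routed through the same listener from Python (this assumes Burp is on its default 127.0.0.1:8080, and uses the third-party requests library; it is my illustration, not part of Burp):

```python
import requests  # third-party: pip install requests

# Route scripted HTTP(S) traffic through the Burp Suite proxy
# (assumes the default listener on 127.0.0.1:8080).
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# verify=False because Burp presents its own CA certificate for HTTPS.
response = requests.get("http://www.msn.com/", proxies=proxies, verify=False)
print(response.status_code)
```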

When you open the Burp Suite proxy tool, you can check that the proxy is running by clicking on the options tab:

You can see that the proxy is using the default port:

The proxy is now running and ready to use. You can see that the proxy options tab has quite a few items that we can configure to meet our testing needs. A lot of these items are outside the scope of this tutorial.

The Burp Suite will now begin logging the requests and responses that pass through the proxy. I have browsed to the MSN homepage and the Burp Suite proxy has captured the request and response to the ad host:

I normally start using the Sequencer tool as soon as I begin my testing work to ensure that a large number of tokens are captured for analysis.

To start capturing tokens for analysis you need to send a request to the Sequencer tool. We have seen above that the Burp Suite has already captured a request and response for us, so we need to right-click it and send it to the Sequencer tool:

The Burp Suite will send the response from the ad host to the Sequencer tool so we can begin testing the Cookie value in the response for weaknesses.

The Sequencer tool has automatically found values in the response which look like cookies/tokens and added them to the cookie dropdown box:

You can click on any of the values in the dropdown box to mark it as the value to test. If the Burp Suite doesn't find your cookie/token values automatically, you can manually select them by clicking the "manual selection" radio button.

I will be testing the ajcmp value in this tutorial, but before I start the tests let's take a look at the manual load and options tabs.

Manually loading values to test

We have already seen that the Sequencer tool can take responses captured by other tools in the Burp Suite and automatically identify values to test. If we want to manually load in cookie/token values for the Sequencer tool to test we can do this by clicking on the manual load tab and then clicking on load or paste:

The tokens are now loaded into the tool and we can use the Sequencer tool to analyse them. I won't use these values in this tutorial because we are going to use the live capture approach instead.

The third tab that we can click on in the Sequencer tool is the options tab. The options tab allows us to pad short tokens with either single characters or 2-digit ASCII hex code values. You can also instruct the tool to base64 decode cookies/tokens before performing character or bit level analysis on them.
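
These options are configured in Burp's UI, but conceptually the pre-processing amounts to something like the following sketch (the function name and padding defaults are mine, not Burp's):

```python
import base64

def normalise_token(token, pad_to=0, pad_char="0", b64_decode=False):
    # Optionally base64-decode the token before analysis, then pad
    # short tokens out to a fixed width, mirroring the options tab.
    if b64_decode:
        raw = base64.b64decode(token + "=" * (-len(token) % 4))
        token = raw.hex()
    return token.ljust(pad_to, pad_char)

print(normalise_token("c2VjcmV0", b64_decode=True))  # -> 736563726574
print(normalise_token("ab1", pad_to=6))              # -> ab1000
```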

The Sequencer tool has two types of character level analysis and seven types of bit level analysis. They are all enabled by default but you can turn them on or off on the options tab:

I have included a description of each of the nine analysis types below (descriptions taken from the Port Swigger website), and after each one I have added a short Python sketch of the underlying idea:

Character count analysis: This test analyses the distribution of characters used at each position within the token. If the sample is randomly generated, the distribution of characters employed is likely to be approximately uniform. At each position, the test computes the probability of the observed distribution arising if the tokens are random.
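
Burp's exact implementation isn't published, but the counting at the heart of this test is easy to sketch (the chi-square step that turns counts into a probability is omitted here):

```python
from collections import Counter

def character_counts(tokens):
    # Count how often each character appears at each position.
    # For a random sample the distribution should be roughly uniform.
    length = min(len(t) for t in tokens)
    return [Counter(t[pos] for t in tokens) for pos in range(length)]

tokens = ["abc1", "abd2", "abc3", "abe1"]
for pos, counts in enumerate(character_counts(tokens)):
    print(pos, dict(counts))
# Positions 0 and 1 never change, so they carry no entropy at all.
```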

Character transition analysis: This test analyses the transitions between successive tokens in the sample. If the sample is randomly generated, a character appearing at a given position is equally likely to be followed in the next token by any one of the characters that is used at that position. At each position, the test computes the probability of the observed transitions arising if the tokens are random.
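
Again as a rough illustration only, counting the transitions at one position between successive tokens might look like this:

```python
from collections import Counter

def transition_counts(tokens, pos):
    # Count, at one position, which character follows which in
    # successive tokens. Random tokens give roughly uniform transitions.
    pairs = zip(tokens, tokens[1:])
    return Counter((a[pos], b[pos]) for a, b in pairs)

tokens = ["abc1", "abd2", "abc3", "abe1"]
print(transition_counts(tokens, 2))
# Counter({('c', 'd'): 1, ('d', 'c'): 1, ('c', 'e'): 1})
```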

FIPS monobit test: This test analyses the distribution of ones and zeroes at each bit position. If the sample is randomly generated, the number of ones and zeroes is likely to be approximately equal. At each position, the test computes the probability of the observed distribution arising if the tokens are random. For each of the FIPS tests carried out, in addition to reporting the probability of the observed data occurring, Burp Sequencer also records whether each bit passed or failed the FIPS test. Note that the FIPS pass criteria are recalibrated within Burp Sequencer to work with arbitrary sample sizes; however, the formal specification for the FIPS tests assumes a sample of precisely 20,000 tokens. Hence, if you wish to obtain results that are strictly compliant with the FIPS specification, you should ensure that you use a sample of 20,000 tokens.
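
The counting behind the monobit test is simple to sketch (this ignores the FIPS pass/fail thresholds and the probability calculation):

```python
def monobit_counts(bit_columns):
    # bit_columns[pos] holds the bit at position pos for every token.
    for pos, column in enumerate(bit_columns):
        ones = sum(column)
        print(f"bit {pos}: {ones} ones, {len(column) - ones} zeroes")

monobit_counts([[1, 0, 1, 1], [0, 0, 0, 0]])
# bit 0: 3 ones, 1 zeroes   (roughly balanced, which is what we want)
# bit 1: 0 ones, 4 zeroes   (a stuck bit with no entropy)
```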

FIPS poker test: This test divides the bit sequence at each position into consecutive, non-overlapping groups of four, and derives a four-bit number from each group. It then counts the number of occurrences of each of the 16 possible numbers, and performs a chi-square calculation to evaluate this distribution. If the sample is randomly generated, the distribution of four-bit numbers is likely to be approximately uniform. At each position, the test computes the probability of the observed distribution arising if the tokens are random.
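
Here is a minimal sketch of the poker statistic, using the standard chi-square form rather than the exact FIPS formula:

```python
import random
from collections import Counter
from itertools import product

def poker_statistic(bits):
    # Split the bit sequence into non-overlapping 4-bit groups and
    # chi-square the counts of the 16 possible group values.
    groups = [tuple(bits[i:i + 4]) for i in range(0, len(bits) // 4 * 4, 4)]
    expected = len(groups) / 16.0
    counts = Counter(groups)
    return sum((counts[g] - expected) ** 2 / expected
               for g in product((0, 1), repeat=4))

random_bits = [random.randint(0, 1) for _ in range(20000)]
biased_bits = [1] * 15000 + [0] * 5000
print(poker_statistic(random_bits))   # small statistic: looks random
print(poker_statistic(biased_bits))   # huge statistic: clearly non-random
```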

FIPS runs tests: This test divides the bit sequence at each position into runs of consecutive bits which have the same value. It then counts the number of runs with a length of 1, 2, 3, 4, 5, and 6 and above. If the sample is randomly generated, the number of runs with each of these lengths is likely to be within a range determined by the size of the sample set. At each position, the test computes the probability of the observed runs occurring if the tokens are random.
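
Counting run lengths, with everything of length 6 and above bucketed together as the description says, can be sketched like so:

```python
from collections import Counter
from itertools import groupby

def run_lengths(bits):
    # Collapse the sequence into runs of identical bits, then bucket
    # lengths of 6 and above together as the FIPS runs test does.
    lengths = (len(list(run)) for _, run in groupby(bits))
    return Counter(min(length, 6) for length in lengths)

print(run_lengths([1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]))
# Counter({1: 2, 2: 1, 3: 1, 6: 1})  (the final run of 7 lands in the 6+ bucket)
```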

FIPS long runs test: This test measures the longest run of bits with the same value at each bit position. If the sample is randomly generated, the longest run is likely to be within a range determined by the size of the sample set. At each position, the test computes the probability of the observed longest run arising if the tokens are random. Note that the FIPS specification for this test only records a fail if the longest run of bits is overly long. However, an overly short longest run of bits also indicates that the sample is not random. Therefore some bits may record a significance level that is below the FIPS pass level even though they do not strictly fail the FIPS test.
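
The corresponding measurement is essentially a one-liner:

```python
from itertools import groupby

def longest_run(bits):
    # Length of the longest run of identical consecutive bits.
    return max(len(list(run)) for _, run in groupby(bits))

print(longest_run([0, 1, 1, 1, 1, 0, 1, 0]))  # -> 4
```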

Spectral tests: This test performs a sophisticated analysis of the bit sequence at each position, and is capable of identifying evidence of non-randomness in some samples which pass the other statistical tests. The test works through the bit sequence and treats each series of consecutive numbers as coordinates in a multi-dimensional space. It plots a point in this space at each location determined by these co-ordinates. If the sample is randomly generated, the distribution of points within this space is likely to be approximately uniform; the appearance of clusters within the space indicates that the data is likely to be non-random. At each position, the test computes the probability of the observed distribution occurring if the tokens are random. The test is repeated for multiple sizes of number (between 1 and 8 bits) and for multiple numbers of dimensions (between 2 and 6).
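
Burp's spectral test is considerably more sophisticated than this, but a loose sketch of the plot-points-and-look-for-clusters idea (fixed here at 2-bit numbers in 2 dimensions, which is my simplification) looks like:

```python
import random
from collections import Counter

def spectral_cells(bits, size=2, dims=2):
    # Read consecutive size-bit numbers from the sequence, group them
    # into dims-tuples (coordinates), and count points per cell.
    numbers = [int("".join(map(str, bits[i:i + size])), 2)
               for i in range(0, len(bits) - size + 1, size)]
    points = [tuple(numbers[i:i + dims])
              for i in range(0, len(numbers) - dims + 1, dims)]
    return Counter(points)

bits = [random.randint(0, 1) for _ in range(4000)]
cells = spectral_cells(bits)
# For random bits every cell should hold roughly the same number of points;
# a handful of crowded cells would suggest the data is non-random.
print(max(cells.values()), min(cells.values()))
```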

Correlation test: Each of the other bit-level tests operates on individual bit positions within the sampled tokens, and so the amount of randomness at each bit position is calculated in isolation. Performing only this type of test would prevent any meaningful assessment of the amount of randomness in the token as a whole: a sample of tokens containing the same bit value at each position may appear to contain more entropy than a sample of shorter tokens containing different values at each position. Hence, it is necessary to test for any statistically significant relationships between the values at different bit positions within the tokens. If the sample is randomly generated, a value at a given bit position is equally likely to be accompanied by a one or a zero at any other bit position. At each position, this test computes the probability of the relationships observed with bits at other positions arising if the tokens are random. To prevent arbitrary results, when a degree of correlation is observed between two bits, the test adjusts the significance level of the bit whose significance level is lower based on all of the other bit-level tests.
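
A crude way to illustrate the idea, ignoring Burp's significance-level adjustment, is to count how often each pair of bit positions agrees across the sample:

```python
from itertools import combinations

def correlation_check(token_bits):
    # token_bits is a list of tokens, each expressed as a list of bits.
    # Random tokens should agree at any two positions about half the time.
    total = len(token_bits)
    for i, j in combinations(range(len(token_bits[0])), 2):
        agree = sum(t[i] == t[j] for t in token_bits)
        print(f"bits {i} and {j} agree in {agree} of {total} tokens")

correlation_check([[1, 0, 1], [0, 1, 0], [1, 0, 1], [0, 1, 1]])
# bits 0 and 1 never agree here: perfectly anti-correlated, so one of
# them adds no real entropy to the token.
```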

Compression test: This test does not use the statistical approach employed by the other tests, but rather provides a simple intuitive indication of the amount of entropy at each bit position. The test attempts to compress the bit sequence at each position using standard ZLIB compression. The results indicate the proportional reduction in the size of the bit sequence when it was compressed. A higher degree of compression indicates that the data is less likely to be randomly generated.
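
This one is easy to approximate, since Python ships zlib in the standard library:

```python
import random
import zlib

def compression_ratio(bits):
    # Pack the bit sequence into bytes and compress it with zlib.
    # The more it shrinks, the less entropy the sequence carries.
    packed = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                   for i in range(0, len(bits) - 7, 8))
    return len(zlib.compress(packed, 9)) / len(packed)

print(compression_ratio([random.randint(0, 1) for _ in range(80000)]))  # ~1.0
print(compression_ratio([0, 1] * 40000))  # far below 1.0: highly compressible
```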

Starting the analysis

I’m going to leave all of the analysis types enabled for the tests in this tutorial. To start analyzing the cookie/token value we saw earlier we need to click the “start capture” button on the live capture tab:

This will launch the Sequencer tool's live capture window:

The Sequencer tool will now start making requests automatically and record the cookie/token value received in the responses. I normally leave this running until at least 1000 cookie/token values have been captured, but you might require more or fewer depending on your requirements. You will have to capture at least 100 cookie/token values before you can use any of the nine analysis types.

You can see in the image below that over 1000 cookie/token values have been captured for us to analyse:

We can save or copy the cookie/token values to analyse at a later time or click the “analyse now” button to have the captured values analysed immediately:

The Sequencer tool produces many different test results so I will only walk through a few of them in this tutorial.

The summary screen, shown in the image above, gives us a high-level overview of the reliability of the tests and an overall result. In our example the reliability of the testing is deemed to be reasonable and the quality of randomness in the sample is extremely poor:


We can see that the values are not generated with a high level of entropy at every character position; some positions contribute little or none.

If we look at the test results on the character level analysis tab for both the character count and transition tests we can see obvious weaknesses in the cookie/token value:

These two results show that in some character positions the same value appears too many times in the same place to be considered secure. For example, the letter "d" appeared at character position 3 in 662 of the tokens analysed. The second image shows the results of the transition testing.

The next two images show the number of different characters used at each position in the cookie/token value and the maximum entropy available at each position. This shows that some of our character positions never changed and consequently have no entropy.
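
These per-position figures are easy to reproduce for any token sample; a quick sketch:

```python
import math

def entropy_per_position(tokens):
    # Report the distinct characters seen at each position and the
    # maximum entropy (log2 of that count) the position could carry.
    length = min(len(t) for t in tokens)
    for pos in range(length):
        distinct = {t[pos] for t in tokens}
        print(f"position {pos}: {len(distinct)} distinct chars, "
              f"{math.log2(len(distinct)):.2f} bits max")

entropy_per_position(["abc1", "abd2", "abc3", "abe1"])
# position 0 never changes: 1 distinct char, 0.00 bits of entropy
```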

The bit level analysis testing produces too many different outputs to cover in this blog post, so I won't explain them all here. I have included a couple of the test results below for you to peruse:

I hope you have found this blog post useful, and I'm always interested in hearing any feedback you have.
