Oct 31, 2017

4 min read

Dgenr8’s Difficulty Adjustment Algorithm Explained

Note: This article was originally published at

In the last few weeks, several new candidates for Bitcoin Cash’s new difficulty adjustment algorithm have emerged. Some of these proposed algorithms have been incorporated into Kyuupichan’s model [1], which attempts to simulate miners moving hash power based on their economic incentives.

Deadalnix has proposed a simpler version of his algorithm [2]. It still uses chain work to calculate a difficulty based on estimated hash rate, but his new proposal replaces the two targeting windows with one 144-block window. It is labelled as “cw-144” in Kyuupichan’s model.

One of the problems with using a simple fixed sample window, like cw-144 does, is that the calculated difficulty becomes sensitive to the particularities of the endpoints of the sample window. This has the potential to accentuate oscillations as unusually long (or short) blocks are produced, and then exit the sample window 144 blocks later.
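To make the endpoint effect concrete, here is a hypothetical toy calculation (not taken from Kyuupichan's model): a plain 144-block moving average of solve times jumps once when an outlier block enters the window and again, 144 blocks later, when it leaves.

```python
# Toy illustration of fixed-window endpoint sensitivity.
# 300 blocks solved in exactly 600 s each, except one 2-hour outlier.
times = [600] * 300
times[100] = 7200  # one unusually long block

WINDOW = 144
# Average solve time over each full 144-block window.
avgs = [sum(times[i - WINDOW:i]) / WINDOW
        for i in range(WINDOW, len(times) + 1)]

# While the outlier is inside the window the average is elevated;
# the moment the outlier falls out, the average snaps back to 600 s.
# A retargeting rule driven by this average would adjust twice in
# response to a single unusual block.
```

A recency-weighted window instead phases the outlier's influence out gradually, avoiding the second abrupt change.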

A new proposal from dgenr8 (Tom Harding, maintainer of Bitcoin XT) attempts to solve this problem. The algorithm is called wt-144. As the name suggests, it shares some similarities with deadalnix’s updated proposal, including a 144-block sample window, but it also has some important and interesting differences.

Design Approach

The interesting part of wt-144 is that it computes its target from the 144 inter-block times in the sample window, but these times are not all treated equally. They are weighted by two factors:

  1. The inter-block time is weighted by the target of the block produced. This means that the inter-block times of blocks mined at lower difficulty are multiplied by a larger number (since lower difficulty corresponds to a higher “target”). This makes sense intuitively, as we can imagine the times being “normalized” to the difficulty of the last block. It is interesting to note that this calculation seems strangely similar to deadalnix’s chain work formula: it uses the same terms, but they are summed differently. It would be interesting to analyze the theory behind the similarities and differences of the two approaches.
  2. The times are weighted by recency. The most recent block is weighted highest, with weights decreasing linearly back to the start of the 144-block sample window. This recency weighting makes the targeting responsive, while also providing some stability based on block history. It also means there is no sudden boundary to the sample window, so blocks entering or leaving the window cause no abrupt changes that could lead to oscillations.
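In code form, the combined weight applied to a single inter-block time might look like this (a sketch with illustrative names, not dgenr8’s actual implementation):

```python
def weighted_time(inter_block_time, block_target, last_target, recency):
    """One term of the wt-144 sum (illustrative, not dgenr8's code).

    block_target: difficulty target of the block that produced this
        inter-block time; a higher target means lower difficulty.
    last_target:  target of the most recent block in the window.
    recency:      1 for the oldest block, up to 144 for the newest.
    """
    # Normalize the time to the latest difficulty, then weight by recency.
    return inter_block_time * (block_target / last_target) * recency
```

For example, a 600-second block mined at twice the latest target, sitting three blocks from the start of the window, would contribute 600 × 2 × 3 = 3600 to the weighted sum.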

Because wt-144 uses every timestamp in the interval for its calculation, manipulated timestamps can affect the target. But since each individual timestamp has only a small effect on the calculated target, the impact is limited.


The calculation loops through the 144 blocks of the sample window, oldest to newest:

  1. If the timestamp of the block is less than prior_timestamp, set timestamp equal to prior_timestamp (this handles negative inter-block times).
  2. Set time_i equal to timestamp - prior_timestamp (this is the inter-block time for this block).
  3. Set prior_timestamp equal to timestamp (for the next time through the loop).
  4. Multiply the inter-block time by the ratio of the current block’s difficulty target to last_target (the target of the most recent block).
  5. Multiply this number by the recency weight. Recency weight starts at 1, and is incremented by 1 each time through the loop.
  6. Add the weighted inter-block time to “timespan”.
  7. End the loop when it reaches the most recent block.

After looping through all 144 blocks, normalize timespan by multiplying it by 2 and dividing by the number of blocks in the sample (144) plus 1. This divides out the average recency weight, which is (144 + 1) / 2.

Then the next difficulty target is calculated as last_target, multiplied by “timespan” and divided by the number of blocks in the sample times 600 seconds (which is the “expected” timespan for the sample blocks).
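Putting the steps together, a minimal Python sketch of the whole calculation might look like the following. The function name, argument layout, and use of floating point are assumptions for illustration; the actual wt-144 implementation [3] differs in detail.

```python
def wt144_next_target(timestamps, targets, n=144, block_time=600):
    """Sketch of the wt-144 retargeting rule described above.

    timestamps: n + 1 block timestamps, oldest first (the extra entry
        gives the oldest sample block a prior timestamp).
    targets:    list aligned with timestamps; targets[i] is the
        difficulty target of the block at timestamps[i].
    All names and conventions here are illustrative assumptions.
    """
    last_target = targets[n]
    prior_timestamp = timestamps[0]
    timespan = 0.0
    for i in range(1, n + 1):                            # oldest to newest
        timestamp = max(timestamps[i], prior_timestamp)  # step 1: no negative times
        time_i = timestamp - prior_timestamp             # step 2: inter-block time
        prior_timestamp = timestamp                      # step 3
        # Steps 4-6: weight by target ratio and recency (i = 1..n), accumulate.
        timespan += time_i * (targets[i] / last_target) * i
    timespan = timespan * 2 / (n + 1)   # divide out the average recency weight
    # New target: scale last_target by actual vs. expected timespan.
    return last_target * timespan / (n * block_time)
```

As a sanity check: with 144 ideal 600-second blocks at a constant target, the function returns that same target unchanged; if blocks consistently arrive in 300 seconds, it returns half the target (i.e. doubles the difficulty).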


Although it is very important to fix Bitcoin Cash’s difficulty algorithm quickly, it should be replaced with the best algorithm possible. It should be an algorithm that performs well in varied and difficult circumstances, so that it can serve Bitcoin Cash for the long term. Perhaps Dgenr8’s algorithm has the characteristics needed to fill that role.


[1] Kyuupichan’s difficulty simulation model:

[2] cw-144 implementation:

[3] wt-144 implementation:

[4] worst case simulation: