The scaled diff is just the difficulty times 100 so that the details show up in the graph.
https://docs.google.com/spreadsheet/pub?key=0ArV2MHjf7HT0dFhtQmRBV2FWQ0syQVR3TTRLcWpya0E&single=true&gid=1&output=html
The issue is that the graph here is in relation to time... not blocks (thanks for the graph, by the way -- appreciate the effort). A graph plotting network speed/blocks/difficulty would be a better comparison. While I'm still struggling to make sense of the difficulty calculation formula, my guess is that a speed/blocks/difficulty graph would be consistent with history.
The intended target is to find ~24 blocks/hour. Instead, due to the difficulty lag and the reduction in network hash power, we're closer to finding 1 block/hour. As a result, all graphs based on time are going to be a bit misleading. The time axis helps show the frustration factor, sure, but all of the calculations are based on blocks. My labeling the formula as "flawed" earlier was probably not the best choice of wording -- the formula just needs to better account for the behavior of everyone on the network, especially bringing multipools into the equation.
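If anyone wants to sanity-check that lag for themselves, here's a rough back-of-envelope sketch. It assumes the usual Bitcoin-style rule of thumb that a block takes about difficulty * 2^32 hashes on average to find; the difficulty and hash rate numbers are made up purely to show the effect, not measurements of this network:

```python
# Back-of-envelope: blocks/hour as a function of difficulty and network hash rate.
# Assumes the standard Bitcoin-style relationship of roughly difficulty * 2**32
# hashes per block on average. All concrete numbers below are hypothetical.

def expected_blocks_per_hour(difficulty, network_hashrate_hps):
    """Expected blocks found per hour at the given difficulty and hash rate (H/s)."""
    hashes_per_block = difficulty * 2**32              # average work to find one block
    seconds_per_block = hashes_per_block / network_hashrate_hps
    return 3600.0 / seconds_per_block

# A difficulty tuned for a big multipool hash rate, left behind after that
# hash rate leaves, stretches block times out dramatically:
print(expected_blocks_per_hour(difficulty=30, network_hashrate_hps=860e6))  # ~24 blocks/hour
print(expected_blocks_per_hour(difficulty=30, network_hashrate_hps=36e6))   # ~1 block/hour
```

In other words, a ~24x drop in hash rate at a fixed difficulty is exactly the swing from ~24 blocks/hour down to ~1 block/hour, which lines up with what we're seeing.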
Does anybody know for sure what is being used to calculate the difficulty? Even if the previous 200 blocks were being used, the difficulty should be around 3 right now after the latest retarget... and I can't imagine more than 200 blocks would factor into the equation. It must be using a combination of various factors; some transparency here would be appreciated so we can at least estimate where we are and where we are heading. Obviously we could put a bunch of additional time into it and analyze data, sort through the source, etc... but nothing would be more accurate (or simpler) than Kimoto just explaining it to us.
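For what it's worth, when I say "the previous 200 blocks", this is the kind of plain moving-average retarget I'm assuming in my head -- just a sketch of that assumption, not a claim about what the actual client implements (the 200-block window and 150-second spacing are my guesses):

```python
# Illustrative only: a plain moving-average retarget over the last N blocks.
# Not claimed to be the coin's real algorithm -- just the simple model behind
# my "difficulty should be around 3 by now" mental math.

def simple_retarget(old_difficulty, block_timestamps, target_spacing_sec=150):
    """Rescale difficulty by how fast the last len(block_timestamps) - 1 blocks arrived.

    block_timestamps: Unix timestamps of the last N+1 blocks, oldest first.
    target_spacing_sec: intended seconds between blocks (150 s ~= 24 blocks/hour).
    """
    blocks_spanned = len(block_timestamps) - 1
    actual_span = block_timestamps[-1] - block_timestamps[0]
    expected_span = blocks_spanned * target_spacing_sec
    # Blocks arriving slower than intended (actual > expected) pull difficulty down.
    return old_difficulty * expected_span / actual_span
```

Under that simple model, 200 blocks averaging an hour apart would cut the difficulty by roughly 24x in a single retarget, which is why the current number doesn't add up for me without knowing what else is in the formula.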