Comments on: Please wait, pausing…

Yeah, that’s another point worth remembering, and one I’d completely forgotten about, I have to confess! Thanks!

I don’t think there’s much you can do about it in the case of general seek times (I guess in theory you could predict the pattern and factor it into your data layout, but in practice I think other random variables like the drive age/make/etc. would throw you off). But back in the PS2 days I did get a noticeable speed-up by optimising the way successive read requests were sent to the drive: if you read (say) two 32K chunks one immediately after the other, there was a pretty big performance gap between the case where the drive could simply read straight through all 64K of data and the one where it had to wait a bit longer for the next command, missed the “read window”, and was stalled until the next revolution of the disc.
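(A sketch of that “keep the drive fed” pattern, just to make the ordering concrete. The beginRead helper below is hypothetical and is stubbed with std::async over plain stdio purely so the example runs; on real hardware it would be the drive’s own asynchronous read command. The only point is that the request for the next 32K chunk goes out before the current chunk is processed, so the drive can read straight through instead of stalling for a revolution.)

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <future>
    #include <vector>

    constexpr std::size_t kChunkSize = 32 * 1024;  // 32K per request

    // Hypothetical async read: start a transfer and return a future yielding the
    // number of bytes actually read. Stubbed with std::async over stdio purely so
    // the sketch runs; on the PS2 this would be the drive's async read command.
    std::future<std::size_t> beginRead(std::FILE* f, std::uint8_t* dst,
                                       std::size_t bytes, long offset)
    {
        return std::async(std::launch::async, [=] {
            std::fseek(f, offset, SEEK_SET);
            return std::fread(dst, 1, bytes, f);
        });
    }

    // Double-buffered streaming: the request for chunk N+1 is issued as soon as
    // chunk N lands, before chunk N is processed, so the drive is never left
    // waiting for the next command and missing its read window.
    void streamFile(std::FILE* f, std::size_t fileSize,
                    void (*consume)(const std::uint8_t*, std::size_t))
    {
        std::vector<std::uint8_t> buf[2] = {
            std::vector<std::uint8_t>(kChunkSize),
            std::vector<std::uint8_t>(kChunkSize),
        };
        int cur = 0;
        std::size_t offset = 0;
        auto pending = beginRead(f, buf[cur].data(), kChunkSize, 0);  // prime chunk 0

        while (offset < fileSize) {
            std::size_t got = pending.get();        // chunk N has arrived
            std::size_t next = offset + kChunkSize;
            if (next < fileSize)                    // queue chunk N+1 immediately...
                pending = beginRead(f, buf[cur ^ 1].data(), kChunkSize,
                                    static_cast<long>(next));
            consume(buf[cur].data(), got);          // ...then process chunk N
            offset = next;
            cur ^= 1;
        }
    }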

By: Luke Hutchinson (/2011/06/01/please-wait-pausing/#comment-5136) Sat, 04 Jun 2011 09:59:20 +0000

Oh, that’s interesting… I’d never spotted that the PS3 was using CLV. Many thanks for the correction!

By: Drew Thaler (/2011/06/01/please-wait-pausing/#comment-5099) Thu, 02 Jun 2011 22:41:16 +0000

A pitch demo I was working on about 6-7 years ago for PS2 suffered from level load times of around 2 minutes off the DVD. Having no real experience shipping anything at the time, we had done all development to that point off the devkit HDD, so these load times naturally went unnoticed until we began prepping the demo disc. We were fortunate that we weren’t using our entire DVD capacity, and with the deadline looming I ended up implementing some special modes in our file system.

In “normal” mode, the system loaded data from individual files, performing seeks, file length queries and so on. When put in “record” mode, all data returned via the ordinary file APIs was also streamed out to a single flat file. When put in “replay” mode, all file requests were ignored; instead, data was streamed into a memory buffer that satisfied all file API requests, issuing a refill request whenever the buffer crossed a refill threshold. A special debugging mode inserted ID tags into the flat-file stream indicating what kind of requests the flat file had been recorded with. This proved invaluable for debugging determinism problems, but it also helped highlight problem systems that entered !feof() spin loops to grab one character at a time. Yeah. :/

We were shocked and thrilled that eliminating seeks, simultaneous open files, random reads and file size queries brought the load time from DVD down into the 10-20 second range.

We’d record a flat file per level, but since our level geometry was practically unique anyway this didn’t really increase our disc usage by much. It was the texture data and the duplication of movable entity geometry that ended up bloating the DVD image size.
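(A rough sketch of the record/replay mode switch described above. All names here are hypothetical, and it simplifies one thing: the real system streamed replay data into a memory buffer with threshold-based refills, whereas this version just reads straight from the flat file. Replay only works if the game issues exactly the same requests in the same order as during the recording run, which is why the ID-tag mode for tracking down determinism problems earns its keep.)

    #include <cstddef>
    #include <cstdio>
    #include <string>

    enum class FsMode { Normal, Record, Replay };

    class FlatFileFs {
    public:
        FlatFileFs(FsMode mode, const char* flatPath) : mode_(mode) {
            // The flat file is written during a "record" run and read back,
            // strictly sequentially, during a "replay" run.
            if (mode_ == FsMode::Record) flat_ = std::fopen(flatPath, "wb");
            if (mode_ == FsMode::Replay) flat_ = std::fopen(flatPath, "rb");
        }
        ~FlatFileFs() { if (flat_) std::fclose(flat_); }

        // Game-facing read call. Normal/record modes really open and read the
        // named file; replay mode ignores the name entirely and serves bytes
        // in the order they were recorded, with no opens and no seeks.
        std::size_t read(const std::string& path, long offset,
                         void* dst, std::size_t bytes) {
            if (mode_ == FsMode::Replay)
                return std::fread(dst, 1, bytes, flat_);

            std::FILE* f = std::fopen(path.c_str(), "rb");
            if (!f) return 0;
            std::fseek(f, offset, SEEK_SET);
            std::size_t got = std::fread(dst, 1, bytes, f);
            std::fclose(f);

            // In record mode, mirror every byte handed back to the game into
            // the flat file so a replay run can satisfy the same requests.
            if (mode_ == FsMode::Record && got)
                std::fwrite(dst, 1, got, flat_);
            return got;
        }

    private:
        FsMode mode_;
        std::FILE* flat_ = nullptr;
    };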

By: Rob (/2011/06/01/please-wait-pausing/#comment-5062) Wed, 01 Jun 2011 16:48:22 +0000