  20. 06 May, 2010 3 commits
    • ole32: Rewrite transacted storage to be more lazy. · d07a4868
      Vincent Povirk authored
      When creating a new transacted storage object (or reverting an
      existing one), rather than copy the original storage, we simply create
      a "stub directory entry" for the root. As stub entries are accessed,
      we fill in their data from the parent and create new stubs for any
      linked entries. The streams have copy on write semantics - reads are
      from the original entry until a change is made, then we make a copy in
      the scratch file.
      
      When committing transacted storages, we have to create a new tree with
      the new data so that the storage entry can be modified in one step,
      but unmodified sections of the tree can now be shared between the new
      tree and the old. An entry can be shared if it and all entries
      reachable from it are unmodified. In the trivial case where nothing
      has been modified, we don't have to make a new tree at all.
    • Nikolay Sivov · 56fdbc22
  21. 05 May, 2010 2 commits
    • ole32: Store the location of all blocks in a big block chain in memory. · 42550953
      Vincent Povirk authored
      A big block chain is a linked list, and we pretty much need random
      access to them. This should theoretically make accessing a random
      point in the chain O(log2 n) instead of O(n) (with disk access scaling
      based on the size of the read/write, not its location). It
      theoretically takes O(n) memory based on the size, but it can do
      better if the chain isn't very fragmented (which I believe will
      generally be the case for long chains). It also involves fetching all
      the big block locations when we open the chain, but we already do that
      anyway (and it should be faster to read it all in one go than
      piecemeal).