Some ILockBytes objects will not really write changes until their Flush
method is called. Also, further optimizations to the storage implementation
will involve caching writes, which will have to be flushed at times.
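As a minimal sketch of where such a flush would be issued (assuming an ILockBytes created elsewhere, e.g. with CreateILockBytesOnHGlobal; the helper below is hypothetical):

    #include <windows.h>
    #include <objidl.h>

    /* Hypothetical helper: write a buffer at offset 0, then flush so that
     * implementations which buffer internally commit the data. */
    static HRESULT write_and_flush(ILockBytes *lockBytes, const void *buf, ULONG size)
    {
        ULARGE_INTEGER offset;
        ULONG written;
        HRESULT hr;

        offset.QuadPart = 0;
        hr = ILockBytes_WriteAt(lockBytes, offset, buf, size, &written);
        if (FAILED(hr)) return hr;

        /* Without this, some ILockBytes objects never hit the backing store. */
        return ILockBytes_Flush(lockBytes);
    }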
In some storage files, the size of this stream is not a multiple of the big
block size. This means we may need to enlarge the stream (record a larger
size) even when no additional big blocks actually have to be allocated for it.
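A tiny illustration of why (hypothetical helper; a 0x1000-byte big block size is assumed):

    #include <windows.h>

    /* Number of big blocks a stream of `size` bytes occupies. */
    static ULONG blocks_needed(ULONGLONG size, ULONG bigBlockSize)
    {
        return (ULONG)((size + bigBlockSize - 1) / bigBlockSize);
    }

    /* blocks_needed(0x1200, 0x1000) == 2 and blocks_needed(0x1800, 0x1000) == 2,
     * so growing such a stream from 0x1200 to 0x1800 bytes changes its recorded
     * size without allocating any new blocks. */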
CreateStubEntry can change the value of This->entries, in which case the
assignment can go to the wrong place. So instead, assign to a temporary
variable, and copy the data back after all CreateStubEntry calls are finished.
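A simplified sketch of the pattern (types and names are hypothetical stand-ins, and error handling is omitted):

    #include <stdlib.h>

    typedef struct { int left, right; } Entry;

    typedef struct
    {
        Entry *entries;
        size_t count, capacity;
    } Storage;

    /* May grow storage->entries with realloc, invalidating earlier pointers
     * into the array. */
    static int create_stub_entry(Storage *storage)
    {
        if (storage->count == storage->capacity)
        {
            storage->capacity = storage->capacity ? storage->capacity * 2 : 8;
            storage->entries = realloc(storage->entries,
                                       storage->capacity * sizeof(Entry));
        }
        return (int)storage->count++;
    }

    static void fill_stub_children(Storage *storage, size_t index)
    {
        /* Wrong:  storage->entries[index].left = create_stub_entry(storage);
         * The address of the left-hand side may be computed before the call,
         * so the store can land in the old, freed buffer. */
        Entry tmp = storage->entries[index];
        tmp.left  = create_stub_entry(storage);
        tmp.right = create_stub_entry(storage);
        storage->entries[index] = tmp;   /* safe: assigned after all calls */
    }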
When creating a new transacted storage object (or reverting an
existing one), rather than copy the original storage, we simply create
a "stub directory entry" for the root. As stub entries are accessed,
we fill in their data from the parent and create new stubs for any
linked entries. The streams have copy-on-write semantics: reads come from
the original entry until a change is made, then we make a copy in the
scratch file.
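A rough sketch of the per-entry bookkeeping this implies (field names are hypothetical, not the actual structures):

    typedef unsigned int DirRef;

    typedef struct TransactedDirEntry
    {
        DirRef parentEntry;   /* entry in the original storage this mirrors */
        int    read;          /* non-zero once data and links were fetched
                                 from the parent and stubs created for them */
        int    dirty;         /* non-zero once this entry was modified      */
        int    stream_dirty;  /* non-zero once the stream data was copied
                                 to the scratch file; until then, reads go
                                 to the original entry                      */
    } TransactedDirEntry;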
When committing transacted storages, we have to create a new tree with
the new data so that the storage entry can be modified in one step,
but unmodified sections of the tree can now be shared between the new
tree and the old. An entry can be shared if it and all entries
reachable from it are unmodified. In the trivial case where nothing
has been modified, we don't have to make a new tree at all.
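An illustrative sketch of the sharing test (hypothetical, simplified types):

    #define ENTRY_NULL ((unsigned)-1)

    typedef struct Entry
    {
        unsigned left, right, child;  /* links to other entries, or ENTRY_NULL */
        int dirty;                    /* set when this entry was modified      */
    } Entry;

    /* An old entry can be reused by the committed tree only if it and
     * everything reachable from it is unmodified. */
    static int can_share_subtree(const Entry *entries, unsigned ref)
    {
        if (ref == ENTRY_NULL)
            return 1;
        if (entries[ref].dirty)
            return 0;
        return can_share_subtree(entries, entries[ref].left) &&
               can_share_subtree(entries, entries[ref].right) &&
               can_share_subtree(entries, entries[ref].child);
    }

If can_share_subtree() holds for the root, nothing was modified and no new tree needs to be written at all.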
A big block chain is a linked list, but we effectively need random access
to it. Caching the chain's block locations (collapsed into contiguous runs)
when it is opened should theoretically make accessing a random point in the
chain O(log2 n) instead of O(n) (with disk access scaling based on the size
of the read/write, not its location). The cache theoretically takes O(n)
memory based on the chain's size, but it can do better if the chain isn't
very fragmented (which I believe will generally be the case for long
chains). It also means fetching all the big block locations when we open
the chain, but we already do that anyway (and it should be faster to read
them all in one go than piecemeal).
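One way to picture the cache (hypothetical structures; the real layout may differ): contiguous blocks collapse into runs, and a binary search finds the run containing a given position in the chain.

    typedef struct BlockRun
    {
        unsigned firstOffset; /* chain index of the run's first block */
        unsigned lastOffset;  /* chain index of the run's last block  */
        unsigned firstSector; /* big block number of the first block  */
    } BlockRun;

    /* Map a position in the chain to a big block number in O(log n). */
    static unsigned chain_offset_to_sector(const BlockRun *runs, unsigned numRuns,
                                           unsigned offset)
    {
        unsigned lo = 0, hi = numRuns;

        while (lo < hi)
        {
            unsigned mid = (lo + hi) / 2;
            if (offset < runs[mid].firstOffset)
                hi = mid;
            else if (offset > runs[mid].lastOffset)
                lo = mid + 1;
            else
                return runs[mid].firstSector + (offset - runs[mid].firstOffset);
        }
        return (unsigned)-1; /* offset is past the end of the chain */
    }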
This value is stored in the storage file header. We currently hard-code it to
0x1000. I don't expect to see files in the wild with other values, but
according to MS this is a valid configuration. For now, just fail if we see
another value.
I've also upgraded the message for unexpected values in storage file headers
to a fixme, since they are valid according to MS.
We can't determine the correct block size until we read the header, and we
can't create the BigBlockFile until we know the correct block size. To
solve this dilemma, have StorageImpl calculate the file size it needs instead
of asking the BigBlockFile to "ensure a big block exists".
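A minimal sketch of that calculation (hypothetical helper, assuming big blocks are laid out consecutively after a header region that occupies the first block-sized chunk of the file):

    #include <windows.h>

    /* Smallest file size for which big block `index` fully exists. */
    static ULONGLONG storage_required_size(ULONG index, ULONG blockSize)
    {
        return ((ULONGLONG)index + 2) * blockSize;
    }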
In simple mode, StreamWriteAt assumed that StreamSetSize would use exactly
the size it asked for, but in some cases the size was pushed above the small
block limit. StreamWriteAt would then attempt to write using a small block
chain, even though a big block chain had been created.
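A simplified, hypothetical illustration of the bug:

    #define SMALL_BLOCK_LIMIT    0x1000
    #define SIMPLE_MODE_MIN_SIZE 0x1000  /* hypothetical simple-mode minimum */

    /* Stand-in for StreamSetSize: simple mode may round the size up. */
    static unsigned long stream_set_size(unsigned long requested)
    {
        return requested < SIMPLE_MODE_MIN_SIZE ? SIMPLE_MODE_MIN_SIZE : requested;
    }

    static int write_uses_small_blocks(unsigned long requested)
    {
        unsigned long actual = stream_set_size(requested);

        /* The buggy path decided from `requested`; since `actual` can cross
         * the small block limit, the decision has to use `actual`. */
        return actual < SMALL_BLOCK_LIMIT;
    }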
Opening a storage that has already been opened now fails with
STG_E_ACCESSDENIED. If we attempt to copy a storage into its own child,
this failure will occur during the copy.
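A usage-level sketch of the new behaviour (the element name "child" is hypothetical; `parent` is assumed to be an open IStorage):

    #include <windows.h>
    #include <objbase.h>

    static void open_twice(IStorage *parent)
    {
        IStorage *first = NULL, *second = NULL;
        HRESULT hr;

        hr = IStorage_OpenStorage(parent, L"child", NULL,
                                  STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
                                  NULL, 0, &first);
        if (SUCCEEDED(hr))
        {
            /* Expected to fail with STG_E_ACCESSDENIED while `first` is open. */
            hr = IStorage_OpenStorage(parent, L"child", NULL,
                                      STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
                                      NULL, 0, &second);
            if (SUCCEEDED(hr)) IStorage_Release(second);
            IStorage_Release(first);
        }
    }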
TransactedSnapshotImpl acts as a proxy between the user and the storage
interfaces that modify the file directly (or another transacted storage).
For now, it simply forwards the operations unchanged.
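A rough sketch of the layering (simplified, hypothetical fields):

    #include <objidl.h>

    typedef struct TransactedSnapshotImpl
    {
        IStorage  IStorage_iface;    /* interface handed back to the caller  */
        IStorage *transactedParent;  /* file-backed (or transacted) storage
                                        that every operation is forwarded to */
    } TransactedSnapshotImpl;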