If an application switches between render targets of a different size, but
with the same depth/stencil surface, it'll typically clear the depth/stencil
surface before drawing. However, in the case of the smaller render target that
wouldn't be a full clear, so we'd have to do a depth copy if we also switched
between onscreen and offscreen rendering. Keeping track of which part of the
depth/stencil surface is current for onscreen/offscreen allows us to avoid
most of these kinds of copies. The current scheme requires the current/dirty
rectangle to have an origin at (0,0). This could be extended to an arbitrary
rectangle, but the bookkeeping becomes somewhat more complex in that case, and
it's not clear that there would be much of a benefit at this point.
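A minimal sketch of the bookkeeping this describes, using hypothetical structure and helper names (none of these are the real wined3d identifiers):

#include <windows.h>

/* Which (0,0)-anchored part of the depth/stencil surface holds valid
 * data, and for which location (onscreen or offscreen). */
struct ds_tracking
{
    RECT current_rect; /* origin is always (0,0) */
    BOOL onscreen;
};

/* TRUE if a render target of the given size and location can reuse the
 * existing depth data without a copy between locations. */
static BOOL ds_data_is_current(const struct ds_tracking *ds,
        LONG rt_width, LONG rt_height, BOOL onscreen)
{
    if (ds->onscreen != onscreen)
        return FALSE;
    /* Because the rectangle is anchored at (0,0), comparing the extents
     * is enough. */
    return rt_width <= ds->current_rect.right
            && rt_height <= ds->current_rect.bottom;
}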
This allows the swapchain to know what depth format its window contexts have, so it
can check whether the requested depth format is compatible or an FBO fallback is
needed. It will also be needed to set the onscreen format to the requested auto
depth/stencil format instead of the let's-hope-it-fits D24_UNORM_S8_UINT format.
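As a rough illustration only (the enum and helper below are made up for the example, not the actual swapchain code), the compatibility check could be as simple as:

/* Hypothetical depth/stencil formats for the sake of the example. */
enum ds_format
{
    DS_FORMAT_D24_UNORM_S8_UINT,
    DS_FORMAT_D16_UNORM,
    DS_FORMAT_D32_FLOAT,
};

/* If the format the window contexts were created with doesn't match the
 * requested auto depth/stencil format, rendering has to fall back to an
 * FBO with a separate depth/stencil attachment. */
static int swapchain_needs_fbo_fallback(enum ds_format window_format,
        enum ds_format requested_format)
{
    return window_format != requested_format;
}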
Unfortunately there are plenty of other places left. Perhaps we should
consider creating our own window when the context becomes invalid and making
the context current on that instead.
For example, interpolating palette indices doesn't produce the desired result.
Should we really want filtering for these cases, we could implement it inside
the relevant shaders, after the fixup, but I doubt it's worth the effort.
This causes a small performance hit when multiple GL contexts are used. As an
optimization, we could use ARB_sync to wait only for the last draw call instead
of for all GL commands.
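A sketch of what the ARB_sync variant could look like; glFenceSync(), glClientWaitSync() and glDeleteSync() are the standard ARB_sync / GL 3.2 entry points (in practice they'd be loaded through the extension mechanism), while the wrapper function is just an illustration:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Instead of glFinish(), which waits for all GL commands in the
 * context, insert a fence right after the last draw call and wait for
 * that fence only. */
static void wait_for_last_draw(void)
{
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    GLenum ret;

    /* GL_SYNC_FLUSH_COMMANDS_BIT makes sure the fence actually reaches
     * the GPU; keep waiting until it is signalled. */
    do
    {
        ret = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000);
    } while (ret == GL_TIMEOUT_EXPIRED);
    glDeleteSync(fence);
}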
Currently callers of this function are responsible for setting the draw buffer
correctly, but they don't do a very good job:
- swapchain_init() sets the draw buffer to GL_BACK if there's a back buffer,
even though the context's target is always the front buffer.
- swapchain_create_context_for_thread() depends on (eventually) being called
by FindContext().
- create_primary_opengl_context() and
IWineD3DSwapChainImpl_SetDestWindowOverride() don't bother setting a draw
buffer at all.
Just marking the draw buffer dirty lets the context management sort it all
out, and is much simpler.
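A hedged sketch of what "just mark it dirty" means here; the structure, state flag and function names are placeholders for the example, not the actual wined3d state handling:

#include <GL/gl.h>

/* Placeholder dirty-state tracking. */
struct gl_context_sketch
{
    unsigned int draw_buffer_dirty;
    GLenum draw_buffer; /* GL_FRONT or GL_BACK, depending on the target */
};

/* Callers only flag the state; none of them touch GL directly. */
static void context_invalidate_draw_buffer(struct gl_context_sketch *context)
{
    context->draw_buffer_dirty = 1;
}

/* The context management applies the state in one place, right before
 * drawing or clearing. */
static void context_apply_draw_buffer(struct gl_context_sketch *context)
{
    if (!context->draw_buffer_dirty)
        return;
    glDrawBuffer(context->draw_buffer);
    context->draw_buffer_dirty = 0;
}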
The idea here is that we can restore the thread's original GL context on
context_release() if it didn't correspond to the current wined3d context on
context_acquire().
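A sketch using the standard WGL calls (wglGetCurrentContext(), wglGetCurrentDC(), wglMakeCurrent()); the structure and the way it plugs into context_acquire()/context_release() are assumptions made for the example:

#include <windows.h>

struct restore_state
{
    HGLRC gl_ctx; /* GL context that was current before context_acquire() */
    HDC dc;       /* device context it was current on */
    BOOL valid;
};

/* Called from context_acquire(): remember the thread's GL context if it
 * isn't the wined3d context we're about to make current. */
static void remember_foreign_context(struct restore_state *state, HGLRC wined3d_ctx)
{
    HGLRC current = wglGetCurrentContext();

    state->valid = current && current != wined3d_ctx;
    if (state->valid)
    {
        state->gl_ctx = current;
        state->dc = wglGetCurrentDC();
    }
}

/* Called from context_release(): put the application's own GL context
 * back, so wined3d doesn't disturb GL usage outside of d3d. */
static void restore_foreign_context(const struct restore_state *state)
{
    if (state->valid)
        wglMakeCurrent(state->dc, state->gl_ctx);
}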
This prevents, for example, a d3d9 depth/stencil surface from being destroyed when it
has no external references but is still in use by the device/stateblock. A
nice side effect is that it simplifies handling of "implicit" surfaces like
the frontbuffer and backbuffers, as well as the forwarding of reference counts
for surfaces that are part of a texture.
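A rough sketch of the reference counting idea, with made-up names (the real wined3d code is structured differently):

#include <windows.h>

struct surface_sketch
{
    LONG external_refs; /* application-level AddRef()/Release() */
    LONG internal_refs; /* held by the device, stateblock or container texture */
};

static void surface_destroy(struct surface_sketch *surface)
{
    /* free GL resources, system memory, ... */
}

/* Dropping the last external reference no longer destroys the surface
 * while the device/stateblock still holds an internal reference. */
static void surface_release_external(struct surface_sketch *surface)
{
    if (!InterlockedDecrement(&surface->external_refs) && !surface->internal_refs)
        surface_destroy(surface);
}

static void surface_release_internal(struct surface_sketch *surface)
{
    if (!InterlockedDecrement(&surface->internal_refs) && !surface->external_refs)
        surface_destroy(surface);
}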