This is a first step towards cleaning up the fog mess. The fog
parameter is added to the pixel shader compile args structure. That way
multiple pshaders are compiled for different fog settings, and the
pixel shader can remove the fog line if fog is not enabled. This means
we don't need special fog start and end settings, and it allows us
to implement EXP and EXP2 fog in the future too.
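A minimal sketch of the idea, with illustrative names rather than the exact
wined3d declarations; the fog mode becomes part of the arguments a GL shader
is compiled for, so the generator only emits fog code for the modes that
need it:

    struct ps_compile_args
    {
        enum fogmode { FOG_OFF, FOG_LINEAR, FOG_EXP, FOG_EXP2 } fog;
        /* ... other stateblock settings baked into the GL code ... */
    };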
Based on a patch by Stefan Dösinger. This is more flexible, and allows
the shader backend implementation to be simpler, since it doesn't have
to know about specific formats. The next patch makes use of this.
Some stateblock parameters have to be compiled into the GL pixel
shader code, like the lines for pixel format fixups. This leads to problems
when applications switch those settings, requiring a recompilation of
the shader. This patch enables wined3d to have multiple GL shaders for
a D3D shader (pixel shaders only so far) to handle this more
efficiently.
This was suggested by Ivan quite a while ago, and we need it to better
handle conflicting texture format corrections and similar stateblock
value changes which until now required a recompilation of the entire
shader.
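A rough sketch of how such a scheme can look; the type, field and helper
names here are illustrative, not the actual wined3d code:

    struct gl_shader_variant
    {
        struct ps_compile_args args;  /* settings this variant was built for */
        GLuint                 prgId; /* the compiled GL shader */
    };

    /* On shader application, look for a variant matching the current
     * settings and only compile a new one if none exists yet. */
    static GLuint select_gl_shader(struct pixel_shader *shader,
            const struct ps_compile_args *cur)
    {
        unsigned int i;

        for (i = 0; i < shader->num_gl_shaders; ++i)
        {
            if (!memcmp(&shader->gl_shaders[i].args, cur, sizeof(*cur)))
                return shader->gl_shaders[i].prgId;
        }
        return compile_new_variant(shader, cur); /* hypothetical helper */
    }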
A number of considerations contribute to this (see the sketch after the list):
1) The shader backend knows best which shader(s) it needs. GLSL needs
both, ARB only one.
2) The shader backend may pass some parameters to the compilation
code (e.g. which pixel format fixup to use).
3) The structures used in (2) are different in vs and ps, so a
baseshader::Compile won't work.
4) The structures in (2) are wined3d-private structures, so having a
public method in the vtable won't work (it's a bad idea anyway).
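A sketch of what the resulting backend interface can look like under these
constraints; the names are illustrative and the real vtable differs:

    struct ps_compile_args;  /* wined3d-private: fog mode, format fixups, ... */
    struct vs_compile_args;  /* wined3d-private */

    struct shader_backend
    {
        /* GLSL selects/links both shaders at once, ARB programs one at a time. */
        void (*shader_select)(void *device_priv, BOOL use_ps, BOOL use_vs);
        /* Type-specific compile entry points, so the private argument
         * structures can be passed without a public baseshader::Compile. */
        GLuint (*compile_ps)(struct pixel_shader *shader,
                const struct ps_compile_args *args);
        GLuint (*compile_vs)(struct vertex_shader *shader,
                const struct vs_compile_args *args);
    };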
The code here skipped constant loading when a pixel shader was in use,
and only reloaded the constants on fixed function use if the shader
implementation used ARB too. This way a texfactor change, for example,
could get lost if GLSL shaders are used and the texfactor changed while
a pixel shader was in use.
GL_ATI_envmap_bumpmap provides two things: signed V8U8 pixel formats,
and bump mapping. The extension is only supported on fglrx, and this
driver also supports GL_ARB_fragment_program. Thus the bump mapping
code is never used on any driver out there. Furthermore, if it is
used, it tends to crash the driver.
The signed pixel format is used because it can be read by pixel shaders
or the ARBfp replacement. However, the format is broken in fglrx, and
negative values are clamped to 0.0. This results in test
failures. WineD3D has an alternative codepath using scale+bias to
implement V8U8 on top of a standard unsigned RGB format, which works
correctly on fglrx.
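A minimal sketch of the scale+bias trick (illustrative code, not the actual
wined3d conversion routine): the signed du/dv values are biased into an
unsigned texture at upload time, and the shader undoes the bias before
using them.

    static void convert_v8u8_to_unsigned_rgb(const signed char *src,
            unsigned char *dst, unsigned int pixel_count)
    {
        unsigned int i;

        for (i = 0; i < pixel_count; ++i)
        {
            dst[3 * i + 0] = (unsigned char)(src[2 * i + 0] + 128); /* du */
            dst[3 * i + 1] = (unsigned char)(src[2 * i + 1] + 128); /* dv */
            dst[3 * i + 2] = 0;
        }
    }
    /* The shader maps the value back to [-1, 1] with "x * 2.0 - 1.0". */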
The current code properly enabled/disabled GL_ARB_fragment_program
after a depth blit, but it did not restore the bound fragment program
properly. This led to problems if a depth blit was done between two
draws without any change to the fragment processing settings.
This is the preferred format of many codecs, and for some codecs this
is the only supported output format. As usual I try to handle all the
conversion on the GPU and keep the CPU involvement minimal to gain the
full performance of PBO transfers.
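The exact format isn't named here, but for a YUV-style format the per-pixel
math the GPU ends up doing is roughly the standard BT.601 conversion; a
hedged sketch:

    /* y, u, v normalised to [0, 1], with u and v centred on 0.5. */
    static void yuv_to_rgb(float y, float u, float v, float rgb[3])
    {
        rgb[0] = y + 1.402f * (v - 0.5f);                        /* R */
        rgb[1] = y - 0.344f * (u - 0.5f) - 0.714f * (v - 0.5f);  /* G */
        rgb[2] = y + 1.772f * (u - 0.5f);                        /* B */
    }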
GL_ARB_fragment_program and GL_ATI_fragment_shader can disable
projected textures properly, and they can also handle
D3DTTFF_PROJECTED | D3DTTFF_COUNT3 correctly.
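For illustration, a sketch assuming ARB-style code generation via wined3d's
shader_addline() helper (register names are illustrative): TXP divides by the
q (4th) coordinate, so D3DTTFF_PROJECTED | D3DTTFF_COUNT3 can be handled by
replicating the 3rd coordinate into that slot before the projected lookup.

    shader_addline(buffer, "MOV TC, fragment.texcoord[0].xyzz;\n");
    shader_addline(buffer, "TXP ret, TC, texture[0], 2D;\n");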
If we're leaving the pixel shader handler early, and a pixel
shader is in use, the pixel shader may not be compiled. The vertex
shader handler then checks whether the pixel shader is dirty, and calls
the shader backend to apply the shader if it isn't. Thus, in the case of
GLSL, the shader code could attempt to link an uncompiled shader into
the program. This isn't much of a problem because once the fog is
applied, the pixel shader is compiled and the program re-linked.
If a format is not supported natively by OpenGL, a shader may be able
to convert it. Up to now, CheckDeviceFormat had magic knowledge of which
GL extensions lead to which supported formats. This patch adds
functions that allow CheckDeviceFormat to ask the actual
implementation for its capabilities.
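A minimal sketch of such a capability query; the function and structure names
are illustrative, not the actual wined3d entry points:

    struct shader_backend
    {
        /* ... */
        /* Asked by CheckDeviceFormat: can this backend emit the fixup code
         * (e.g. channel swizzles or scale+bias) the format needs? */
        BOOL (*shader_format_conversion_supported)(WINED3DFORMAT format);
    };

    /* CheckDeviceFormat then does something along these lines: */
    if (!format_supported_natively(gl_info, CheckFormat)
            && !backend->shader_format_conversion_supported(CheckFormat))
        return WINED3DERR_NOTAVAILABLE;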
Fixed function processing can generally only deal with values between 0
and 1. Clamp the results of instructions that could produce larger
or smaller values.
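For example, a sketch assuming ARB-style code generation (not the actual
generator): the _SAT modifier clamps an instruction's result to [0, 1], so an
operation like D3DTOP_MODULATE2X, which can produce values up to 2.0, gets a
saturated final instruction.

    shader_addline(buffer, "MUL %s, %s, %s;\n", tmp, arg1, arg2);
    shader_addline(buffer, "ADD_SAT %s, %s, %s;\n", dst, tmp, tmp);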
This is an ATI-specific format designed for compressed normal maps,
and quite a few games check for its existence. While it is an
ATI-specific "extension" in d3d9, it is a core part of
D3D10 (DXGI_FORMAT_BC5), and is supported on GeForce 8 cards.
It isn't related to the shader backend any longer. The nvts_enable in
the ffp code isn't quite right either; it should be moved away once
there is a dedicated NVTS fragment pipeline replacement.
Calling shader_select() from inside depth_blt() isn't necessarily
safe. shader_select() assumes CompileShader() has been called for the
current shaders, but that depends on STATE_VSHADER / STATE_PIXELSHADER
being applied. That isn't always true when depth_blt() gets called,
with the result that sometimes GLSL programs could be created with no
shader objects attached.
Constant numbers start at 0, and the loading loop runs for (i = 0; i <
dirtyconsts; ++i). This means that the highest dirty constant isn't
loaded. Rather than replacing the < with <=, which would
make it impossible to have no dirty constant, add 1 to the dirty
constant counter.
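A sketch of the resulting convention (names illustrative): the counter stores
the highest dirty index plus one, so 0 still means "nothing dirty" and the
existing < loop covers all dirty constants.

    /* When constant idx is modified: */
    if (idx + 1 > device->highest_dirty_ps_const)
        device->highest_dirty_ps_const = idx + 1;

    /* When loading: */
    for (i = 0; i < device->highest_dirty_ps_const; ++i)
        load_ps_constant(gl_info, i, constants[i]); /* hypothetical loader */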
The previous logic assumed that if NVTS or ATIFS are available they
would be used. This happens to be true for NVTS, but ATIFS is only used
if neither ARBFP nor GLSL is supported. This breaks fixed function
fragment processing on ATI r300 and newer cards.
The control structures in directx.c get terribly confusing with
the various codepaths for texturing and different shader
implementations, and it is also hard to reflect the shader model
decisions this way. This patch moves the shader-specific parts of
the caps code into the shader backend, where we can set our caps
depending on the shader model decisions and without complex caps flag
checks.
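A sketch of the kind of entry point this adds (names and fields illustrative):
the backend fills in the shader-related caps itself, so directx.c only calls
it instead of checking extension flags.

    struct shader_caps
    {
        DWORD VertexShaderVersion;
        DWORD PixelShaderVersion;
        DWORD MaxVertexShaderConst;
        float PixelShader1xMaxValue;
    };

    struct shader_backend
    {
        /* ... */
        void (*shader_get_caps)(WINED3DDEVTYPE devtype,
                const WineD3D_GL_Info *gl_info, struct shader_caps *caps);
    };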
Generating the shader ID and parts of the shader prolog and epilog was
done by the common vertexshader.c / pixelshader.c, which is ugly.
This patch doesn't get rid of all the ugliness; at some point we'll still
have to sort out the relationship of [arb|glsl]_generate_shader and
[arb|glsl]_generate_declarations.
Add a new property to the shader backend which indicates whether the
shader backend is able to dirtify single constants rather than
dirtifying vshader and pshader constants as a whole. Depending on this,
a different Set*ConstantF implementation is used which marks constants
dirty. The ARB shader backend uses this, and marks constants clean
after uploading.
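A minimal sketch of the split (illustrative names, not the actual wined3d
functions):

    if (backend->can_dirtify_single_constants)
    {
        /* Only the touched range is flagged; the ARB backend clears these
         * flags again after uploading the constants. */
        for (i = start; i < start + count; ++i)
            mark_vs_constant_dirty(device, i);
    }
    else
    {
        /* Fall back to flagging all vshader constants as a whole. */
        mark_all_vs_constants_dirty(device);
    }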
Plain OpenGL does not provide any signed pixel formats, so the
unsigned GL_RGB is used for loading perturbation data into pixel
shaders that use texbem. For correct loading, the signedness has to be
considered.
ARB shaders need a swizzle for the RSQ and RCP instructions, while D3D
shaders do not. The DirectX SDK says that the x component is used if
no swizzle is given.
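For illustration, a sketch assuming wined3d's shader_addline() helper, with
the surrounding translator code omitted: when the D3D source has no swizzle,
the generated ARB instruction gets an explicit .x so it operates on a single
component.

    if (swizzle == WINED3DSP_NOSWIZZLE)
        shader_addline(buffer, "RCP %s, %s.x;\n", dst_str, src_str);
    else
        shader_addline(buffer, "RCP %s, %s;\n", dst_str, src_str);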
- Replace gl_FragColor with gl_FragData[0] for GLSL pixel shader output.
- Subtract 1 more constant from total GLSL allowed float constants to
accommodate the PROJECTION matrix row that we reference.
- move DEF, DEFI, DEFB handling into the register counting pass
- keep track of defined constants as a linked list, because there are
  only a few of them (see the sketch after this list)
- apply immediate constants after global constants in the constant
  loading function
- both types of constants now get loaded with array notation in the
  shader (into the same array)
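A sketch of the data structure and loading order (illustrative; assuming
Wine's list.h helpers, with constant_location() standing in for however the
uniform location is obtained):

    struct local_constant
    {
        struct list entry;
        unsigned int idx;   /* constant register number from the DEF */
        DWORD value[4];
    };

    /* In the constant loading function, after the application-set (global)
     * constants have been uploaded, write the locally defined ones with the
     * same array notation: */
    LIST_FOR_EACH_ENTRY(lconst, &shader->constantsF, struct local_constant, entry)
    {
        GL_EXTCALL(glUniform4fvARB(constant_location(prog, lconst->idx), 1,
                (const float *)lconst->value));
    }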
- Separate the declaration phase of the shader string generator into
the arb and glsl specific files.
- Add declarations and recognition for application-sent constant
integers and booleans (locally defined ones will follow).
- Standardize capitalization of pixel/vertex specific variable names.
- Moves GLSL constant loading code into glsl_shader.c and out of the
over-populated drawprim.c.
- Creates a new file named arb_program_shader.c which will hold code
specific to ARB_vertex_program & ARB_fragment_program.
- Removes the constant loading calls from drawprim.c.