<chapter id="otherdebug">
<title>Other debugging techniques</title>
<sect1 id="hardware-trace">
<title>Doing A Hardware Trace</title>
<para>
The primary reason to do this is to reverse engineer a
hardware device for which you don't have documentation, but
can get to work under Wine.
</para>
<para>
This section is aimed at parallel port devices, in particular
parallel port scanners, which are now so cheap they are
virtually being given away. The problem is that few
manufacturers will release any programming information, which
prevents drivers from being written for SANE, and the traditional
technique of using DOSemu to produce the traces does not work
because the scanners invariably only have drivers for Windows.
</para>
<para>
Presuming that you have compiled and installed Wine, the first
thing to do is to enable direct hardware access to your
parallel port. To do this, edit <filename>config</filename>
(usually in <filename>~/.wine/</filename>) and add the
following two lines to the ports section:
</para>
<programlisting>
read=0x378,0x379,0x37a,0x37c,0x77a
write=0x378,0x379,0x37a,0x37c,0x77a
</programlisting>
<para>
This grants the access required for an SPP/PS2/EPP/ECP
parallel port on LPT1. You will need to adjust these numbers
accordingly if your parallel port is on LPT2 or LPT0.
</para>
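<para>
For example, if your second parallel port sits at the common LPT2
base address of <literal>0x278</literal> (this base address is only
an assumption; check <filename>/proc/ioports</filename> or your BIOS
setup for the real value), the corresponding lines would be:
</para>
<programlisting>
read=0x278,0x279,0x27a,0x27c,0x67a
write=0x278,0x279,0x27a,0x27c,0x67a
</programlisting>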
<para>
When starting wine, use the following command line, where
<literal>XXXX</literal> is the program you need to run in
order to access your scanner, and <literal>YYYY</literal> is
the file your trace will be stored in:
</para>
<programlisting>
WINEDEBUG=+io wine XXXX 2&gt; &gt;(sed 's/^[^:]*:io:[^ ]* //' &gt; YYYY)
</programlisting>
<para>
You will need large amounts of hard disk space (read hundreds
of megabytes if you do a full page scan), and for reasonable
performance a really fast processor and lots of RAM.
</para>
<para>
You will need to postprocess the output into a more manageable
format, using the <command>shrink</command> program. First
you need to compile the source (which is located at the end of
this section):
<programlisting>
cc shrink.c -o shrink
</programlisting>
</para>
<para>
Use the <command>shrink</command> program to reduce the
physical size of the raw log as follows:
</para>
<programlisting>
cat log | shrink &gt; log2
</programlisting>
<para>
The trace has the basic form of
</para>
<programlisting>
XXXX &gt; YY @ ZZZZ:ZZZZ
</programlisting>
<para>
where <literal>XXXX</literal> is the port in hexadecimal being
accessed, <literal>YY</literal> is the data written (or read)
from the port, and <literal>ZZZZ:ZZZZ</literal> is the address
in memory of the instruction that accessed the port. The
direction of the arrow indicates whether the data was written
to or read from the port.
</para>
<programlisting>
&gt; data was written to the port
&lt; data was read from the port
</programlisting>
<para>
My basic tip for interpreting these logs is to pay close
attention to the addresses of the IO instructions. Their
grouping and sometimes proximity should reveal the presence of
subroutines in the driver. By studying the different versions
you should be able to work them out. For example, consider the
following section of trace from my UMAX Astra 600P:
</para>
<programlisting>
0x378 &gt; 55 @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
0x378 &gt; aa @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
0x378 &gt; 00 @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
0x378 &gt; 00 @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
0x378 &gt; 00 @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
0x378 &gt; 00 @ 0297:01ec
0x37a &gt; 05 @ 0297:01f5
0x379 &lt; 8f @ 0297:01fa
0x37a &gt; 04 @ 0297:0211
</programlisting>
<para>
As you can see, there is a repeating structure starting at
address <literal>0297:01ec</literal> that consists of four I/O
accesses on the parallel port. The first access writes a
changing byte to the data port, the second always writes the
byte <literal>0x05</literal> to the control port, then a value
that always seems to be <literal>0x8f</literal> is read from
the status port, at which point the byte <literal>0x04</literal>
is written to the control port. By studying this and other
sections of the trace we can write a C routine that emulates
it, shown below with some macros that make reading and writing
the parallel port registers easier to follow.
</para>
<programlisting>
#define r_dtr(x)   inb(x)
#define r_str(x)   inb(x+1)
#define r_ctr(x)   inb(x+2)
#define w_dtr(x,y) outb(y, x)
#define w_str(x,y) outb(y, x+1)
#define w_ctr(x,y) outb(y, x+2)

/* Seems to be sending a command byte to the scanner */
int udpp_put(int udpp_base, unsigned char command)
{
    int loop, value;

    w_dtr(udpp_base, command);
    w_ctr(udpp_base, 0x05);

    for (loop = 0; loop &lt; 10; loop++)
        if ((value = r_str(udpp_base)) &amp; 0x80)
        {
            w_ctr(udpp_base, 0x04);
            return value &amp; 0xf8;
        }

    return (value &amp; 0xf8) | 0x01;
}
</programlisting>
<para>
For the UMAX Astra 600P only seven such routines exist (well,
14 really: seven for SPP and seven for EPP). Whether you
choose to disassemble the driver at this point to verify the
routines is your own choice. If you do, the addresses from the
trace should help in locating them in the disassembly.
</para>
<para>
You will probably then find it useful to write a script/Perl/C
program to analyse the logfile and decode it further, as this
can reveal higher-level groupings of the low-level routines.
For example, the logs from my UMAX Astra 600P, when decoded
further, reveal (this is a small snippet):
</para>
<programlisting>
start:
put: 55 8f
put: aa 8f
put: 00 8f
put: 00 8f
put: 00 8f
put: c2 8f
wait: ff
get: af,87
wait: ff
get: af,87
end: cc
start:
put: 55 8f
put: aa 8f
put: 00 8f
put: 03 8f
put: 05 8f
put: 84 8f
wait: ff
</programlisting>
<para>
From this it is easy to see that the <varname>put</varname>
routine is often grouped into five successive calls
sending information to the scanner. Once these are understood,
it should be possible to process the logs further to show the
higher-level routines in an easy-to-read format. Once you
understand the highest-level format that you can derive from
this process, you then need to produce a series of scans varying
only one parameter between them, so that you can discover how
to set the various parameters for the scanner.
</para>
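<para>
To get started, here is a minimal sketch of such a decoder in C. It
assumes the shrunk trace format shown earlier and only recognises the
<function>put</function> pattern worked out above; it is an
illustration, not an existing Wine tool:
</para>
<programlisting>
#include &lt;stdio.h&gt;

int main(void)
{
    char line[4][256], dir[4][2];
    unsigned port[4], val[4];
    int have = 0;

    while (fgets(line[have], sizeof(line[have]), stdin))
    {
        /* parse "0x378 &gt; 55 @ ...", skip anything else (shrink markers etc.) */
        if (sscanf(line[have], "%x %1s %x",
                   &amp;port[have], dir[have], &amp;val[have]) != 3)
        {
            have = 0;
            continue;
        }
        if (++have &lt; 4) continue;

        /* data write, ctrl=0x05, status read, ctrl=0x04 -&gt; one "put" */
        if (port[0] == 0x378 &amp;&amp; dir[0][0] == '&gt;' &amp;&amp;
            port[1] == 0x37a &amp;&amp; dir[1][0] == '&gt;' &amp;&amp; val[1] == 0x05 &amp;&amp;
            port[2] == 0x379 &amp;&amp; dir[2][0] == '&lt;' &amp;&amp;
            port[3] == 0x37a &amp;&amp; dir[3][0] == '&gt;' &amp;&amp; val[3] == 0x04)
            printf("put: %02x %02x\n", val[0], val[2]);
        else
            printf("??? %s", line[0]);  /* a real decoder would slide the window */
        have = 0;
    }
    return 0;
}
</programlisting>
<para>
Running the shrunk log through it should reproduce output similar to
the snippet above; additional patterns can be added to the decoder as
you identify them.
</para>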
<para>
The following is the <filename>shrink.c</filename> program:
<programlisting>
/* Copyright David Campbell &lt;campbell@torque.net&gt; */
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

int main (void)
{
    char buff[256], lastline[256] = "";
    int count = 0;

    /* Collapse runs of identical lines into one line plus a count */
    while (fgets (buff, sizeof (buff), stdin))
    {
        if (strcmp (buff, lastline))
        {
            if (count &gt; 1)
                printf ("# Last line repeated %i times #\n", count);
            printf ("%s", buff);
            strcpy (lastline, buff);
            count = 1;
        }
        else count++;
    }
    return 0;
}
</programlisting>
</para>
</sect1>
<sect1 id="undoc-func">
<title>Understanding undocumented APIs</title>
<para>
Some background: On the i386 class of machines, stack entries are
usually dword (4 bytes) in size, little-endian. The stack grows
downward in memory. The stack pointer, maintained in the
<literal>esp</literal> register, points to the last valid entry;
thus, the operation of pushing a value onto the stack involves
decrementing <literal>esp</literal> and then moving the value into
the memory pointed to by <literal>esp</literal>
(i.e., <literal>push p</literal> in assembly resembles
<literal>*(--esp) = p;</literal> in C). Removing (popping)
values off the stack is the reverse (i.e., <literal>pop p</literal>
corresponds to <literal>p = *(esp++);</literal> in C).
</para>
<para>
In the <literal>stdcall</literal> calling convention, arguments are
pushed onto the stack right-to-left. For example, the C call
<function>myfunction(40, 20, 70, 30);</function> is expressed in
Intel assembly as:
<screen>
push 30
push 70
push 20
push 40
call myfunction
</screen>
The called function is responsible for removing the arguments
off the stack. Thus, before the call to myfunction, the
stack would look like:
<screen>
[local variable or temporary]
[local variable or temporary]
30
70
20
esp -> 40
</screen>
After the call returns, it should look like:
<screen>
[local variable or temporary]
esp -> [local variable or temporary]
</screen>
</para>
<para>
To restore the stack to this state, the called function must know how
many arguments to remove (which is the number of arguments it takes).
This is a problem if the function is undocumented.
</para>
<para>
One way to attempt to document the number of arguments each function
takes is to create a wrapper around that function that detects the
stack offset. Essentially, each wrapper assumes that the function will
take a large number of arguments. The wrapper copies each of these
arguments onto the stack, calls the actual function, and then
calculates the number of arguments by comparing
<literal>esp</literal> before and after the call.
</para>
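<para>
A minimal sketch of such a wrapper, assuming a 32-bit x86 build, GCC
extended inline assembly and a <literal>stdcall</literal> callee (the
name <function>detect_arg_count</function> is made up for this
illustration and is not an existing Wine tool), could look like this:
</para>
<programlisting>
/* Call func with 16 zeroed dword arguments and report how many
 * arguments it popped off the stack on return. */
static int detect_arg_count(void *func)
{
    int bytes_popped;

    __asm__ __volatile__ (
        "movl  %%esp, %%edi\n\t"   /* remember esp before the dummy args  */
        "movl  $16, %%ecx\n\t"
        "1:\n\t"
        "pushl $0\n\t"             /* push 16 zero dwords (64 bytes)      */
        "loop  1b\n\t"
        "call  *%1\n\t"            /* a stdcall callee pops its own args  */
        "movl  %%esp, %%eax\n\t"
        "subl  %%edi, %%eax\n\t"   /* esp_after - esp_before ...          */
        "addl  $64, %%eax\n\t"     /* ... plus what we pushed = bytes popped */
        "movl  %%edi, %%esp\n\t"   /* restore the stack                   */
        : "=a" (bytes_popped)
        : "S" (func)
        : "ecx", "edx", "edi", "cc", "memory");

    return bytes_popped / 4;       /* dword-sized arguments               */
}
</programlisting>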
<para>
The main problem with this scheme is that the function must actually
be called from another program. Many of these functions are seldom
used. An attempt was made to aggressively query each function in a
given library (<filename>ntdll.dll</filename>) by passing 64 arguments,
all 0, to each function. Unfortunately, Windows NT quickly goes to a
blue screen of death, even if the program is run from a
non-administrator account.
</para>
<para>
Another method that has been much more successful is to figure
out how many bytes each function removes from the stack when
it returns. The return instruction, <literal>ret hhll</literal>
(where <symbol>hhll</symbol> is the number of bytes to remove,
i.e. the number of arguments times 4), is encoded as the bytes
<literal>0xc2 ll hh</literal> in memory. It is a reasonable
assumption that few, if any, functions take more than 16 arguments;
therefore, simply searching for
<literal>hh == 0 &amp;&amp; ll &lt; 0x40</literal> starting from the
address of a function yields the correct number of arguments most
of the time.
</para>
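<para>
A minimal sketch of such a scan in C could look like the following
(the name <function>guess_arg_count</function> and the scan limit are
made up for this illustration; a real tool would read the bytes from
the loaded DLL image):
</para>
<programlisting>
#include &lt;stddef.h&gt;

/* Scan forward from a function's entry point for a "ret imm16" opcode
 * (0xc2 ll hh) and derive the argument count from the popped bytes.
 * Returns -1 if nothing plausible is found within max_scan bytes. */
static int guess_arg_count(const unsigned char *func, size_t max_scan)
{
    size_t i;

    for (i = 0; i + 2 &lt; max_scan; i++)
    {
        if (func[i] == 0xc2)
        {
            unsigned int ll = func[i + 1];
            unsigned int hh = func[i + 2];

            /* few functions take more than 16 dword arguments,
             * and stdcall always pops a multiple of 4 bytes */
            if (hh == 0 &amp;&amp; ll &lt; 0x40 &amp;&amp; (ll % 4) == 0)
                return (int)(ll / 4);
        }
    }
    return -1;
}
</programlisting>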
<para>
Of course, this is not without errors. <literal>ret 00ll</literal>
is not the only instruction that can have the byte sequence
<literal>0xc2 ll 0x0</literal>; for example,
<literal>push 0x000040c2</literal> has the byte sequence
<literal>0x68 0xc2 0x40 0x0 0x0</literal>, which matches
the above. Properly, the utility should look for this sequence
only on an instruction boundary; unfortunately, finding
instruction boundaries on an i386 requires implementing a full
disassembler -- quite a daunting task. Besides, the probability
of having such a byte sequence that is not the actual return
instruction is fairly low.
</para>
<para>
Much more troublesome is the non-linear flow of a function. For
example, consider the following two functions:
<screen>
somefunction1:
    jmp somefunction1_impl

somefunction2:
    ret 0004

somefunction1_impl:
    ret 0008
</screen>
In this case, we would incorrectly detect both
<function>somefunction1</function> and
<function>somefunction2</function> as taking only a single
argument, whereas <function>somefunction1</function> really
takes two arguments.
</para>
<para>
With these limitations in mind, it is possible to implement
more stubs
in Wine and, eventually, the functions themselves.
</para>
</sect1>
<sect1>
<title>How to do regression testing using CVS</title>
<para>
A problem that sometimes happens is 'it used to work
before, now it doesn't anymore...'. Here is a step-by-step
procedure for pinpointing when the problem appeared. This
is <emphasis>NOT</emphasis> for casual users.
</para>
<orderedlist>
<listitem>
<para>
Get the <quote>full CVS</quote> archive from WineHQ. This
archive is the CVS tree, including the tags that control the
versioning system. It's a big file (more than 40 MB) with a
name like full-cvs-&lt;last update date&gt;; it's more than
100 MB when uncompressed, so you can't very well do this on
a small, old computer or over a slow Internet connection.
</para>
</listitem>
<listitem>
<para>
untar it into a repository directory:
<screen>
cd /home/gerard
tar -zxf full-cvs-2003-08-18.tar.gz
mv wine repository
</screen>
</para>
</listitem>
<listitem>
<para>
extract a new destination directory. This directory must
not be a subdirectory of the repository, otherwise
<command>cvs</command> will think it's part of the
repository and refuse to let you check out into it:
<screen>
cd /home/gerard
mv wine wine_current     # this protects your current wine sandbox, if any
export CVSROOT=/home/gerard/repository
cvs -d $CVSROOT checkout wine
</screen>
</para>
<para>
Note that it's not possible to do a checkout at an arbitrary
date; the checkout always corresponds to the date the
full-cvs-xxx snapshot was generated.
</para>
<para>
Note also that it is possible to do all this with a direct
CVS connection, of course. The full CVS file method is less
painful for the WineHQ CVS server and probably a bit faster
if you don't have a very good net connection.
</para>
</listitem>
<listitem>
<para>
you will now have a client-side image of the CVS tree in
the <filename>~/wine</filename> directory.
Now update this image to the date you want:
<screen>
cd /home/gerard/wine
cvs update -PAd -D "2004-08-23 CDT"
</screen>
</para>
<para>
The date format is <literal>YYYY-MM-DD HH:MM:SS</literal>.
Using the CST/CDT time zone ensures that you will be able to
extract patches in a way that is compatible with the
wine-cvs archive
<ulink url="http://www.winehq.org/hypermail/wine-cvs">
http://www.winehq.org/hypermail/wine-cvs</ulink>
</para>
<para>
Many messages will inform you that more recent files have
been deleted to set the client cvs tree back to the date
you asked for, for example:
<screen>
cvs update: tsx11/ts_xf86dga2.c is no longer in the repository
</screen>
</para>
<para>
<command>cvs update</command> is not limited to upgrading to
a <emphasis>newer</emphasis> version, as I believed for
far too long :-(
</para>
</listitem>
<listitem>
<para>
Now proceed as for a normal update:
</para>
<screen>
./configure
make depend && make
</screen>
<para>
If any non-programmer reads this: the fastest method to
get to the point where the problem appeared is to use a
binary search. That is, if the problem appeared in 1999,
start at mid-year; if the problem is already there, go back
to the 1st of April, if not, forward to the 1st of October,
and so on.
</para>
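<para>
For example, if the problem appeared somewhere in 1999, a binary
search could look like this (the dates are purely illustrative;
rebuild and retest after each update):
</para>
<screen>
cvs update -PAd -D "1999-07-01 CDT"     # broken: look earlier
cvs update -PAd -D "1999-04-01 CDT"     # works:  look later
cvs update -PAd -D "1999-05-15 CDT"     # works:  look later
cvs update -PAd -D "1999-06-07 CDT"     # broken: keep narrowing...
</screen>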
<para>
If you have a lot of free hard disk space (a full compile
currently takes about 400 MB), copy the oldest known working
version before updating it; this will save time if you need
to go back. (It's better to <command>make
distclean</command> before going back in time, so you will
have to rebuild everything if you don't keep a copy of the
older version.)
</para>
<para>
When you have found the day the problem appeared,
continue the search using the wine-cvs archive (sorted by
date) and a more precise cvs update including hour,
minute and second:
<screen>
cvs update -PAd -D "2004-08-23 15:17:25 CDT"
</screen>
This will allow you to easily find the exact patch that
caused it.
</para>
</listitem>
<listitem>
<para>
If you find the patch that caused the problem,
you have almost won; report it to
<ulink url="http://bugs.winehq.org/">Wine Bugzilla</ulink>
or subscribe to wine-devel and post it there. There is a
chance that the author will jump in to suggest a fix; and
there is always the possibility of looking hard at the patch
until it is coerced into revealing where the bug is :-)
</para>
</listitem>
</orderedlist>
</sect1>
<sect1>
<title>Which code has been tested?</title>
<para>
Deciding what code should be tested next can be difficult.
In any given project there is always untested code where
bugs could be lurking. This section explains how to identify
those sections using a tool called <command>gcov</command>.
</para>
<para>
To use gcov on wine, do the following:
</para>
<orderedlist>
<listitem>
<para>
In order to activate code coverage in the wine source code,
set <literal>CFLAGS</literal> when running
<command>make</command>, like so: <command>make
CFLAGS="-fprofile-arcs -ftest-coverage"</command>. Note that
this can be done at any directory level. Since compile
and run times are significantly increased by these flags, you
may want to use them only inside a given DLL directory.
</para>
</listitem>
<listitem>
<para>
Run any application or test suite.
</para>
</listitem>
<listitem>
<para>
Run gcov on the file whose code coverage you would like
to know more about.
</para>
</listitem>
</orderedlist>
<para>
The following is an example situation where using gcov to
determine the coverage of a file could be helpful. We'll use
the <filename>dlls/lzexpand/lzexpand_main.c</filename> file.
At one time the code in this file was not fully tested (as it
may still be). For example at the time of this writing, the
function <function>LZOpenFileA</function> had the following
lines in it:
<screen>
if ((mode&~0x70)!=OF_READ)
    return fd;
if (fd==HFILE_ERROR)
    return HFILE_ERROR;
cfd=LZInit(fd);
if ((INT)cfd <= 0) return fd;
return cfd;
</screen>
Currently there are a few tests written for this function;
however, these tests don't check that everything is correct.
For instance, <constant>HFILE_ERROR</constant> may be the wrong
error code to return. Using gcov and directed tests, we can
validate the correctness of this line of code. First, we see
what has already been tested by running gcov on the file.
To do this, do the following:
<screen>
cvs checkout wine
mkdir build
cd build
../wine/configure
make depend && make CFLAGS="-fprofile-arcs -ftest-coverage"
cd dlls/lzexpand/tests
make test
cd ..
gcov ../../../wine/dlls/lzexpand/lzexpand_main.c
0.00% of 3 lines executed in file ../../../wine/include/wine/unicode.h
Creating unicode.h.gcov.
0.00% of 4 lines executed in file /usr/include/ctype.h
Creating ctype.h.gcov.
0.00% of 6 lines executed in file /usr/include/bits/string2.h
Creating string2.h.gcov.
100.00% of 3 lines executed in file ../../../wine/include/winbase.h
Creating winbase.h.gcov.
50.83% of 240 lines executed in file ../../../wine/dlls/lzexpand/lzexpand_main.c
Creating lzexpand_main.c.gcov.
less lzexpand_main.c.gcov
</screen>
Note that there is more output, but only the output of gcov is
shown. The output file
<filename>lzexpand_main.c.gcov</filename> looks like this:
<screen>
9: 545: if ((mode&~0x70)!=OF_READ)
6: 546: return fd;
3: 547: if (fd==HFILE_ERROR)
#####: 548: return HFILE_ERROR;
3: 549: cfd=LZInit(fd);
3: 550: if ((INT)cfd <= 0) return fd;
3: 551: return cfd;
</screen>
<command>gcov</command> output consists of three components:
the number of times a line was run, the line number, and the
actual text of the line. Note: if a line is optimized out by
the compiler, it will appear as if it was never run. The line
of code which returns <constant>HFILE_ERROR</constant> is
never executed (and it is highly unlikely that it has been
optimized out), so we don't know if it is correct. Validating
this line is a two-step process. First we must
write the test. Please see <xref linkend="testing"> to
learn more about writing tests. We insert the following lines
into a test case:
<screen>
INT file;
OFSTRUCT test;

/* Check for nonexistent file */
file = LZOpenFile("badfilename_", &amp;test, OF_READ);
ok(file == LZERROR_BADINHANDLE,
   "LZOpenFile succeeded on nonexistent file\n");
LZClose(file);
</screen>
Once we have added this test case, we want to know whether the
line in question is run by this test and works as expected. You
should be in the same directory as you left off in the previous
command example. The only difference is that we have to remove
the <filename>*.da</filename> files in order to start the
count over (if we leave the files then the number of times each
line is run just accumulates, e.g. line 545 below would be run
19 times), and we remove the <filename>*.gcov</filename> files
because they are out of date and need to be recreated.
</para>
<screen>
rm *.da *.gcov
cd tests
make
make test
cd ..
gcov ../../../wine/dlls/lzexpand/lzexpand_main.c
0.00% of 3 lines executed in file ../../../wine/include/wine/unicode.h
Creating unicode.h.gcov.
0.00% of 4 lines executed in file /usr/include/ctype.h
Creating ctype.h.gcov.
0.00% of 6 lines executed in file /usr/include/bits/string2.h
Creating string2.h.gcov.
100.00% of 3 lines executed in file ../../../wine/include/winbase.h
Creating winbase.h.gcov.
51.67% of 240 lines executed in file ../../../wine/dlls/lzexpand/lzexpand_main.c
Creating lzexpand_main.c.gcov.
less lzexpand_main.c.gcov
</screen>
<para>
Note that there is more output, but only the output of gcov is
shown. The output file
<filename>lzexpand_main.c.gcov</filename> now looks like this:
</para>
<screen>
10: 545: if ((mode&~0x70)!=OF_READ)
6: 546: return fd;
4: 547: if (fd==HFILE_ERROR)
1: 548: return HFILE_ERROR;
3: 549: cfd=LZInit(fd);
3: 550: if ((INT)cfd <= 0) return fd;
3: 551: return cfd;
</screen>
<para>
Based on gcov, we now know that
<constant>HFILE_ERROR</constant> is returned once. And since
all of our other tests have remained unchanged, we can assume
that the one time it is returned is to satisfy the case we
added where we check for it. Thus we have validated a line of
code. While this is a cursory example, it demonstrates the
potential usefulness of this tool.
</para>
<para>
For a further in depth description of gcov, the official gcc
compiler suite page for gcov is <ulink
url="http://gcc.gnu.org/onlinedocs/gcc-3.2.3/gcc/Gcov.html">
http://gcc.gnu.org/onlinedocs/gcc-3.2.3/gcc/Gcov.html</ulink>.
There is also an excellent article written by Steve Best for
Linux Magazine which describes and illustrates this process
very well at
<ulink url="http://www.linux-mag.com/2003-07/compile_01.html">
http://www.linux-mag.com/2003-07/compile_01.html</ulink>.
</para>
</sect1>
</chapter>
<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-parent-document:("wine-devel.sgml" "set" "book" "part" "chapter" "")
End:
-->